# Coding Cheat Sheet

Length of a string:

C++:

```cpp
stringVar.length()
```

Python:

```python
len(stringVar)
```

Adding a character to a string, C++:

```cpp
// Append (concatenate) onto the string.
init = init + add;  // or: init += add;
```

### Sorting a 2D Vector

To sort a 2D vector by each row's first element, define a comparator and pass it to std::sort:

```cpp
bool sortFunc(const vector<int> &v1, const vector<int> &v2)
{
    return v1[0] < v2[0];
}
```
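A minimal sketch of wiring this comparator into std::sort (the wrapper function and sample data here are illustrative):

```cpp
#include <algorithm>
#include <vector>

// Comparator: order rows by their first element.
bool rowLess(const std::vector<int> &v1, const std::vector<int> &v2)
{
    return v1[0] < v2[0];
}

// Sorts the rows of a 2D vector in place by first element.
void sortRows(std::vector<std::vector<int>> &grid)
{
    std::sort(grid.begin(), grid.end(), rowLess);
}
```

std::sort calls the comparator on pairs of rows, so each row is passed by const reference to avoid copying.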

### Time Complexity

#### Difference Between O(log(n)) and O(n*log(n))

Think of it as O(n*log(n)), i.e. “doing log(n) work n times”. For example, searching for an element in a sorted list of length n is O(log(n)). Searching for the element in n different sorted lists, each of length n is O(n*log(n)).

Remember that O(n) is defined relative to some real quantity n: the size of a list, or the number of distinct elements in a collection. Every variable that appears inside O(…) therefore represents a quantity whose growth increases the runtime. O(n*m) can be read as the sum of m terms each of size n, i.e. "doing n work, m times".

Let's take a concrete example of this, mergesort. For n input elements: On the very last iteration of our sort, we have two halves of the input, each half size n/2, and each half is sorted. All we have to do is merge them together, which takes n operations. On the next-to-last iteration, we have twice as many pieces (4) each of size n/4. For each of our two pairs of size n/4, we merge the pair together, which takes n/2 operations for a pair (one for each element in the pair, just like before), i.e. n operations for the two pairs.

From here, we can extrapolate that every level of our mergesort takes n operations to merge. The big-O complexity is therefore n times the number of levels. On the last level, the size of the chunks we're merging is n/2. Before that, it's n/4, before that n/8, etc. all the way down to size 1. How many times must you divide n by 2 to get 1? log(n). So we have log(n) levels, and our total runtime is O(n * log(n)): n work per level, times log(n) levels.
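The levels described above are visible in a standard top-down mergesort; this is a generic sketch, not tied to any particular reference implementation:

```cpp
#include <vector>

// Merge two sorted halves v[lo..mid) and v[mid..hi).
// Across a whole level of the recursion this is O(n) work.
static void mergeHalves(std::vector<int> &v, int lo, int mid, int hi)
{
    std::vector<int> tmp;
    tmp.reserve(hi - lo);
    int i = lo, j = mid;
    while (i < mid && j < hi)
        tmp.push_back(v[i] <= v[j] ? v[i++] : v[j++]);
    while (i < mid) tmp.push_back(v[i++]);
    while (j < hi)  tmp.push_back(v[j++]);
    for (int k = 0; k < (int)tmp.size(); ++k)
        v[lo + k] = tmp[k];
}

// log(n) levels of halving, each doing O(n) merge work: O(n log n) total.
void mergeSort(std::vector<int> &v, int lo, int hi)
{
    if (hi - lo < 2)
        return;
    int mid = lo + (hi - lo) / 2;
    mergeSort(v, lo, mid);
    mergeSort(v, mid, hi);
    mergeHalves(v, lo, mid, hi);
}
```

Call it as `mergeSort(v, 0, v.size())` to sort the whole vector in place.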

- Hash table
- Binary Search Tree
- Set
- List
- LRU Cache

### Hashtable

A hashtable allows for the rapid lookup of data based on a key. It works as follows:

1. A hash function converts a key to an array index.
2. The value is stored in (or read from) the bucket at that index.
3. Keys that hash to the same index (collisions) are handled by chaining or probing.
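In C++, std::unordered_map provides exactly this key-to-value behavior; a minimal sketch (the names and the -1 "not found" sentinel are illustrative):

```cpp
#include <string>
#include <unordered_map>

// Average O(1) lookup keyed by string; the hash function and buckets
// are handled internally by the container.
int lookupAge(const std::unordered_map<std::string, int> &ages,
              const std::string &name)
{
    std::unordered_map<std::string, int>::const_iterator it = ages.find(name);
    return it == ages.end() ? -1 : it->second;  // -1 signals "not found"
}
```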

Learned:

1. unordered_set: a set of unique elements, stored by a key equal to each element's value. Elements are not ordered, and retrieval and insertion are fast (average constant time, O(1)). To use one:

```cpp
#include <unordered_set>
#include <iostream>

std::unordered_set<int> my_set;

my_set.insert(12);

std::unordered_set<int>::iterator it = my_set.find(13);

if (it == my_set.end())
{
    // Not found.
}
else
{
    std::cout << "Found: " << *it << std::endl;
    // Erase by iterator:
    my_set.erase(it);
    // my_set.erase(13); // or erase by key; returns the number of elements erased (0 or 1).
}
```

### Passing Yearbooks

We are given an array that represents the path a yearbook will take to get signed. The index of each element represents a student. We optimize by keeping track of which students are part of a circuit of signatures; those students' paths then do not need to be recalculated.

### First Missing Positive

This is a problem that requires us to use constant extra memory, forcing us to encode more data onto the input array itself without losing the element information. First we need to pin down some key assumptions.

The minimum missing positive element is going to be somewhere between 1 and n+1. Here is why. If we have the perfectly ordered array with n = 5, [1 2 3 4 5], the next positive element is 6 (n+1). The smallest possible missing positive element is 1. This means our answer range spans the same number of values as we have elements in the input. So if there were a way to overlay onto the array a boolean state of whether or not we have seen each number, we could solve this problem with no further memory required.

We do this in three steps:

First pass: Search for a 1 (if there is none, the answer is simply 1). Then, if we find any numbers outside the possible solution range (negative, zero, or greater than n), we reset them to 1.

Second pass: Map each value v to index v-1 and flip the element at that index negative, marking v as present in the array. Hard problems would not be hard if they were easy to explain.

Third pass: Look for the first index whose value is still positive; that index + 1 is the missing number. If none is found, handle the edge case and return n+1.
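The three passes can be sketched as follows (this follows the plan above; it is one way to realize it, not the only one):

```cpp
#include <cstdlib>
#include <vector>

int firstMissingPositive(std::vector<int> nums)
{
    int n = (int)nums.size();

    // First pass: if 1 is absent we are done; otherwise recycle every
    // out-of-range value (<= 0 or > n) as a harmless 1.
    bool hasOne = false;
    for (int i = 0; i < n; ++i)
        if (nums[i] == 1) { hasOne = true; break; }
    if (!hasOne)
        return 1;
    for (int i = 0; i < n; ++i)
        if (nums[i] <= 0 || nums[i] > n)
            nums[i] = 1;

    // Second pass: for each value v, mark "v is present" by making
    // the element at index v-1 negative.
    for (int i = 0; i < n; ++i)
    {
        int v = std::abs(nums[i]);
        if (nums[v - 1] > 0)
            nums[v - 1] = -nums[v - 1];
    }

    // Third pass: the first index still positive is the missing value.
    for (int i = 0; i < n; ++i)
        if (nums[i] > 0)
            return i + 1;
    return n + 1;  // every value 1..n was present
}
```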

YOU GOT THIS!

### Permutations and Combinations

There's like… all these different kinds of permutations and combinations: with repetitions, with unique values, etc.

#### Unique permutations with unique result

1. Sort the input candidates.
2. Make a boolean flag array to track used elements.
3. Recurse over the candidates, choosing and building a solution, and then unchoosing.
4. If the next option is the same as the previous one (and the previous is unused), skip it.
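The four steps above can be sketched as backtracking; the function and variable names here are mine:

```cpp
#include <algorithm>
#include <vector>

// Steps 2-4: recurse with a used-flag array, skipping duplicate branches.
void backtrack(const std::vector<int> &nums, std::vector<bool> &used,
               std::vector<int> &current,
               std::vector<std::vector<int>> &results)
{
    if (current.size() == nums.size())
    {
        results.push_back(current);
        return;
    }
    for (size_t i = 0; i < nums.size(); ++i)
    {
        if (used[i])
            continue;
        // Skip a duplicate unless its identical left neighbor is already
        // chosen in this branch (step 4).
        if (i > 0 && nums[i] == nums[i - 1] && !used[i - 1])
            continue;
        used[i] = true;           // choose
        current.push_back(nums[i]);
        backtrack(nums, used, current, results);
        current.pop_back();       // unchoose
        used[i] = false;
    }
}

std::vector<std::vector<int>> permuteUnique(std::vector<int> nums)
{
    std::sort(nums.begin(), nums.end());  // step 1: sort candidates
    std::vector<bool> used(nums.size(), false);
    std::vector<int> current;
    std::vector<std::vector<int>> results;
    backtrack(nums, used, current, results);
    return results;
}
```

Sorting first puts duplicates next to each other, which is what makes the skip test in step 4 possible.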

### Power Function

Implement your own power function. The following is the naive algorithm, which does O(n) multiplications and takes too much time:

```cpp
class Solution {
public:
    // Naive O(n) loop: one multiplication per unit of the exponent.
    double myPow(double x, int n) {
        if (n == 0)
            return 1;

        // Widen to long long so that n = INT_MIN does not overflow on negation.
        long long N = n < 0 ? -(long long)n : n;

        double ans = 1;
        for (long long i = 1; i <= N; i++)
        {
            ans = x * ans;
        }

        if (n < 0)
            ans = 1 / ans;

        return ans;
    }
};
```

The divide-and-conquer version halves the exponent at every step, so it only needs O(log n) multiplications:

```cpp
class Solution {
public:
    double fastPow(double x, long long n)
    {
        if (n == 0)
            return 1.0;
        // Compute x^(n/2) once and reuse it.
        double half = fastPow(x, n / 2);
        if (n % 2 == 0)
        {
            return half * half;
        }
        else
        {
            return half * half * x;
        }
    }

    double myPow(double x, int n) {
        // Widen to long long first so negating INT_MIN is safe.
        long long N = n;
        if (N < 0)
        {
            x = 1 / x;
            N = -N;
        }

        return fastPow(x, N);
    }
};
```