coding_cheat_sheet [2020/05/09 21:13] paul
coding_cheat_sheet [2020/06/01 19:04] (current) paul
</code>

==== Time Complexity ====

=== Difference Between O(log(n)) and O(nlog(n)) ===

https://stackoverflow.com/questions/55876342/difference-between-ologn-and-onlogn/55876397

Think of it as O(n*log(n)), i.e. "doing log(n) work n times". For example, searching for an element in a sorted list of length n is O(log(n)). Searching for the element in n different sorted lists, each of length n, is O(n*log(n)).
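As a minimal sketch of that "log(n) work, n times" idea (the function name and data are made up for illustration), searching n sorted lists with a binary search each:

<code c++>
#include <algorithm>
#include <vector>

// One std::binary_search over a sorted list of length n is O(log(n));
// running it once per list, over n lists, is O(n*log(n)).
int count_lists_containing(const std::vector<std::vector<int>>& lists, int target)
{
    int count = 0;
    for (const auto& list : lists)                                // n lists...
        if (std::binary_search(list.begin(), list.end(), target)) // ...O(log(n)) each
            ++count;
    return count;
}
</code>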

Remember that O(n) is defined relative to some real quantity n. This might be the size of a list, or the number of different elements in a collection. Every variable that appears inside O(...) therefore represents a quantity that interacts to increase the runtime. For example, O(n*m) could be expanded as O(n_1 + n_2 + ... + n_m), where each n_i = n, and represent the same thing: "doing n work, m times".

Let's take a concrete example of this: mergesort. For n input elements: on the very last iteration of our sort, we have two halves of the input, each of size n/2, and each half is sorted. All we have to do is merge them together, which takes n operations. On the next-to-last iteration, we have twice as many pieces (4), each of size n/4. For each of our two pairs of size n/4, we merge the pair together, which takes n/2 operations per pair (one for each element in the pair, just like before), i.e. n operations for the two pairs.

From here, we can extrapolate that every level of our mergesort takes n operations to merge. The big-O complexity is therefore n times the number of levels. On the last level, the size of the chunks we're merging is n/2. Before that it's n/4, before that n/8, and so on down to size 1. How many times must you divide n by 2 to get 1? log(n). So we have log(n) levels. Therefore, our total runtime is O(n * log(n)): n work per level, times log(n) levels.

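The level-by-level argument above can be sketched in code. This is a minimal mergesort (not from the linked answer), using std::inplace_merge for the linear merge step:

<code c++>
#include <algorithm>
#include <vector>

// Sort v[lo, hi). Halving gives log(n) levels of recursion, and the merge
// step does O(n) total work per level, hence O(n*log(n)) overall.
void merge_sort(std::vector<int>& v, std::size_t lo, std::size_t hi)
{
    if (hi - lo < 2) return;              // 0 or 1 elements: already sorted
    std::size_t mid = lo + (hi - lo) / 2;
    merge_sort(v, lo, mid);               // sort the left half
    merge_sort(v, mid, hi);               // sort the right half
    // Merge the two sorted halves: linear in the size of this chunk.
    std::inplace_merge(v.begin() + lo, v.begin() + mid, v.begin() + hi);
}
</code>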

  * Doubly Linked List
  * Linked list
  * Hash table
  * Binary Search Tree
  * Set
  * List
  * LRU Cache


==== Hashtable ====

A hashtable allows for the rapid lookup of data based on a key. It works as follows:

  - a hash function converts the key to an index.
  - the value is stored at or retrieved from that index.
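Those two steps can be sketched as a tiny chained hashtable (illustrative only; in real code you would reach for std::unordered_map):

<code c++>
#include <functional>
#include <list>
#include <string>
#include <utility>
#include <vector>

// A minimal chained hashtable mapping std::string -> int.
struct HashTable
{
    std::vector<std::list<std::pair<std::string, int>>> buckets;

    HashTable() : buckets(16) {}

    // Step 1: a hash function converts the key to a bucket index.
    std::size_t index_for(const std::string& key) const
    {
        return std::hash<std::string>{}(key) % buckets.size();
    }

    // Step 2: the value is stored at that index...
    void put(const std::string& key, int value)
    {
        auto& bucket = buckets[index_for(key)];
        for (auto& kv : bucket)
            if (kv.first == key) { kv.second = value; return; }
        bucket.push_back({key, value});
    }

    // ...or retrieved from it. Returns nullptr when the key is absent.
    const int* get(const std::string& key) const
    {
        const auto& bucket = buckets[index_for(key)];
        for (const auto& kv : bucket)
            if (kv.first == key) return &kv.second;
        return nullptr;
    }
};
</code>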

===== Scratch Notes =====

Learned:

  - unordered_set: a set of unique elements, where the value of each element is also its key. The elements are not stored in any particular order, and insertion and retrieval of an element take constant time on average (O(1)). To make one you use the following syntax:

<code c++>
#include <iostream>
#include <unordered_set>

int main()
{
    std::unordered_set<int> my_set;

    my_set.insert(12);

    std::unordered_set<int>::iterator it = my_set.find(13);

    if (it == my_set.end())
    {
        std::cout << "13 not found!" << std::endl;
    }
    else
    {
        std::cout << "Found: " << *it << std::endl;
        // Erase it:
        my_set.erase(it);
        // my_set.erase(13); // you can erase by iterator or by key; erasing by key returns the number of elements removed (0 or 1).
    }
}
</code>


==== Passing Yearbooks ====

We are given an array that represents the path a yearbook will take to get signed. The index of each element represents a student. We optimize by keeping track of which students are part of a circuit (cycle) of signatures; those students' paths then do not need to be recalculated.
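A sketch of that cycle optimization, assuming the usual form of the problem (a 1-indexed array arr where arr[i] is the student that student i passes the book to; the function name is made up):

<code c++>
#include <vector>

// Each yearbook travels around the permutation cycle containing its owner,
// so every student in a cycle ends up with the same signature count.
std::vector<int> find_signature_counts(const std::vector<int>& arr)
{
    std::vector<int> result(arr.size(), 0);
    for (std::size_t i = 0; i < arr.size(); ++i)
    {
        if (result[i] != 0) continue;   // student i's cycle was already walked

        std::vector<std::size_t> cycle; // collect every student on this cycle
        std::size_t j = i;
        do
        {
            cycle.push_back(j);
            j = static_cast<std::size_t>(arr[j]) - 1; // arr is 1-indexed
        } while (j != i);

        for (std::size_t k : cycle)     // the whole cycle shares one answer
            result[k] = static_cast<int>(cycle.size());
    }
    return result;
}
</code>

Walking each cycle exactly once makes the whole pass O(n), instead of re-tracing the path from every student.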