algorithm_techniques: revised 2020/08/15 17:06 by paul; current revision 2020/10/09 19:39.
====== Algorithm Techniques ======

For explanations of problems, see [[Algorithm Problems]].

For a helpful reference, see [[Coding Cheat Sheet]].

An algorithm is a well-defined computational procedure that takes in a set of values as inputs and produces a set of values as outputs. An algorithm is thus a sequence of computational steps that transform input into output.

===== Heaps =====

Heaps are implemented in C++ as ''priority_queue''.

See [[https://

Heaps let us insert data while keeping track of the minimum or maximum value. They do not keep the individual elements in sorted order.
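
A minimal sketch of this (the helper function names are mine, not from the page): ''std::priority_queue'' is a max-heap by default, and supplying ''std::greater'' as the comparator turns it into a min-heap.

```cpp
#include <functional>
#include <queue>
#include <vector>

// priority_queue is a max-heap by default: top() is the largest element.
int topOfMaxHeap(const std::vector<int>& v) {
    std::priority_queue<int> pq(v.begin(), v.end());
    return pq.top();
}

// std::greater flips the comparison, turning it into a min-heap.
int topOfMinHeap(const std::vector<int>& v) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> pq(v.begin(), v.end());
    return pq.top();
}
```

Note that only ''top()'' is ordered; iterating the underlying container would not yield sorted output.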
==== Heap Sort ====
When traversing recursively, ask yourself:

  - What is my base case that I am checking for?
  - What nodes will I call again recursively?
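
The two questions above can be sketched with a recursive inorder traversal (the ''Node'' type and names are mine, not from the page): the null check is the base case, and the left and right children are the recursive calls.

```cpp
#include <vector>

struct Node {
    int val;
    Node* left = nullptr;
    Node* right = nullptr;
};

void inorder(Node* node, std::vector<int>& out) {
    if (node == nullptr) return;  // base case: empty subtree
    inorder(node->left, out);     // recurse on the left child
    out.push_back(node->val);     // visit this node
    inorder(node->right, out);    // recurse on the right child
}
```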
==== Iterative Postorder Traversal ====

You can accomplish a postorder traversal iteratively with two stacks. Here is
the algorithm:

<code>
1. Put the root in stack 1.
2. While stack 1 isn't empty:
3.   Pop the top element of stack 1 and push it onto stack 2.
4.   Push the left and right children of that element onto stack 1.
5. Print the contents of stack 2, popping from the top.
</code>
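
The steps above can be sketched as follows (the ''TreeNode'' type and function name are mine). Stack 2 ends up holding nodes in reverse postorder, so popping it yields left, right, root:

```cpp
#include <stack>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left = nullptr;
    TreeNode* right = nullptr;
};

std::vector<int> postorder(TreeNode* root) {
    std::vector<int> out;
    if (!root) return out;
    std::stack<TreeNode*> s1, s2;
    s1.push(root);                         // 1. put root in stack 1
    while (!s1.empty()) {                  // 2. while stack 1 isn't empty
        TreeNode* n = s1.top();
        s1.pop();
        s2.push(n);                        // 3. move it to stack 2
        if (n->left)  s1.push(n->left);    // 4. children go into stack 1
        if (n->right) s1.push(n->right);
    }
    while (!s2.empty()) {                  // 5. emit stack 2 top-to-bottom
        out.push_back(s2.top()->val);
        s2.pop();
    }
    return out;
}
```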
===== Greedy Algorithms =====
$$o(g(n)) = \{f(n) : \text{for any positive constant } c > 0, \text{ there exists a constant } n_0 > 0 \text{ such that } 0 \leq f(n) < c \cdot g(n) \text{ for all } n \geq n_0\}.$$


==== Difference Between O(log(n)) and O(nlog(n)) ====

https://

Think of it as O(n*log(n)): searching for an element in one sorted list of
length n is O(log(n)), and searching for the element in n different sorted
lists, each of length n, is O(n*log(n)).

Remember that O(n) is defined relative to some real quantity n. This might be
the size of a list, or the number of different elements in a collection.
Therefore, every variable that appears inside O(...) represents something
interacting to increase the runtime. O(n*m) could be written O(n_1 + n_2 + ... +
n_m), where each n_i = n, and represent the same thing: "doing n work, m times".

Let's take a concrete example of this: mergesort. For n input elements: on the
very last iteration of our sort, we have two halves of the input, each of size
n/2, and each half is sorted. All we have to do is merge them together, which
takes n operations. On the next-to-last iteration, we have twice as many pieces
(4), each of size n/4. For each of our two pairs of size-n/4 halves, we merge
the pair together, which takes n/2 operations per pair (one for each element in
the pair, just like before), i.e. n operations for the two pairs.

From here, we can extrapolate that every level of our mergesort takes n
operations to merge. The big-O complexity is therefore n times the number of
levels. On the last level, the size of the chunks we're merging is n/2. Before
that, it's n/4, before that n/8, and so on, all the way down to size 1. How many
times must you divide n by 2 to get 1? log(n). So we have log(n) levels.
Therefore, our total runtime is O(n * log(n)): n work per level, times log(n)
levels.
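
A minimal mergesort sketch (the function name is mine) makes the structure explicit: each recursion level does O(n) merge work, and halving down to size-1 pieces gives log(n) levels.

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

std::vector<int> mergeSort(const std::vector<int>& v) {
    if (v.size() <= 1) return v;           // size-1 chunks are already sorted
    std::size_t mid = v.size() / 2;
    // Recursively sort each half (log(n) levels of halving).
    std::vector<int> left  = mergeSort({v.begin(), v.begin() + mid});
    std::vector<int> right = mergeSort({v.begin() + mid, v.end()});
    // Merge the two sorted halves: O(n) work at this level.
    std::vector<int> out;
    std::merge(left.begin(), left.end(), right.begin(), right.end(),
               std::back_inserter(out));
    return out;
}
```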
===== Backtracking =====
===== Graphs =====

Graphs are collections of vertices connected by edges.

==== Types of Graphs ====

=== Undirected Graphs and Directed Graphs ===

Undirected graphs don't have directions on their edges: you can go from A to B or from B to A. No problems! Directed graphs only let you follow each edge in one direction.

=== Directed Acyclic Graphs (DAGs) ===

These graphs don't have any cycles, so you can explore the whole connected graph without entering a loop.

==== Adjacency Lists ====

Graphs can be represented by adjacency matrices and adjacency lists. Adjacency
matrices are rarely used because they take up ''O(V^2)'' space. They are 2D
boolean arrays that specify whether a vertex I is connected to a vertex J.

Representing a graph with adjacency lists combines adjacency matrices with edge lists. For each vertex i, store an array of the vertices adjacent to it. We typically have an array of |V| adjacency lists, one adjacency list per vertex. Here's an adjacency-list representation of the social network graph:
In code, we represent these adjacency lists with a vector of vectors of ints:

<code>
[ [1, 6, 8],
  [0, 4, 6, 9],
  [4, 6],
  [4, 5, 8],
  [1, 2, 3, 5, 9],
  [3, 4],
  [0, 1, 2],
  [8, 9],
  [0, 3, 7],
  [1, 4, 7] ]
</code>

[[https://

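A sketch of building such a list from an edge list for an undirected graph (the function name is mine, not from the page):

```cpp
#include <utility>
#include <vector>

std::vector<std::vector<int>> buildAdjList(
        int numVertices, const std::vector<std::pair<int, int>>& edges) {
    std::vector<std::vector<int>> adj(numVertices);
    for (const auto& e : edges) {
        adj[e.first].push_back(e.second);  // undirected: record the edge
        adj[e.second].push_back(e.first);  // in both directions
    }
    return adj;
}
```

For a directed graph you would drop the second ''push_back''.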
==== Graph Edges ====

A **Back Edge** is an edge that connects a vertex to a vertex that was discovered before its parent.

Think about it like this: when you apply a DFS on a graph, you fix some path that the algorithm chooses. Now in the given case: 0->

Are 2-1, 3-2, and 4-3 back edges too? For a different path they can be. Suppose your DFS chooses 0->

==== Graph Traversal ====

You can traverse a graph in two ways: Depth First Traversal and Breadth First Traversal.
=== Depth First Traversal ===

As the name implies, Depth First Traversal (or Depth First Search) involves
going all the way down to a vertex that has no unexplored vertices connected to
it, and then backtracking to a previous node.

The best analogy for this is leaving breadcrumbs while exploring a maze: if you
encounter your breadcrumbs, you know you've already explored that area. Thanks
to Erik Demaine for this analogy.
DFT is accomplished with a **Stack**.

The basic algorithm:

<code>
1. Push the first vertex onto the stack.
2. Mark the first vertex as visited.
3. Repeat while the stack is not empty:
4.   Visit the next unvisited vertex adjacent to the one on top of the stack.
5.   Push this vertex onto the stack.
6.   Mark this vertex as visited.
7.   If there isn't such an adjacent vertex,
8.     pop the vertex off the top of the stack.
</code>
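
The outline above can be sketched in C++ over an adjacency list. This is a common variant that marks a vertex when it is popped rather than when pushed (the function name is mine); it visits the same vertices in depth-first order.

```cpp
#include <stack>
#include <vector>

std::vector<int> dfs(const std::vector<std::vector<int>>& adj, int start) {
    std::vector<bool> visited(adj.size(), false);
    std::vector<int> order;  // vertices in the order they are visited
    std::stack<int> s;
    s.push(start);
    while (!s.empty()) {
        int v = s.top();
        s.pop();
        if (visited[v]) continue;  // may have been pushed twice; skip
        visited[v] = true;
        order.push_back(v);
        // Push neighbors in reverse so lower-numbered ones are visited first.
        for (auto it = adj[v].rbegin(); it != adj[v].rend(); ++it)
            if (!visited[*it]) s.push(*it);
    }
    return order;
}
```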

For **Directed Graphs**, or graphs with disconnected components, we need an extra step to make sure we visit all vertices in the graph: loop over every vertex and start a traversal from each one that hasn't been visited yet.

V is the set of vertices in our graph. Adj is the adjacency list.

=== Breadth First Traversal ===

BFT is accomplished with a **Queue**.

<code>
1. Push the first vertex into the queue.
2. Mark the first vertex as seen.
3. While the queue is not empty:
4.   Pop the front of the queue.
5.   Visit all neighboring vertices.
6.   If they have not been seen,
7.     mark them as seen and push them onto the queue.
</code>
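
The queue-based outline can be sketched like this (function name mine); marking a vertex as seen when it is enqueued ensures nothing enters the queue twice:

```cpp
#include <queue>
#include <vector>

std::vector<int> bfs(const std::vector<std::vector<int>>& adj, int start) {
    std::vector<bool> seen(adj.size(), false);
    std::vector<int> order;  // vertices in the order they are visited
    std::queue<int> q;
    q.push(start);
    seen[start] = true;
    while (!q.empty()) {
        int v = q.front();
        q.pop();
        order.push_back(v);
        for (int w : adj[v]) {
            if (!seen[w]) {    // mark on enqueue, not on dequeue
                seen[w] = true;
                q.push(w);
            }
        }
    }
    return order;
}
```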

===== Permutations and Combinations =====

There's like... all these different kinds of permutations and combinations: with repetitions, without repetitions, with unique results, and so on. The technique differs slightly for each.

==== Unique Permutations with a Unique Result ====

[[https://

  - Sort the input candidates.
  - Make a boolean flag array to track used elements.
  - Recurse over the candidates, choosing and building a solution, and then unchoosing.
  - If the next option is the same as the previous one and the previous one isn't in use, skip it.
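
The steps above can be sketched as follows (function names are mine): sorting groups duplicates together, and the skip condition prunes branches that would rebuild the same permutation.

```cpp
#include <algorithm>
#include <vector>

void permuteHelper(const std::vector<int>& nums, std::vector<bool>& used,
                   std::vector<int>& current,
                   std::vector<std::vector<int>>& results) {
    if (current.size() == nums.size()) {  // base case: one full permutation
        results.push_back(current);
        return;
    }
    for (std::size_t i = 0; i < nums.size(); ++i) {
        if (used[i]) continue;
        // Skip a duplicate whose twin at this depth is not in use.
        if (i > 0 && nums[i] == nums[i - 1] && !used[i - 1]) continue;
        used[i] = true;                   // choose
        current.push_back(nums[i]);
        permuteHelper(nums, used, current, results);
        current.pop_back();               // unchoose
        used[i] = false;
    }
}

std::vector<std::vector<int>> permuteUnique(std::vector<int> nums) {
    std::sort(nums.begin(), nums.end());  // step 1: sort the candidates
    std::vector<bool> used(nums.size(), false);
    std::vector<int> current;
    std::vector<std::vector<int>> results;
    permuteHelper(nums, used, current, results);
    return results;
}
```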

===== Loop Detection / Floyd's Algorithm =====
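
This section is a stub on the page; as a minimal sketch of Floyd's tortoise-and-hare cycle detection on a linked list (the ''ListNode'' type and function name are mine, not from the page): a slow pointer advances one step per iteration and a fast pointer two, so they can only meet if a cycle exists.

```cpp
struct ListNode {
    int val;
    ListNode* next = nullptr;
};

bool hasCycle(ListNode* head) {
    ListNode* slow = head;  // tortoise: one step at a time
    ListNode* fast = head;  // hare: two steps at a time
    while (fast && fast->next) {
        slow = slow->next;
        fast = fast->next->next;
        if (slow == fast) return true;  // pointers meet inside a cycle
    }
    return false;  // fast ran off the end: no cycle
}
```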