====== Algorithm Techniques ======

For explanations of problems see [[Algorithm Problems]].

For helpful reference, see [[Coding Cheat Sheet]].

An algorithm is a well-defined computational procedure that takes in a set of values as inputs and produces a set of values as outputs. An algorithm is thus a sequence of computational steps that transform input into output.
  
{{ ::merge_sort.png?direct&400 |}}

===== Heaps =====

Heaps are implemented in C++ as ''priority_queue''.

See [[https://stackoverflow.com/questions/39514469/argument-for-o1-average-case-complexity-of-heap-insertion|this discussion]].

Heaps allow us to put data into them and keep track of the minimum or maximum
value. They do not keep each individual element in order.
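
As a quick illustration, here is a minimal sketch of using C++'s ''std::priority_queue'' as a max-heap and as a min-heap (the sample values are made up):

<code>
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

int main() {
    // Max-heap: the largest element is always on top.
    std::priority_queue<int> max_heap;
    for (int x : {3, 1, 4, 1, 5}) max_heap.push(x);
    std::cout << max_heap.top() << "\n";  // 5

    // Min-heap: pass std::greater<int> as the comparator.
    std::priority_queue<int, std::vector<int>, std::greater<int>> min_heap;
    for (int x : {3, 1, 4, 1, 5}) min_heap.push(x);
    std::cout << min_heap.top() << "\n";  // 1

    // push/pop are O(log n); top is O(1).
    return 0;
}
</code>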
  
==== Heap Sort ====
When traversing recursively, always think of the following (see the sketch after the figures below):

  - What is my base case that I am checking for?
  - What nodes will I call again recursively?

{{:pasted:20200816-072505.png?500}}

{{:pasted:20200816-072543.png?500}}
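
Tying those two questions to code, here is a minimal sketch of a recursive inorder traversal; the ''TreeNode'' struct is an assumed definition for illustration, not something defined on this page:

<code>
#include <iostream>

// Assumed minimal node definition for the sketch.
struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
};

void inorder(TreeNode* node) {
    // 1. Base case: an empty subtree, nothing to do.
    if (node == nullptr) return;

    // 2. Recursive calls: the left child, then this node, then the right child.
    inorder(node->left);
    std::cout << node->val << " ";
    inorder(node->right);
}
</code>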

==== Iterative Postorder Traversal ====

You can accomplish a postorder traversal iteratively with two stacks. Here is
the algorithm:

<code>
1. Put the root in stack 1.
2. While stack 1 isn't empty
3.   pop the top element of stack 1 and push it onto stack 2
4.   push the left and right children of that element onto stack 1
5. Print the contents of stack 2 from top to bottom
</code>
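
A sketch of the two-stack algorithm in C++, reusing the same assumed ''TreeNode'' struct as in the recursion sketch above:

<code>
#include <iostream>
#include <stack>

struct TreeNode { int val; TreeNode* left; TreeNode* right; };  // same assumed node type as above

void postorderIterative(TreeNode* root) {
    if (root == nullptr) return;

    std::stack<TreeNode*> s1, s2;
    s1.push(root);

    while (!s1.empty()) {
        TreeNode* node = s1.top();
        s1.pop();
        s2.push(node);

        // Children go into stack 1, so they end up above their parent in stack 2.
        if (node->left)  s1.push(node->left);
        if (node->right) s1.push(node->right);
    }

    // Popping stack 2 yields left subtree, right subtree, root -- postorder.
    while (!s2.empty()) {
        std::cout << s2.top()->val << " ";
        s2.pop();
    }
}
</code>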
  
===== Greedy Algorithms =====
$$o(g(n)) = \{f(n)\ :\ for\ any\ positive\ constant\ c>0,\ there\ exists\ a\ constant\\
n_0>0\ such\ that\ 0\leq f(n)\ < c*g(n)\ for\ all\ n \geq n_0\}.$$


==== Difference Between O(log(n)) and O(nlog(n)) ====

https://stackoverflow.com/questions/55876342/difference-between-ologn-and-onlogn/55876397

Think of it as O(n*log(n)), i.e. "doing log(n) work n times". For example,
searching for an element in a sorted list of length n is O(log(n)). Searching
for the element in n different sorted lists, each of length n, is O(n*log(n)).
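
A small sketch of that example, assuming the n sorted lists are stored as a vector of sorted ''vector<int>''s (the function name is made up for illustration):

<code>
#include <algorithm>
#include <vector>

// Counts how many of the sorted lists contain `target`.
// One binary search is O(log(n)); doing it for all n lists is O(n*log(n)).
int countListsContaining(const std::vector<std::vector<int>>& lists, int target) {
    int count = 0;
    for (const auto& list : lists) {                                  // n iterations
        if (std::binary_search(list.begin(), list.end(), target)) {  // O(log(n)) each
            ++count;
        }
    }
    return count;
}
</code>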

Remember that O(n) is defined relative to some real quantity n. This might be
the size of a list, or the number of different elements in a collection.
Therefore, every variable that appears inside O(...) represents something
interacting to increase the runtime. O(n*m) could be written O(n_1 + n_2 + ... +
n_m) and represents the same thing: "doing n, m times".

Let's take a concrete example of this, mergesort. For n input elements: on the
very last iteration of our sort, we have two halves of the input, each of size
n/2, and each half is sorted. All we have to do is merge them together, which
takes n operations. On the next-to-last iteration, we have twice as many pieces
(4), each of size n/4. For each of our two pairs of size n/4, we merge the pair
together, which takes n/2 operations for a pair (one for each element in the
pair, just like before), i.e. n operations for the two pairs.

From here, we can extrapolate that every level of our mergesort takes n
operations to merge. The big-O complexity is therefore n times the number of
levels. On the last level, the size of the chunks we're merging is n/2. Before
that it's n/4, before that n/8, etc. all the way down to size 1. How many times must
you divide n by 2 to get 1? log(n). So we have log(n) levels. Therefore, our
total runtime is O(n (work per level) * log(n) (number of levels)) = O(n*log(n)).
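
A tiny sketch of the "how many times can you halve n" observation, with an illustrative n:

<code>
#include <iostream>

int main() {
    int n = 1024;
    int levels = 0;
    // Halve the chunk size until we reach size 1 -- this loop runs log2(n) times.
    for (int size = n; size > 1; size /= 2) {
        ++levels;
    }
    std::cout << levels << "\n";  // 10, since 2^10 = 1024
    return 0;
}
</code>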
  
===== Backtracking =====
===== Graphs =====

Graphs are a great way of storing how pieces of data depend on each other.

==== Types of Graphs ====

=== Undirected Graphs and Directed Graphs ===

Undirected Graphs don't have a direction on their edges. You can go from A to B or B to A. No problems!

{{:pasted:20200820-171741.png}}

=== Directed Acyclic Graphs (DAGs) ===

{{:pasted:20200820-171816.png?500}}

These graphs don't have any cycles, so you can explore the whole connected graph without entering a loop.

==== Adjacency Lists ====

Graphs can be represented by adjacency matrices and adjacency lists. Adjacency
matrices are rarely used because they take up ''V²'' space. An adjacency matrix
is a 2D boolean array that specifies whether vertex i is connected to vertex j.

{{:pasted:20200820-165811.png}}

Representing a graph with adjacency lists combines adjacency matrices with edge
lists. For each vertex i, store an array of the vertices adjacent to it. We
typically have an array of |V| adjacency lists, one adjacency list per vertex.
Here's an adjacency-list representation of the social network graph:

{{:pasted:20200820-165933.png}}

In code, we represent these adjacency lists with a vector of vector of ints:
<code>
[ [1, 6, 8],
  [0, 4, 6, 9],
  [4, 6],
  [4, 5, 8],
  [1, 2, 3, 5, 9],
  [3, 4],
  [0, 1, 2],
  [8, 9],
  [0, 3, 7],
  [1, 4, 7] ]
</code>

[[https://www.khanacademy.org/computing/computer-science/algorithms/graph-representation/a/representing-graphs|Resource for more info.]]
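
A small sketch of how such an adjacency list might be built in C++ for an undirected graph; the vertex count and edges below are only illustrative:

<code>
#include <iostream>
#include <vector>

int main() {
    int num_vertices = 10;  // illustrative size
    std::vector<std::vector<int>> adj(num_vertices);

    // Adding an undirected edge means recording it in both directions.
    auto add_edge = [&](int u, int v) {
        adj[u].push_back(v);
        adj[v].push_back(u);
    };

    add_edge(0, 1);
    add_edge(0, 6);
    add_edge(0, 8);
    add_edge(1, 4);

    // Print each vertex's neighbors.
    for (int u = 0; u < num_vertices; ++u) {
        std::cout << u << ":";
        for (int v : adj[u]) std::cout << " " << v;
        std::cout << "\n";
    }
    return 0;
}
</code>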

==== Graph Edges ====

A **Back Edge** is an edge that connects a vertex to a vertex that was discovered before its parent.

Think about it like this: when you apply a DFS to a graph, you fix some path that the algorithm chooses. In the given case: 0->1->2->3->4. As mentioned in the source article, the graph contains the edges 4-2 and 3-1. When the DFS reaches 3 it could choose 1, but 1 is already in your path, so 3-1 is a back edge and therefore, as mentioned in the source, a possible alternative path.

Are 2-1, 3-2, and 4-3 back edges too? For a different path they can be. Suppose your DFS chooses 0->1->3->2->4; then 2-1 and 4-3 are back edges.

{{:pasted:20200820-025218.png?500}}
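
A sketch of spotting back edges with a recursive DFS on a **directed** graph, assuming the ''vector<vector<int>>'' adjacency-list representation from the previous section (for an undirected graph you would additionally ignore the edge back to the immediate parent):

<code>
#include <iostream>
#include <vector>

// Recursive DFS that reports edges leading back to a vertex still on the
// current DFS path (i.e. back edges). Call with visited/on_path initialized
// to all false, once for every vertex that hasn't been visited yet.
void dfs(int u, const std::vector<std::vector<int>>& adj,
         std::vector<bool>& visited, std::vector<bool>& on_path) {
    visited[u] = true;
    on_path[u] = true;

    for (int v : adj[u]) {
        if (!visited[v]) {
            dfs(v, adj, visited, on_path);
        } else if (on_path[v]) {
            std::cout << "back edge: " << u << " -> " << v << "\n";
        }
    }

    on_path[u] = false;  // u is no longer on the current path
}
</code>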

==== Graph Traversal ====

You can traverse a graph in two ways: Depth First Traversal and Breadth First Traversal.

=== Depth First Traversal ===

As the name implies, Depth First Traversal (also called Depth First Search)
involves going all the way down to a vertex that has no other possible unexplored
vertices connected to it, and then backtracking to a previous node.

The best analogy for this is leaving breadcrumbs while exploring a maze. If you
encounter your breadcrumbs, turn back to the closest previous turn and head into an
unexplored area. Thanks to Erik Demaine for this analogy.

{{:pasted:20200815-170533.png?400}}

DFT is accomplished with a **Stack**.

The basic algorithm:

<code>
1. Push the first vertex onto the stack.
2. Mark the first vertex as visited.
3. Repeat while the stack is not empty
4.    Visit the next unvisited vertex adjacent to the one on top of the stack
5.    Push this vertex onto the stack
6.    Mark this vertex as visited
7.    If there isn't such a vertex to visit
8.      pop the vertex off the top of the stack
</code>
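
A minimal sketch of that stack-based DFS in C++, again assuming the ''vector<vector<int>>'' adjacency list described earlier:

<code>
#include <iostream>
#include <stack>
#include <vector>

// Iterative DFS following the steps above: look at the vertex on top of the
// stack, move to one unvisited neighbour, and pop when there is none left.
void dfs_from(int start, const std::vector<std::vector<int>>& adj,
              std::vector<bool>& visited) {
    std::stack<int> st;
    st.push(start);
    visited[start] = true;
    std::cout << start << " ";

    while (!st.empty()) {
        int u = st.top();
        int next = -1;
        for (int v : adj[u]) {
            if (!visited[v]) { next = v; break; }
        }
        if (next != -1) {
            visited[next] = true;      // mark before pushing
            std::cout << next << " ";  // "visit" the vertex
            st.push(next);
        } else {
            st.pop();                  // nothing left to explore from u
        }
    }
}
</code>

For the directed or disconnected case mentioned below, you would loop over every vertex and call ''dfs_from'' on any vertex that hasn't been visited yet.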

For **Directed Graphs**, or graphs with disconnected components, we need an extra step to make sure we visit all vertices in the graph.

{{:pasted:20200820-014828.png?400}}{{:pasted:20200820-014702.png?400}}

V is the set of vertices in our graph. Adj is the adjacency list.

{{:pasted:20200820-021645.png?500}}

=== Breadth First Traversal ===

{{:pasted:20200815-170635.png?400}}

BFT is accomplished with a **Queue**.

<code>
1. Push the first vertex into the queue
2. Mark the first vertex as seen
3. While the queue is not empty
4.    Pop the front of the queue
5.    Visit all neighboring vertices
6.    If they have not been seen
7.      mark them as seen and push them onto the queue
</code>
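
A minimal sketch of that queue-based BFS in C++, assuming the same adjacency-list representation:

<code>
#include <iostream>
#include <queue>
#include <vector>

// Breadth First Traversal from a start vertex using a queue.
void bfs_from(int start, const std::vector<std::vector<int>>& adj) {
    std::vector<bool> seen(adj.size(), false);
    std::queue<int> q;

    q.push(start);
    seen[start] = true;

    while (!q.empty()) {
        int u = q.front();
        q.pop();
        std::cout << u << " ";  // visit the vertex

        for (int v : adj[u]) {
            if (!seen[v]) {
                seen[v] = true;  // mark when enqueued so it isn't added twice
                q.push(v);
            }
        }
    }
}
</code>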

===== Permutations and Combinations =====

There's like... all these different kinds of permutations and combinations: with repetitions, with unique values, etc.

==== Unique permutations with unique result ====

[[https://www.youtube.com/watch?v=9lQnt4p7_nE|Youtube video on this]]

  - Sort the input candidates.
  - Make a boolean flag array to track used elements.
  - Recurse over the candidates, choosing and building a solution, and then unchoosing.
  - If the next option is the same as the previous one, skip it (see the sketch below).
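
A sketch of those steps in C++ in the style of the classic unique-permutations backtracking solution; the function and variable names are mine, not from the video:

<code>
#include <algorithm>
#include <iostream>
#include <vector>

void backtrack(std::vector<int>& candidates, std::vector<bool>& used,
               std::vector<int>& current, std::vector<std::vector<int>>& results) {
    // Base case: one full permutation built.
    if (current.size() == candidates.size()) {
        results.push_back(current);
        return;
    }
    for (size_t i = 0; i < candidates.size(); ++i) {
        if (used[i]) continue;
        // Skip a duplicate candidate whose identical predecessor isn't used
        // at this level -- this is the "skip the same option" step.
        if (i > 0 && candidates[i] == candidates[i - 1] && !used[i - 1]) continue;

        used[i] = true;                  // choose
        current.push_back(candidates[i]);
        backtrack(candidates, used, current, results);
        current.pop_back();              // unchoose
        used[i] = false;
    }
}

int main() {
    std::vector<int> candidates = {1, 1, 2};
    std::sort(candidates.begin(), candidates.end());   // step 1: sort
    std::vector<bool> used(candidates.size(), false);  // step 2: flag array
    std::vector<int> current;
    std::vector<std::vector<int>> results;
    backtrack(candidates, used, current, results);

    for (const auto& p : results) {
        for (int x : p) std::cout << x << " ";
        std::cout << "\n";  // prints 1 1 2, 1 2 1, 2 1 1
    }
    return 0;
}
</code>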
  
===== Loop Detection / Floyd Algo =====
  
  