For general techniques, see Algorithm Techniques.
For helpful reference, see Coding Cheat Sheet.
An algorithm interview is like racing through a maze with hidden clues and signs telling you where to go, all while trying to reach the finish line in under 30 minutes. If you miss a sign, you're fucked. If you go too slow, you're fucked. So go steady, watch the signs, and don't blow past them!
Rules:
Given a non-empty list of words, return the k most frequent elements. Your answer should be sorted by frequency from highest to lowest. If two words have the same frequency, then the word with the lower alphabetical order comes first.
This problem is broken in two parts:
For storing the words and their frequency there are two ways to do it:
For ordering you can do the following:
Implementation of the logic is straightforward. The tricky part is setting up the custom ordering in C++. Use the following to order a priority queue by frequency, with lexicographic order as a fallback.
```cpp
typedef pair<string, int> p_d;

class Compare {
public:
    bool operator()(const p_d& a, const p_d& b) {
        if (a.second == b.second) {
            return a.first > b.first;
        } else {
            return a.second < b.second;
        }
    }
};

priority_queue<p_d, vector<p_d>, Compare> heap;
```
Creating a compare class like this is the way to build a custom-ordered container without falling into C++ template errors.
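Putting the two parts together, here is a sketch of a full solution using the comparator above: a hash map for counting, then the max-heap for ordering.

```cpp
#include <queue>
#include <string>
#include <unordered_map>
#include <vector>
using namespace std;

typedef pair<string, int> p_d;

// Orders by frequency descending, then alphabetically ascending on ties.
class Compare {
public:
    bool operator()(const p_d& a, const p_d& b) {
        if (a.second == b.second) { return a.first > b.first; }
        return a.second < b.second;
    }
};

vector<string> topKFrequent(vector<string>& words, int k) {
    // Part 1: store the words and their frequency.
    unordered_map<string, int> freq;
    for (auto& w : words) { freq[w]++; }
    // Part 2: order with the custom-compare priority queue.
    priority_queue<p_d, vector<p_d>, Compare> heap;
    for (auto& f : freq) { heap.push(f); }
    vector<string> out;
    while (k-- > 0 && !heap.empty()) {
        out.push_back(heap.top().first);
        heap.pop();
    }
    return out;
}
```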
Given an integer matrix, find the length of the longest increasing path.
Solved this with a DFS approach. I spent a bit of time on a seen set, and didn't even need it in the end! The numbers are strictly increasing along a path, so you don't need to check for revisits.
```
getNeighbors(pos, the matrix) -> array of positions
    return neighbors that are in bounds and have a bigger value

explore(pos, seen set of positions) -> longest path from pos
    have we seen it? yes: return 0
    add this position to seen
    getNeighbors of this position (in-bounds and increasing positions)
    int path = 0
    iterate over neighbors:
        path = max(1 + explore(neighbor), path)
    remove this pos from seen
    return path

keep a running max
start from every element in the array:
    explore it, get the longest path, compare to the running max
return the max
```
```cpp
class Solution {
public:
    vector<vector<int>> memo;
    int rows;
    int cols;

    bool inBounds(vector<int> &p) {
        if (p[0] < 0 || p[0] >= rows || p[1] < 0 || p[1] >= cols) {
            return false;
        }
        return true;
    }

    vector<vector<int>> getNeighbors(vector<vector<int>>& matrix, vector<int> &p) {
        int val = matrix[p[0]][p[1]];
        vector<int> l = p, r = p, t = p, b = p;
        l[1]--; r[1]++; t[0]--; b[0]++;
        vector<vector<int>> out;
        if (inBounds(l) && matrix[l[0]][l[1]] > val) { out.push_back(l); }
        if (inBounds(r) && matrix[r[0]][r[1]] > val) { out.push_back(r); }
        if (inBounds(t) && matrix[t[0]][t[1]] > val) { out.push_back(t); }
        if (inBounds(b) && matrix[b[0]][b[1]] > val) { out.push_back(b); }
        return out;
    }

    int explore(vector<vector<int>>& matrix, vector<int> &p) {
        if (memo[p[0]][p[1]] > 0) {
            return memo[p[0]][p[1]];
        }
        int path = 1;
        vector<vector<int>> nays = getNeighbors(matrix, p);
        for (auto& n : nays) {
            path = max(path, 1 + explore(matrix, n));
        }
        memo[p[0]][p[1]] = path;
        return path;
    }

    int longestIncreasingPath(vector<vector<int>>& matrix) {
        if (matrix.size() == 0) { return 0; }
        rows = matrix.size();
        cols = matrix[0].size();
        memo = vector<vector<int>>(rows, vector<int>(cols, 0));
        int pathMax = 0;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                vector<int> p = {r, c};
                pathMax = max(pathMax, explore(matrix, p));
            }
        }
        return pathMax;
    }
};
```
Given a sorted (in ascending order) integer array nums of n elements and a target value, write a function to search target in nums. If target exists, then return its index, otherwise return -1.
This is BEDROCK code that I sometimes forget: binary search of sorted data. This is what gets you O(log N) search of data.
Algorithm:
```
Left pointer = 0 and right pointer = last element
While left is less than or equal to right:
    Find the middle
    Check if our target is in the middle; return it if we found it
    If our target is less than the middle element, explore [left, mid-1]
    If our target is greater than the middle, explore [mid+1, right]
Return -1 if not found
```
Code:
```cpp
int search(vector<int>& nums, int target) {
    int left = 0;
    int right = nums.size() - 1; // inclusive right bound
    int mid;
    while (left <= right) {
        mid = left + (right - left) / 2; // avoids overflow vs (left + right) / 2
        if (nums[mid] == target) {
            return mid;
        } else if (nums[mid] > target) {
            right = mid - 1;
        } else {
            left = mid + 1;
        }
    }
    return -1;
}
```
Given an array nums, write a function to move all 0's to the end of it while maintaining the relative order of the non-zero elements.
This is a classic problem of iterating over an array with two pointers, a standard one and a fast one.
Algorithm:
```
Create a pointer called last_good that starts at zero
Iterate i from 0 to end:
    if nums[i] is not 0, swap it with last_good, and increment last_good
```
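The algorithm above as C++, a minimal sketch of the two-pointer swap:

```cpp
#include <utility>
#include <vector>
using namespace std;

void moveZeroes(vector<int>& nums) {
    // last_good points at where the next non-zero element belongs
    int last_good = 0;
    for (int i = 0; i < (int)nums.size(); i++) {
        if (nums[i] != 0) {
            swap(nums[i], nums[last_good]);
            last_good++;
        }
    }
}
```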
Given an input string (s) and a pattern (p), implement regular expression matching with support for '.' and '*'.
This problem can be solved recursively without too much code, once you break it into these 3 cases! We call isMatch(string s, string p) recursively, giving a new input and a new pattern string, and we operate on the first character. This calls for using the string::substr() function, which is slow but easy. We can also speed this up by passing start pointers instead.
Algorithm:

- Base case: the pattern is empty; match only if the string is empty too.
- Case 1: the first pattern character is not followed by a '*'. The first characters must match (or p[0] is '.'), then recurse on both tails.
- Case 2: the first pattern character is followed by a '*'. Either skip the pair entirely, or consume matching leading characters of s one at a time, trying to skip the pair after each.
Code:
```cpp
bool isMatch(string s, string p) {
    int p_len = p.length(), s_len = s.length();
    // base case
    if (p_len == 0) {
        return s_len == 0;
    }
    // CASE 1: first pattern char is not followed by a '*'
    if (p_len == 1 || (p_len > 1 && p[1] != '*')) {
        if (s_len == 0) return false;
        if (p[0] == '.' || p[0] == s[0])
            return isMatch(s.substr(1), p.substr(1));
        return false;
    }
    // CASE 2: second char is a '*'
    else {
        if (isMatch(s, p.substr(2))) return true;
        int i = 0;
        while (i < s_len && (s[i] == p[0] || p[0] == '.')) {
            if (isMatch(s.substr(i + 1), p.substr(2))) return true;
            i++;
        }
        return false;
    }
}
```
This code can be optimized by creating an overloaded function that takes in a start index for p and s.
Given an input string (s) and a pattern (p), implement wildcard pattern matching with support for '?' and '*'.
'?' Matches any single character. '*' Matches any sequence of characters (including the empty sequence).
This is a DP problem but can be solved recursively. I solved it recursively, and it still fails on LeetCode. But I am getting comfortable with breaking the cases down. These kinds of problems lend themselves well to flowcharts!
Algorithm:
Code:
```cpp
bool helperMatch(string &s, int p_s, string &p, int p_p) {
    // EMPTY CASE: the pattern is exhausted
    if (p_p >= (int)p.length()) {
        return p_s >= (int)s.length();
    }
    // p[p_p] is a '?' or a character
    else if (p[p_p] == '?' || isalpha(p[p_p])) {
        // s is empty
        if (p_s >= (int)s.length()) {
            return false;
        }
        // character match
        if (p[p_p] == '?' || p[p_p] == s[p_s]) {
            return helperMatch(s, p_s + 1, p, p_p + 1);
        }
        // characters don't match
        return false;
    }
    // p[p_p] is a '*': try matching it against every possible run of s
    int i = 0;
    while (i <= (int)s.length()) {
        if (helperMatch(s, p_s + i, p, p_p + 1)) {
            return true;
        }
        i++;
    }
    return false;
}

bool isMatch(string s, string p) {
    // collapse runs of '*' into a single '*'
    int i = 1;
    while (i < (int)p.length()) {
        if (p[i - 1] == '*' && p[i] == '*') {
            p.erase(i, 1);
        } else {
            i++;
        }
    }
    return helperMatch(s, 0, p, 0);
}
```
Ah what a classic! Couple of ways of doing this one. Also good to know the brute force.
For every height h, find the maximum left height and the maximum right height. The water at that h will be the minimum of the two minus h.
The DP solution is really straightforward! You just do two passes, use the minimum from each pass, and add it to a result. O(n) time and O(n) space.
Algorithm:
```
Create res and max_h, set to 0
Create a vector of left max heights
Iterate h over heights from 0 to end:
    max_h = max of h and max_h
    push max_h - h into left max; this is the most this height can support from the left
Reset max_h to 0
Iterate h over heights from end to 0:
    max_h = max of h and max_h
    res += min(left max at i, max_h - h)
Return res
```
Code:
```cpp
int trap(vector<int>& height) {
    int res = 0, max_h = 0;
    // iterate from the left
    vector<int> left_max;
    for (int i = 0; i < (int)height.size(); i++) {
        max_h = max(max_h, height[i]);
        left_max.push_back(max_h - height[i]);
    }
    // reset the maximum
    max_h = 0;
    for (int i = height.size() - 1; i >= 0; i--) {
        max_h = max(max_h, height[i]);
        res += min(left_max[i], max_h - height[i]);
    }
    return res;
}
```
This is the nifty O(n)-time, O(1)-space solution. Start with two pointers at the ends and track max left and max right. Move the pointer at the smaller element inward. This solution works because you always advance the smaller side, so you are always calculating the minimum water that each element can trap, which is what we need.
Algorithm:
```
Create a left pointer L = 0 and a right pointer R = size - 1
Create left max LM, right max RM, and a res of 0
While L < R:
    If H[L] is smaller than H[R]:
        Set LM to MAX(LM, H[L])
        RES += LM - H[L]
        Increment L
    Else:
        Set RM to MAX(RM, H[R])
        RES += RM - H[R]
        Decrement R
Return RES
```
Code:
```cpp
int trap(vector<int>& h) {
    // left pointer, right pointer, left max, right max, result
    int l = 0, r = h.size() - 1, lm = 0, rm = 0, res = 0;
    while (l < r) {
        // move up the pointer at the smaller element
        if (h[l] < h[r]) {
            lm = max(h[l], lm);
            res += lm - h[l++];
        } else {
            rm = max(h[r], rm);
            res += rm - h[r--];
        }
    }
    return res;
}
```
Given two words word1 and word2, find the minimum number of operations required to convert word1 to word2.
You have the following 3 operations permitted on a word: insert a character, delete a character, or replace a character.
I learned how to solve this using DP. It is really cool how the memo table maps to the problem. The 3 operations (insert, delete, and replace) map amazingly well to it.
This is a great video on the problem.
Algorithm:
Code:
```cpp
int minDistance(string word1, string word2) {
    int m = word1.length(), n = word2.length();
    // if one string is empty, the distance is the other's length
    if (n * m == 0) return n + m;
    // init memo table to all zeros
    vector<vector<int>> memo(m + 1, vector<int>(n + 1, 0));
    // init left column and top row
    for (int i = 0; i <= m; i++) memo[i][0] = i;
    for (int j = 0; j <= n; j++) memo[0][j] = j;
    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            if (word1[i - 1] == word2[j - 1]) {
                memo[i][j] = memo[i - 1][j - 1];
            } else {
                memo[i][j] = min(min(memo[i][j - 1], memo[i - 1][j - 1]), memo[i - 1][j]) + 1;
            }
        }
    }
    // the answer is in the last cell
    return memo[m][n];
}
```
Given two strings s and t, return true if they are both one edit distance apart, otherwise return false.
I initially solved this problem using 3 hard-coded cases. This was not very elegant, and I learned a trick for passing the data into the function in a way that eliminates code.
The trick was to ensure that string 1 is always bigger than string 2. The way to do that was to call the function recursively with the strings flipped whenever string 1 was smaller than string 2.
We also rely on the fact that the substrings after the non-matching character should be the same.
Algorithm:
```
Set bigger equal to string 1 length, and smaller equal to string 2 length
If bigger < smaller, return a call to the function with the strings flipped
If bigger == smaller, we need to do a replace:
    iterate i over bigger
        if s1[i] != s2[i], return s1.substring(i+1) == s2.substring(i+1)
    if we get here the strings were equal, so return false
If bigger - smaller == 1:
    iterate i over smaller
        if s1[i] != s2[i], return s1.substring(i+1) == s2.substring(i)
    return true if we get here
Return false if we get here, because the strings are more than 1 apart
```
Code:
```cpp
bool isOneEditDistance(string s, string t) {
    // we want a bigger s and a smaller t
    int bigger = s.length(), smaller = t.length();
    if (bigger < smaller) return isOneEditDistance(t, s);
    // replace or equal
    if (bigger == smaller) {
        for (int i = 0; i < bigger; i++) {
            if (s[i] != t[i]) return (s.substr(i + 1) == t.substr(i + 1));
        }
        // if we get here, no replacement was made and the strings are equal,
        // so we return false
        return false;
    }
    // delete one character from the bigger s
    else if (bigger - smaller == 1) {
        for (int i = 0; i < smaller; i++) {
            if (s[i] != t[i]) return (s.substr(i + 1) == t.substr(i));
        }
        // if we get here, the last character of the bigger string is what remains
        return true;
    }
    return false;
}
```
Given two words word1 and word2, find the minimum number of steps required to make word1 and word2 the same, where in each step you can delete one character in either string.
The cool thing about this problem is that when you solve it recursively, the dynamic programming version becomes really easy, and also a natural next step.
This problem breaks down into searching for the maximum common subsequence. This search breaks down into a recursive character-by-character scan that goes as follows:

- If string1 or string2 is empty, return 0.
- If the current characters match, count 1, trim both, and recurse.
- Otherwise, trim a character from string1 and recurse, trim a character from string2 and recurse, and take the max.

I first solved this problem by starting from the beginning of the strings and recursively calling with trimmed substrings. This solves the problem in $O(2^{\max(m,n)})$.
If you change the recursive function just a bit, it makes it a breeze to convert to DP using a memo matrix. All you have to do is start from the end, and instead of passing substrings, pass two pointers.
Here is the algorithm:
```
FUNCTION maxSub(strings S1 and S2, pointers M and N, memo matrix):
    if M or N is 0, return 0
    if memo[M][N] is valid, return it
    int x = 1 if S1[M-1] is equal to S2[N-1], else 0
    int SAME = maxSub(S1, S2, M-1, N-1, memo) + x
    int DEL_S1 = maxSub(S1, S2, M-1, N, memo)
    int DEL_S2 = maxSub(S1, S2, M, N-1, memo)
    return the max of the 3

FUNCTION minDistance(S1, S2):
    M = S1 length, N = S2 length
    make a memo matrix that is M+1 by N+1, init all to -1
    return M + N - 2 * maxSub(S1, S2, M, N, memo)
```
Code:
```cpp
// make our own fancy 3-way max function
int max(int x, int y, int z) {
    return ::max(::max(x, y), z);
}

// return the max common subsequence length of s1 and s2
int maxSub(string& s1, string& s2, int m, int n, vector<vector<int>> &memo) {
    if (m * n == 0) return 0;
    // if it's in the memo, return it!
    if (memo[m][n] != -1) {
        return memo[m][n];
    }
    int x = (s1[m - 1] == s2[n - 1]) ? 1 : 0;
    int same = maxSub(s1, s2, m - 1, n - 1, memo) + x;
    int del_s1 = maxSub(s1, s2, m - 1, n, memo);
    int del_s2 = maxSub(s1, s2, m, n - 1, memo);
    memo[m][n] = max(same, del_s1, del_s2);
    return memo[m][n];
}

int minDistance(string word1, string word2) {
    int m = word1.length(), n = word2.length();
    vector<vector<int>> memo(m + 1, vector<int>(n + 1, -1));
    int max_sub = maxSub(word1, word2, m, n, memo);
    return (m + n - 2 * max_sub);
}
```
Given two integer arrays A and B, return the maximum length of a subarray that appears in both arrays.
This is a very simple DP algorithm! The brute force way is $O(n^3)$ and fails at pretty small inputs.
For brute force, iterate over every possible combination of indices for A and B. For each pair of indices where the elements are the same, run a while loop and see how far you can go until the elements are no longer the same.
The DP algorithm is really simple. You create a 2D memo matrix that is 1 larger on both axes. Then iterate over i and j, and if the elements at A[i-1] and B[j-1] are equal, set memo[i][j] to one larger than its upper-left diagonal.
Algorithm:
Code:
```cpp
int findLength(vector<int>& A, vector<int>& B) {
    int m = A.size(), n = B.size();
    vector<vector<int>> memo(m + 1, vector<int>(n + 1, 0));
    int ans = 0;
    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            if (A[i - 1] == B[j - 1]) {
                memo[i][j] = memo[i - 1][j - 1] + 1;
                ans = max(memo[i][j], ans);
            }
        }
    }
    return ans;
}
```
Find the duplicate and missing number. The input is an array of unsorted consecutive integers with one duplicated number and one missing number. For example {5, 2, 3, 2, 1}: missing 4, duplicate 2.
I got this problem for a phone screen. I first solved it in O(N log N) by sorting first, then with a hint I was able to solve it in O(N) using a set and tracking max and min in two passes. Dima told me to always watch out for max and min when consecutive numbers are involved.
Algorithm O(N log N):
```
Sort the input array
Iterate i over the array starting at index 1:
    if num[i] == num[i-1], it is the duplicate
    else if num[i] == num[i-1] + 2, then num[i-1] + 1 is the missing element
Return the duplicate and the missing number
```
Algorithm O(N):
```
create min set to INT_MAX and max set to INT_MIN
create an unordered set
iterate n over the input:
    if n is in the set, dupe = n
    insert n into the set
    min = min(min, n)
    max = max(max, n)
iterate i from min to max:
    if i is not found in our set, we have our missing number!
```
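A sketch of the O(N) two-pass approach above (the function name is mine; note it assumes the missing number lies strictly between the observed min and max, as in the example):

```cpp
#include <algorithm>
#include <climits>
#include <unordered_set>
#include <utility>
#include <vector>
using namespace std;

// returns {duplicate, missing}
pair<int, int> findDupeAndMissing(vector<int>& nums) {
    int mn = INT_MAX, mx = INT_MIN, dupe = 0, missing = 0;
    unordered_set<int> seen;
    // pass 1: record everything, catching the duplicate and tracking min/max
    for (int n : nums) {
        if (seen.count(n) > 0) { dupe = n; }
        seen.insert(n);
        mn = min(mn, n);
        mx = max(mx, n);
    }
    // pass 2: the value in [min, max] that never showed up is the missing one
    for (int i = mn; i <= mx; i++) {
        if (seen.count(i) == 0) { missing = i; }
    }
    return {dupe, missing};
}
```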
Given an array nums containing n + 1 integers where each integer is between 1 and n (inclusive), prove that at least one duplicate number must exist. Assume that there is only one duplicate number, find the duplicate one.
Crazy cool algorithm called Floyd's Algorithm!!
For this to work conceptually we have to treat it as a linked list. The cool thing is that because there is one duplicate, and they are consecutive numbers, they won't point out of bounds.
This is a two part problem, we first detect a cycle in the linked list, and then we detect where the cycle started which is the duplicate.
Algorithm:
```
p = nums[0] and q = nums[0]
DETECT CYCLE:
while true:
    move p twice
    move q once
    if p == q, break (we have a cycle)
FIND WHERE THE CYCLE STARTS, BASED ON FLOYD'S ALGO:
q = nums[0]
while p != q:
    move p once
    move q once
return q :)
```
Code:
```cpp
int findDuplicate(vector<int>& nums) {
    int p = nums[0], q = nums[0];
    while (true) {
        p = nums[p];
        p = nums[p];
        q = nums[q];
        if (p == q) break;
    }
    q = nums[0];
    while (p != q) {
        p = nums[p];
        q = nums[q];
    }
    return p;
}
```
You're given a list of n integers arr[0..(n-1)]. You must compute a list output[0..(n-1)] such that, for each index i (between 0 and n-1, inclusive), output[i] is equal to the median of the elements arr[0..i] (rounded down to the nearest integer).
I first solved this with a set, but finding the middle elements took n/2 time. I then solved it with two heaps.
Code:
```cpp
vector<int> findMedian(vector<int> arr) {
    vector<int> out;
    // just have to remember: a default priority_queue in C++ is a max heap
    priority_queue<int> max_heap;
    priority_queue<int, vector<int>, greater<int>> min_heap;
    for (auto& num : arr) {
        // if both heaps are empty
        if (max_heap.size() == 0 && min_heap.size() == 0) {
            max_heap.push(num);
        }
        // we assume that the max heap will not be empty
        else {
            if (num > max_heap.top()) {
                min_heap.push(num);
            } else {
                max_heap.push(num);
            }
        }
        // balance the heaps -- CAREFUL WITH SIZE TYPES
        if ((int)max_heap.size() - (int)min_heap.size() > 1) {
            min_heap.push(max_heap.top());
            max_heap.pop();
        } else if ((int)min_heap.size() - (int)max_heap.size() > 1) {
            max_heap.push(min_heap.top());
            min_heap.pop();
        }
        // calculate the median
        if (max_heap.size() == min_heap.size()) {
            out.push_back((max_heap.top() + min_heap.top()) / 2);
        } else if (max_heap.size() > min_heap.size()) {
            out.push_back(max_heap.top());
        } else {
            out.push_back(min_heap.top());
        }
    }
    return out;
}
```
Interview problem.
Create an iterator for a list of sorted lists that implements a constructor SortedIterator(), a next(), and a hasNext() function.
Great problem. I first dove into this problem by suggesting sorting our input lists. This would not work on large sets; do not reflexively try sorting the data first. Explore the problem and see what can be done. In the end this problem can be solved with a min heap, by keeping track of the smallest leading element of each list, along with its indices.
I initially solved this problem with a multimap to take care of the i, j indices. A multimap has worse time complexity than a heap because it has O(log(n)) insertion times, and you need to do an insertion with every next() operation.
Algorithm:
```
class HeapElement:
    constructor(value, i, j)

// convert to a min heap: overload the < operator with a.val > b.val
priority_queue<HeapElement> minHeap

FUNC constructor(lists):
    for i up to lists size:
        if lists[i] is not empty:
            insert HeapElement(lists[i][0], i, 0) into the min heap

FUNC next():
    if hasNext() is false, return -1
    get a copy of the top element of the heap, call it prevTop
    pop off the top element
    int out = prevTop.val
    prevTop.j++
    if prevTop.j is in bounds of its list:
        set prevTop.val to the value at prevTop's i, j
        insert prevTop into the min heap
    return out

FUNC hasNext():
    return size of heap > 0
```
Code:
```cpp
// this will contain the data in our heap
class HeapElement {
public:
    int val, i, j;
    HeapElement(int _val, int _i, int _j) : val(_val), i(_i), j(_j) {}
};

// overload the less-than operator (inverted, so the priority_queue is a min heap)
bool operator<(const HeapElement &a, const HeapElement &b) {
    return a.val > b.val;
}

class SortedIterator {
private:
    // we need a reference to our input lists
    const vector<vector<int>> &lists;
    priority_queue<HeapElement> minHeap;

public:
    // constructor: seed the heap with the head of every non-empty list
    SortedIterator(const vector<vector<int>> &_lists) : lists(_lists) {
        for (int i = 0; i < (int)lists.size(); i++) {
            if (lists[i].size() > 0) {
                minHeap.push(HeapElement(lists[i][0], i, 0));
            }
        }
    }

    // our iterator next function
    int next() {
        if (hasNext() == false) {
            return -1;
        }
        HeapElement prevTop = minHeap.top();
        minHeap.pop();
        int out = prevTop.val;
        if (++prevTop.j < (int)lists[prevTop.i].size()) {
            prevTop.val = lists[prevTop.i][prevTop.j];
            minHeap.push(prevTop);
        }
        return out;
    }

    // check if we have a next value
    bool hasNext() {
        return minHeap.size() > 0;
    }
};
```
Find the number of unique paths a robot can take to get to the end of a grid. It can only move one down or one right.
We solve this problem by creating a 2D vector of the path space. We then add up the possible ways we can enter a square, by adding the ways its top square has and the ways its left square has. The top row and left column start with 1, because there's only one way to enter those.
Algorithm:
```
create a memo array
set memo[0][:] to 1
set memo[:][0] to 1
for i from 1 to m:
    for j from 1 to n:
        memo[i][j] = memo[i-1][j] + memo[i][j-1]
return memo[m-1][n-1]
```
Code:
```cpp
int uniquePaths(int m, int n) {
    // vector of all 1's
    vector<vector<int>> memo(m, vector<int>(n, 1));
    for (int i = 1; i < m; i++)
        for (int j = 1; j < n; j++)
            memo[i][j] = memo[i - 1][j] + memo[i][j - 1];
    return memo[m - 1][n - 1];
}
```
I solved this problem in a pretty complicated, but fun way! The bulk of the work is processing the string, which is a pain in C++. Learn the ways of string manipulation! While avoiding having to learn how to use stringstreams of course :).
My very long, complicated solution:

```cpp
class Solution {
public:
    enum NodeType { NUM, MULTIPLY, DIVIDE, ADD, SUBTRACT };
    typedef list<pair<NodeType, int>> MyList;
    MyList exp;
    list<MyList::iterator> listAS; // add/subtract operators, in order
    list<MyList::iterator> listMD; // multiply/divide operators, in order

    void process(string& s) {
        // get rid of spaces
        int i = 0;
        while (i < (int)s.length()) {
            if (s[i] == ' ') s.erase(i, 1);
            else i++;
        }
        s += ' ';
        int p = 1, start = 0;
        while (p < (int)s.length()) {
            if (!isdigit(s[p])) {
                string snum = s.substr(start, p - start);
                exp.push_back(make_pair(NUM, stoi(snum)));
                switch (s[p]) {
                    case '/':
                        exp.push_back(make_pair(DIVIDE, 0));
                        listMD.push_back(prev(exp.end()));
                        break;
                    case '*':
                        exp.push_back(make_pair(MULTIPLY, 0));
                        listMD.push_back(prev(exp.end()));
                        break;
                    case '+':
                        exp.push_back(make_pair(ADD, 0));
                        listAS.push_back(prev(exp.end()));
                        break;
                    case '-':
                        exp.push_back(make_pair(SUBTRACT, 0));
                        listAS.push_back(prev(exp.end()));
                        break;
                }
                start = p + 1;
            }
            p++;
        }
        return;
    }

    int calculate(string s) {
        process(s);
        for (auto& it : listMD) {
            if (it->first == DIVIDE)
                prev(it)->second = prev(it)->second / next(it)->second;
            else if (it->first == MULTIPLY)
                prev(it)->second = prev(it)->second * next(it)->second;
            exp.erase(next(it));
            exp.erase(it);
        }
        for (auto& it : listAS) {
            if (it->first == ADD)
                prev(it)->second = prev(it)->second + next(it)->second;
            else if (it->first == SUBTRACT)
                prev(it)->second = prev(it)->second - next(it)->second;
            exp.erase(next(it));
            exp.erase(it);
        }
        return exp.front().second;
    }
};
```
Here is a much more concise solution I found online:
```cpp
int calculate(string s) {
    stack<int> myStack;
    char sign = '+';
    int res = 0, tmp = 0;
    for (unsigned int i = 0; i < s.size(); i++) {
        if (isdigit(s[i]))
            tmp = 10 * tmp + s[i] - '0';
        if ((!isdigit(s[i]) && !isspace(s[i])) || i == s.size() - 1) {
            if (sign == '-')
                myStack.push(-tmp);
            else if (sign == '+')
                myStack.push(tmp);
            else {
                int num;
                if (sign == '*')
                    num = myStack.top() * tmp;
                else
                    num = myStack.top() / tmp;
                myStack.pop();
                myStack.push(num);
            }
            sign = s[i];
            tmp = 0;
        }
    }
    while (!myStack.empty()) {
        res += myStack.top();
        myStack.pop();
    }
    return res;
}
```
Reverse a linked list from position m to n. Do it in one-pass.
I struggled to get the cases down with this one. You have to think simply! You maintain 3 pointers: prev, cur, and next, which you get from cur. With this amazingly simple algorithm you don't even need a temp object. What you do is move prev and cur until cur is on the first node m to be reversed.
```cpp
ListNode* reverseBetween(ListNode* head, int m, int n) {
    ListNode dh; // dummy head
    dh.next = head;
    ListNode *pre = &dh, *cur = head;
    n = n - m;
    // move pre and cur down till we hit m
    while (m > 1) {
        pre = pre->next;
        cur = cur->next;
        m--;
    }
    // handle our swaps
    while (n > 0) {
        ListNode* next = cur->next;
        cur->next = next->next;
        next->next = pre->next;
        pre->next = next;
        n--;
    }
    return dh.next;
}
```
Given a linked list, return the node where the cycle begins. If there is no cycle, return null.
To represent a cycle in the given linked list, we use an integer pos which
represents the position (0-indexed) in the linked list where tail connects to.
If pos is -1, then there is no cycle in the linked list.
Super super cool Floyd's algorithm for loop detection. The following algorithm does all the proper checks in case of null lists and also lists that are not cycles.
Algorithm:
```
Make pointers p and q both equal to head
while q and p and p->next:
    advance p twice
    advance q once
    if p == q, break
When we get here we either have a loop or hit a NULL end
if !p OR !p->next:   // the OR order is important, so we don't get runtime errors
    we hit the end, so no loop! return NULL
We have a loop, so we have to find where it starts:
q = head
while p != q:
    advance p once
    advance q once
return q // or p, it doesn't matter :)
```
Code:
```cpp
ListNode *detectCycle(ListNode *head) {
    ListNode *q = head, *p = head;
    while (q && p && p->next != NULL) {
        p = p->next;
        p = p->next;
        q = q->next;
        if (q == p) break;
    }
    if (!p || p->next == NULL) return NULL;
    q = head;
    while (p != q) {
        p = p->next;
        q = q->next;
    }
    return q;
}
```
Given a node from a Circular Linked List which is sorted in ascending order, write a function to insert a value insertVal into the list such that it remains a sorted circular list.
This problem becomes hard because of the cases you have to check for, for proper insertion of the value.
You have to check these cases:

- The NULL list: create the node, point it at itself, and return it.
- A normal insertion between a prev and a curr pointer, where prev <= value <= curr.
- The wrap-around point, where prev > curr. That means we fit if we are greater than prev, OR we are less than curr.
- We went all the way around without finding a spot (all values equal): insert anywhere.

Took me a while to nail down these cases. It is VERY easy to get lost in nested if statements.
```cpp
class Solution {
public:
    void insertAfter(Node* node, int i) {
        Node* temp = new Node(i);
        temp->next = node->next;
        node->next = temp;
    }

    Node* insert(Node* head, int insert_val) {
        // let's take care of the NULL case
        if (head == NULL) {
            Node* temp = new Node(insert_val);
            temp->next = temp;
            return temp;
        }
        // two pointers
        Node *p = head, *n = head->next;
        while (n != head) {
            if (p->val <= insert_val && insert_val <= n->val) {
                insertAfter(p, insert_val);
                return head;
            } else if (p->val > n->val) {
                // the wrap-around point
                if (p->val <= insert_val || insert_val <= n->val) {
                    insertAfter(p, insert_val);
                    return head;
                }
            }
            p = n;
            n = n->next;
        }
        // we went all the way around: insert anywhere
        insertAfter(p, insert_val);
        return head;
    }
};
```
Print out a binary tree in the form of a 2d string array with spacing between each element.
The devil of this problem is getting the proper spacing between elements as you go down the levels. This requires knowing the height of the tree, which determines the width of the output grid.
I had to peek at the answer for this one, but I practiced how to BFS iteratively with a queue and a nested while loop. The algorithm for putting values in the correct places is also very smart: keep a left and a right pointer and place each item in the middle of them.
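The note above does the traversal with an iterative BFS; here is a recursive sketch of the same left/right-pointer placement idea (the TreeNode struct and helper names are mine):

```cpp
#include <algorithm>
#include <string>
#include <vector>
using namespace std;

// minimal tree node for the sketch
struct TreeNode {
    int val;
    TreeNode *left, *right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

int height(TreeNode* n) {
    return n ? 1 + max(height(n->left), height(n->right)) : 0;
}

// place the node's value in the middle of columns [l, r] at the given level,
// then recurse into each half with tightened pointers
void place(TreeNode* node, int level, int l, int r, vector<vector<string>>& out) {
    if (!node) return;
    int mid = (l + r) / 2;
    out[level][mid] = to_string(node->val);
    place(node->left, level + 1, l, mid - 1, out);
    place(node->right, level + 1, mid + 1, r, out);
}

vector<vector<string>> printTree(TreeNode* root) {
    int h = height(root);    // number of levels
    int w = (1 << h) - 1;    // grid width: 2^h - 1
    vector<vector<string>> out(h, vector<string>(w, ""));
    place(root, 0, 0, w - 1, out);
    return out;
}
```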
Given the head of a singly linked list where elements are sorted in ascending order, convert it to a height-balanced BST.
The first step is to realize that you need to find the middle. Then you must realize that the middle is going to be a root node. Then you must realize that you need to split the list into a left half and a right half. The left half of the list can be recursed into, and its head will become the left child; same with the right list. You can do it!
Algorithm:
```
FUNCTION split(head):
    if head is null, return head
    slow pointer and fast pointer equal to head, plus a prev pointer
    while fast and fast->next are not null:
        prev = slow
        fast = fast->next->next
        slow = slow->next
    prev->next = null
    return slow

FUNCTION sortedListToBST(head) -> TreeNode:
    if head is null, return NULL
    ListNode mid = split(head)
    TreeNode root = new TreeNode(mid->val)
    if mid != head:
        root->left = sortedListToBST(head)
    root->right = sortedListToBST(mid->next)
    return root
```
```cpp
ListNode* split(ListNode *head) {
    if (head == NULL) {
        return NULL;
    }
    ListNode *slow = head, *fast = head, *prev = head;
    while (fast != NULL && fast->next != NULL) {
        prev = slow;
        slow = slow->next;
        fast = fast->next->next;
    }
    prev->next = NULL;
    return slow;
}

TreeNode* sortedListToBST(ListNode* head) {
    if (head == NULL) {
        return NULL;
    }
    ListNode *mid = split(head);
    TreeNode *root = new TreeNode(mid->val);
    // the left list starts at head, the right list starts at mid->next;
    // only recurse left if we have a real left list
    if (mid != head) {
        root->left = sortedListToBST(head);
    }
    root->right = sortedListToBST(mid->next);
    return root;
}
```
Given a binary tree, return the zigzag level order traversal of its nodes' values. (ie, from left to right, then right to left for the next level and alternate between).
Just have to realize to do a BFS with a queue and output the data reversed every other level. When doing BFS with a queue, you have to delineate the current level from the next level; doing it with a null node is a nice way.
Algorithm:
```
init a queue of nodes
push the root into the queue, then push a null node
output array (2d), empty level array
while the queue isn't empty:
    set cur to queue front, pop it off
    if cur is not null:
        put it in our level array
        if its children aren't null, push them into the queue
    else we reached the end of the level:
        if rev is true, reverse the level array
        push the level array into the output, then clear it
        toggle rev
        if the queue isn't empty, push in a null
```
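The steps above can be sketched as the following C++ (the TreeNode struct is a minimal stand-in for LeetCode's):

```cpp
#include <algorithm>
#include <queue>
#include <vector>
using namespace std;

// minimal tree node for the sketch
struct TreeNode {
    int val;
    TreeNode *left, *right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

vector<vector<int>> zigzagLevelOrder(TreeNode* root) {
    vector<vector<int>> out;
    if (!root) return out;
    queue<TreeNode*> q;
    q.push(root);
    q.push(nullptr);  // a null node delineates the end of a level
    vector<int> level;
    bool rev = false;
    while (!q.empty()) {
        TreeNode* cur = q.front();
        q.pop();
        if (cur != nullptr) {
            level.push_back(cur->val);
            if (cur->left) q.push(cur->left);
            if (cur->right) q.push(cur->right);
        } else {
            // end of a level: flush it, reversed on every other level
            if (rev) reverse(level.begin(), level.end());
            out.push_back(level);
            level.clear();
            rev = !rev;
            if (!q.empty()) q.push(nullptr);
        }
    }
    return out;
}
```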
Given a non-empty binary tree, find the maximum path sum. There can be negative numbers.
I was asked this in an interview. The core concept is that at every node, the maximum path will either be: the node's value plus both child paths (a path that peaks at this node), or the node's value plus the larger child path (a path that continues up through the parent).
This problem can be simplified really well into short code by making use of max() calls on return data. We also need to keep track of a global max, and return the max path through either the left or the right child.
```
explore(node, global max) -> max path from this node
    if this node is null, return 0
    left max = max(recursive call on left child, 0)
    right max = max(recursive call on right child, 0)
    max both = node val + left max + right max
    global max = max(global max, max both)
    return node val + max(left max, right max)
```
```cpp
int explore(TreeNode* node, int& max_sum) {
    if (node == NULL) {
        return 0;
    }
    int max_left = max(explore(node->left, max_sum), 0);
    int max_right = max(explore(node->right, max_sum), 0);
    int max_both = node->val + max_left + max_right;
    max_sum = max(max_both, max_sum);
    return node->val + max(max_left, max_right);
}

int maxPathSum(TreeNode* root) {
    int max = INT_MIN;
    explore(root, max);
    return max;
}
```
I was given this problem at an interview and boy, I did not get through it. I then went back to code it a month later after more studying, and I still flubbed it!!!
What a great problem!!! The trick here is to realize that if two people are running on a circle, no one is really ahead of anyone at any given time. You can add a positive number to either and get to the same spot as the other one.
This thinking helps frame how to deal with wrapping. In this problem, we need to create a key for each string that encodes its letters' relative positions. It is up to you to pick what is greater than what; just keep it simple. You got this man, it makes sense!
Algorithm:
```
FUNCTION dist(character A, character B) -> char
    int D = B - A
    if D is negative, then D += 'z' - 'a' + 1
    return D :)

FUNCTION groupStrings -> vector of vectors of strings
    create an OUTPUT vector of vectors of strings
    create a map of string -> int: the key is the "relationship between letters" and
        the value is the OUTPUT index for the group of strings that shares that key
    iterate s over the input strings:
        create a temp key (string)
        iterate i over s, starting at position 1:
            temp key += DIST(s[i-1], s[i])
        if the key is found in the map: add s to the right spot in OUTPUT
        else: add s to a NEW spot in OUTPUT and add that index to the MAP
    return OUTPUT
```
Code:
// how much we have to add to a to get to b char dist(char a, char b) { int d = b - a; return (d < 0) ? d += ('z' - 'a' + 1) : d; } vector<vector<string>> groupStrings(vector<string>& strings) { vector<vector<string>> out; unordered_map<string, int> shift_map; for(auto& s : strings) { // build our key for this string string key; for(int i = 1; i < s.length(); i++ ) { key += dist(s[i-1], s[i]); } if(shift_map.find(key) == shift_map.end()) { // not in the map, create new entry out.push_back({s}); shift_map[key] = out.size() - 1; } else { out[shift_map[key]].push_back(s); } } return out; }
TODO
// Assumed helper: the original snippet references an undefined Point type.
// A minimal version with Manhattan distance (the standard metric for Best
// Meeting Point) is sketched here:
struct Point {
    int x, y;
    Point(int x, int y) : x(x), y(y) {}
    static int dist(const Point& a, const Point& b) {
        return abs(a.x - b.x) + abs(a.y - b.y);
    }
};

int minTotalDistance(vector<vector<int>>& grid) {
    if (grid.size() == 0) { return 0; }
    // find all peeps
    vector<Point> peeps;
    for (int i = 0; i < (int)grid.size(); i++) {
        for (int j = 0; j < (int)grid[0].size(); j++) {
            if (grid[i][j] == 1) { peeps.push_back(Point(i, j)); }
        }
    }
    int minDist = INT_MAX;
    for (int i = 0; i < (int)grid.size(); i++) {
        for (int j = 0; j < (int)grid[0].size(); j++) {
            // total distance from this cell to all our peeps
            int totalDist = 0;
            for (auto& peep : peeps) {
                totalDist += Point::dist(peep, Point(i, j));
            }
            minDist = min(totalDist, minDist);
        }
    }
    return minDist;
}
We are given an array that represents the path a yearbook will take to get signed. The index of each element represents a student. We optimize by keeping track of which students are part of a circuit of signatures. Those students' paths then do not need to be calculated.
Algorithm:
FUNCTION Explore(Yearbook PATH, vertex V, set SEEN)
  if V in SEEN, return
  insert V in SEEN
  explore(PATH[V-1])

FUNC signature(Yearbook PATH)
  OUT vector initialized to PATH size + 1 of -1's
  iterate i from 1 to PATH size inclusive
    if OUT[i] == -1
      explore(i)
      iterate E over elements in SEEN
        OUT[E] = SEEN size
  OUT.erase(OUT.begin())
  return OUT
void explore(vector<int> &arr, unordered_set<int> &seen, int v) { if(seen.count(v) > 0){ return; } seen.insert(v); explore(arr, seen, arr[v-1]); return; } vector <int> findSignatureCounts(vector <int> arr) { int n = arr.size(); vector<int> out(n, -1); for(int i = 1; i <= n; i++){ // an unexplored yearbook if(out[i-1] == -1){ unordered_set<int> seen; explore(arr, seen, i); for(auto& e : seen){ out[e-1] = seen.size(); } } } return out; }
I spent all day on this :) and learned a lot! This problem requires you to understand that this is a graph cycle detection problem. Once you realize that, you can solve it.
There is another way of solving this problem with an “in degree” array where you track count of dependencies. Video about it here.
The input is a directed graph that is not guaranteed to be strongly connected. One approach is to do a DFS from the starting vertex of every edge, walking that path and marking each node seen. If you end up at a previously seen node, you have a cycle, so there's no way you can take the classes!
Note that you have to remove a node from the seen list once we finish with it, so we can backtrack without false positives.
We also keep a “good” list of vertices we have visited and fully explored. That way we don't have to explore them again! This is an optimization that changes our complexity from $O(V \cdot (V+E))$ to $O(V+E)$.
Algorithm:
OPTIMIZATION: build an unordered set of explored vertices called GOOD
Build adjacency list ADJ (unordered map of vector<int>)
Iterate over edges: ADJ[left vertex].push_back(right vertex)
Iterate V over ADJ list
  OPT: if V is in GOOD, CONTINUE, no need to check
  create a SEEN set of ints
  if cycleDFS of vertex V returns TRUE, we have a cycle, return false

// Return true if there is a cycle, false otherwise
FUNCTION cycleDFS(take in ADJ, SEEN, and vertex V)
  OPT: if V is in GOOD, return false, no need to check
  check if V is in SEEN; if true, return true, there is a cycle
  insert V into SEEN
  iterate over N, the neighbors of vertex V
    OPT: if N is in GOOD, continue
    if cycleDFS of N is true, return true
  // if we are here there are no cycles
  remove V from SEEN, so we can backtrack without false positives
  add V to GOOD
  return false
class Solution {
public:
    // This marks nodes that are already fully explored as good
    unordered_set<int> good;

    bool cycleDFS(unordered_map<int, vector<int>>& adj, int v, unordered_set<int>& seen) {
        if (good.find(v) != good.end()) return false;
        // If we found this node before on this path, we have a cycle
        if (seen.find(v) != seen.end()) return true;
        seen.insert(v);
        // Iterate over neighbors. Use find() instead of adj[v] so we never
        // insert a new key into adj while the caller is iterating over it.
        auto it = adj.find(v);
        if (it != adj.end()) {
            for (auto& n : it->second) {
                if (good.find(n) != good.end()) continue;
                if (cycleDFS(adj, n, seen)) return true;
            }
        }
        // we have come to the end of this exploration
        seen.erase(v);
        good.insert(v);
        return false;
    }

    bool canFinish(int numCourses, vector<vector<int>>& prerequisites) {
        // Build the adjacency list
        unordered_map<int, vector<int>> adj;
        for (auto& e : prerequisites) {
            adj[e[0]].push_back(e[1]);
        }
        // Iterate over the adjacency list and run cycleDFS on each node
        for (auto& v : adj) {
            // This is our optimization. If we encounter a vertex we have
            // already marked as good, we don't run a DFS on it
            if (good.find(v.first) != good.end()) continue;
            // we start an empty seen set for every exploration;
            // if we didn't, we would encounter nodes from other explorations
            unordered_set<int> seen {};
            if (cycleDFS(adj, v.first, seen)) {
                return false;
            }
        }
        return true;
    }
};
This problem is an extension of Course Schedule. We use a topological sort algorithm to solve it. The only difference from the Course Schedule problem is that we maintain a list of taken classes, which we use as the output. We also iterate over all n classes.
Algorithm:
class Solution { public: // we need to keep a set to allow O(1) lookups of elements present, // but we still need to keep an order which will be output vector<int> taken_list; unordered_set<int> taken; bool DFS(unordered_map< int, vector<int> >& adj, unordered_set<int> &seen, int v) { if(taken.find(v) != taken.end()) return true; if(seen.find(v) != seen.end()) return false; // only run neighbor check on vertices that have adj list if(adj.count(v) > 0) { // insert v into seen to mark that we saw it on this path seen.insert(v); // iterate over v's neighbors for(auto& n : adj[v]) { // if we taken this class we don't need dfs if(taken.find(n) != taken.end()) continue; if( DFS(adj, seen, n) == false) return false; } seen.erase(v); } // add the class to the taken list taken.insert(v); taken_list.push_back(v); return true; } vector<int> findOrder(int numCourses, vector<vector<int>>& prerequisites) { taken_list.reserve(numCourses); // build adj list unordered_map< int, vector<int> > adj; for(auto& e : prerequisites){ adj[e[0]].push_back(e[1]); } for(int v = 0; v < numCourses; v++) { // if we already took the class, then we don't have to explore it if(taken.find(v) != taken.end()) continue; unordered_set <int> seen; if( DFS(adj, seen, v) == false) return {}; } return taken_list; } };
There are n
different online courses numbered from 1
to n
. Each
course has some duration (course length) t
and deadling d
day. A course
should be taken continuously for t
days and must be finished before or on
the dth
day. You will start at the 1st day.
Given n
online courses represented by pairs (t,d)
, your task is to find the
maximal number of courses that can be taken.
The technique here is a two stage process. First we sort by end dates. That lets us start trying to fit courses. We keep track of time and start at 0.
Then we try the next course. If you can take it, great, take it! If you can't take it, we try something special.
What we do is see if we can replace it with a course that we already took that is longer. If we can, great, take this one instead AND shift back our time by the time we saved. So the only thing we end up needing is a multiset of durations of courses we've taken. At the end we just return the size of the set!
Algorithm:
make a multiset of numbers
sort by end date
start a LAST_END int at 0
iterate i from 0 to end of courses
  can we take the class? if LAST_END + courses[i][0] <= courses[i][1]
    take it: LAST_END += courses[i][0], insert courses[i][0] into the set
  if we CAN'T take the class:
    is the largest element in the taken set longer than or equal to this course we can't take?
      delete the largest element in the set, and insert this new one
      LAST_END doesn't stay the same! It gets shifted back by our time saved
return size of set
Code:
int scheduleCourse(vector<vector<int>>& courses) {
    // set of durations of classes that we have taken, largest first
    multiset<int, greater<int>> taken;
    // sort our courses by end date
    sort(courses.begin(), courses.end(),
         [](vector<int>& a, vector<int>& b){ return a[1] < b[1]; });
    int last_end = 0;
    for (int i = 0; i < (int)courses.size(); i++) {
        // can we take this course? is the end time <= the deadline?
        if (courses[i][0] + last_end <= courses[i][1]) {
            last_end += courses[i][0];
            taken.insert(courses[i][0]);
        } else if (!taken.empty()) { // guard: nothing to swap with yet
            // let's try to swap this course with the biggest taken one
            int old_len = *taken.begin();
            if (old_len >= courses[i][0]) {
                // swap the courses
                taken.erase(taken.begin());
                taken.insert(courses[i][0]);
                // we also have to shift over our time savings!
                last_end -= old_len - courses[i][0];
            }
        }
    }
    return taken.size();
}
There are a number of balloons whose width is defined by their position in 1D space as start and end pairs. Some of these balloons may overlap. Calculate the minimum number of arrows required to pop all balloons.
This is an interesting problem that is similar to the meeting room problem. You sort by end position and greedily check if the next balloon overlaps. If it does, great, we already have an arrow to pop it. If it doesn't overlap, then we need a new arrow for it.
Algorithm:
FUNCTION findMinArrows(2d array of balloon bounds)
  sort INPUT array by end locations
  ANS = 0, LAST_END = LONG_MIN
  Iterate i from 0 to INPUT size
    if start of INPUT[i] is greater than LAST_END
      // need a new arrow!
      ANS++
      LAST_END = end of INPUT[i]
  return ANS
Code:
int findMinArrowShots(vector<vector<int>>& points) {
    // Sort by end position first
    sort(points.begin(), points.end(),
         [](vector<int>& a, vector<int>& b){ return a[1] < b[1]; });
    // Initialize last end to a smaller number than possible with int
    long last_end = LONG_MIN, ans = 0;
    for (int i = 0; i < (int)points.size(); i++) {
        // If the current balloon starts after the last arrow, we need a new arrow
        if (points[i][0] > last_end) {
            ans++;
            // Update the arrow position to this balloon's end
            last_end = points[i][1];
        }
    }
    return ans;
}
There is one meeting room in a firm. There are N meetings in the form of start times and end times. The task is to find the maximum number of meetings that can be accommodated in the meeting room. Print all meeting numbers.
I first solved this recursively by sorting by start times and then recursively trying the maximum of every possible path. You get the right answer but in $O(2^n)$ time. Amazingly, if you sort by end times, iterate over the meetings, and pick each one that fits, you arrive at the right answer. This is called a greedy approach.
Algorithm:
sort by end times
set LAST_END to a negative number
iterate i over meetings
  if start time of i is greater than or equal to LAST_END
    ANS++, last meeting = this one
    print out the meeting
return ANS
Code:
int greedyMeeting(vector<vector<int>> &m) { sort(m.begin(), m.end(),[](vector<int>&a, vector<int>&b){ return a[1]<b[1];}); int last_end = -1, ans = 0; for(int i = 0; i < (int)m.size(); i++) { if(m[i][0] >= last_end) { // take the meeting last_end = m[i][1]; ans++; cout << "Taking meeting number: " << ans << " - " << m[i][0] << ":" << m[i][1] << endl; } } return ans; }
Given an array of meeting time intervals consisting of start and end times [[s1,e1],[s2,e2],…] (si < ei), find the minimum number of conference rooms required.
Fantastic question! My first approach was to do the brute force. You check every interval and see if it conflicts with each other. My first issue was figuring out how to check for overlap. I had nested if statements and it was not concise.
To do this, you simply check if the maximum start is greater than or equal to the minimum end. If that's true, then there is no overlap.
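As a sketch, the overlap check described above (noOverlap is a hypothetical helper name, not from the original solution):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Two intervals [start, end) do NOT overlap iff the later start is at or
// past the earlier end.
bool noOverlap(const vector<int>& a, const vector<int>& b) {
    return max(a[0], b[0]) >= min(a[1], b[1]);
}
```

This one-liner replaces the nested if statements entirely.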
The next approach was to sort by start time. You do this in C++ by creating a compare function like so:
bool compare(const vector<int>& a, const vector<int>&b) { // a will go before b if this is true: return ( a[0] < b[0] ); } sort(my_vector.begin(), my_vector.end(), compare);
After sorting the intervals by start time, I then created a list of rooms (each just containing indices of intervals). I then processed each interval in order. For every interval I check through my rooms list. If I find a room with no overlap, I insert the index of that interval and return. If I don't find a room, I create a new room and put the interval index in there.
There is a lot of wasted processing here.
First, we don't even need to keep a full list of intervals per room; we can just keep track of the last meeting because they are in order.
Second, we are iterating through each room every time to find a fit. What we should be doing is using a set, and more importantly in C++ a multiset of rooms. That way we always just check the room with the earliest end time.
Third, all we need to store are end times! We then compare a start time with the earliest end time and we're good! Very elegant solution to this.
bool compare(const vector<int> &a, const vector<int> &b) { // element a[0] should come before b[0] if it is smaller return a[0] < b[0]; } class Solution{ public: void process(multiset<int> &rooms, vector<int>& interval) { // check to see if earliest ending meeting is <= to our start if(*rooms.begin() <= interval[0]) { // erase our old meeting because we are gonna put in the new one :) rooms.erase(rooms.begin()); } // put in the new end time rooms.insert(interval[1]); } int minMeetingRooms(vector<vector<int>> &intervals) { if(intervals.size() == 0) return 0; // sort our intervals by start time, earliest first sort(intervals.begin(), intervals.end(), compare); // make a multiset of rooms so we can store end times of the last meeting // there may be end times that end at the same time, so we need to allow for // dupes multiset<int> rooms = { intervals[0][1] }; for(int i = 1; i < intervals.size(); i++) { process(rooms, intervals[i]); } return rooms.size(); } };
Frog jump problem. Fucking confused. Wtf??? I thought I solved it but had no idea how it worked out. There's a test case that I didn't understand. I figured out what I did wrong: I misread the problem!!! k is the previous jump. My memo approach was spot on though. NEED TO WRITE NOTES ON THIS.
Had to memo it to make it.
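Since the notes never got written, here is a minimal memoized sketch (assuming the LeetCode Frog Jump `canCross` signature; the key-packing scheme for the memo is my own assumption, valid because k never exceeds the number of stones):

```cpp
#include <vector>
#include <unordered_map>
using namespace std;

class Solution {
    unordered_map<int, int> pos;          // stone position -> index
    unordered_map<long long, bool> memo;  // memo on (index, last jump k)

    bool dfs(vector<int>& stones, int i, int k) {
        if (i == (int)stones.size() - 1) return true;
        // pack (i, k) into one key; k stays well below 100000
        long long key = (long long)i * 100000 + k;
        auto it = memo.find(key);
        if (it != memo.end()) return it->second;
        bool ok = false;
        // k is the PREVIOUS jump, so we may now jump k-1, k, or k+1 units
        for (int jump = k - 1; jump <= k + 1 && !ok; jump++) {
            if (jump <= 0) continue;
            auto next = pos.find(stones[i] + jump);
            if (next != pos.end()) ok = dfs(stones, next->second, jump);
        }
        return memo[key] = ok;
    }

public:
    bool canCross(vector<int>& stones) {
        for (int i = 0; i < (int)stones.size(); i++) pos[stones[i]] = i;
        return dfs(stones, 0, 0); // start on stone 0 with a last jump of 0
    }
};
```

The memo is what saves this from exponential blowup: each (stone, previous jump) pair is explored at most once.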
Given an array of non-negative integers, you are initially positioned at the first index of the array. Each element in the array represents your maximum jump length at that position. Determine if you are able to reach the last index.
This is a deceivingly medium kind of problem. You would think that it would be very easy, but nopppeeeee. It is actually a dynamic programming problem that can be solved greedily with just a few lines of code.
The idea is to iterate i from 0 to either the end of the array or until we pass the maximum reach we've seen so far.
FUNC isPossible(array INPUT)
  MAX_REACH = 0, i = 0
  increment i from 0 to either INPUT size or MAX_REACH, whichever comes first
    MAX_REACH = max(MAX_REACH, INPUT[i] + i)
  return true if i got to the end of the array
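The pseudocode above might look like this in C++ (a sketch assuming the LeetCode `canJump` signature):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

bool canJump(vector<int>& nums) {
    int max_reach = 0;
    // stop early if i runs past the furthest index we can reach
    for (int i = 0; i < (int)nums.size() && i <= max_reach; i++) {
        max_reach = max(max_reach, i + nums[i]);
    }
    return max_reach >= (int)nums.size() - 1;
}
```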
This is a problem that requires us to use constant extra memory, forcing us to encode more data onto the input array itself without losing the element information. You have to grasp some key assumptions.
The minimum missing positive is going to be anywhere between 1 and n+1. Here is why.
If we have the perfectly ordered array with n = 5: [1 2 3 4 5], the next missing positive is 6, i.e. n+1.
The smallest possible answer is 1. This means our range of candidates spans the same number of elements as we have in the input. Which means if there is a way to overlay a boolean "have we seen this number" state onto the array, we can solve this problem with no further memory required.
We do this in three steps:
First pass: search for a 1 (if there is none, the answer is 1), and reset any numbers outside the possible solutions (negative, zero, or greater than n) to 1.
Second pass: map the value v of each element to index v-1 and mark v as present by making the element at that index negative. Hard problems would not be hard if they were easy to explain.
Third pass: look for the first element that is not negative; its index + 1 is the answer. If every element is negative, handle the edge case and return n+1.
YOU GOT THIS!
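A sketch of those three passes in C++ (my reconstruction of the steps above, assuming the LeetCode `firstMissingPositive` signature):

```cpp
#include <vector>
#include <cstdlib>
using namespace std;

int firstMissingPositive(vector<int>& nums) {
    int n = nums.size();
    // First pass: if 1 is missing, the answer is 1; otherwise clamp every
    // out-of-range value (<= 0 or > n) to 1 so it can't hurt us later.
    bool has_one = false;
    for (int x : nums) if (x == 1) { has_one = true; break; }
    if (!has_one) return 1;
    for (int& x : nums) if (x <= 0 || x > n) x = 1;
    // Second pass: for each value v, mark index v-1 as "seen" by negating it.
    for (int i = 0; i < n; i++) {
        int v = abs(nums[i]);
        if (nums[v - 1] > 0) nums[v - 1] = -nums[v - 1];
    }
    // Third pass: the first index still positive means i+1 was never seen.
    for (int i = 0; i < n; i++) if (nums[i] > 0) return i + 1;
    return n + 1; // every value 1..n is present
}
```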
https://www.youtube.com/watch?v=9lQnt4p7_nE
Implement your own power function. The following is the naive algorithm and takes too much time:
class Solution { public: double myPow(double x, int n) { double ans = 1; if(n == 0) return 1; for(int i = 1; i <= abs(n); i++) { ans = x * ans; } if(n < 0) ans = 1/ans; return ans; } };
class Solution { public: double fastPow(double x, long long n) { if ( n == 0) return 1.0; double half = fastPow(x, n / 2); if( n % 2 == 0) { return half * half; } else { return half * half * x; } } double myPow(double x, int n) { long long N = n; if(N < 0) { x = 1/x; N = -N; } return fastPow(x, N); } };
Did a vector erase first but that wasn't ideal. I then learned how to do it with a two pointer approach. The first pointer is a “last valid” location and the second pointer sweeps through the array. As it sweeps and finds good ones, it swaps them into the last valid slot. The last valid pointer then becomes my new array size.
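A sketch of that two pointer sweep (assuming the LeetCode Remove Element signature; the swap keeps the good elements up front):

```cpp
#include <vector>
#include <utility>
using namespace std;

int removeElement(vector<int>& nums, int val) {
    int last = 0; // next slot for a "good" element
    for (int i = 0; i < (int)nums.size(); i++) {
        if (nums[i] != val) {
            swap(nums[last], nums[i]); // move the keeper into the valid slot
            last++;
        }
    }
    return last; // new logical size of the array
}
```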
Suppose an array sorted in ascending order is rotated at some pivot unknown to you beforehand. You are given a target value to search. If found in the array return true, otherwise return false. Version I has no duplicates, and version II has duplicates.
We deal with this problem by splitting the search space in half, into a left and a right half. One of these halves will be sorted, which we can use to check whether the number we are looking for is in it or not.
Cases:
To deal with duplicates we elegantly add a 4th case as an else that decrements the right pointer by one. For this to work we also have to add a check in the first case to see if the right pointer is the number we are looking for.
Code:
int search(vector<int>& nums, int target) { int left = 0, right = nums.size() - 1; // For this type of binary search we need to handle the case where left is the same as right while(left <= right){ int mid = left + (right - left) / 2; // Check if element is in the middle of the right one // We add the right check for the duplicate if(nums[mid] == target || nums[right] == target){ return true; } // pivot is in left. right is sorted else if(nums[left] > nums[mid]){ // check if we are in the right if(nums[mid] < target && target <= nums[right] ){ left = mid + 1; }else{ right = mid - 1; } } // pivot is in right. left is sorted else if(nums[mid] > nums[right]){ if(nums[left] <= target && target < nums[mid]){ right = mid - 1; } else{ left = mid + 1; } // nums[left] is equal or smaller than nums[mid] // nums[mid] is equal or smaller than nums[right] // This check is for dealing with duplicates. }else { right--; } } return false; }
An array sorted in ascending order is rotated at some pivot unknown to you beforehand. Find the minimum element. The array may contain duplicates. e.g. [4,5,6,7,0,1,2]
This problem groups together a bunch of subproblems: binary search, finding a pivot, and dealing with duplicates. The trap here is the edge cases, such as an array of all duplicates.
The solution is to break it down into its concise cases while maintaining correctness of the algorithm by not skipping over any elements in the search.
The solution structure is to maintain a low and a high pointer and iterate in a while loop with the condition that low is smaller than high. At the start of every iteration we calculate the middle index.
The concise cases are as follows:
Then return the low pointer
l   m   h
9 0 1 2 3

l       h
2 2 2 1 2
int findMin(vector<int>& nums) { int low = 0, high = nums.size() - 1; while(low < high){ int mid = low + (high - low) / 2; if(nums[mid] > nums[high]){ low = mid + 1; } else if(nums[mid] < nums[low]){ high = mid; } else{ high -= 1; } } return nums[low]; }
Given a linked list and a value x, partition it such that all nodes less than x come before nodes greater than or equal to x. You should preserve the original relative order of the nodes in each of the two partitions.
Had a hard time with this one because I picked a bad approach from the start. I was moving elements to the back of the list if they met a certain condition. The problem was that I entered an infinite loop: I would get to the nodes I had already pushed back and keep pushing them back.
I then solved it by looking for groups of nodes that are greater than or equal to x, and moving each group after a node that was less than x.
Algorithm:
ListNode* partition(ListNode* head, int x) { ListNode dh(0, head); ListNode *prev = &dh, *cur = head; while(cur != NULL){ if(prev->next->val < x){ prev = prev->next; } if(cur->val >= x){ while(cur->next != NULL && cur->next->val >= x){ cur = cur->next; } if(cur->next == NULL){ return dh.next; } ListNode* next = cur->next; cur->next = next->next; next->next = prev->next; prev->next = next; }else{ cur = cur->next; } } return dh.next; }
There is a MUCH EASIER way to do this! Build two lists, one less than and one greater than, and then connect them in the end!!!!!!! Watch this video.
What a great problem! I first tried a greedy approach: pick the largest element first, set it and its neighbors to 0, then pick the next largest element, and repeat n/2 times. This fails for [2,3,2] as it generates 3 instead of 4.
Dynamic programming with bottom-up processing. If we are asked to solve a problem for the n'th thing, we solve for the i'th thing for every i up to n.
Code Recursive:
int explore(vector<int>& nums, int i){ // base case if(i >= nums.size()){ return 0; } int yes = 0, no = 0; // choose yes = nums[i] + explore(nums, i+2); // unchoose no = explore(nums, i+1); return max(yes, no); } int rob(vector<int>& nums) { return explore(nums, 0); }
Code DP:
int rob(vector<int>& nums) { // solve for the first 3 cases if(nums.size() == 0) { return 0; } if(nums.size() == 1) { return nums[0]; } if(nums.size() == 2) { return max(nums[0], nums[1]); } // create the dp matrix vector<int> dp(nums.size(), 0); // build the dp matrix and seed it with the first two cases dp[0] = nums[0]; dp[1] = max(nums[0], nums[1]); // iterate over the rest for(int i = 2; i < nums.size(); i++){ dp[i] = max(dp[i-2] + nums[i], dp[i-1]); } return dp.back(); }
Code Simple DP (Kadane):
int rob(vector<int>& nums) { int prev = 0, cur = 0; for(auto& num : nums){ // temp keeps the max of one house ago int temp = cur; cur = max(prev + num, cur); prev = temp; } return cur; }
This problem can be solved amazingly easily in a one pass greedy approach. You keep track of a local maximum that can reset, and a global maximum.
int maxSubArray(vector<int>& nums) { int cur_max = nums[0], max_out = nums[0]; for(int i = 1; i < nums.size(); i++) { cur_max = max(nums[i], cur_max + nums[i]); max_out = max(cur_max, max_out); } return max_out; }
This is a tricky algorithm to understand. It touches on dynamic programming. Here's a good write up on it.
The maximum subarray problem can be solved using divide and conquer.
By breaking up the input array into two subarrays of $n/2$ length as follows:
The maximum subarray A[i..j] can only be located in one of 3 places: entirely in the left half, crossing the middle, or entirely in the right half. To solve the problem we find the maximum subarray in each of those 3 places and simply pick the biggest one.
There's lots of little details in the maxCross function, especially when finding the maximum of the subarrays with negative numbers. Where you start on the mid line counts: you have to go mid to the left and mid+1 to the right. If you go mid-1 to the left you're screwed.
Find-Max-Crossing-Subarray Algorithm
int maxCross(vector<int>& nums, int& l, int& mid, int& r) {
    // We don't have to do bounds checks because we know l won't equal r;
    // that's checked for in the caller. With that knowledge we can set
    // max_l and max_r both to INT_MIN because there is going to be at
    // least one element on each side. You could also seed them with
    // nums[mid] and nums[mid+1].
    int max_l = INT_MIN, max_r = INT_MIN;
    // find max left
    int sum = 0;
    for (int left = mid; left >= l; left--) {
        sum += nums[left];
        max_l = max(sum, max_l);
    }
    // find max right
    sum = 0;
    for (int right = mid + 1; right <= r; right++) {
        sum += nums[right];
        max_r = max(sum, max_r);
    }
    // return the combined sum
    return max_l + max_r;
}

int maxSub(vector<int>& nums, int l, int r) {
    if (l == r) return nums[l];
    int mid = (l + r) / 2;
    int max_left  = maxSub(nums, l, mid);
    int max_right = maxSub(nums, mid + 1, r);
    int max_cross = maxCross(nums, l, mid, r);
    return max(max_left, max(max_right, max_cross));
}

int maxSubArray(vector<int>& nums) {
    if (nums.size() == 0) return 0;
    return maxSub(nums, 0, nums.size() - 1);
}
This problem can also be solved very easily with a greedy version. Maximum Subarray Greedy
There are variations to the Kadane's algorithm. Here is a buy and sell stock. The idea is to keep track of the minimum price, and then check the current minimum against the current price. You then check that potential profit against a running max_profit variable.
int maxProfit(vector<int>& prices) { int max_profit = 0, min_price = INT_MAX; for(int i = 0; i < prices.size(); i++) { min_price = min(prices[i], min_price); max_profit = max(max_profit, prices[i] - min_price); } return max_profit; }
Very interesting problem that requires a clever extension of Kadane. The trick here is that negative numbers can flip a minimum into a maximum.

You keep track of a min_so_far, which is the min of the current number, the current number times min_so_far, and the current number times max_so_far. You also keep a max_so_far, which is the max of those same three. Then you keep a result max, which you max against max_so_far.
Amazingly you get to test every possible subarray in one pass. Really awesome!
int maxProduct(vector<int>& nums) {
    int min_so_far = nums[0], max_so_far = nums[0], result = nums[0];
    for (int i = 1; i < (int)nums.size(); i++) {
        // compute the new max into a temp so min_so_far still sees the old value
        int temp_max = max(nums[i], max(nums[i] * min_so_far, nums[i] * max_so_far));
        min_so_far = min(nums[i], min(nums[i] * min_so_far, nums[i] * max_so_far));
        max_so_far = temp_max;
        result = max(max_so_far, result);
    }
    return result;
}
Very cool little problem that you solve by creating a “product list” which is similar to a prefix sum. You do a pass from the left to build this list of left-side products, and then come in from the right one element at a time. You rely on the fact that out[i] = right_prod * out[i - 1], where right_prod is the running product from the right and out[i - 1] still holds the product of everything to the left of i.
vector<int> productExceptSelf(vector<int>& nums) { vector<int> out(nums.size(), 0); int left_prod = 1; for(int i = 0; i < nums.size() - 1; i++) { left_prod *= nums[i]; out[i] = left_prod; } int right_prod = 1; for(int i = nums.size() - 1; i > 0; i--) { out[i] = right_prod * out[i - 1]; right_prod *= nums[i]; } out[0] = right_prod; return out; }
Given an array of n positive integers and a positive integer s, find the minimal length of a contiguous subarray of which the sum ≥ s. If there isn't one, return 0 instead.
I solved this problem using a two pointer approach. The two pointers bound a subarray whose sum is updated as the pointers move. The trick is how to advance the pointers: the right pointer moves up if the sum doesn't meet the condition, and the left pointer moves up while the sum meets the condition.
I also worried about the case where the left pointer advances past the right pointer, but that is actually fine. The left pointer can only advance past the right pointer once, at which point the sum will be 0 and invalidate the case.
Algorithm:
FUNCTION minSubarray: take in NUMS array and number S
  LEFT = 0, MIN_LENGTH = 0, SUM = 0
  iterate RIGHT from 0 to NUMS size
    SUM += NUMS[RIGHT]
    while SUM is greater than or equal to S
      if MIN_LENGTH is 0 or RIGHT - LEFT + 1 is smaller, MIN_LENGTH = RIGHT - LEFT + 1
      remove NUMS[LEFT] from SUM
      move up LEFT pointer
  return MIN_LENGTH
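The algorithm above as a C++ sketch (LeetCode Minimum Size Subarray Sum signature assumed):

```cpp
#include <vector>
using namespace std;

int minSubArrayLen(int s, vector<int>& nums) {
    int left = 0, sum = 0, min_length = 0;
    for (int right = 0; right < (int)nums.size(); right++) {
        sum += nums[right];
        // while the window meets the condition, record it and shrink
        while (sum >= s) {
            int len = right - left + 1;
            if (min_length == 0 || len < min_length) min_length = len;
            sum -= nums[left++];
        }
    }
    return min_length; // 0 means no valid window exists
}
```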
Given a string, find the length of the longest substring without repeating characters.
Find the longest substring without repeating characters. Cute use of an unordered_set to keep track of seen characters while implementing two pointers for a sliding window. The sliding window is your valid substring. You have a left and a right pointer. Advance the right pointer and if the character is not in the set, great, ans = max(right - left, ans). If the character is found in the set, remove the character at the left pointer, advance the left, and try again. The point is you keep advancing the left pointer till you're good to advance the right. Worst case, the left pointer goes all the way to the right and you start with an empty set. Cool!
int lengthOfLongestSubstring(string s) { unordered_set<char> seen; int i = 0, j = 0, n = s.length(), ans = 0; while(i < n && j < n) { if( seen.find(s[j]) == seen.end() ) { seen.insert(s[j++]); ans = max(ans, j - i); } else { seen.erase(s[i++]); } } return ans; }
Given a string S and a string T, find the minimum window in S which will contain all the characters in T, in O(n) complexity.

You can solve this problem easily (though not in O(n)) with two maps, comparing the count of each letter in the dictionary map and the window map as you slide the window. The tricky part of getting to O(n) is a “factor” variable that counts how many unique letters currently have their required counts met.

You keep a running factor for the window and run logic on additions and removals of characters. If the character you add brings its window count up to that character's required count, you increment the factor. If removing a character drops it to one less than the required count, you decrement the factor. This gives you an O(1) check for window validity.
Algorithm:
build a map called DIC of letters and counts from the t string
MIN_SIZE = 0, START = 0
// FACTOR is the number of unique characters whose required count is met
pointers LEFT = 0, RIGHT = -1
while RIGHT is less than length - 1
  advance RIGHT and add its character
  while window valid // while FACTOR equals size of DIC
    check if MIN_SIZE is 0 or greater than RIGHT - LEFT + 1
      START = LEFT
      update MIN_SIZE
    remove character at LEFT, advance LEFT
if MIN_SIZE == 0, return ""
return s.substr(START, MIN_SIZE)

add a character:
  if char isn't in DIC, return
  WINDOW[char]++
  if WINDOW[char] == DIC[char], FACTOR++

remove a character:
  if char isn't in WINDOW, return
  WINDOW[char]--
  if WINDOW[char] == DIC[char] - 1, FACTOR--
Code
class Solution { public: unordered_map<char, int> dict, window; int factor; void addChar(char c){ if(dict.count(c) == 0){ return; } window[c]++; if(dict[c] == window[c]){ factor++; } } void removeChar(char c){ if(window.count(c) == 0){ return; } window[c]--; if(dict[c] - 1 == window[c]){ factor--; } } string minWindow(string s, string t) { if(t.length() == 0 || s.length() == 0){ return ""; } for(auto& c : t){ dict[c]++; } factor = 0; int start = 0, min_size = 0; int left = 0, right = -1; while(right < (int)s.length() - 1){ //advance right right++; addChar(s[right]); while(factor == dict.size()){ int win_size = right - left + 1; if(win_size < min_size || min_size == 0){ start = left; min_size = win_size; } removeChar(s[left]); left++; } } return (min_size == 0) ? "" : s.substr(start, min_size); } };
Given a string s and a non-empty string p, find all the start indices of p's anagrams in s.
Nice and mellow problem! Make a key from the pattern's characters and counts, keep a sliding window of characters and counts on the string, and compare it to the key. If it's a match, add the index to the output.
I learned that you can just use the == operator on 2d vectors and unordered maps.
Create a KEY of characters and counts
Create a WINDOW key of empty characters and counts, and an OUTPUT list
Iterate i from 0 to end of input string s
  add character at i to WINDOW key
  if i - p.length() is >= zero, remove that character from WINDOW key
  compare WINDOW and KEY; if they match, add i - p.length() + 1 to OUTPUT
Return OUTPUT
Vector based code:
vector<int> findAnagrams(string s, string p) {
    vector<vector<int>> key = vector(26, vector<int>(1, 0));
    vector<vector<int>> window = vector(26, vector<int>(1, 0));
    // build our key
    for(auto& c : p) key[c - 'a'][0]++;

    vector<int> out;
    for(int i = 0; i < (int)s.length(); i++) {
        window[s[i] - 'a'][0]++;
        int last_valid = i - (int)p.length();
        if(last_valid >= 0) window[s[last_valid] - 'a'][0]--;
        // == does a deep comparison on the 2D vectors
        if(key == window) out.push_back(last_valid + 1);
    }
    return out;
}
Map based code:
vector<int> findAnagrams(string s, string p) {
    unordered_map<char, int> key, window;
    // build our key
    for(auto& c : p) key[c]++;

    vector<int> out;
    for(int i = 0; i < (int)s.length(); i++) {
        window[s[i]]++;
        int last_valid = i - (int)p.length();
        if(last_valid >= 0) {
            window[s[last_valid]]--;
            if(window[s[last_valid]] == 0) window.erase(s[last_valid]);
        }
        if(key == window) out.push_back(last_valid + 1);
    }
    return out;
}
Given a string s, return the longest palindromic substring in s.
This is an O(n²) solution. There is no easy way to solve this problem in O(n) time. The O(n) solution is here.
You solve this by doing the following:
I fell into a bit of a pitfall with this problem when coding the middle-out palindrome checker. I kept getting tripped up on what to set as the condition of the while loop. I kept getting off by one and had a hard time coming up with a version that didn't. Should I check the current case and then increment the pointers and invalidate them? What about the initial case?
If yes, then simply subtract one from the output! The assumption I needed to grasp is that if we check whether the current case is valid and then increment, we will always have a result that is one bigger. Which is fine, we just need to correct for it.
Algorithm:
Out isPali(string, left, right)
    while char at left and char at right are the same, and left and right are in bounds
        move left left and right right
    return Out with index of left pointer + 1, and count of r - l - 1

struct POD
    start
    len

for every letter
    single = isPali(index, index)
    double = isPali(index - 1, index)
    pick the bigger one and max against global
return substring
Code:
struct Out{
    int index;
    int len;
    Out(int i = 0, int l = 0){
        index = i;
        len = l;
    }
};

Out pal(string& s, int l, int r){
    // while in bounds and same char
    while(l >= 0 && r < (int)s.length() && s[l] == s[r]){
        l--;
        r++;
    }
    // Shift the start pointer back because it's gonna be off by one.
    // r will be off by one to the right, so subtract l from r and then dec 1
    return Out(l + 1, r - l - 1);
}

string longestPalindrome(string s) {
    // first is index, second is len
    Out max;
    // middle-out pass from every letter
    for(int i = 0; i < (int)s.length(); i++){
        // Single character check
        Out cur1 = pal(s, i, i);
        // Double character check
        Out cur2 = pal(s, i - 1, i);
        // Pick the biggest one
        Out cur = (cur1.len > cur2.len) ? cur1 : cur2;
        max = (max.len > cur.len) ? max : cur;
    }
    return s.substr(max.index, max.len);
}
Given a string s, partition s such that every substring of the partition is a palindrome. Return all possible palindrome partitioning of s.
The trickiest part of this problem is implementing a way of partitioning the string in a sliding window fashion. The easiest way to do this is to create new substrings A and B, but this takes time and memory that is unnecessary, because our input string and two pointers, start and end, are all we need to define any substring in S.
Algorithm:
FUNCTION isPalindrome(string s, left pointer, right pointer)
    while left pointer is less than right pointer
        if character under left pointer is not the same as the right pointer character
            return false
    return true

FUNCTION explore(string S, int START, array of string CUR, array of array of strings OUT)
    if START is equal to string length
        push CUR into OUT, and return
    Iterate from I = START to S.length()
        if the string from START to I is NOT a palindrome, continue
        create a substring that goes from START to I and push it into CUR
        explore S, I + 1, CUR, OUT
        pop the last element of CUR

Call explore with START of 0
Code:
// return true if string is palindrome, false otherwise
bool isPalin(string& in, int start, int end) {
    int l = start, r = end;
    while(l < r) {
        if(in[l++] != in[r--]) return false;
    }
    return true;
}

// partition string into all possible partitions
void part(string& s, int start, vector<string>& cur, vector<vector<string>>& out) {
    if((int)s.length() == start) {
        out.push_back(cur);
        return;
    }
    // partitioning
    for(int i = start; i < (int)s.length(); i++) {
        if(!isPalin(s, start, i)) continue;
        cur.push_back(s.substr(start, i - start + 1));
        part(s, i + 1, cur, out);
        cur.pop_back();
    }
}

vector<vector<string>> partition(string s) {
    vector<vector<string>> out;
    vector<string> cur;
    part(s, 0, cur, out);
    return out;
}
Easiest is to brute force. It would be n²*k where k is the length of the string.
You solve this by creating a hashtable of the words in the array. Then you have these cases to deal with:
The way to deal with prepend and append, is to take the word that you have, pop off a character and save that to a new word. If the new word is a palindrome, and you can find the reversed version of the original word without the popped off bits, you good bro.
Append Example:
// Matching the following:
word1 = petonot
word2 = ep
word3 = tonote
reversed_word1 = tonotep

You keep popping off and looking, and if what you popped off by itself is a palindrome, and you found the reverse of what's left, you're good:

| Reversed | Popped off | Popped off Palindrome? | Found Reversed? | Reversed Palindrome? | Match? |
|----------+------------+------------------------+-----------------+----------------------+--------|
| tonotep  |            | Yes                    | No              | No                   | No     |
| onotep   | t          | Yes                    | No              | No                   | No     |
| notep    | to         | No                     | No              | No                   | No     |
| otep     | ton        | No                     | No              | No                   | No     |
| tep      | tono       | No                     | No              | No                   | No     |
| ep       | tonot      | Yes                    | Yes             | No                   | Yes    |
| p        | tonote     | No                     | No              | Yes                  | Yes    |
The hashtable gives the O(1) lookup speed, and avoids the linear search.
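Putting the pieces above together, here is a rough sketch of how the pop-off plus hashtable lookup could look. Function and variable names are my own, and this glosses over some empty-string edge cases:

```cpp
#include <string>
#include <unordered_map>
#include <vector>
using namespace std;

// true if s[l..r] reads the same forwards and backwards
bool isPal(const string& s, int l, int r) {
    while (l < r) if (s[l++] != s[r--]) return false;
    return true;
}

// All pairs (i, j) such that words[i] + words[j] is a palindrome.
vector<vector<int>> palindromePairs(vector<string>& words) {
    unordered_map<string, int> dict;   // word -> index, the O(1) lookup
    for (int i = 0; i < (int)words.size(); i++) dict[words[i]] = i;

    vector<vector<int>> out;
    for (int i = 0; i < (int)words.size(); i++) {
        const string& w = words[i];
        int n = w.length();
        // Split w into w[0..j) + w[j..n) and handle prepend/append cases.
        for (int j = 0; j <= n; j++) {
            // Prepend case: left part is a palindrome, look up the
            // reverse of the right part.
            if (isPal(w, 0, j - 1)) {
                string rev(w.rbegin(), w.rbegin() + (n - j)); // reverse of w[j..n)
                auto it = dict.find(rev);
                if (it != dict.end() && it->second != i)
                    out.push_back({it->second, i});
            }
            // Append case: right part is a palindrome, look up the
            // reverse of the left part. j < n avoids double counting.
            if (j < n && isPal(w, j, n - 1)) {
                string rev(w.rbegin() + (n - j), w.rend()); // reverse of w[0..j)
                auto it = dict.find(rev);
                if (it != dict.end() && it->second != i)
                    out.push_back({i, it->second});
            }
        }
    }
    return out;
}
```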
Given two strings s and t, determine if they are isomorphic. Two strings are isomorphic if the characters in s can be replaced to get t. All occurrences of a character must be replaced with another character while preserving the order of characters. No two characters may map to the same character but a character may map to itself.
This wasn't THAT easy! I created a map from one string to the other, but then realized that doesn't work for all cases, because two different characters can't map to the same character. I then created two maps, one for each direction, but the code is pretty hard to read.
A much more elegant solution is to simply run the algorithm twice, once from each side.
if char lengths aren't the same, return false
keep a dictionary array of characters
for i = 0 to s size
    tchar = t[i], schar = s[i]
    if dictionary[tchar] is equal to some flag character, set it equal to schar
    if dictionary[tchar] doesn't equal schar, return false
return true
bool isIsomorphic(string s, string t) {
    vector<char> smap(256, '*');
    vector<char> tmap(256, '*');
    if(s.length() != t.length()){ return false; }
    for(int i = 0; i < (int)s.length(); i++){
        char tchar = t[i], schar = s[i];
        if(tmap[tchar] != schar && smap[schar] != tchar){
            if(tmap[tchar] == '*' && smap[schar] == '*'){
                tmap[tchar] = schar;
                smap[schar] = tchar;
            }
            else{
                return false;
            }
        }
    }
    return true;
}

// NEW WAY:
bool isIso(string s, string t) {
    vector<char> map(256, '*');
    if(s.length() != t.length()){ return false; }
    for(int i = 0; i < (int)s.length(); i++){
        char tchar = t[i], schar = s[i];
        if(map[tchar] == '*'){ map[tchar] = schar; }
        if(map[tchar] != schar){ return false; }
    }
    return true;
}

bool isIsomorphic(string s, string t) {
    return isIso(s, t) && isIso(t, s);
}
This is a backtracking algorithm where you either close or open a parenthesis.
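As a rough sketch of that open/close backtracking (function names are mine, not from a worked solution here): open a parenthesis while any remain, close one whenever more are open than closed.

```cpp
#include <string>
#include <vector>
using namespace std;

// Backtracking: at each step, either open (if we still can) or
// close (if more are open than closed).
void build(int open, int close, int n, string& cur, vector<string>& out) {
    if ((int)cur.length() == 2 * n) {
        out.push_back(cur);  // used all n pairs, record this combination
        return;
    }
    if (open < n) {
        cur.push_back('(');
        build(open + 1, close, n, cur, out);
        cur.pop_back();  // backtrack
    }
    if (close < open) {
        cur.push_back(')');
        build(open, close + 1, n, cur, out);
        cur.pop_back();  // backtrack
    }
}

vector<string> generateParenthesis(int n) {
    vector<string> out;
    string cur;
    build(0, 0, n, cur, out);
    return out;
}
```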
Given a characters array tasks, return the least number of units of time that the CPU will take to finish all the given tasks.
This problem needs to be broken down into some basic parts:
There are some bits in the algorithm that look tricky, like the min's and max's but if you go back to the basic keys for solving this problem, it will make sense!
Function: leastInterval(tasks, n cooldown)
    Create a vector of frequencies
    Sort the frequencies from high to low
    idle_time = (maximum frequency - 1) * n
    iterate i from 1 to end
        Subtract from idle_time the min of this task's frequency and max freq - 1
    return tasks size + max(idle time, 0) in case it's negative
int leastInterval(vector<char>& tasks, int n) {
    vector<int> freq(26, 0);
    for(auto& task : tasks){
        freq[task - 'A']++;
    }
    sort(freq.begin(), freq.end(), greater<int>());
    int idle_time = (freq[0] - 1) * n;
    for(size_t i = 1; i < freq.size(); i++){
        idle_time -= min(freq[0] - 1, freq[i]);
    }
    return tasks.size() + max(idle_time, 0);
}
Took me a little bit of time to get my head around how to do it, but it was not bad at the end! You pass a right node into each explore iteration. The left child gets passed the right child, and the right child gets passed either NULL or the left child of the parent's right neighbor. That's it!
Algorithm:
Code:
void explore(Node* node, Node* pr) {
    if(node == NULL) return;
    node->next = pr;
    // if we have children:
    if(node->left && node->right) {
        // left child
        explore(node->left, node->right);
        // right child:
        if(pr == NULL) explore(node->right, NULL);
        else explore(node->right, pr->left);
    }
}

Node* connect(Node* root) {
    explore(root, NULL);
    return root;
}
This one took a bit of time to wrap my head around. I originally started by trying to check for a node with no grandchildren. Remember, if you are checking for grandchildren in a recursive tree traversal, you are doing something wrong!
The trick here is that you need to break the problem down to a simple bottom base node, and work out the 4 different conditions:
Left present, Right null
Left null, Right present
Left and Right present
Left and Right null
The second important thing is to realise you need to return a tail, which will be the lowest right node of this newly shuffled subtree. When a node is null, it should return null, and when it has no children, it should return itself. Those are the two base case returns.
We then have to deal with the return of the node that received either of those two returns from left and right children.
Algorithm:
Code:
TreeNode* flatten_child(TreeNode* node) {
    if(node == NULL) return NULL;
    // no children to move, this is the right tail
    if(node->left == NULL && node->right == NULL) return node;

    TreeNode* left_tail  = flatten_child(node->left);
    TreeNode* right_tail = flatten_child(node->right);

    if(node->left != NULL) {
        left_tail->right = node->right;
        node->right = node->left;
        node->left = NULL;
    }
    // return the right most real tail
    if(right_tail == NULL) return left_tail;
    return right_tail;
}
Convert a Binary Search Tree to a sorted Circular Doubly-Linked List in place.
You have to realize that you need to flatten the child nodes and then return both head and tail up to the caller. The caller then rearranges them in the right way.
Algorithm:
FUNC flatten(node) return: Head and Tail of flattened list
    if node is null return null head and null tail
    (Left Head, Left Tail) = flatten(Left child)
    (Right Head, Right Tail) = flatten(Right child)
    If Left Tail not null, connect Left Tail with node
    If Right Head not null, connect Right Head with node
    HeadTail out;
    // Set the proper Head
    out.head = (Left Head != NULL) ? Left Head : node;
    // Set the proper Tail
    out.tail = (Right Tail != NULL) ? Right Tail : node;
    return out

FUNC caller(node) return: node
    if node is null return
    HeadTail = flatten(node)
    Head left = tail
    Tail right = head
    return head
Code:
// Our return object containing a head and tail
class HeadTail{
public:
    Node* head;
    Node* tail;
    HeadTail(){}
    HeadTail(Node* _head, Node* _tail) : head(_head), tail(_tail) {}
};

class Solution {
public:
    HeadTail flatten(Node* node){
        if(node == NULL){
            return HeadTail(NULL, NULL);
        }
        HeadTail leftHT = flatten(node->left);
        HeadTail rightHT = flatten(node->right);

        if(leftHT.tail != NULL){
            leftHT.tail->right = node;
            node->left = leftHT.tail;
        }
        if(rightHT.head != NULL){
            rightHT.head->left = node;
            node->right = rightHT.head;
        }

        HeadTail out;
        out.head = (leftHT.head != NULL) ? leftHT.head : node;
        out.tail = (rightHT.tail != NULL) ? rightHT.tail : node;
        return out;
    }

    Node* treeToDoublyList(Node* root) {
        if(root == NULL){
            return NULL;
        }
        HeadTail out = flatten(root);
        out.head->left = out.tail;
        out.tail->right = out.head;
        return out.head;
    }
};
Implement an iterator over a binary search tree (BST). Your iterator will be initialized with the root node of a BST. Calling next() will return the next smallest number in the BST.
Struggling hard with this one. First I tried a simple stack, that I would build on initializing that would go down to the current smallest element. This breaks down when having to explore right children of a parent node.
Then I thought about two stacks, but that's ridiculous and does not solve the problem.
Then I tried to think of some kind of process engine, that would change logic based on the step. This was not good.
Then I figured out a way with a stack that tracked a “seen” variable for each node! Which was unnecessary! I was hung up on what to do when encountering the parent node again.
class BSTIterator {
public:
    enum Status{ UNSEEN, SEEN };
    stack< pair<TreeNode*, Status> > stack;

    // look for minimum.
    void buildMin(TreeNode* node) {
        if(node == NULL) return;
        stack.push(make_pair(node, UNSEEN));
        if(node->left) {
            buildMin(node->left);
        }
    }

    BSTIterator(TreeNode* root) {
        buildMin(root);
    }

    /** @return the next smallest number */
    int next() {
        // we have not seen this node
        if(stack.top().second == UNSEEN) {
            stack.top().second = SEEN;
            int out = stack.top().first->val;
            buildMin(stack.top().first->right);
            return out;
        }
        // node is seen
        stack.pop();
        return next();
    }

    /** @return whether we have a next smallest number */
    bool hasNext() {
        while(stack.size() > 0 && stack.top().second == SEEN) {
            stack.pop();
        }
        return stack.size() > 0;
    }
};
Algorithm:
FUNC: BUILDMIN(node)
    if node NULL return
    Add node to stack
    if node has left child -> BUILDMIN(left child)

FUNC: NEXT()
    Node OUT = stack top
    pop stack top
    BUILDMIN(OUT->right child)
    return OUT's val

FUNC: HASNEXT()
    return true if size of stack is greater than 0, false otherwise
Code:
class BSTIterator {
public:
    stack<TreeNode*> stack;

    // look for minimum.
    void buildMin(TreeNode* node) {
        if(node == NULL) return;
        stack.push(node);
        if(node->left) {
            buildMin(node->left);
        }
    }

    BSTIterator(TreeNode* root) {
        buildMin(root);
    }

    /** @return the next smallest number */
    int next() {
        TreeNode* out = stack.top();
        stack.pop();
        buildMin(out->right);
        return out->val;
    }

    /** @return whether we have a next smallest number */
    bool hasNext() {
        return stack.size() > 0;
    }
};
Given a binary tree, imagine yourself standing on the right side of it, return the values of the nodes you can see ordered from top to bottom.
Do a level order traversal, and push the right-most element of each level into an output array.
Algorithm:
create a queue
push the first node into the queue
while queue isn't empty
    level size = queue size
    take the back of the queue and add it to the output
    for(i to level size)
        cur = pop front of queue
        if cur has children, put them in the back of the queue
Code:
vector<int> rightSideView(TreeNode* root) {
    // output result
    vector<int> out;
    if(root == NULL) return out;

    // level queue
    queue<TreeNode*> levelq;
    // push in first element, which is the root
    levelq.push(root);

    while(!levelq.empty()) {
        // add the right most element to the output
        out.push_back(levelq.back()->val);
        // we need to count how many elements are in this level, so we can
        // get the right most element
        int level_size = levelq.size();
        // pop off all the elements from this level
        while(level_size-- > 0) {
            // set cur to be the front of the queue and pop it
            TreeNode* cur = levelq.front();
            levelq.pop();
            // if we have children, starting with the left, add them to the queue
            if(cur->left) levelq.push(cur->left);
            if(cur->right) levelq.push(cur->right);
        }
    }
    return out;
}
We could also explore the tree in a depth first way by keeping track of levels. We could keep a map of results, so that if we end up at that level again, we overwrite whatever is in that result map.
I used a map because I was worried that we would end up touching nodes we thought were right-most but weren't valid for that level. That's just not true, and silly on my part, because I had the firm (and correct) assumption that the first node touched at a level is also the right-most node at that level, as long as we go right first.
So all we have to do is start from the right and track levels!!! We can track levels in a really cutesy way too.
Algorithm:
Explore function that takes in node, level, ref to output array
    if node is null return
    if level equals out size, push node val into out
    if right child, explore right child, level + 1, output
    if left child, explore left child, level + 1, output
Code:
void explore(TreeNode* node, int level, vector<int>& out) {
    if(node == NULL) return;
    // Right most node we ever touch at this level!
    if(level == (int)out.size()) out.push_back(node->val);
    // Check the children, right first
    if(node->right) explore(node->right, level + 1, out);
    if(node->left)  explore(node->left,  level + 1, out);
}

vector<int> rightSideView(TreeNode* root) {
    vector<int> out;
    explore(root, 0, out);
    return out;
}
Given a binary tree, find the lowest common ancestor (LCA) of two given nodes in the tree.
This was a fun problem and I solved it in a cutesy way with two lists to keep a trace of the lineage to target node.
Algorithm:
Create two trace lists, one for p and one for q
explore p and build the p trace list
explore q and build the q trace list
while the two front elements of the traces equal each other
    set out to the front element
    pop the first element off both traces

// FUNCTION Explore starting from a node with a target and a reference to the trace list
If node is null, return false
Push our node on the trace list
if node == target, that means we found it! return true
Explore the left and the right child. If either returns true, return true
We haven't found the target from this root, pop our element off the trace list
return false
Code:
bool explore(TreeNode* node, TreeNode* target, list<TreeNode*>& trace) {
    if(node == NULL) return false;
    // push ourselves on the trace
    trace.push_back(node);
    if(node == target) return true;
    if(explore(node->left, target, trace) || explore(node->right, target, trace)) {
        return true;
    }
    // not found in either child, get rid of our entry from the trace
    trace.pop_back();
    return false;
}

TreeNode* lowestCommonAncestor(TreeNode* root, TreeNode* p, TreeNode* q) {
    list<TreeNode*> p_trace, q_trace;
    explore(root, p, p_trace);
    explore(root, q, q_trace);

    // init out; guard against one trace running out when p is an ancestor of q
    TreeNode* out = NULL;
    while(!p_trace.empty() && !q_trace.empty() && p_trace.front() == q_trace.front()) {
        out = p_trace.front();
        p_trace.pop_front();
        q_trace.pop_front();
    }
    return out;
}
Given a binary tree, return all root-to-leaf paths.
I fell into a little pitfall with this problem! We have to travel all the way down to the leaf, building a string as we go, and then trigger an append to an output array when a node has no children.
Algorithm:
Explore with a node, a string, and a reference to the output array
    If the node is null, JUST RETURN (we can be in a node with only one child!)
    if our string is NOT empty, add the "->"
    add the number of our node to the string
    If we don't have any children, add the string to the output array and RETURN
    Explore the left child
    Explore the right child
Code:
void explore(TreeNode* node, string path, vector<string>& out) {
    // A null node by itself is NOT a leaf!
    if(node == NULL) {
        return;
    }
    // Add the arrow if the string isn't empty
    if(path != "") path += "->";
    // Add our number to the string
    path += to_string(node->val);
    // If the node doesn't have children, we have a leaf!
    if(node->left == NULL && node->right == NULL) {
        out.push_back(path);
        return;
    }
    // Explore the left and the right
    explore(node->left, path, out);
    explore(node->right, path, out);
}
Had a little hiccup understanding the definition, cause I dove right in thinking the root node added up the path. READ THE QUESTION. UNDERSTAND IT.
Go down to the null, and return 0, then add 1 to that as you bubble up. At each node, check the max path on the left and the max path on the right, and check whether their sum is bigger than the output.
Algorithm:
Explore node with out ref
    if node is null, return 0
    left path is explore left node's max path
    right path is explore right node's max path
    out is max of out and left path + right path
    return max of left path + 1, right path + 1
Code:
int explore(TreeNode* node, int& out) {
    if(node == NULL) return 0;
    int lp = explore(node->left, out);
    int rp = explore(node->right, out);
    out = max(out, lp + rp);
    return max(lp + 1, rp + 1);
}

int diameterOfBinaryTree(TreeNode* root) {
    int out = 0;
    explore(root, out);
    return out;
}
You're given a binary tree and need to find the number of paths along the tree that add up to a number.
Dima helped me out with an algorithm that keeps an array of sums. At every node, you add the value of the node to every element in that array, and also push back the value of the node itself. At each node you also iterate through the array and check to see if any values add to up the sum that is requested.
int helper(TreeNode* node, int target, vector<int> sums) {
    if(node == NULL) return 0;
    int out = 0;
    // add current node value to all elements of sums
    // and also check if they are valid solutions
    for(auto& entry : sums) {
        entry += node->val;
        if(entry == target) out++;
    }
    // append this element's value to sums
    sums.push_back(node->val);
    // check if node itself is a possible solution on its own.
    if(node->val == target) out++;
    // return out and children solutions
    return out + helper(node->left, target, sums) + helper(node->right, target, sums);
}

int pathSum(TreeNode* root, int sum) {
    vector<int> sums;
    return helper(root, sum, sums);
}
There is another way to solve this problem with a hashmap based prefix sum. I don't understand how this works. Here is the code:
Also funny, I found some poor guy on the internet who also thinks this is magic.
class Solution {
public:
    int count = 0;
    int k;
    unordered_map<int,int> h;

    void helper(TreeNode* node, int cur_sum) {
        if(node == NULL) return;
        cur_sum += node->val;
        if(cur_sum == k) count++;
        if(h.count(cur_sum - k) > 0) count += h[cur_sum - k];
        h[cur_sum]++;
        helper(node->left, cur_sum);
        helper(node->right, cur_sum);
        h[cur_sum]--;
    }

    int pathSum(TreeNode* root, int sum) {
        k = sum;
        helper(root, 0);
        return count;
    }
};
Find the longest contiguous subarray that contains an equal number of 0s and 1s.
I first tried using divide and conquer but that doesn't work. Divide and conquer breaks the problem into 3 possible solution locations: left, right, and crossing the middle. This approach fails here because the solution cannot be split up; the sub-solutions are not independent. You can have 8 ones on the left, which is an invalid solution on its own.
So this is another prefix sum with hashmap problem. Man, I really don't get these problems and don't think they'll be asked.
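For reference, a minimal sketch of that prefix-sum-with-hashmap idea (my own code, names mine): treat each 0 as -1 and keep a running sum; if the same sum shows up twice, the slice between those indices has equal 0s and 1s.

```cpp
#include <algorithm>
#include <unordered_map>
#include <vector>
using namespace std;

int findMaxLength(vector<int>& nums) {
    unordered_map<int, int> first; // running sum -> earliest index it was seen
    first[0] = -1;                 // empty prefix before index 0
    int sum = 0, best = 0;
    for (int i = 0; i < (int)nums.size(); i++) {
        sum += (nums[i] == 1) ? 1 : -1;  // count 0s as -1
        auto it = first.find(sum);
        if (it != first.end())
            // same sum seen before: the slice in between is balanced
            best = max(best, i - it->second);
        else
            first[sum] = i;  // remember the earliest index for this sum
    }
    return best;
}
```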
Given a binary tree, check whether it is a mirror of itself (ie, symmetric around its center).
This can be solved in two ways: recursively in a depth first manner, and iteratively in a breadth first manner.
Take in two nodes, which should be reflections of each other. Initially these will be the root's first children, left and right.
If they are both null, return true, great. We don't recurse past this node because there are no children.
If only one is NULL, then return false. The tree is not symmetric.
Then check the following: the values are equal, the inner children are mirrors, and the outer children are mirrors.
Return all three cases &&'ed together.
class Solution{
public:
    bool isMirror(TreeNode* t1, TreeNode* t2) {
        if(t1 == NULL && t2 == NULL) return true;
        if(t1 == NULL || t2 == NULL) return false;
        // now we have to check if the two nodes are mirrors:
        // their values match, their inner children are mirrors,
        // and their outer children are mirrors
        return t1->val == t2->val &&
               isMirror(t1->right, t2->left) &&
               isMirror(t1->left, t2->right);
    }

    bool isSymmetric(TreeNode* root) {
        if(root == NULL) return true;
        return isMirror(root->left, root->right);
    }
};
Use a queue, push in the first two children.
Pop out 2 nodes at a time, and compare them. Push in the children nodes in symmetric pairs.
class Solution {
public:
    bool isSymmetric(TreeNode* root) {
        if(root == NULL) return true;
        queue<TreeNode*> q;
        q.push(root->left);
        q.push(root->right);
        while(q.size() > 0) {
            TreeNode* t2 = q.front(); q.pop();
            TreeNode* t1 = q.front(); q.pop();
            if(t1 == NULL && t2 == NULL) continue;
            if(t1 == NULL || t2 == NULL) return false;
            if(t1->val != t2->val) return false;
            // outer pair
            q.push(t1->left);
            q.push(t2->right);
            // inner pair
            q.push(t1->right);
            q.push(t2->left);
        }
        return true;
    }
};
Given a collection of intervals, merge all overlapping intervals.
Example 1:
Input: intervals = [[1,3],[2,6],[8,10],[15,18]]
Output: [[1,6],[8,10],[15,18]]
Explanation: Since intervals [1,3] and [2,6] overlap, merge them into [1,6].
Algorithm:
If input is less than 2 elements, return it
Sort by starting interval
Put the first interval in the out array
Iterate i from 1 to input end
    min end = min(end of last interval in out, end of input[i])
    if start of input[i] is LESS OR EQUAL to min end
        Overlap, set end of last interval to the max end of the two
    Else
        No overlap, push input[i] into out
Return out
Code:
vector<vector<int>> merge(vector<vector<int>>& in) {
    // Take care of edge case
    if(in.size() < 2) return in;
    // Sort the intervals by start
    sort(in.begin(), in.end(),
         [](vector<int>& a, vector<int>& b) { return a[0] < b[0]; });

    // Build the output array
    vector<vector<int>> out;
    // Add first interval to our output
    out.push_back(in[0]);

    // Iterate from the second element to the finish
    for(int i = 1; i < (int)in.size(); i++) {
        // The max start will always be the new interval's, cause we sorted
        int max_start = in[i][0];
        // Meat of the problem
        int min_end = min(out.back()[1], in[i][1]);
        if(max_start <= min_end)
            out.back()[1] = max(out.back()[1], in[i][1]);
        else
            out.push_back(in[i]);
    }
    return out;
}
You need to do a merge sort in place. One array is big enough to hold both of them. You work with 3 pointers.
One pointer at the end of the long array that will be the output, another pointer at the end of the valid data, and another pointer at the end of the other array.
You take the biggest element and put it in the end of the array and shift the pointers left.
At the end you might still have some shit left in the second array. If you do, just put em in the array.
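The three-pointer dance above could be sketched like this (assuming the usual LeetCode 88 style signature, where nums1 has n trailing slots of padding):

```cpp
#include <vector>
using namespace std;

// Merge nums2 into nums1 in place. nums1 has m valid elements followed
// by n slots of padding; nums2 has n elements. Three pointers, all
// walking right to left.
void merge(vector<int>& nums1, int m, vector<int>& nums2, int n) {
    int write = m + n - 1;  // end of the output array
    int i = m - 1;          // end of the valid data in nums1
    int j = n - 1;          // end of nums2
    // Put the bigger of the two tail elements at the write pointer
    // and shift the pointers left.
    while (i >= 0 && j >= 0)
        nums1[write--] = (nums1[i] > nums2[j]) ? nums1[i--] : nums2[j--];
    // Whatever is left in nums2 gets copied over; leftovers in nums1
    // are already in place.
    while (j >= 0)
        nums1[write--] = nums2[j--];
}
```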
This was a nice and easy one. Idea is to keep a heap of the size that we require. In C++ we would just use a multiset to allow for duplicates. We iterate through the input array and insert elements into the set. If the set is bigger than k, we remove the smallest element from the set. We then return the smallest element of the set which will be the k'th largest element.
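A sketch of that multiset idea (assuming the usual kth-largest signature; this is my own code, not a verified solution from these notes):

```cpp
#include <set>
#include <vector>
using namespace std;

// Keep only the k largest elements seen so far in a multiset
// (which allows duplicates and stays sorted). The smallest of
// those, *window.begin(), is the k'th largest overall.
int findKthLargest(vector<int>& nums, int k) {
    multiset<int> window;
    for (int x : nums) {
        window.insert(x);
        if ((int)window.size() > k)
            window.erase(window.begin()); // drop the smallest element
    }
    return *window.begin();
}
```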
There are 2 different ways of solving this problem: DFT recursively, or BFT iteratively.
Depth First Traversal recursively with an unordered map of seen nodes and copies.
Algorithm:
If node is NULL return NULL
Add node and its copy to the map
Explore Node with Map
    iterate over neighbors A
        if A not in Map
            add copy of A to map
            Explore A with Map
        Push back copy of A to copy of Node's neighbors
Code:
void explore(Node* node, unordered_map<Node*, Node*>& copy_map) {
    // iterate over neighbors of node
    for(auto& A : node->neighbors) {
        if(copy_map.find(A) == copy_map.end()) {
            // if we have NOT seen this node yet
            copy_map[A] = new Node(A->val);
            explore(A, copy_map);
        }
        copy_map[node]->neighbors.push_back(copy_map[A]);
    }
}

Node* cloneGraph(Node* node) {
    if(node == NULL) return NULL;
    unordered_map<Node*, Node*> copy_map;
    copy_map[node] = new Node(node->val);
    explore(node, copy_map);
    return copy_map[node];
}
Breadth First Traversal iteratively with a queue and an unordered map of seen nodes and copies.
Algorithm:
If node is NULL return NULL
Insert node and a copy of that node (with just the value for now) into a map
Push node into a queue
While queue is not empty
    dequeue the front of the queue and call it N
    iterate over the neighbors of N, call each one A
        if A is not in the map
            Insert A and a copy into the map
            Push A into the queue
        add the copy of A to the neighbor list of the copy of N
Return the copy of node
Code:
Node* cloneGraph(Node* node) {
    if(node == NULL) return NULL;
    unordered_map<Node*, Node*> copy_map;
    queue<Node*> q;

    // create entry in copy_map of node -> copy of node
    copy_map[node] = new Node(node->val);
    q.push(node);

    while(!q.empty()) {
        Node* n = q.front();
        q.pop();
        // iterate over adjacent nodes
        for(auto& a : n->neighbors) {
            // if we have not visited this node yet
            if(copy_map.find(a) == copy_map.end()) {
                // make a copy and add it to our map
                copy_map[a] = new Node(a->val);
                // add node to our queue
                q.push(a);
            }
            // we add this newly copied neighbor to the copy of the node we
            // are examining
            copy_map[n]->neighbors.push_back(copy_map[a]);
        }
    }
    return copy_map[node];
}
Graph problem, we explore connected land parts and then mark them as seen. Then we go through the graph and check if we encounter any new unseen land areas. If we do, increment result counter and explore and mark seen :).
I spent a great deal of effort coding a system that will look at all neighboring elements along with diagonals. The question didn't ask this!!! Remember to read the question!!
I also spent a lot of effort trying to create an ordered map/set of pairs. You can't do that easily in C++, so I just modified the input grid.
Algorithm:
Iterate over every element of the graph
    If the element is unseen land
        Increment result counter
        Explore unseen land
            Mark land as seen
            Explore all unseen connected land
Code:
vector< vector<int> > landN(vector<vector<char>>& grid, int& i, int& j) {
    int max_i = grid.size() - 1;
    int max_j = grid[0].size() - 1;
    vector<vector<int>> out;
    // up
    if(i - 1 >= 0 && grid[i-1][j] == '1') out.push_back({i-1, j});
    // left
    if(j - 1 >= 0 && grid[i][j-1] == '1') out.push_back({i, j-1});
    // right
    if(j + 1 <= max_j && grid[i][j+1] == '1') out.push_back({i, j+1});
    // down
    if(i + 1 <= max_i && grid[i+1][j] == '1') out.push_back({i+1, j});
    return out;
}

void explore(vector<vector<char>>& grid, int& i, int& j) {
    // assume we are on a land element
    grid[i][j] = 'X';
    // get a list of neighbors that are land
    vector< vector<int> > neighbors = landN(grid, i, j);
    for(auto n : neighbors) {
        explore(grid, n[0], n[1]);
    }
}

int numIslands(vector<vector<char>>& grid) {
    int islands = 0;
    for(int i = 0; i < (int)grid.size(); i++) {
        for(int j = 0; j < (int)grid[0].size(); j++) {
            if(grid[i][j] == '1') {
                islands++;
                explore(grid, i, j);
            }
        }
    }
    return islands;
}
I don't know you tell me bro.
I did a deep dive in this problem. To solve it, we have to start at a node, color it, and then check to see if any of its neighbors have the same color. Then iterate over the neighbor nodes, if they are uncolored, then color them a different color than their current node.
Algorithm:
Create a color map, key will be vertex index and value will be color or unseen
color: -1 = unseen, 0 one color, 1 another color
Iterate i from 0 to end of graph. Do this to take care of disconnected sub graphs
    if the vertex at i is unseen (-1)
        create a stack of ints
        push i onto the stack
        color the vertex i 0
        while stack is not empty
            vertex cur will equal top element popped from stack
            // we are now in a colored node. we need to check its neighbors
            iterate n over neighbors of vertex cur
                if color of n == -1, meaning not colored/seen yet
                    set color of n to the opposite of color of cur
                    suspend cur by pushing it to the stack
                    push n to the stack
                    break out of the iteration
                if color of n == color of cur
                    return false cause this is not BIPARTITE
Code:
bool isBipartite(vector<vector<int>>& graph) {
    unordered_map<int, int> color;
    // initialize color map to all -1
    for(int i = 0; i < (int)graph.size(); i++) color[i] = -1;

    // this for loop is to take into account disconnected vertices
    for(int start = 0; start < (int)graph.size(); start++) {
        // if a vertex has not been colored. If it has, we've already done a
        // full search of its connected vertices. We do this for every
        // disconnected graph
        if(color[start] == -1) {
            stack<int> s;
            s.push(start);
            // color the first vertex. We assume every vertex in the while
            // loop is colored
            color[start] = 0;
            while(!s.empty()) {
                // grab the top of the stack
                int cur = s.top();
                s.pop();
                // iterate over its children. We are looking for one
                // uncolored vertex to dive deeper into.
                for(auto n : graph[cur]) {
                    if(color[n] == -1) {
                        // we have to suspend our current vertex, we'll get
                        // back to it later.
                        s.push(cur);
                        // lets push in our new node to search
                        s.push(n);
                        // this sets the color to the opposite of our vertex
                        // using the bitwise XOR operator
                        color[n] = color[cur] ^ 1;
                        // break out of this vertex's exploration. we want to
                        // pursue this new deeper vertex.
                        break;
                    }
                    else if(color[n] == color[cur]) return false;
                }
            }
        }
    }
    return true;
}
You have a lock in front of you with 4 circular wheels. Each wheel has 10 slots: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. The wheels can rotate freely and wrap around: for example we can turn 9 to be 0, or 0 to be 9. Each move consists of turning one wheel one slot.
The lock initially starts at 0000, a string representing the state of the 4 wheels.
You are given a list of deadends, meaning if the lock displays any of these codes, the wheels of the lock will stop turning and you will be unable to open it.
Given a target representing the value of the wheels that will unlock the lock,
return the minimum total number of turns required to open the lock, or -1 if it
is impossible.
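As a side note, the wrap-around described above can be expressed with modular arithmetic. Here is a tiny sketch; `turn` is a hypothetical helper name, not part of the problem:

```cpp
#include <cassert>

// Turn one wheel character by d (+1 or -1), wrapping 9 -> 0 and 0 -> 9.
// Adding 10 before taking the modulus keeps the result non-negative.
char turn(char c, int d) {
    return static_cast<char>('0' + ((c - '0') + d + 10) % 10);
}
```

This is equivalent to the ternary wrap checks used in the full solution below.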
This is a really interesting problem that I thought was a backtracking problem at first. It is a graph problem. Each string has 8 neighbors and you have to make the mental connection that for every turn you can only change one digit at a time. You also need to realize that a BFS is the way to go here, because we are looking for one target, and if you do DFS you'll be digging into every path.
Standard BFS techniques apply. I got a little tripped up implementing a separator-based system for the queue instead of keeping track of the layer size.
Also for keeping track of turns you have to realize that every layer is a turn.
Algorithm:
Build a set of deadends and a seen set
Create a queue for BFS (for less code, keep a count for the layer)
add '0000'
while the queue isn't empty
    iterate over the size of the queue
        pop off the front element
        if the element is our target, return the turn count
        if the element isn't dead or seen
            add it to seen
            add each of its neighbors to the queue
    increment the turn count (a layer is done)
return -1 if we fall out of the loop
Code:
int openLock(vector<string>& deadends, string target) {
    unordered_set<string> dead(deadends.begin(), deadends.end()), seen;
    queue<string> q;
    q.push("0000");
    int turns = 0;
    while(!q.empty()){
        int layer = q.size();
        while(layer--){
            string cur = q.front();
            q.pop();
            if(cur == target){
                return turns;
            }
            if(dead.count(cur) == 0 && seen.count(cur) == 0){
                // Push our cur into seen
                seen.insert(cur);
                for(int d = -1; d < 2; d += 2){
                    for(int i = 0; i < 4; i++){
                        // generate 8 strings that are increments and decrements
                        string n = cur;
                        n[i] += d;
                        n[i] = (n[i] > '9') ? '0' : (n[i] < '0') ? '9' : n[i];
                        q.push(n);
                    }
                }
            }
        }
        // We are done with a layer, so increment the turn count
        turns++;
    }
    return -1;
}
This is a really interesting graph problem, and there are a couple of parts to it. The trick is to understand that the bulk of the problem lies in accounts tying other accounts together. For example:
Account 1: A B Account 2: C D Account 3: A C
A trick early in this problem is to set aside the email strings and the name handling, and boil it down to the root problem: strings that, if found in other accounts, mean those accounts need to be merged into one.
First is building the graph itself, in the form of an adjacency list. Next is to create a map of seen emails, with the key being the first node they were seen in. Then iterate over the accounts, and for each account, iterate over the emails. If the email address has been found already, add an adjacency entry connecting this node to the node where the email address was first seen.
After this we'll have a nifty adjacency list that defines our graph, which is all we need to build our output list of accounts.
We then do a depth-first traversal from each unseen node. Throughout that traversal, we keep adding all the emails we see into a set for that node, and mark each node as seen as we explore it. That's it!
Algorithm:
Build an array of sets of ints to serve as the adjacency list
Initialize the array to an empty set for every index in accounts
Create a map called EMAP with key: email string and value: index of the account it was first seen in
Iterate A over accounts
    iterate E over all emails of A
        if E is in EMAP
            create adjacency connections: A to EMAP[E] and EMAP[E] to A
        else
            set EMAP[E] = A
Build our output:
    create a vector of vectors of strings called OUT
    create a SEEN set of ints
    iterate A over accounts
        if A isn't in SEEN:
            create a vector of strings, and add the name as the first element
            pass this vector and node A into Explore! Explore will fill in the emails
    return OUT!
SUBROUTINE: Explore (DFS) with params: accounts ref, adjacency list, seen set, output string set ref, node index
    if the node is in the seen set, RETURN
    add the node's emails to the output string set
    iterate over the neighbors and explore them!
Code:
// Depth first traversal of our graph, that builds a set of emails for that
// connected group
void dfs(vector<vector<string>>& accounts, vector< set<int> > &adj,
         unordered_set<int> &seen, int node, set<string> &emails) {
    if(seen.find(node) != seen.end()) return;
    // node not seen, add its emails to the emails set
    bool first = true;
    for(auto& e : accounts[node]){
        if(first){ first = false; continue; }
        emails.insert(e);
    }
    // mark it seen
    seen.insert(node);
    // explore its neighbors
    for(auto& n : adj[node])
        dfs(accounts, adj, seen, n, emails);
    return;
}

vector<vector<string>> accountsMerge(vector<vector<string>>& accounts) {
    // initialize empty adj list, set to avoid duplicate adjacencies
    vector< set<int> > adj(accounts.size(), set<int>());
    unordered_map< string, int > emap;
    // Build the adj list and email map
    for(int id = 0; id < accounts.size(); id++) {
        // iterate over emails, see if any are found
        for(int j = 1; j < accounts[id].size(); j++){
            if(emap.find(accounts[id][j]) != emap.end()) {
                adj[emap[accounts[id][j]]].insert(id);
                adj[id].insert(emap[ accounts[id][j] ]);
            }
            else emap[accounts[id][j]] = id;
        }
    }
    // build output
    vector<vector<string>> out;
    unordered_set<int> seen;
    for(int i = 0; i < accounts.size(); i++) {
        if(seen.find(i) != seen.end()){ continue; }
        // we haven't found this yet, so build it, starting with the name
        out.push_back( {accounts[i][0]} );
        set<string> emails;
        dfs(accounts, adj, seen, i, emails);
        // iterate over the emails set and add them to out
        for(auto& e : emails) {
            out.back().push_back(e);
        }
    }
    return out;
}
To give you some perspective, this was my first crack at this algo:
Did a vector erase first but that wasn't ideal. I then learned how to do it with a two-pointer approach. The first pointer is a "last valid" location and the second pointer sweeps through the array. As it sweeps and finds valid elements, it swaps them into the last-valid slot. The valid count then becomes the new array size.
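The two-pointer idea above can be sketched like this, assuming the classic remove-element setup (remove every occurrence of val in place and return the new length):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Two-pointer removal: `valid` marks the next valid slot, `i` sweeps the
// array; elements to keep get swapped down, and `valid` ends up being the
// new array size.
int removeElement(vector<int>& nums, int val) {
    int valid = 0;
    for (int i = 0; i < (int)nums.size(); i++) {
        if (nums[i] != val) {
            swap(nums[valid], nums[i]);
            valid++;
        }
    }
    return valid;
}
```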
Really interesting one because there are duplicates and rotations. The idea is to split the array, and based on its bounds you have 3 possible situations:
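Assuming this refers to binary search in a rotated sorted array that may contain duplicates, the three situations when splitting are: the left half is sorted, the right half is sorted, or duplicates at the bounds hide which half is sorted, so we shrink the window. A sketch under that assumption:

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Binary search in a rotated sorted array with possible duplicates.
bool searchRotated(vector<int>& nums, int target) {
    int lo = 0, hi = (int)nums.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (nums[mid] == target) return true;
        if (nums[lo] == nums[mid] && nums[mid] == nums[hi]) {
            // Situation 3: can't tell which half is sorted, shrink both ends.
            lo++; hi--;
        } else if (nums[lo] <= nums[mid]) {
            // Situation 1: left half is sorted.
            if (nums[lo] <= target && target < nums[mid]) hi = mid - 1;
            else lo = mid + 1;
        } else {
            // Situation 2: right half is sorted.
            if (nums[mid] < target && target <= nums[hi]) lo = mid + 1;
            else hi = mid - 1;
        }
    }
    return false;
}
```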
You are given an array of k linked-lists lists, each linked-list is sorted in ascending order. Merge all the linked-lists into one sorted linked-list and return it.
Very nice little problem. There are many ways to solve it, but here's a nice optimized way. We have a list of nodes and we need to keep track of the minimum one. We make use of an ordered container to do this. This container needs to keep our nodes ordered by their value and also store a handle to each node.
Algorithm:
Create a dummy head and a pointer last that points to it
Create a multimap with key (value of node) and value (pointer to node)
Iterate over the input lists and insert every non-null ListNode and its value into the map
While the multimap has at least 2 elements
    pop the smallest node off the multimap = cur
    set last->next = cur
    set last = last->next
    if cur->next isn't null, add it to the multimap
If the multimap has one element left, make last->next point to it
return dummy head's next
Code:
ListNode* mergeKLists(vector<ListNode*>& lists) {
    if(lists.size() == 0) return NULL;
    ListNode dh, *last;
    last = &dh;
    // create the map of first elements
    multimap<int, ListNode*> mmap;
    for(auto& ln : lists) {
        if(ln != NULL) mmap.insert(make_pair(ln->val, ln));
    }
    while(mmap.size() > 1) {
        // set a current node to the minimum node and remove it
        ListNode* cur = mmap.begin()->second;
        mmap.erase(mmap.begin());
        // set our last pointer to point to this min element
        last->next = cur;
        last = last->next;
        // if there are still nodes in this list, put the next one in the map
        if(cur->next != NULL) {
            mmap.insert(make_pair(cur->next->val, cur->next));
        }
    }
    // mmap is going to have one element left
    if(mmap.size() == 1) last->next = mmap.begin()->second;
    return dh.next;
}
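As noted above, there are many ways to solve this. One common alternative is a min-heap with a custom comparator, in the same spirit as the Compare-class trick used elsewhere in these notes. This is my own sketch, not the solution above; the ListNode definition is inlined to keep it self-contained:

```cpp
#include <cassert>
#include <queue>
#include <vector>
using namespace std;

struct ListNode {
    int val;
    ListNode* next;
    ListNode(int v) : val(v), next(nullptr) {}
};

// Greater-than comparison turns std::priority_queue into a min-heap.
struct NodeCompare {
    bool operator()(const ListNode* a, const ListNode* b) const {
        return a->val > b->val;
    }
};

ListNode* mergeKListsHeap(vector<ListNode*>& lists) {
    priority_queue<ListNode*, vector<ListNode*>, NodeCompare> heap;
    for (auto* ln : lists)
        if (ln) heap.push(ln);
    ListNode dh(0), *last = &dh;
    while (!heap.empty()) {
        // pop the minimum node and append it to the merged list
        ListNode* cur = heap.top();
        heap.pop();
        last->next = cur;
        last = cur;
        if (cur->next) heap.push(cur->next);
    }
    return dh.next;
}
```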
Given the head of a linked list, return the list after sorting it in ascending order.
This problem is easily solved by using ordered containers, giving you O(n log n) time and O(n) space complexity. You can get O(log n) space complexity by doing a merge sort. The trick with merge sort is to use a fast and slow pointer to find the middle of the list. We do this recursively, and our base cases are a NULL node and a single node.
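The ordered-container approach mentioned above can be sketched with a multimap; which container the text means is an assumption on my part, and the ListNode definition is inlined to keep the sketch self-contained:

```cpp
#include <cassert>
#include <map>
using namespace std;

struct ListNode {
    int val;
    ListNode* next;
    ListNode(int v) : val(v), next(nullptr) {}
};

// O(n log n) time, O(n) space: insert every node into a multimap keyed by
// value (duplicates allowed), then relink the nodes in sorted order.
ListNode* sortListWithMap(ListNode* head) {
    multimap<int, ListNode*> mm;
    for (ListNode* cur = head; cur; cur = cur->next)
        mm.insert({cur->val, cur});
    ListNode dh(0), *last = &dh;
    for (auto& kv : mm) {
        last->next = kv.second;
        last = kv.second;
    }
    last->next = nullptr;
    return dh.next;
}
```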
Algorithm:
Merge sort:
    divide the list in two
    merge sort the left list
    merge sort the right list
    merge left and right
Code:
ListNode* mergeSort(ListNode* node){
    // If we're a null node
    if(node == NULL){ return NULL; }
    // If we're a single node
    if(node->next == NULL){ return node; }
    // find the middle with fast and slow pointers
    ListNode *fast = node, *slow = node, *slowPrev = NULL;
    while(fast && fast->next){
        slowPrev = slow;
        slow = slow->next;
        fast = fast->next->next;
    }
    // break the list in two
    if(slowPrev){ slowPrev->next = NULL; }
    // node is going to be the head of the left list,
    // slow is the head of the right list
    ListNode* left = mergeSort(node);
    ListNode* right = mergeSort(slow);
    // Merge the two sorted lists:
    ListNode dh, *cur;
    cur = &dh;
    while(left && right){
        // find the smaller of the two
        if(left->val < right->val){
            // left has the smaller value
            cur->next = left;
            left = left->next;
        }else{
            // right has the smaller value
            cur->next = right;
            right = right->next;
        }
        cur = cur->next;
    }
    // Take care of any remaining nodes
    if(left){ cur->next = left; }
    else if(right){ cur->next = right; }
    return dh.next;
}

ListNode* sortList(ListNode* head) {
    return mergeSort(head);
}
The all time favorite :)
We make use of the splice function in its single-element version to move an element, by its iterator, to the front of the list.
void splice (iterator position, list& x, iterator i)
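A tiny standalone demonstration of that single-element splice; since splice relinks nodes rather than copying, the iterator stays valid afterwards:

```cpp
#include <cassert>
#include <iterator>
#include <list>
using namespace std;

// Move the element at iterator `it` to the front of the same list in O(1).
void moveToFront(list<int>& l, list<int>::iterator it) {
    l.splice(l.begin(), l, it);
}
```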
Code:
class LRUCache {
public:
    list< pair<int, int> > lru;
    unordered_map< int, list< pair<int,int> >::iterator > table;
    int cap;

    LRUCache(int capacity) {
        cap = capacity;
    }

    int get(int key) {
        // we search the table for the element
        auto search = table.find(key);
        // if not found, return -1
        if(search == table.end()) { return -1; }
        // if found, move up the lru node
        lru.splice(lru.begin(), lru, search->second);
        return search->second->second;
    }

    void put(int key, int value) {
        // we search the table for the element
        auto search = table.find(key);
        // if found, move lru and update the value
        if(search != table.end()) {
            lru.splice(lru.begin(), lru, search->second);
            lru.front().second = value;
            return;
        }
        // if we don't find it, we need a new one
        lru.push_front(make_pair(key, value));
        table[key] = lru.begin();
        // check size
        if(lru.size() > cap) {
            table.erase(lru.back().first);
            lru.pop_back();
        }
        return;
    }
};
Given two words (beginWord and endWord), and a dictionary's word list, find all shortest transformation sequence(s) from beginWord to endWord, such that: - Only one letter can be changed at a time - Each transformed word must exist in the word list. Note that beginWord is not a transformed word.
I learned some new concepts with this problem. Particularly how to store a trace of a path when doing BFS.
This problem breaks down in first identifying that this is a graph problem. We have a bunch of vertices that are one edit distance away. We then have to find the SHORTEST paths to the end.
If we needed to find all paths, an exhaustive DFS would work. But in this case we need all shortest paths. This means if we do a BFS, all the shortest paths will be on one layer.
At first I built a graph represented by an adjacency list, which was O(length_of_word * number_of_words²) time with a memory space of O(number_of_words²). Then I did a one-edit-away function. You can also build a map of word combinations with each letter removed, requiring O(number_of_letters * number_of_words) space.
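The letter-removed map mentioned above can be sketched like this: replace each position with a wildcard so that words one edit apart share a key. The function name and shape here are my own illustration:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>
using namespace std;

// Map each wildcard pattern ("h*t", "*ot", ...) to the words matching it.
// Two words of equal length are one edit apart iff they share a pattern.
unordered_map<string, vector<string>> buildPatternMap(const vector<string>& words) {
    unordered_map<string, vector<string>> patterns;
    for (const string& w : words) {
        for (size_t i = 0; i < w.size(); i++) {
            string p = w;
            p[i] = '*';           // blank out one letter
            patterns[p].push_back(w);
        }
    }
    return patterns;
}
```

During the BFS, a word's neighbors are then the union of the buckets for its patterns, instead of trying all 26 letters at every position.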
The other tricky part is handling seen words. I handled it by removing the words we used on this level from the word set.
Algorithm:
Create a wordList set to give us O(1) lookup of the words
Create a queue of string paths
Create a bool flag called done and set it to false
Create an output list of paths
Add the begin word to the queue as the seed path
While we're not done and the queue isn't empty:
    Grab the level size
    While levelSize--
        get the path at the front of the queue and pop it
        if the path ends at the end word
            add the path to the output list and set done to true
        add each neighbor of this word to the base path
        push these new paths to the queue
        add the added words to a visited list
    remove all the visited words from the word list
Code:
vector<vector<string>> findLadders(string beginWord, string endWord, vector<string>& wL) {
    queue<vector<string>> q;
    unordered_set<string> wordList;
    for(auto w : wL){ wordList.insert(w); }
    q.push({beginWord});
    bool done = false;
    vector<vector<string>> out;
    // Do a BFS
    while(!done && !q.empty()){
        int level = q.size();
        // run this loop for every path in this level
        list<string> visited;
        while(level--){
            vector<string> path = q.front();
            q.pop();
            if(path.back() == endWord){
                out.push_back(path);
                done = true;
            }
            // add the neighbors:
            // There are multiple ways of doing this.
            for(int i = 0; i < path.back().length(); i++){
                string orig = path.back();
                for(char c = 'a'; c <= 'z'; c++){
                    orig[i] = c;
                    if(wordList.count(orig)){
                        path.push_back(orig);
                        q.push(path);
                        path.pop_back();
                        visited.push_back(orig);
                    }
                }
            }
        }
        for(auto& w : visited){
            wordList.erase(w);
        }
    }
    return out;
}
Given two words (beginWord and endWord), and a dictionary's word list, find the length of the shortest transformation sequence(s) from beginWord to endWord, such that: - Only one letter can be changed at a time - Each transformed word must exist in the word list. Note that beginWord is not a transformed word.
I did this problem after I did part 2, and it was easier. A couple of things from part 2 aren't needed here. Namely, we don't need to track paths, so it becomes a much easier BFS problem.
Algorithm:
build a set of words
make a queue and add the first word to it
int dist = 0
while the queue isn't empty
    dist++
    layer = queue size
    while(layer--)
        pop the front of the queue: curword
        if curword is our end word, return dist
        iterate i over every char in curword
            mod = curword
            iterate c from 'a' to 'z'
                mod[i] = c
                if mod is in the word set
                    add mod to the queue and remove it from the word set
return 0
Code:
int ladderLength(string beginWord, string endWord, vector<string>& wordList) {
    unordered_set<string> wordSet;
    // build the word set
    for(auto& w : wordList){ wordSet.insert(w); }
    queue<string> q;
    q.push(beginWord);
    int dist = 0;
    while(!q.empty()){
        dist++;
        int layer = q.size();
        while(layer--){
            string cur = q.front();
            q.pop();
            if(cur == endWord){
                return dist;
            }
            for(int i = 0; i < cur.length(); i++){
                string mod = cur;
                for(char c = 'a'; c <= 'z'; c++){
                    mod[i] = c;
                    if(wordSet.count(mod) != 0){
                        wordSet.erase(mod);
                        q.push(mod);
                    }
                }
            }
        }
    }
    return 0;
}
A linked list is given such that each node contains an additional random pointer which could point to any node in the list or null. Return a deep copy of the list.
I first approached this problem thinking I could just two-pass it and build a vector that I could use to index the nodes. I had the right idea, but the random pointers point to a node and not a number, and the numbers aren't unique. What is unique is the memory location.
You have to look at the problem and not be scared of writing down the high-level algo. That's the whole point of pseudocode: very high-level code.
I solved it with a map. You can also solve this problem recursively, also with a map, with a very neat way of building the copy. The recursive function returns a copy of the node it was called with, handling the following cases:
Algorithm Iterative:
make a dummy head
make a curCopy pointer and a curOrig pointer
first pass: while curOrig isn't null
    curCopy->next = new node with the val of the original
    map[curOrig] = the new copy
    curCopy = curCopy->next
    curOrig = curOrig->next
second pass for the random pointers: walk the original list again
    map[curOrig]->random = map[curOrig->random]
    curOrig = curOrig->next
return dummy head's next
Code Iterative:
Node* copyRandomList(Node* head) {
    Node dh(0), *curOrig = head, *curCopy;
    curCopy = &dh;
    // key is the original, val is the copy
    unordered_map<Node*, Node*> map;
    // make a first pass copy
    while(curOrig != NULL){
        curCopy->next = new Node(curOrig->val);
        // add it to the map
        map[curOrig] = curCopy->next;
        // advance our pointers
        curCopy = curCopy->next;
        curOrig = curOrig->next;
    }
    // second pass to copy the random pointers
    curOrig = head;
    while(curOrig){
        map[curOrig]->random = map[curOrig->random];
        curOrig = curOrig->next;
    }
    return dh.next;
}
Algorithm Recursive:
maintain a visited map, key: original, val: copy
helper(node) returns the copy
    if node is null, return null
    if node is in visited, return its copy
    Node copy = new node with the original's val
    add copy to the visited map (before recursing, to avoid infinite recursion)
    copy->next = helper(original's next)
    copy->random = helper(original's random)
    return copy
Code Recursive:
class Solution {
public:
    unordered_map<Node*, Node*> seen;

    Node* helper(Node* orig){
        if(!orig){ return NULL; }
        if(seen.count(orig)){ return seen[orig]; }
        Node* copy = new Node(orig->val);
        // It is very important that we add the copy to our seen map here,
        // because if not we will hit infinite recursion with the random
        // pointers
        seen[orig] = copy;
        copy->next = helper(orig->next);
        copy->random = helper(orig->random);
        return copy;
    }

    Node* copyRandomList(Node* head) {
        return helper(head);
    }
};
Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target.
You can quickly do an O(n²) solution by checking every pair. You can do it quicker, with O(n) time and space, by building a dictionary and looking for the complement number. You have to keep in mind that you can't return the same number twice!
You can two-pass it by building a dictionary first, or do it better and simpler by building the dictionary as we go and looking for the complement among the previous numbers.
Algorithm:
iterate over all the numbers with an index
    check if the complement exists in the map; return both indices if so
    add this number to the map
Code:
vector<int> twoSum(vector<int>& nums, int target) {
    // key: number, val: index
    unordered_map<int,int> map;
    for(int i = 0; i < nums.size(); i++){
        if(map.count(target - nums[i])){
            return {i, map[target - nums[i]]};
        }
        map[nums[i]] = i;
    }
    return {};
}