Dynamic Programming is an algorithm design method used for solving problems whose solutions are viewed as a result of making a set/sequence of decisions.

  • One way of making these decisions is to make them one at a time in a step-wise (sequential) manner and never make any erroneous decision.
  • When applying Greedy Methods, for many problems it is not possible to make step-wise decisions based on the local information available at each step in such a manner that the sequence of decisions is optimal.

Example - Given a set of coin denominations, what is the minimum number of coins you can pick such that their sum equals a given target, given that any coin can be picked any number of times? For instance, with coins {1, 4, 5} and a target of 8 -

  • Greedy Method - 5 + 1 + 1 + 1, thus 4 coins
  • Optimal Solution - 4 + 4, thus 2 coins
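The minimum-coins objective can be computed bottom-up with a small DP; the coin set and target in this sketch are illustrative.

```python
def min_coins(coins, target):
    """Bottom-up DP: dp[s] = minimum number of coins summing to s (inf if impossible)."""
    INF = float("inf")
    dp = [0] + [INF] * target
    for s in range(1, target + 1):
        for c in coins:
            if c <= s and dp[s - c] + 1 < dp[s]:
                dp[s] = dp[s - c] + 1
    return dp[target]

print(min_coins([1, 4, 5], 8))  # 2 (4 + 4); greedy largest-first would use 4 coins
```

Unlike the greedy method, the DP considers every denomination at every sub-target, so it cannot be misled by a locally-best coin.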

Difference between DnC and DP

  1. DnC - Break up the problem into smaller and independent problems.
  2. DP - Break up the problem into a series of smaller but overlapping problems.

Single Source Shortest Path (SSSP)

Dijkstra’s Single Source Shortest Path

Always gives the optimal solution to the SSSP problem, provided that all edges in the graph have positive weights. May or may not give the optimal solution for graphs with negative weight edges.

Approaches -

  1. Based on Matrix (Gives only SSSP cost not path)
  2. Based on Spanning Tree (Gives both cost and path)

Matrix Based approach

In this approach we use a matrix to keep track of the total cost of reaching a certain node starting from some source node s.

Setup -

  • The distance/cost to each node in the graph from s is tracked using an array called dist.
  • The cost between any two neighbouring nodes u and v can be found using the cost function by doing cost(u, v).
  • We go through multiple steps by “relaxing” the node with the cheapest cost in dist to explore more of the graph.
  • When all nodes have been relaxed and explored, we terminate the algorithm.
  • Any node in the graph that isn’t reachable in the current step would have a cost of ∞.
  • The matrix being constructed will have the value of dist after each step as its rows.

The matrix would look like -

Steps -

  1. Explore the source node s and identify all the immediate neighbours of s and their costs. As all the other nodes in the graph are unreachable at this point, they all would have a cost of ∞.
  2. Relaxation - Identify the unexplored/unrelaxed node u with the cheapest cost in the previous step. By travelling through u from s, all neighbours of u are now reachable from s. The cost of reaching any neighbour v of u will be dist[u] + cost(u, v). Only relax v if the new cost is strictly less than the old cost.
  3. Repeat Step 2 for each node in the graph. Terminate when all nodes have been relaxed.
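The steps above can be sketched as follows; the adjacency-list input format and the min-heap for picking the cheapest unrelaxed node are implementation assumptions.

```python
import heapq

def dijkstra(graph, s):
    """graph: {node: [(neighbour, cost), ...]}. Returns dist from source s."""
    dist = {v: float("inf") for v in graph}
    dist[s] = 0
    pq = [(0, s)]              # min-heap ordered by current cheapest cost
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:        # stale entry: u was already relaxed more cheaply
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:            # relax only on strict improvement
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

g = {"s": [("a", 4), ("b", 1)], "a": [("c", 1)], "b": [("a", 2)], "c": []}
print(dijkstra(g, "s"))  # {'s': 0, 'a': 3, 'b': 1, 'c': 4}
```

Note how a is first reachable at cost 4 but is relaxed down to 3 once b (the cheapest node) is explored.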

Spanning Tree Approach

The tree obtained using this method would not necessarily be the Minimum Cost Spanning Tree.

Steps -

  1. Start with the source node s as the initial node u.
  2. Explore u to find the immediate neighbours. Pick the cheapest unexplored node in the graph as the new u and draw an edge between the node and its parent.
  3. Repeat step 2 until all nodes are explored.

Time Complexity:

  • O(V²) (without heap)
  • O((V + E) log V) (with heap)
  • If E = O(V²), then O(V² log V)

Bellman-Ford algorithm

If the graph has 0 or more negative weight edges but no negative cycles, then the Bellman-Ford algorithm guarantees an optimal solution to the SSSP problem. If a graph has a negative cycle reachable from the source, then the shortest paths are undefined and no algorithm can solve SSSP for it.

Setup -

  • The graph has n nodes and e edges.

Steps -

  1. Take each edge one by one and relax it. This is one cycle.
  2. Continue doing such cycles until the costs don’t change anymore. Only do a maximum of n − 1 cycles.
  3. If the costs still update after doing n − 1 cycles (i.e. on the n-th cycle), then the graph has a negative cycle and the SSSP for it cannot be solved.

Time Complexity:

  • O(n · e) in general; O(n³) if the graph is a complete graph, as e = O(n²)
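The passes described above can be sketched as follows; the edge-list representation and the early-exit check are implementation assumptions.

```python
def bellman_ford(n, edges, s):
    """n nodes labelled 0..n-1, edges as (u, v, w) triples.
    Returns dist from s, or None if a negative cycle is detected."""
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):                # at most n-1 cycles (passes)
        changed = False
        for u, v, w in edges:             # relax every edge once per pass
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                   # costs stabilised early: done
            break
    for u, v, w in edges:                 # the n-th pass: any update => negative cycle
        if dist[u] != INF and dist[u] + w < dist[v]:
            return None
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3)]
print(bellman_ford(3, edges, 0))  # [0, 4, 1] — the negative edge shortens 0->2
```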

All Pairs Shortest Path (APSP)

Floyd-Warshall’s Algorithm

Dijkstra’s SSSP algorithm can be used to calculate APSP as well, by running it once from each of the V nodes, with a TC of O(V³). But it won’t be able to find the optimal APSP solution for graphs with negative edges. That’s why we use Floyd-Warshall’s Algorithm, as it can handle negative edges (though not negative cycles).

Setup -

  1. Let there be n nodes in the graph, where V is the set of all nodes/vertices and n = |V|.
  2. Have an adjacency matrix A^0 of size n × n which shows the cost of accessing any node’s immediate neighbours. Any node that isn’t an immediate neighbour of a node will have a cost of ∞.

Steps -

  1. Start with k = 1.
  2. If i ≠ k and j ≠ k, using A^(k−1) construct a new adjacency matrix A^k where any A^k[i][j] holds the cost of reaching j from i using only nodes 1..k as intermediates. This can be done by doing A^k[i][j] = min(A^(k−1)[i][j], A^(k−1)[i][k] + A^(k−1)[k][j]).
  3. Increment k and repeat Step 2 until k = n.
  4. The final adjacency matrix A^n will hold the costs of the shortest paths for all pairs of vertices in the graph.
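The iteration over k above collapses into a triple loop; working on a copy of the matrix in place (rather than keeping every A^k) is an implementation assumption.

```python
INF = float("inf")

def floyd_warshall(A):
    """A: n x n cost matrix, A[i][j] = edge cost or INF, A[i][i] = 0.
    Returns the matrix of shortest-path costs for all pairs."""
    n = len(A)
    d = [row[:] for row in A]             # copy; d plays the role of A^k
    for k in range(n):                    # allow node k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

A = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(A))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```

Only the previous matrix is ever read for row/column k, which is why a single matrix updated in place gives the same result as keeping all n matrices.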

Time & Space Complexity -

  1. Time Complexity - O(n³)
  2. Space Complexity - O(n²)

0/1 Knapsack (Binary Knapsack)

Unlike the Fractional Knapsack Problem, in Binary Knapsack an item can either be entirely included or entirely excluded. The objective is to maximize the profit by picking items in such a way that their total weight doesn’t exceed the Knapsack capacity.

For such a problem the recurrence relation is -

K(i, w) = K(i − 1, w)                                   if w_i > w
K(i, w) = max(K(i − 1, w), p_i + K(i − 1, w − w_i))     otherwise

where K(i, w) is the maximum profit using the first i objects with capacity w, and w_i and p_i are the weight and profit of object i.

This means -

  1. If the weight of the object (w_i) is more than the Knapsack Capacity, exclude the object.
  2. If the weight of the object is less than the Knapsack Capacity, consider two cases where you can either include the object in the Knapsack or you can exclude the object from the Knapsack. Compare the Knapsack profits in both cases and pick the largest one.

Tabulation Method

Rather than textual notes on this, it’s better to follow the procedure demonstrated by Mr. Abdul Bari. This method can be easily implemented as a Bottom-Up Iterative solution for solving 0/1 Knapsack.

Complexity -

  1. Time Complexity - O(n · W)
  2. Space Complexity - O(n · W)
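The tabulation method can be sketched as a bottom-up table over items and capacities; the weights, profits and capacity here are illustrative.

```python
def knapsack(weights, profits, W):
    """K[i][w] = best profit using the first i items with capacity w."""
    n = len(weights)
    K = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 / column 0: zero profit
    for i in range(1, n + 1):
        for w in range(W + 1):
            K[i][w] = K[i - 1][w]               # case 1: exclude item i
            if weights[i - 1] <= w:             # case 2: include item i if it fits
                K[i][w] = max(K[i][w],
                              profits[i - 1] + K[i - 1][w - weights[i - 1]])
    return K[n][W]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9 (take the weight-3 and weight-4 items)
```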

Sum of Subsets (SOS)

For such a problem the recurrence relation is -

SOS(i, s) = SOS(i − 1, s) OR SOS(i − 1, s − a_i)    (the second term applies only when a_i ≤ s)

where SOS(i, s) is true iff some subset of the first i numbers sums to s, with base cases SOS(0, 0) = true and SOS(0, s) = false for s > 0.
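The Sum of Subsets recurrence can be filled in as a boolean table, analogous to the 0/1 Knapsack table; the input numbers and target here are illustrative.

```python
def subset_sum(nums, target):
    """dp[i][s] = True iff some subset of the first i numbers sums to s."""
    n = len(nums)
    dp = [[False] * (target + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = True                       # the empty subset sums to 0
    for i in range(1, n + 1):
        for s in range(1, target + 1):
            dp[i][s] = dp[i - 1][s]           # exclude nums[i-1]
            if nums[i - 1] <= s:              # or include it when it fits
                dp[i][s] = dp[i][s] or dp[i - 1][s - nums[i - 1]]
    return dp[n][target]

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5)
```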

Longest Common Subsequence (LCS)

A subsequence of a string is made by deleting zero or more characters from the string, while keeping the order in which the characters occur.

A substring of a string is a subsequence of a string made using contiguous characters.

Example - In abdace and babce, bace and abce are two common subsequences of length 4.

There are 2^n subsequences possible for a string of n characters. So a brute force algorithm will have a time complexity of O(2^n).

Pseudocode -

LCS(A,B) {
	m = len(A); n = len(B)
	// of size (m+1)x(n+1); row 0 and column 0 are the empty-prefix base cases
	mem_table = [[0 for (j=0;j<=n;j++)] for (i=0;i<=m;i++)]
 
	for (i=1;i<=m;i++) {
		for (j=1;j<=n;j++) {
			if (A[i-1] == B[j-1]) {
				mem_table[i][j] = 1 + mem_table[i-1][j-1]
			} else {
				mem_table[i][j] = max(mem_table[i-1][j], mem_table[i][j-1])
			}
		}
	}
	return mem_table[m][n] // length of the LCS
}

This is what the tabulation method looks like on paper. To follow this, watch Mr Abdul Bari solve it -

Here we are attempting to find the LCS for “longest” and “stone”.

Complexity -

  1. Time Complexity - O(m · n)
  2. Space Complexity - O(m · n)

Matrix Chain Multiplication (MCM)

Let A and B be two matrices of sizes p × q and q × r. The number of scalar multiplications required in the matrix multiplication A × B is p · q · r.

So say we have multiple matrices like A1, A2 and A3. Their matrix product can be achieved by multiplying the matrices in any order, either by doing (A1 × A2) × A3 or A1 × (A2 × A3). MCM attempts to find the optimal order in which the matrices should be multiplied to minimize the amount of scalar multiplication required.

The number of ways to parenthesize the multiplication of n matrices corresponds to the number of full binary trees with n leaves (each leaf representing a matrix).

This number is given by the Catalan number -

P(n) = C(n − 1) = (1 / n) × (2(n − 1) choose (n − 1))

Each such binary tree represents a distinct order of matrix multiplication.
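Rather than trying all Catalan-many orders, the optimal one can be found with the standard interval DP over chain dimensions; the dims list here is illustrative (matrix A_i has size dims[i-1] × dims[i]).

```python
def mcm(dims):
    """Minimum scalar multiplications to multiply the chain A_1..A_n,
    where dims has n+1 entries and A_i is dims[i-1] x dims[i]."""
    n = len(dims) - 1
    # cost[i][j] = cheapest way to multiply matrices i..j (1-indexed); 0 on the diagonal
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # solve shorter chains first
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(
                # split as (A_i..A_k)(A_{k+1}..A_j); the final multiply
                # of the two products costs dims[i-1] * dims[k] * dims[j]
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]

print(mcm([10, 30, 5, 60]))  # 4500: (A1 x A2) x A3 = 10*30*5 + 10*5*60
```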