Divide and Conquer

  1. Divide the bigger problem into smaller sub-problems.
  2. Solve the sub-problems and combine their results, if required, to get the solution of the original problem.
  3. In general, a problem is said to be small if it can be solved in one or two basic operations.

DnC problems are of two types -

  1. Symmetric - The problem is divided into sub-problems of the same size.
  2. Asymmetric - The problem is divided into sub-problems of different sizes.

Example of Symmetric DnC -
DnC(A,1,n) {
	if Small(1,n) {
		return S(A,1,n)
	} else {
		m = Divide(1,n)
		s1 = DnC(A,1,m)
		s2 = DnC(A,m+1,n)
		return Combine(s1,s2)
	}
}

A note on the “Divide” and “Conquer” operations -

  1. Divide is Mandatory.
  2. Conquer is Optional.

Min or Max of an array

Pseudocode and TC

MinMax(A,n) {
	min = max = A[1]
	for (i=2; i<=n; i++) {
		if (A[i] < min) {
			min = A[i]
		} else if (A[i] > max) {
			max = A[i]
		}
	}
	return (min, max)
}
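The scan above can be written in Python (0-indexed, unlike the 1-indexed pseudocode); the comparison counter is my addition, used to check the best-case and worst-case counts:

```python
def min_max_linear(A):
    """Linear scan for min and max.
    Best case (descending input): n-1 comparisons, since the first
    test succeeds every time. Worst case (ascending input): 2(n-1)."""
    mn = mx = A[0]
    comparisons = 0
    for x in A[1:]:
        comparisons += 1
        if x < mn:
            mn = x
        else:
            comparisons += 1          # second test only runs if the first fails
            if x > mx:
                mx = x
    return mn, mx, comparisons
```

For a descending input of 5 elements this counts 4 comparisons; for an ascending one, 8.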

The number of comparisons is -

Case                               Comparisons
Best Case                          n - 1
(in descending order)
Worst Case                         2(n - 1)
(in ascending order)

Average Case                       ~ 2(n - 1) (the second comparison is skipped only when a new minimum is found)

Algo 2 - Using Divide and Conquer

  1. Divide the array into two halves.
  2. Recursively split each half until the subarray size becomes:
    • 1 element → min = max = that element
    • 2 elements → find min and max using one comparison
  3. Conquer (base case):
    For each smallest subarray, compute its minimum and/or maximum directly.
  4. Combine:
    • Compare the minimums of the left and right subarrays to get the overall minimum.
    • Compare the maximums of the left and right subarrays to get the overall maximum.
  5. Repeat the combine step bottom-up until the results for the entire array are obtained.

Pseudocode and TC

MinMaxDnC(i,j) {                   // Return is like (min, max)
	if (i==j) return (a[i], a[i])  // Single element sub-array 
	else if(i==j-1) {              // Double element sub-array
		if (a[i] < a[j]) {
			return (a[i], a[j])
		}
		return (a[j], a[i])
	}
 
	mid=(i+j)/2
	(min, max) = MinMaxDnC(i,mid)
	(min1, max1) = MinMaxDnC(mid+1,j)
	
	if (max1 > max) max = max1
	if (min1 < min) min = min1
	
	return (min, max)
}
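The same procedure in Python (0-indexed), with a comparison counter added; for n a power of 2 the total works out to 3n/2 - 2:

```python
def min_max_dnc(a, i, j):
    """Return (min, max, comparisons) for a[i..j] inclusive."""
    if i == j:                          # single element: 0 comparisons
        return a[i], a[i], 0
    if j == i + 1:                      # two elements: 1 comparison
        if a[i] < a[j]:
            return a[i], a[j], 1
        return a[j], a[i], 1
    mid = (i + j) // 2
    mn1, mx1, c1 = min_max_dnc(a, i, mid)
    mn2, mx2, c2 = min_max_dnc(a, mid + 1, j)
    # combine step: 2 more comparisons (one for min, one for max)
    return min(mn1, mn2), max(mx1, mx2), c1 + c2 + 2
```

For n = 8 this gives 10 comparisons, i.e. 3(8)/2 - 2.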

The Time Complexity equation is T(n) = 2T(n/2) + 2, with T(2) = 1 and T(1) = 0, which solves to 3n/2 - 2 comparisons for n a power of 2.

Case                               Comparisons          Comparisons
                                   (Linear Search)      (Divide n Conquer)
Best Case                          n - 1                3n/2 - 2
(in descending order)
Worst Case                         2(n - 1)             3n/2 - 2
(in ascending order)

Average Case                       ~ 2(n - 1)           3n/2 - 2

Properties -

  1. Better worst case and average case comparison counts than linear search (3n/2 - 2 vs 2(n - 1)).
  2. Space: Linear Search - O(1); DnC - O(log n) (due to the Recursion Stack).

Binary Search

The maximum number of nodes at level l of a Binary Tree is 2^l. Total nodes in a Binary Tree of height h is at most 2^(h+1) - 1.

Using this we can also find the height of a binary tree with n nodes: n <= 2^(h+1) - 1 implies h >= log2(n+1) - 1.

Thus -

  • Minimum possible height of a binary tree with n nodes - ceil(log2(n+1)) - 1
  • Maximum possible height of a binary tree with n nodes - n - 1 (a skewed tree)

Pseudocode and TC

BinSearch(a,i,l,x) {               // Search for x in a[i..l]
	if (i > l) return 0            // Empty range: not found (array is 1-indexed)
	if (i == l) {
		if (x==a[i]) {
			return i
		}
		return 0
	}
	
	mid = floor((i+l)/2)
	if (x==a[mid]) return mid
	else if (x<a[mid]) return BinSearch(a,i,mid-1,x)
	else return BinSearch(a,mid+1,l,x)
}
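An iterative version in Python (0-indexed, so it returns -1 for "not found" rather than 0, since 0 is a valid index here); this is the O(1)-space variant mentioned below the pseudocode:

```python
def bin_search(a, x):
    """Iterative binary search on a sorted list; O(1) extra space."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == x:
            return mid
        elif x < a[mid]:
            hi = mid - 1          # discard the right half
        else:
            lo = mid + 1          # discard the left half
    return -1                     # not found
```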

The Time Complexity equation is T(n) = T(n/2) + 1.

Case                          Time
Best Case                     O(1)
(element is at mid)
Worst Case                    O(log n)

Space -

  1. For Recursive BinSearch - O(log n) (Recursion Stack). For Iterative BinSearch - O(1).

Merge Sort

The idea is to merge two sorted sub-arrays to make a larger sorted array.

Merging

The sub-arrays are merged using two pointers, which start at the beginning of each sub-array. The elements at these pointers are compared, the smaller/larger one (depending upon the ordering required) is picked, and that pointer is incremented.

  1. Minimum number of comparisons - min(m,n), where m and n are the sizes of the two sub-arrays.
  2. Maximum number of comparisons - m+n-1, e.g. when the first m-1 elements of L1 are smaller than the first element of L2 but the last element of L1 is greater than the last element of L2.

So merging two sub-arrays of sizes m and n takes O(m+n) time.
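A sketch of the two-pointer merge in Python; the comparison counter is my addition, used to check the min(m,n) and m+n-1 bounds:

```python
def merge_count(L1, L2):
    """Merge two sorted lists, returning (merged, comparisons)."""
    i = j = comparisons = 0
    out = []
    while i < len(L1) and j < len(L2):
        comparisons += 1
        if L1[i] <= L2[j]:
            out.append(L1[i]); i += 1
        else:
            out.append(L2[j]); j += 1
    # once one list is exhausted, the rest is copied without comparisons
    out.extend(L1[i:]); out.extend(L2[j:])
    return out, comparisons
```

When every element of L1 precedes L2, only min(m,n) comparisons happen; when the lists interleave to the end, m+n-1 happen.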

Merge + Sort Time Complexity

def merge_sort(A):
  size = len(A)
  if size <= 1:
    return A
 
  mid = size//2
  s1 = merge_sort(A[0:mid])
  s2 = merge_sort(A[mid:])
 
  i = j = 0
  m, n = len(s1), len(s2)
  temp = []
 
  while True:
    if i == m:
      temp.extend(s2[j:])
      break
    elif j == n:
      temp.extend(s1[i:])
      break
    elif s1[i] <= s2[j]:   # <= so equal elements go left: stable, and the loop never stalls
      temp.append(s1[i])
      i+=1
    else:
      temp.append(s2[j])
      j+=1
 
  return temp

The merge sort algorithm first divides the array into subarrays and then performs the merge operation on them. Thus the recurrence is -

T(n) = 2T(n/2) + cn

For n = 2^k, k = log2(n). Thus T(n) = O(n log n).

Because in Merge Sort comparisons happen during the merge step, not during division, the number of comparisons is O(n log n) regardless of the input order. Hence, Merge Sort has no special best case or worst case arrangement.

Case          Time
Best Case     O(n log n)
Worst Case    O(n log n)
Overall       O(n log n)

Properties

  1. Not In-place due to the need for an additional array temp of size O(n).
  2. Is Stable.
  3. The Recursion Stack would be of O(log n) due to the division.
  4. The Auxiliary memory however would be of O(n) as a temporary array is required for the merge + sort results.
  5. Thus total space = O(n) + O(log n) = O(n).
  6. Number of function calls during recursion - 2n - 1 (for n a power of 2: n leaf calls plus n - 1 internal calls).

2-way Merge Sort

  1. A variant of merge sort where we just make pairs of elements without dividing the original array from the mid. So only the “Conquer” part of Merge-Sort is done, not “Divide”.
  2. Each pass processes one level of the tree.

But this might not always work cleanly, as the array needs to have a length of the form 2^k. Example - for an array of 6 elements, the first pass produces three sorted pairs, and the second pass merges only two of them (say into [2,5,7,20]), leaving one pair ([3,8]) without a partner.

One way of solving this is by sending [3,8] up one level as it is and merging it with [2,5,7,20].
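The pass-by-pass idea, including carrying an unpaired run up a level as-is, can be sketched as a bottom-up merge sort:

```python
def merge(L1, L2):
    """Standard two-pointer merge of two sorted lists."""
    i = j = 0
    out = []
    while i < len(L1) and j < len(L2):
        if L1[i] <= L2[j]:
            out.append(L1[i]); i += 1
        else:
            out.append(L2[j]); j += 1
    return out + L1[i:] + L2[j:]

def merge_sort_2way(A):
    """Bottom-up (2-way) merge sort: no divide phase, only merge passes.
    A leftover run with no partner is carried up to the next pass unchanged."""
    runs = [[x] for x in A]
    while len(runs) > 1:
        nxt = []
        for k in range(0, len(runs) - 1, 2):
            nxt.append(merge(runs[k], runs[k + 1]))
        if len(runs) % 2:            # odd run out: send it up one level as-is
            nxt.append(runs[-1])
        runs = nxt
    return runs[0] if runs else []
```

On [5,2,20,7,8,3] the passes produce [2,5],[7,20],[3,8], then [2,5,7,20] with [3,8] carried up, then the final merge.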

Quick Sort

Quick Sort uses a Partitioning algorithm first to partition the original array into sub-arrays. Each sub-array is then again partitioned further.

Partitioning

The time complexity of the partitioning algorithm is O(n) for a sub-array of n elements, as each element is compared with the pivot once.
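A concrete sketch in Python, assuming the common Lomuto scheme with the last element as pivot (the notes do not specify which partition scheme is intended):

```python
def partition(a, lo, hi):
    """Lomuto partition: a[hi] is the pivot; O(hi - lo) comparisons.
    Returns the pivot's final (correct) index."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]      # place pivot at its sorted position
    return i

def quick_sort(a, lo=0, hi=None):
    """Partition, then recurse on the two sub-arrays."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quick_sort(a, lo, p - 1)
        quick_sort(a, p + 1, hi)
    return a
```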

Sorting

Best Case - T(n) = 2T(n/2) + n = O(n log n)

Worst Case - T(n) = T(n-1) + n = O(n^2)

Case                                            Time
Best Case                                       O(n log n)
(array partitions from the mid each time)
Worst Case                                      O(n^2)
(when input is in ascending or descending order,
one partitioned sub-array will have n-1 elements)

Properties -

  1. Not In-place (see 4).
  2. Not Stable.
  3. Counter-intuitive, as it gives its worst case for already sorted inputs.
  4. Space - O(log n) on average, O(n) in the worst case, due to the recursion stack.

Master’s Method

Only applicable for Symmetric DnC. Mathematically -

T(n) = a*T(n/b) + f(n), where a >= 1 and b > 1

  1. Example - T(n) = 2T(n/2) + n. Here a = 2 >= 1 and b = 2 > 1. Thus Master’s Method applies.
  2. Even when an equation might not seem to satisfy Master’s Method at first glance, using Change of Variable might turn it into an equation that satisfies Master’s Method. See Question 3 for an example.

Case 1

If f(n) is O(n^(log_b(a) - e)) for some e > 0, then T(n) = Theta(n^(log_b(a))).

Case 2

If f(n) is Theta(n^(log_b(a)) * (log n)^k) for some k >= 0, then T(n) = Theta(n^(log_b(a)) * (log n)^(k+1)).

Case 3

If f(n) is Omega(n^(log_b(a) + e)) for some e > 0, and a*f(n/b) <= c*f(n) for some c < 1 and sufficiently large n, then T(n) = Theta(f(n)).
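For the common situation where f(n) is a plain polynomial n^d, the three cases can be applied mechanically. A small helper, under that simplifying assumption (it ignores log factors in f(n) and Case 3's regularity condition):

```python
import math

def master(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) via the Master Method.
    Assumes f(n) = n^d exactly (no log factors)."""
    c = math.log(a) / math.log(b)        # critical exponent log_b(a)
    if math.isclose(d, c):
        return f"Theta(n^{d:g} log n)"   # Case 2 (with k = 0)
    if d < c:
        return f"Theta(n^{c:g})"         # Case 1: recursion dominates
    return f"Theta(n^{d:g})"             # Case 3: f(n) dominates
```

E.g. merge sort is master(2, 2, 1), the DnC matrix multiplication below is master(8, 2, 2), and binary search is master(1, 2, 0).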

Matrix Multiplication

Here C = A × B, where A and B are n × n matrices. For a 2 × 2 case, calculating one element of C needs 2 multiplications and 1 addition. In general terms we need n multiplications and n - 1 additions to calculate one element of C.

Non Divide-n-Conquer

Matrix Addition - n^2 additions, one per element.

Matrix Multiplication - n^2 elements, each needing n multiplications and n - 1 additions.

A non-DnC method of Matrix Multiplication would have TC O(n^3). The TC of Matrix Addition is O(n^2).

Divide-n-Conquer

We can consider a matrix of size n × n to be made up of 4 matrices of sizes n/2 × n/2.

Here C11 = A11*B11 + A12*B21, where each product is further a matrix multiplication between two n/2 × n/2 matrices (similarly for C12, C21, C22).

  1. If the time complexity of multiplication of two n/2 × n/2 matrices is T(n/2), then the time complexity of multiplication of two n × n matrices is T(n).
  2. In the calculation of C11 we require 2 such matrix multiplications and 1 matrix addition.
  3. There are a total of 4 such submatrices of C that need to be calculated. Using the above information we can say that -

T(n) = 8T(n/2) + O(n^2) = O(n^3)
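The 8-multiplication block scheme can be sketched as follows (restricted to n a power of 2 for simplicity, using plain lists of lists):

```python
def mat_mul_dnc(A, B):
    """DnC matrix multiplication for n x n matrices, n a power of 2.
    8 recursive products: T(n) = 8T(n/2) + O(n^2) = O(n^3)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M, r, c):      # extract an h x h quadrant starting at (r, c)
        return [row[c:c + h] for row in M[r:r + h]]
    def add(X, Y):          # elementwise matrix addition
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A,0,0), quad(A,0,h), quad(A,h,0), quad(A,h,h)
    B11, B12, B21, B22 = quad(B,0,0), quad(B,0,h), quad(B,h,0), quad(B,h,h)
    C11 = add(mat_mul_dnc(A11, B11), mat_mul_dnc(A12, B21))
    C12 = add(mat_mul_dnc(A11, B12), mat_mul_dnc(A12, B22))
    C21 = add(mat_mul_dnc(A21, B11), mat_mul_dnc(A22, B21))
    C22 = add(mat_mul_dnc(A21, B12), mat_mul_dnc(A22, B22))
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]   # stitch quadrants back
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```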

Strassen’s Method

  1. In DnC based matrix multiplication there are 8 subproblems involved in getting C.
  2. Time complexity can be reduced if the number of subproblems is decreased from 8.

Using this above method, by creating the 7 product matrices M1 ... M7, C can still be calculated. However now we only require 7 subproblems instead of 8, giving T(n) = 7T(n/2) + O(n^2) = O(n^log2(7)) ≈ O(n^2.81).
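A sketch of Strassen's method; the notes don't list the seven products, so the M1..M7 formulas below follow the standard formulation:

```python
def strassen(A, B):
    """Strassen multiplication: 7 recursive products instead of 8.
    T(n) = 7T(n/2) + O(n^2) = O(n^log2(7)). n must be a power of 2."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    quad = lambda M, r, c: [row[c:c + h] for row in M[r:r + h]]
    add = lambda X, Y: [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    sub = lambda X, Y: [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A,0,0), quad(A,0,h), quad(A,h,0), quad(A,h,h)
    B11, B12, B21, B22 = quad(B,0,0), quad(B,0,h), quad(B,h,0), quad(B,h,h)
    M1 = strassen(add(A11, A22), add(B11, B22))   # the 7 sub-multiplications
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)           # combine with additions only
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return [r1 + r2 for r1, r2 in zip(C11, C12)] + \
           [r1 + r2 for r1, r2 in zip(C21, C22)]
```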

Long Integer Multiplication

We use 2-way split or 3-way split and then solve. There exist 3 methods of implementing these splits -

  1. Simple DnC - O(n^2)
  2. AK (Karatsuba) optimization DnC - O(n^log2(3)) ≈ O(n^1.585)
  3. T&C (Toom-Cook) optimization DnC - O(n^log3(5)) ≈ O(n^1.465)

Master’s Method can be used to find the above TCs.
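The 2-way split with the AK (Karatsuba) optimization can be sketched as follows (splitting on base-10 digits, a common choice):

```python
def karatsuba(x, y):
    """Karatsuba 2-way split: 3 sub-multiplications instead of 4.
    T(n) = 3T(n/2) + O(n) = O(n^log2(3)) for n-digit operands."""
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    a, b = divmod(x, p)        # x = a*10^m + b
    c, d = divmod(y, p)        # y = c*10^m + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # the third product replaces the separate a*d and b*c multiplications
    mid = karatsuba(a + b, c + d) - ac - bd      # = ad + bc
    return ac * p * p + mid * p + bd
```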

Coefficient of T(n/k) for some k-way split -

                         Simple DnC    AK Optimization    T&C Optimization
2-way split              4             3                  3
3-way split              9             8                  5
4-way split              16            15                 7
k-way split              k^2           k^2 - 1            2k - 1
(generalized)

Recurrence Equation -

T(n) = m * T(n/k) + O(n)

Here m is the coefficient of the k-way split based upon the DnC strategy used.


Questions

Q1)

Sol - Here we know that the worst case time complexity of merge sort is O(n log n).

Thus for elements, merge sort would take 6 minutes in the worst case.


Q2)

Sol - The worst possible positions would be either index 0 or index 24. Thus .


Q3) Solve the following using Master's Method -

Sol - In its current form, the equation will not satisfy any case of the Master’s Method. We can use Change of Variable to try and convert this into an equation which will satisfy Master’s Method.

Let n = 2^m, so m = log2(n). Let’s say for some function S, S(m) = T(2^m). Then we can write the above equation in terms of S like -

Now Master’s Method is applicable on this equation. Here . Case 2 of Master’s Method would satisfy this and thus we’d get .

As m = log2(n), substituting m back gives T(n) in terms of n.


Q4) In a variant of Quick Sort, suppose the n/4th smallest element is picked as the pivot for partitioning. What would be the time complexity recurrence?

Sol - We know that after partitioning, the pivot is placed at its correct position. Thus the n/4th smallest element would be placed at index n/4 of the array. The two sub-arrays that would then be solved would be of size n/4 - 1 and 3n/4. Thus,

T(n) = T(n/4) + T(3n/4) + O(n) = O(n log n)
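As a numeric sanity check (a sketch assuming the cost model T(n) = T(n/4) + T(3n/4) + n with T(1) = 0 and integer division approximating the split):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Cost model for the n/4-pivot variant: T(n) = T(n/4) + T(3n/4) + n."""
    if n <= 1:
        return 0
    return T(n // 4) + T(3 * n // 4) + n
```

Evaluating T(n)/(n log n) for growing n shows the ratio stays bounded, consistent with O(n log n) rather than O(n^2).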