Design and Analysis of Algorithms
1. Basic Iterative Algorithms
2. Basic Iterative Sorting Algorithms
3. Binary Search with Divide and Conquer approach
4. Merge, Heap, Quick and Randomized Quick Sort
5. Selection Problem with Divide and Conquer approach
6. Fractional Knapsack, Job Sequencing with Deadlines, Kruskal’s and Prim’s Algorithm
7. Dynamic Programming
8. Algorithms using Backtracking
9. Approximation Algorithm (Vertex Cover)
Theory Section
The GCD of two numbers is the largest number that divides both of them. A simple but slow way to find the GCD is to factorize both numbers and multiply the common factors; Euclid's algorithm is far faster, repeatedly replacing the pair (a, b) with (b, a mod b) until the remainder is 0. The complexities below correspond to Euclid's algorithm.
Algorithm
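A minimal Python sketch of Euclid's method (function name and sample values are illustrative):

```python
def gcd(a, b):
    # Euclid's algorithm: gcd(a, b) = gcd(b, a mod b), and gcd(a, 0) = a.
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```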
Time complexity
Best case: O(1)
Worst case: O(log n)
Average case: O(log n)
Space complexity: O(1)
Auxiliary space complexity: O(1)
The Fibonacci numbers are the numbers in the following integer sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ... In mathematical terms, the sequence F(n) of Fibonacci numbers is defined by the recurrence relation F(n) = F(n-1) + F(n-2), with seed values F(0) = 0 and F(1) = 1.
Algorithm
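The O(2^n) bounds below correspond to the naive recursive definition; a sketch (names illustrative):

```python
def fib(n):
    # Direct translation of F(n) = F(n-1) + F(n-2); each call spawns
    # two more, giving exponential time and O(n) recursion depth.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```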
Time complexity
Best case: O(1)
Worst case: O(2^n)
Average case: O(2^n)
Space complexity: O(n) (recursion stack)
Auxiliary space complexity: O(n)
Sequential search is a method for finding a particular value in a list: check every one of its elements, one at a time and in sequence, until the desired one is found.
Algorithm
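A minimal Python sketch, returning the index of the first match or -1 (names illustrative):

```python
def sequential_search(arr, target):
    # Check every element, one at a time and in sequence.
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

print(sequential_search([7, 3, 9, 1], 9))  # 2
```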
Time complexity
Best case: O(1)
Worst case: O(n)
Average case: O(n)
Space complexity: O(1)
Auxiliary space complexity: O(1)
Binary search is a fast search algorithm with run-time complexity of O(log n). This search algorithm works on the principle of divide and conquer. For this algorithm to work properly, the data collection should be in sorted form.
Algorithm
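An iterative Python sketch, assuming arr is sorted in ascending order (names illustrative):

```python
def binary_search(arr, target):
    # Halve the search range on every iteration.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```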
Time complexity
Best case: O(1)
Worst case: O(log n)
Average case: O(log n)
Space complexity: O(1)
Auxiliary space complexity: O(1)
Bubble sort is the simplest sorting algorithm: it repeatedly swaps adjacent elements that are in the wrong order. The algorithm, a comparison sort, is named for the way smaller or larger elements "bubble" to the top of the list. Although simple, it is too slow and impractical for most problems, even compared to insertion sort; it can be practical when the input is usually in sorted order with only a few out-of-order elements near their final positions. The complexity is O(n^2) in the worst and average cases, because the entire array is scanned for every element, but O(n) in the best case, because a single pass suffices when the array is already sorted. Bubble sort is adaptive (it performs well on nearly sorted arrays), stable (equal elements keep their relative order), a comparison sort, in-place (it needs no extra space), and non-recursive.
Algorithm
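A Python sketch with the early-exit optimization that gives the O(n) best case (names illustrative):

```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After pass i, the largest i+1 elements are in their final places.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:   # a pass with no swaps means the array is sorted
            break
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```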
Time complexity
Best case: O(n)
Worst case: O(n^2)
Average case: O(n^2)
Worst case space complexity: O(1)
Auxiliary space complexity: O(1)
Sorting in place: Yes
Stable: Yes
Selection sort is a sorting algorithm, specifically an in-place comparison sort. It has O(n^2) time complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited.
Algorithm
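A minimal Python sketch (names illustrative):

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the minimum of the unsorted suffix and swap it into place.
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```

The long-distance swap is what breaks stability: it can move an element past an equal one.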
Time complexity
Best case: O(n^2)
Worst case: O(n^2)
Average case: O(n^2)
Worst case space complexity: O(1)
Auxiliary space complexity: O(1)
Sorting in place: Yes
Stable: No
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time.
Algorithm
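A minimal Python sketch (names illustrative):

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        # Shift larger elements of the sorted prefix one slot right,
        # then drop key into the gap.
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]
```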
Time complexity
Best case: O(n)
Worst case: O(n^2)
Average case: O(n^2)
Worst case space complexity: O(1)
Auxiliary space complexity: O(1)
Sorting in place: Yes
Stable: Yes
Binary search can also be implemented recursively, which makes its divide-and-conquer structure explicit: compare the target with the middle element of the sorted collection, then recurse into the left or right half. The run-time complexity remains O(log n), and, as before, the data collection must be sorted.
Algorithm
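A recursive Python sketch, assuming arr is sorted (names illustrative):

```python
def binary_search_rec(arr, target, lo=0, hi=None):
    # Divide and conquer: recurse into the half that can contain target.
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:
        return -1
    mid = (lo + hi) // 2
    if arr[mid] == target:
        return mid
    if arr[mid] < target:
        return binary_search_rec(arr, target, mid + 1, hi)
    return binary_search_rec(arr, target, lo, mid - 1)

print(binary_search_rec([2, 4, 6, 8, 10], 8))  # 3
```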
Time complexity
Best case: O(1)
Worst case: O(log n)
Average case: O(log n)
Space complexity: O(log n) (recursion stack)
Auxiliary space complexity: O(log n)
Merge sort is an efficient, general-purpose, comparison-based sorting algorithm. Most implementations produce a stable sort, which means that the order of equal elements is the same in the input and output. Merge sort is a divide and conquer algorithm that was invented by John von Neumann in 1945.
Algorithm
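An out-of-place Python sketch (names illustrative); the `<=` in the merge step is what keeps it stable:

```python
def merge_sort(arr):
    # Divide: split in half. Conquer: sort each half. Combine: merge.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= preserves the order of equals
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```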
Time complexity
Best case: O(n log n)
Worst case: O(n log n)
Average case: O(n log n)
Worst case space complexity: O(n)
Auxiliary space complexity: O(n)
Sorting in place: No
Stable: Yes
Heap sort is a comparison-based sorting technique based on the binary heap data structure. It is similar to selection sort: we first find the maximum element and place it at the end, then repeat the same process for the remaining elements.
Algorithm
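An in-place Python sketch using a max-heap (names illustrative):

```python
def sift_down(arr, i, n):
    # Restore the max-heap property for the subtree rooted at i
    # within the first n elements.
    while True:
        largest, l, r = i, 2 * i + 1, 2 * i + 2
        if l < n and arr[l] > arr[largest]:
            largest = l
        if r < n and arr[r] > arr[largest]:
            largest = r
        if largest == i:
            return
        arr[i], arr[largest] = arr[largest], arr[i]
        i = largest

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):    # build the heap, O(n)
        sift_down(arr, i, n)
    for end in range(n - 1, 0, -1):        # move the max to the end, shrink
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(arr, 0, end)
    return arr

print(heap_sort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```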
Time complexity
Best case: O(n log n)
Worst case: O(n log n)
Average case: O(n log n)
Worst case space complexity: O(1)
Auxiliary space complexity: O(1)
Sorting in place: Yes
Stable: No
Quicksort is an efficient sorting algorithm. Developed by British computer scientist Tony Hoare in 1959 and published in 1961, it is still a commonly used algorithm for sorting. When implemented well, it can be about two or three times faster than its main competitors, merge sort and heapsort.
Algorithm
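A Python sketch using the Lomuto partition scheme with the last element as pivot (names illustrative):

```python
def quick_sort(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        # Partition: move everything <= pivot into the left block.
        pivot, i = arr[hi], lo - 1
        for j in range(lo, hi):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[hi] = arr[hi], arr[i + 1]   # pivot to final spot
        quick_sort(arr, lo, i)          # recurse left of the pivot
        quick_sort(arr, i + 2, hi)      # recurse right of the pivot
    return arr

print(quick_sort([10, 7, 8, 9, 1, 5]))  # [1, 5, 7, 8, 9, 10]
```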
Time complexity
Best case: O(n log n)
Worst case: O(n^2)
Average case: O(n log n)
Worst case space complexity: O(log n)
Auxiliary space complexity: O(log n)
Sorting in place: Yes
Stable: No
Randomized quicksort is the same algorithm with the pivot chosen uniformly at random. This makes the O(n^2) worst case unlikely on every input, so the expected running time is O(n log n) regardless of the input order.
Algorithm
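The only change from the sketch above: swap a uniformly random element into the pivot position before partitioning.

```python
import random

def randomized_quick_sort(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = random.randint(lo, hi)            # random pivot choice
        arr[p], arr[hi] = arr[hi], arr[p]
        pivot, i = arr[hi], lo - 1
        for j in range(lo, hi):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[hi] = arr[hi], arr[i + 1]
        randomized_quick_sort(arr, lo, i)
        randomized_quick_sort(arr, i + 2, hi)
    return arr

print(randomized_quick_sort([10, 7, 8, 9, 1, 5]))  # [1, 5, 7, 8, 9, 10]
```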
Time complexity
Best case: O(n log n)
Worst case: O(n^2)
Average case: O(n log n)
Worst case space complexity: O(log n)
Auxiliary space complexity: O(log n)
Sorting in place: Yes
Stable: No
The selection problem is to find the kth smallest element in an unsorted array. For example, the 3rd smallest element in [1, 4, 3, 2, 5] is 3, and the 1st smallest element in [1, -1, 2, -2, 3] is -2. Quickselect solves it with quicksort-style partitioning, recursing into only the side that contains the kth position.
Algorithm
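An out-of-place quickselect sketch for clarity; the in-place version partitions the array exactly as quicksort does (names illustrative):

```python
import random

def quickselect(arr, k):
    # kth smallest, 1-indexed: partition around a random pivot, then
    # recurse into only the side that contains position k.
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    if k <= len(less):
        return quickselect(less, k)
    if k <= len(less) + len(equal):
        return pivot
    return quickselect(greater, k - len(less) - len(equal))

print(quickselect([1, 4, 3, 2, 5], 3))  # 3
```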
Time complexity
Best case: O(n)
Worst case: O(n^2)
Average case: O(n)
Worst case space complexity: O(log n)
Auxiliary space complexity: O(log n)
Given weights and values of n items, we need to put these items in a knapsack of capacity W to get the maximum total value in the knapsack. In the 0-1 knapsack problem we are not allowed to break items: we either take a whole item or leave it. In the fractional knapsack problem we may break items to maximize the total value, taking a fraction of an item when the remaining capacity cannot hold all of it.
Algorithm
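A greedy Python sketch: sort by value-to-weight ratio and split only the last item taken (names and sample data illustrative):

```python
def fractional_knapsack(values, weights, W):
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for v, w in items:
        if W <= 0:
            break
        take = min(w, W)        # whole item if it fits, else a fraction
        total += v * take / w
        W -= take
    return total

print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))  # 240.0
```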
Time complexity
Best case: O(n log n)
Worst case: O(n log n)
Average case: O(n log n)
Worst case space complexity: O(n)
Auxiliary space complexity: O(n)
Given an array of jobs, each with a deadline and an associated profit that is earned only if the job is finished by its deadline, and given that every job takes a single unit of time (so the minimum possible deadline for any job is 1), the goal is to maximize total profit when only one job can be scheduled at a time.
Algorithm
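A greedy Python sketch: consider jobs in decreasing order of profit and give each the latest free slot on or before its deadline (names and sample data illustrative):

```python
def job_sequencing(jobs):
    # jobs: list of (id, deadline, profit) tuples.
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    slots = [None] * (max(j[1] for j in jobs) + 1)
    for job_id, deadline, profit in jobs:
        for t in range(deadline, 0, -1):   # latest free slot <= deadline
            if slots[t] is None:
                slots[t] = job_id
                break
    return [j for j in slots if j is not None]

print(job_sequencing([('a', 2, 100), ('b', 1, 19), ('c', 2, 27),
                      ('d', 1, 25), ('e', 3, 15)]))  # ['c', 'a', 'e']
```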
Time complexity
Best case: O(n^2)
Worst case: O(n^2)
Average case: O(n^2)
Worst case space complexity: O(n)
Auxiliary space complexity: O(n)
Kruskal's algorithm is a minimum-spanning-tree algorithm: at each step it finds an edge of the least possible weight that connects any two trees in the forest. It is a greedy algorithm in graph theory, as it builds a minimum spanning tree for a connected weighted graph by adding edges in increasing order of cost.
Algorithm
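A Python sketch with a simple union-find to detect cycles (names and sample graph illustrative):

```python
def kruskal(n, edges):
    # edges: (weight, u, v) triples over vertices 0..n-1.
    parent = list(range(n))

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # edges in increasing order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                   # joins two different trees: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(10, 0, 1), (6, 0, 2), (5, 0, 3), (15, 1, 3), (4, 2, 3)]
print(kruskal(4, edges))  # MST of total weight 19
```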
Time complexity
Best case: O(E log V)
Worst case: O(E log V)
Average case: O(E log V)
Worst case space complexity: O(V)
Auxiliary space complexity: O(V)
Prim's algorithm is a greedy algorithm that finds a minimum spanning tree for a weighted undirected graph. This means it finds a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized.
Algorithm
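A Python sketch using a priority queue of frontier edges (names and sample graph illustrative):

```python
import heapq

def prim(adj, start=0):
    # adj: adjacency list mapping u -> [(weight, v), ...].
    visited = {start}
    heap = list(adj[start])
    heapq.heapify(heap)
    total = 0
    while heap:
        w, v = heapq.heappop(heap)     # cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        total += w
        for edge in adj[v]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return total

adj = {0: [(10, 1), (6, 2), (5, 3)], 1: [(10, 0), (15, 3)],
       2: [(6, 0), (4, 3)], 3: [(5, 0), (15, 1), (4, 2)]}
print(prim(adj))  # 19
```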
Time complexity
Best case: O(E log V)
Worst case: O(E log V)
Average case: O(E log V)
Worst case space complexity: O(V)
Auxiliary space complexity: O(V)
Given weights and values of n items, we need to put these items in a knapsack of capacity W to get the maximum total value in the knapsack. In the 0-1 Knapsack problem, we are not allowed to break items. We either take the whole item or don’t take it.
Algorithm
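A dynamic-programming sketch; this version keeps only one row of the O(nW) table, scanning capacities downward so each item is used at most once (names and sample data illustrative):

```python
def knapsack_01(values, weights, W):
    # dp[w] = best value achievable within capacity w.
    dp = [0] * (W + 1)
    for v, wt in zip(values, weights):
        for w in range(W, wt - 1, -1):      # downward: item taken <= once
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[W]

print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # 220
```

Note that O(nW) is pseudo-polynomial: W is a numeric value, not the length of the input.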
Time complexity
Best case: O(nW)
Worst case: O(nW)
Average case: O(nW)
Worst case space complexity: O(nW)
Auxiliary space complexity: O(nW)
Given a sequence of matrices, find the most efficient way to multiply these matrices together. The problem is not actually to perform the multiplications, but merely to decide in which order to perform the multiplications.
Algorithm
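A memoized Python sketch, where dims[i-1] x dims[i] is the shape of matrix i (names and sample data illustrative):

```python
from functools import lru_cache

def matrix_chain(dims):
    @lru_cache(maxsize=None)
    def cost(i, j):
        # Minimum scalar multiplications to compute the product of
        # matrices i..j; try every split point k.
        if i == j:
            return 0
        return min(cost(i, k) + cost(k + 1, j)
                   + dims[i - 1] * dims[k] * dims[j]
                   for k in range(i, j))
    return cost(1, len(dims) - 1)

print(matrix_chain([1, 2, 3, 4, 3]))  # 30
```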
Time complexity
Best case: O(n^3)
Worst case: O(n^3)
Average case: O(n^3)
Worst case space complexity: O(n^2)
Auxiliary space complexity: O(n^2)
Given two strings str1 and str2, and the operations below that can be performed on str1, find the minimum number of edits (operations) required to convert str1 into str2:
Insert
Remove
Replace
Algorithm
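A tabular DP sketch, where dp[i][j] is the minimum number of edits to convert str1[:i] into str2[:j] (names illustrative):

```python
def edit_distance(str1, str2):
    m, n = len(str1), len(str2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0:
                dp[i][j] = j                  # insert all of str2[:j]
            elif j == 0:
                dp[i][j] = i                  # remove all of str1[:i]
            elif str1[i - 1] == str2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]   # last characters match
            else:
                dp[i][j] = 1 + min(dp[i][j - 1],       # insert
                                   dp[i - 1][j],       # remove
                                   dp[i - 1][j - 1])   # replace
    return dp[m][n]

print(edit_distance("sunday", "saturday"))  # 3
```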
Time complexity
Best case: O(mn)
Worst case: O(mn)
Average case: O(mn)
Worst case space complexity: O(mn)
Auxiliary space complexity: O(mn)
The Floyd-Warshall algorithm solves the all-pairs shortest path problem: find the shortest distances between every pair of vertices in a given edge-weighted directed graph. It works by successively improving an estimate of the shortest path between two vertices until the estimate is optimal.
Algorithm
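A Python sketch operating on an adjacency matrix, with float('inf') for missing edges (names and sample graph illustrative):

```python
INF = float('inf')

def floyd_warshall(dist):
    # After the iteration for k, dist[i][j] is the shortest i -> j path
    # using only intermediate vertices from {0, ..., k}.
    V = len(dist)
    for k in range(V):
        for i in range(V):
            for j in range(V):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

graph = [[0, 5, INF, 10],
         [INF, 0, 3, INF],
         [INF, INF, 0, 1],
         [INF, INF, INF, 0]]
print(floyd_warshall(graph)[0][3])  # 9, via vertices 1 and 2
```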
Time complexity
Best case: O(V^3)
Worst case: O(V^3)
Average case: O(V^3)
Worst case space complexity: O(V^2)
Auxiliary space complexity: O(V^2)
Given a set of cities and the distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point. Note the difference between the Hamiltonian cycle problem and TSP: the Hamiltonian cycle problem asks whether there exists a tour that visits every city exactly once. Here we know a Hamiltonian tour exists (because the graph is complete), and in fact many such tours exist; the problem is to find a minimum-weight Hamiltonian cycle.
Algorithm
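A Held-Karp DP sketch matching the O(V^2 * 2^V) bounds below; mask is the bitmask of visited cities (names and sample data illustrative):

```python
from functools import lru_cache

def tsp(dist):
    n = len(dist)

    @lru_cache(maxsize=None)
    def cost(mask, i):
        # Cheapest path starting at city 0, visiting exactly the cities
        # in mask, and ending at city i.
        if mask == (1 << 0) | (1 << i):
            return dist[0][i]
        prev = mask ^ (1 << i)
        return min(cost(prev, j) + dist[j][i]
                   for j in range(1, n) if j != i and prev & (1 << j))

    full = (1 << n) - 1
    return min(cost(full, i) + dist[i][0] for i in range(1, n))

dist = [[0, 10, 15, 20], [10, 0, 35, 25],
        [15, 35, 0, 30], [20, 25, 30, 0]]
print(tsp(dist))  # 80
```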
Time complexity
Best case: O(V^2 * 2^V)
Worst case: O(V^2 * 2^V)
Average case: O(V^2 * 2^V)
Worst case space complexity: O(V * 2^V)
Auxiliary space complexity: O(V * 2^V)
The N-queens problem is the problem of placing N chess queens on an N×N chessboard so that no two queens attack each other. For example, one solution to the 4-queens problem places queens at squares (1, 2), (2, 4), (3, 1), and (4, 3), written as (row, column).
Algorithm
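A backtracking sketch that places one queen per row and tracks attacked columns and diagonals in sets (names illustrative):

```python
def solve_n_queens(n):
    solutions, cols, diag1, diag2 = [], set(), set(), set()

    def place(row, board):
        if row == n:
            solutions.append(board)
            return
        for col in range(n):
            if col in cols or row + col in diag1 or row - col in diag2:
                continue                        # square is attacked
            cols.add(col); diag1.add(row + col); diag2.add(row - col)
            place(row + 1, board + [col])
            cols.remove(col); diag1.remove(row + col); diag2.remove(row - col)  # backtrack

    place(0, [])
    return solutions

print(solve_n_queens(4))  # [[1, 3, 0, 2], [2, 0, 3, 1]] (column per row)
```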
Time complexity
Best case: O(n!)
Worst case: O(n!)
Average case: O(n!)
Worst case space complexity: O(n^2)
Auxiliary space complexity: O(n^2)
Given a set of non-negative integers and a value sum, determine if there is a subset of the given set with sum equal to the given sum.
Algorithm
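The O(n * sum) bounds below describe the dynamic-programming formulation; a sketch keeping one boolean row (a backtracking version would instead recursively try including or excluding each element):

```python
def subset_sum(nums, target):
    # dp[s] is True when some subset of the numbers seen so far sums to s.
    dp = [False] * (target + 1)
    dp[0] = True                             # the empty subset sums to 0
    for x in nums:
        for s in range(target, x - 1, -1):   # downward: each x used <= once
            dp[s] = dp[s] or dp[s - x]
    return dp[target]

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5)
```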
Time complexity
Best case: O(n * sum)
Worst case: O(n * sum)
Average case: O(n * sum)
Worst case space complexity: O(n * sum)
Auxiliary space complexity: O(n * sum)
The vertex cover problem is the optimization problem of finding the smallest vertex cover for a given graph. A vertex cover of a graph is a set of vertices such that each edge of the graph is incident to at least one vertex of the set. A vertex cover of minimum size is called a minimum vertex cover. The vertex cover problem is a classical optimization problem in computer science and is a typical example of an NP-hard optimization problem that has an approximation algorithm.
Algorithm
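A sketch of the classic 2-approximation: pick any uncovered edge and add both of its endpoints (names and sample graph illustrative):

```python
def approx_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # Both endpoints go in; at least one of them must be in any
            # optimal cover, so the result is at most twice the optimum.
            cover.add(u)
            cover.add(v)
    return cover

edges = [(0, 1), (0, 2), (1, 3), (3, 4), (4, 5)]
print(approx_vertex_cover(edges))  # {0, 1, 3, 4}
```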
Time complexity
Best case: O(V + E)
Worst case: O(V + E)
Average case: O(V + E)
Worst case space complexity: O(V + E)
Auxiliary space complexity: O(V + E)