## Kruskal’s Algorithm-

• Kruskal’s Algorithm is a famous greedy algorithm.
• It is used for finding the Minimum Spanning Tree (MST) of a given graph.
• To apply Kruskal’s algorithm, the given graph must be weighted, connected and undirected.

## Kruskal’s Algorithm Implementation-

The implementation of Kruskal’s Algorithm is explained in the following steps-

### Step-01:

• Sort all the edges from low weight to high weight.

### Step-02:

• Take the edge with the lowest weight and use it to connect the vertices of the graph.
• If adding an edge creates a cycle, then reject that edge and go for the next least-weight edge.

### Step-03:

• Keep adding edges until all the vertices are connected and a Minimum Spanning Tree (MST) is obtained.

### Thumb Rule to Remember

The above steps may be reduced to the following thumb rule-

• Simply draw all the vertices on the paper.
• Connect these vertices using edges with minimum weights such that no cycle gets formed.
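The steps above can be sketched in Python. The edge list below is hypothetical, and cycle detection uses a union-find (disjoint-set) structure, which is the standard way to implement the "reject cycle-forming edges" check efficiently:

```python
# A minimal sketch of Kruskal's Algorithm (hypothetical edge list for illustration).

def kruskal(num_vertices, edges):
    """edges: list of (weight, u, v); vertices are 0..num_vertices-1."""
    parent = list(range(num_vertices))

    def find(x):
        # Walk up to the root, flattening the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for weight, u, v in sorted(edges):       # Step-01: sort edges by weight
        root_u, root_v = find(u), find(v)
        if root_u != root_v:                 # Step-02: reject cycle-forming edges
            parent[root_u] = root_v
            mst.append((u, v, weight))
        if len(mst) == num_vertices - 1:     # Step-03: stop once all vertices connect
            break
    return mst

# Hypothetical 4-vertex weighted undirected graph
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst = kruskal(4, edges)
print(sum(w for _, _, w in mst))             # total MST weight
```

Two vertices lie in the same union-find set exactly when they are already connected in the partial MST, so joining them again would create a cycle.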

## Kruskal’s Algorithm Time Complexity-

Worst case time complexity of Kruskal’s Algorithm = O(E log V) or O(E log E)

### Analysis-

• The edges are maintained as a min heap.
• The next least-weight edge can be obtained in O(log E) time if the graph has E edges.
• Building the heap takes O(E) time.
• So, Kruskal’s Algorithm takes O(E log E) time.
• The value of E can be at most O(V²), so log E ≤ 2 log V.
• So, O(log V) and O(log E) are the same.

### Special Case-

• If the edges are already sorted, then there is no need to construct a min heap.
• So, the time spent on min heap deletions is saved.
• In this case, time complexity of Kruskal’s Algorithm = O(E + V)

## Problem-01:

Construct the minimum spanning tree (MST) for the given graph using Kruskal’s Algorithm-

## Solution-

To construct MST using Kruskal’s Algorithm,

• Simply draw all the vertices on the paper.
• Connect these vertices using edges with minimum weights such that no cycle gets formed.

### Step-07:

Since all the vertices have been included in the MST, we stop.

Weight of the MST

= Sum of all edge weights

= 10 + 25 + 22 + 12 + 16 + 14

= 99 units

To gain better understanding about Kruskal’s Algorithm,

Watch this Video Lecture

To practice previous years GATE problems based on Kruskal’s Algorithm,

Watch this Video Lecture

Next Article- Prim’s Algorithm Vs Kruskal’s Algorithm

Get more notes and other study material of Design and Analysis of Algorithms.

Watch video lectures by visiting our YouTube channel LearnVidFun.

## Prim’s Algorithm-

• Prim’s Algorithm is a famous greedy algorithm.
• It is used for finding the Minimum Spanning Tree (MST) of a given graph.
• To apply Prim’s algorithm, the given graph must be weighted, connected and undirected.

## Prim’s Algorithm Implementation-

The implementation of Prim’s Algorithm is explained in the following steps-

### Step-01:

• Choose any vertex to start from.
• Usually, a vertex incident to the least-weight edge is selected.

### Step-02:

• Find all the edges that connect the tree to new vertices.
• Find the least weight edge among those edges and include it in the existing tree.
• If including that edge creates a cycle, then reject that edge and look for the next least weight edge.

### Step-03:

• Keep repeating Step-02 until all the vertices are included and the Minimum Spanning Tree (MST) is obtained.
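These steps can be sketched in Python using the standard-library binary heap (`heapq`). The adjacency list below is a hypothetical graph for illustration:

```python
import heapq

# A minimal sketch of Prim's Algorithm with a binary heap.

def prim(adj, start=0):
    """adj: dict mapping vertex -> list of (weight, neighbour) pairs."""
    visited = {start}
    # Heap of candidate edges leaving the current tree: (weight, u, v)
    heap = [(w, start, v) for w, v in adj[start]]
    heapq.heapify(heap)
    mst = []
    while heap and len(visited) < len(adj):
        w, u, v = heapq.heappop(heap)        # Step-02: least-weight frontier edge
        if v in visited:                     # would form a cycle: reject
            continue
        visited.add(v)
        mst.append((u, v, w))
        for w2, nxt in adj[v]:               # add the new vertex's frontier edges
            if nxt not in visited:
                heapq.heappush(heap, (w2, v, nxt))
    return mst

# Hypothetical 4-vertex weighted undirected graph
adj = {
    0: [(1, 1), (4, 2)],
    1: [(1, 0), (3, 2), (2, 3)],
    2: [(4, 0), (3, 1), (5, 3)],
    3: [(2, 1), (5, 2)],
}
mst = prim(adj)
print(sum(w for _, _, w in mst))             # total MST weight
```

An edge whose far endpoint is already in the tree is simply skipped; that is the cycle check of Step-02.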

## Prim’s Algorithm Time Complexity-

Worst case time complexity of Prim’s Algorithm is-

• O(E log V) using a binary heap
• O(E + V log V) using a Fibonacci heap

### Time Complexity Analysis

• If adjacency list is used to represent the graph, then using breadth first search, all the vertices can be traversed in O(V + E) time.
• We traverse all the vertices of graph using breadth first search and use a min heap for storing the vertices not yet included in the MST.
• To get the minimum weight edge, we use min heap as a priority queue.
• Min heap operations like extracting the minimum element and decreasing a key value take O(log V) time.

So, overall time complexity

= O(V + E) x O(log V)

= O((V + E) log V)

= O(E log V)          (since V = O(E) for a connected graph)

This time complexity can be improved and reduced to O(E + V log V) using a Fibonacci heap.

## Problem-01:

Construct the minimum spanning tree (MST) for the given graph using Prim’s Algorithm-

## Solution-

The above discussed steps are followed to find the minimum cost spanning tree using Prim’s Algorithm-

### Step-06:

Since all the vertices have been included in the MST, we stop.

Now, Cost of Minimum Spanning Tree

= Sum of all edge weights

= 10 + 25 + 22 + 12 + 16 + 14

= 99 units

## Problem-02:

Using Prim’s Algorithm, find the cost of minimum spanning tree (MST) of the given graph-

## Solution-

The minimum spanning tree obtained by the application of Prim’s Algorithm on the given graph is as shown below-

Now, Cost of Minimum Spanning Tree

= Sum of all edge weights

= 1 + 4 + 2 + 6 + 3 + 10

= 26 units


## Topological Sort-

Topological Sort is a linear ordering of the vertices such that if there is an edge in the DAG going from vertex ‘u’ to vertex ‘v’, then ‘u’ comes before ‘v’ in the ordering.

It is important to note that-

• Topological Sorting is possible if and only if the graph is a Directed Acyclic Graph.
• There may exist multiple different topological orderings for a given directed acyclic graph.

## Topological Sort Example-

Consider the following directed acyclic graph-

For this graph, the following 4 different topological orderings are possible-

• 1 2 3 4 5 6
• 1 2 3 4 6 5
• 1 3 2 4 5 6
• 1 3 2 4 6 5

## Applications of Topological Sort-

A few important applications of topological sort are-

• Scheduling jobs from the given dependencies among jobs
• Instruction Scheduling
• Determining the order of compilation tasks to perform in makefiles
• Data Serialization
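The in-degree-based procedure used in the worked problems below is known as Kahn's algorithm: repeatedly remove a vertex of in-degree 0 and update the in-degrees of its neighbours. A minimal Python sketch (the edge list is hypothetical):

```python
from collections import deque

def topological_sort(vertices, edges):
    indegree = {v: 0 for v in vertices}
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1
    queue = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()                  # a vertex with in-degree 0
        order.append(u)
        for v in adj[u]:                     # "remove" u's outgoing edges
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(vertices):
        raise ValueError("graph has a cycle; no topological ordering exists")
    return order

order = topological_sort("ABCDE",
        [("A", "B"), ("B", "C"), ("B", "D"), ("C", "E"), ("D", "E")])
print(order)
```

If the graph contains a cycle, some vertices never reach in-degree 0, which is why the final length check detects the "not a DAG" case.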

## Problem-01:

Find the number of different topological orderings possible for the given graph-

## Solution-

The topological orderings of the above graph are found in the following steps-

### Step-01:

Write in-degree of each vertex-

### Step-02:

• Vertex-A has the least in-degree.
• So, remove vertex-A and its associated edges.
• Now, update the in-degree of other vertices.

### Step-03:

• Vertex-B has the least in-degree.
• So, remove vertex-B and its associated edges.
• Now, update the in-degree of other vertices.

### Step-04:

There are two vertices with the least in-degree. So, the following 2 cases are possible-

In case-01,

• Remove vertex-C and its associated edges.
• Then, update the in-degree of other vertices.

In case-02,

• Remove vertex-D and its associated edges.
• Then, update the in-degree of other vertices.

### Step-05:

Now, the above two cases are continued separately in a similar manner.

In case-01,

• Remove vertex-D since it has the least in-degree.
• Then, remove the remaining vertex-E.

In case-02,

• Remove vertex-C since it has the least in-degree.
• Then, remove the remaining vertex-E.

### Conclusion-

For the given graph, the following 2 different topological orderings are possible-

• A B C D E
• A B D C E
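The case analysis above can be automated with a small backtracking count. The edge list here is hypothetical, chosen to be consistent with the steps described (A first, then B, then C and D in either order, then E), since the original figure is not reproduced here:

```python
# Enumerate all topological orderings by trying every in-degree-0 vertex
# at each position (hypothetical edge list for illustration).

def all_topological_orderings(vertices, edges):
    indegree = {v: 0 for v in vertices}
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1

    orderings = []

    def backtrack(order):
        if len(order) == len(vertices):
            orderings.append("".join(order))
            return
        for v in vertices:                   # try every removable vertex
            if indegree[v] == 0 and v not in order:
                indegree[v] -= 1             # mark v as taken
                for w in adj[v]:
                    indegree[w] -= 1
                backtrack(order + [v])
                indegree[v] += 1             # undo, to explore the next case
                for w in adj[v]:
                    indegree[w] += 1

    backtrack([])
    return orderings

print(all_topological_orderings("ABCDE",
      [("A", "B"), ("B", "C"), ("B", "D"), ("C", "E"), ("D", "E")]))
```

Each branch of the recursion corresponds to one of the "cases" in the step-by-step solution; the two branches here yield exactly A B C D E and A B D C E.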

## Problem-02:

Find the number of different topological orderings possible for the given graph-

## Solution-

The topological orderings of the above graph are found in the following steps-

### Step-01:

Write in-degree of each vertex-

### Step-02:

• Vertex-1 has the least in-degree.
• So, remove vertex-1 and its associated edges.
• Now, update the in-degree of other vertices.

### Step-03:

There are two vertices with the least in-degree. So, the following 2 cases are possible-

In case-01,

• Remove vertex-2 and its associated edges.
• Then, update the in-degree of other vertices.

In case-02,

• Remove vertex-3 and its associated edges.
• Then, update the in-degree of other vertices.

### Step-04:

Now, the above two cases are continued separately in a similar manner.

In case-01,

• Remove vertex-3 since it has the least in-degree.
• Then, update the in-degree of other vertices.

In case-02,

• Remove vertex-2 since it has the least in-degree.
• Then, update the in-degree of other vertices.

### Step-05:

In case-01,

• Remove vertex-4 since it has the least in-degree.
• Then, update the in-degree of other vertices.

In case-02,

• Remove vertex-4 since it has the least in-degree.
• Then, update the in-degree of other vertices.

### Step-06:

In case-01,

• There are 2 vertices with the least in-degree.
• So, 2 cases are possible.
• Any of the two vertices may be taken first.

The same applies to case-02.

### Conclusion-

For the given graph, the following 4 different topological orderings are possible-

• 1 2 3 4 5 6
• 1 2 3 4 6 5
• 1 3 2 4 5 6
• 1 3 2 4 6 5

## Problem-03:

Consider the directed graph given below. Which of the following statements is true?

1. The graph does not have any topological ordering.
2. Both PQRS and SRPQ are topological orderings.
3. Both PSRQ and SPRQ are topological orderings.
4. PSRQ is the only topological ordering.

## Solution-

• The given graph is a directed acyclic graph.
• So, topological orderings exist.
• P and S must appear before R and Q in topological orderings as per the definition of topological sort.

Thus, the correct option is (C).

## Problem-04:

Consider the following directed graph-

The number of different topological orderings of the vertices of the graph is ________ ?

## Solution-

Number of different topological orderings possible = 6.

(The solution is explained in detail in the linked video lecture.)


## Floyd Warshall Algorithm-

• Floyd Warshall Algorithm is a famous shortest path algorithm.
• It is used to solve the All Pairs Shortest Path Problem.
• It computes the shortest path between every pair of vertices of the given graph.
• Floyd Warshall Algorithm is an example of the dynamic programming approach.

Floyd Warshall Algorithm has the following main advantages-

• It is extremely simple.
• It is easy to implement.

### Algorithm-

Floyd Warshall Algorithm is as shown below-

```
Create a |V| x |V| matrix M        // It represents the distance between every pair of vertices

For each cell (i, j) in M do
    if i == j
        M[i][j] = 0                // For all diagonal elements, value = 0
    else if (i, j) is an edge in E
        M[i][j] = weight(i, j)     // Direct edge between the vertices: value = weight of edge
    else
        M[i][j] = infinity         // No direct edge between the vertices: value = ∞

for k from 1 to |V|
    for i from 1 to |V|
        for j from 1 to |V|
            if M[i][j] > M[i][k] + M[k][j]
                M[i][j] = M[i][k] + M[k][j]
```
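A runnable Python version of the algorithm (the 4-vertex weighted graph below is hypothetical, for illustration only):

```python
INF = float("inf")

def floyd_warshall(dist):
    """dist: |V| x |V| matrix with 0 on the diagonal, edge weights for
    direct edges, and INF where no direct edge exists."""
    n = len(dist)
    d = [row[:] for row in dist]             # work on a copy of the matrix
    for k in range(n):                       # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Hypothetical 4-vertex directed weighted graph
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
result = floyd_warshall(graph)
print(result[0][2])                          # shortest distance from vertex 0 to vertex 2
```

Each completed iteration of the outer `k` loop yields one intermediate distance matrix, which is where the matrices D1 through D4 in the worked problem come from.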

### Time Complexity-

• Floyd Warshall Algorithm consists of three nested loops over all the nodes.
• The innermost loop consists of only constant-complexity operations.
• Hence, the asymptotic complexity of Floyd Warshall Algorithm is O(n³).
• Here, n is the number of nodes in the given graph.

### When Is Floyd Warshall Algorithm Used?

• Floyd Warshall Algorithm is best suited for dense graphs.
• This is because its complexity depends only on the number of vertices in the given graph.
• For sparse graphs, Johnson’s Algorithm is more suitable.

## Problem-

Consider the following directed weighted graph-

Using Floyd Warshall Algorithm, find the shortest path distance between every pair of vertices.

## Solution-

### Step-01:

• Remove all the self loops and parallel edges (keeping the lowest-weight edge) from the graph.
• In the given graph, there are neither self loops nor parallel edges.

### Step-02:

• Write the initial distance matrix.
• It represents the distance between every pair of vertices in the form of given weights.
• For diagonal elements (representing self-loops), distance value = 0.
• For vertices having a direct edge between them, distance value = weight of that edge.
• For vertices having no direct edge between them, distance value = ∞.

Initial distance matrix for the given graph is-

### Step-03:

Using Floyd Warshall Algorithm, write the following 4 matrices-


The last matrix D4 represents the shortest path distance between every pair of vertices.

### Remember-

• In the above problem, there are 4 vertices in the given graph.
• So, there will be a total of 4 matrices of order 4 x 4 in the solution, excluding the initial distance matrix.
• Diagonal elements of each matrix will always be 0.


## Quick Sort-

• Quick Sort is a famous sorting algorithm.
• It sorts the given data items in ascending order.
• It uses the idea of divide and conquer approach.
• It follows a recursive algorithm.

## Quick Sort Algorithm-

Consider-

• a = Linear Array in memory
• beg = Lower bound of the sub array in question
• end = Upper bound of the sub array in question

Then, Quick Sort Algorithm is as follows-

```
Partition_Array (a, beg, end, loc)
Begin
    Set left = beg, right = end, loc = beg
    Set done = false
    While (not done) do
        While ( (a[loc] <= a[right]) and (loc ≠ right) ) do
            Set right = right - 1
        end while
        if (loc = right) then
            Set done = true
        else if (a[loc] > a[right]) then
            Interchange a[loc] and a[right]
            Set loc = right
        end if
        if (not done) then
            While ( (a[loc] >= a[left]) and (loc ≠ left) ) do
                Set left = left + 1
            end while
            if (loc = left) then
                Set done = true
            else if (a[loc] < a[left]) then
                Interchange a[loc] and a[left]
                Set loc = left
            end if
        end if
    end while
End
```

## How Does Quick Sort Work?

• Quick Sort follows a recursive algorithm.
• It divides the given array into two sections using a partitioning element called the pivot.

The division performed is such that-

• All the elements on the left side of the pivot are smaller than the pivot.
• All the elements on the right side of the pivot are greater than the pivot.

After dividing the array into two sections, the pivot is set at its correct position.

Then, sub arrays are sorted separately by applying quick sort algorithm recursively.
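This scheme can be sketched in Python. For clarity the sketch uses a simple first-element-pivot partition rather than the exact left/right pointer procedure given in the pseudocode earlier, and the test array is hypothetical:

```python
# A minimal recursive sketch of quick sort: partition around a pivot,
# then sort the two sections separately.

def quick_sort(a, beg=0, end=None):
    if end is None:
        end = len(a) - 1
    if beg >= end:
        return a
    loc = partition(a, beg, end)             # pivot lands at its final position
    quick_sort(a, beg, loc - 1)              # sort the left section
    quick_sort(a, loc + 1, end)              # sort the right section
    return a

def partition(a, beg, end):
    pivot = a[beg]
    left, right = beg + 1, end
    while True:
        while left <= right and a[left] <= pivot:    # skip elements <= pivot
            left += 1
        while left <= right and a[right] > pivot:    # skip elements > pivot
            right -= 1
        if left > right:
            break
        a[left], a[right] = a[right], a[left]        # move misplaced pair
    a[beg], a[right] = a[right], a[beg]              # place pivot between sections
    return right

print(quick_sort([25, 12, 40, 8, 30, 17]))
```

After the first call to `partition`, everything left of the returned index is smaller than the pivot and everything right of it is greater, exactly as the division described above requires.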

## Quick Sort Example-

Consider the following array has to be sorted in ascending order using quick sort algorithm-

Quick Sort Algorithm works in the following steps-

### Step-01:

Initially-

• left and loc (pivot) point to the first element of the array.
• right points to the last element of the array.

So to begin with, we set loc = 0, left = 0 and right = 5 as-

### Step-02:

Since loc points at left, the algorithm starts from right and moves towards left.

As a[loc] < a[right], the algorithm moves right one position towards left as-

Now, loc = 0, left = 0 and right = 4.

### Step-03:

Since loc points at left, the algorithm starts from right and moves towards left.

As a[loc] > a[right], the algorithm swaps a[loc] and a[right], and loc points at right as-

Now, loc = 4, left = 0 and right = 4.

### Step-04:

Since loc points at right, the algorithm starts from left and moves towards right.

As a[loc] > a[left], the algorithm moves left one position towards right as-

Now, loc = 4, left = 1 and right = 4.

### Step-05:

Since loc points at right, the algorithm starts from left and moves towards right.

As a[loc] > a[left], the algorithm moves left one position towards right as-

Now, loc = 4, left = 2 and right = 4.

### Step-06:

Since loc points at right, the algorithm starts from left and moves towards right.

As a[loc] < a[left], the algorithm swaps a[loc] and a[left], and loc points at left as-

Now, loc = 2, left = 2 and right = 4.

### Step-07:

Since loc points at left, the algorithm starts from right and moves towards left.

As a[loc] < a[right], the algorithm moves right one position towards left as-

Now, loc = 2, left = 2 and right = 3.

### Step-08:

Since loc points at left, the algorithm starts from right and moves towards left.

As a[loc] > a[right], the algorithm swaps a[loc] and a[right], and loc points at right as-

Now, loc = 3, left = 2 and right = 3.

### Step-09:

Since loc points at right, the algorithm starts from left and moves towards right.

As a[loc] > a[left], the algorithm moves left one position towards right as-

Now, loc = 3, left = 3 and right = 3.

Now,

• loc, left and right all point at the same element.
• This indicates the termination of the partition procedure.
• The pivot element 25 is placed at its final position.
• All elements to the right side of element 25 are greater than it.
• All elements to the left side of element 25 are smaller than it.

Now, the quick sort algorithm is applied on the left and right sub-arrays separately in a similar manner.

## Quick Sort Analysis-

• To find the location of an element that splits the array into two parts, O(n) operations are required.
• This is because every element in the array is compared to the partitioning element.
• After the division, each section is examined separately.
• If the array is split approximately in half (which is not always the case), then there will be log₂n levels of splitting.
• Therefore, total comparisons required are f(n) = n x log₂n = O(n log₂n).

Order of Quick Sort = O(n log₂n)

## Worst Case-

• Quick Sort is sensitive to the order of input data.
• It gives the worst performance when elements are already in the ascending order.
• It then divides the array into sections of 1 and (n-1) elements in each call.
• Then, there are (n-1) divisions in all.
• Therefore, total comparisons required are f(n) = (n-1) + (n-2) + ... + 1 = n(n-1)/2 = O(n²).

Order of Quick Sort in worst case = O(n²)
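The quadratic behaviour can be demonstrated with a small instrumented sketch (a hypothetical comparison counter added to a simple first-element-pivot quick sort): on already-sorted input, every call compares the pivot against all remaining elements yet splits off only one element.

```python
# Counts comparisons made by a first-element-pivot quick sort
# (Lomuto-style partition; the counter is added for illustration).

def quick_sort_counting(a, beg=0, end=None, counter=None):
    if end is None:
        end = len(a) - 1
    if counter is None:
        counter = [0]
    if beg >= end:
        return counter[0]
    pivot = a[beg]
    loc = beg
    for i in range(beg + 1, end + 1):        # compare every element to the pivot
        counter[0] += 1
        if a[i] < pivot:
            loc += 1
            a[loc], a[i] = a[i], a[loc]
    a[beg], a[loc] = a[loc], a[beg]          # pivot goes to its final position
    quick_sort_counting(a, beg, loc - 1, counter)
    quick_sort_counting(a, loc + 1, end, counter)
    return counter[0]

n = 100
sorted_input = list(range(n))
print(quick_sort_counting(sorted_input))     # prints 4950 = n(n-1)/2
```

On sorted input, each partition leaves the pivot in place and recurses on a subarray only one element shorter, so the comparison count is (n-1) + (n-2) + ... + 1.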

The advantages of quick sort algorithm are-

• Quick Sort is an in-place sort, so it requires no additional array storage (only the recursion stack).
• Quick Sort is typically faster in practice than other sorting algorithms, because its inner loop can be implemented efficiently on most architectures.

• Quick Sort tends to make excellent usage of the memory hierarchy like virtual memory or caches.
• Quick Sort can be easily parallelized due to its divide and conquer nature.

The disadvantages of quick sort algorithm are-

• The worst case complexity of quick sort is O(n²).
• This is worse than the O(n log n) worst case complexity of algorithms like merge sort and heap sort.
• It is not a stable sort, i.e. the order of equal elements may not be preserved.
