Month: November 2018

C-SCAN Disk Scheduling | Disk Scheduling

Disk Scheduling Algorithms-

 

Before you go through this article, make sure that you have gone through the previous article on Disk Scheduling.

 

We have discussed-

  • Disk scheduling algorithms are used to schedule multiple requests for accessing the disk.
  • The purpose of disk scheduling algorithms is to reduce the total seek time.

 

Various disk scheduling algorithms are-

  1. FCFS Algorithm
  2. SSTF Algorithm
  3. SCAN Algorithm
  4. C-SCAN Algorithm
  5. LOOK Algorithm
  6. C-LOOK Algorithm

In this article, we will discuss the C-SCAN Disk Scheduling Algorithm.

 

C-SCAN Disk Scheduling Algorithm-

 

  • Circular-SCAN (C-SCAN) Algorithm is an improved version of the SCAN Algorithm.
  • The head starts from one end of the disk and moves toward the other end, servicing all the requests in between.
  • After reaching the other end, the head reverses its direction.
  • It then returns to the starting end without servicing any requests on the way back.
  • The same process repeats.

 

Also Read- FCFS Disk Scheduling Algorithm

 

Advantages-

 

  • The waiting time for the cylinders just visited by the head is reduced compared to the SCAN Algorithm.
  • It provides a more uniform waiting time.
  • It provides a better response time.

 

Disadvantages-

 

  • It causes more seek movements compared to the SCAN Algorithm.
  • It causes the head to move to the end of the disk even if there are no requests to be serviced there.

 

Also Read- SSTF Disk Scheduling Algorithm

 

PRACTICE PROBLEM BASED ON C-SCAN DISK SCHEDULING ALGORITHM-

 

Problem-

 

Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The C-SCAN scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement (in number of cylinders) incurred while servicing these requests is _______.

 

Solution-

 

 

Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (199 – 183) + (199 – 0) + (14 – 0) + (41 – 14)

= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 199 + 14 + 27

= 386

 

Alternatively,

Total head movements incurred while servicing these requests

= (199 – 53) + (199 – 0) + (41 – 0)

= 146 + 199 + 41

= 386
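The computation above can be cross-checked with a short Python sketch (a minimal model of C-SCAN for a head moving toward larger cylinder numbers; the function name cscan_seek is my own):

```python
def cscan_seek(head, requests, max_cyl):
    """Total head movement for C-SCAN: sweep up to the last cylinder,
    jump back to cylinder 0, then sweep up to the last remaining request."""
    lower = [r for r in requests if r < head]
    total = max_cyl - head        # sweep from the head to the last cylinder
    if lower:
        total += max_cyl          # full jump from max_cyl back to cylinder 0
        total += max(lower)       # sweep up to the last low-side request
    return total

requests = [98, 183, 41, 122, 14, 124, 65, 67]
print(cscan_seek(53, requests, 199))   # 386
```

Note that the full jump from cylinder 199 back to cylinder 0 is counted as head movement here, matching the solution above; some treatments exclude the wrap-around jump and would report a smaller number.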

 

To gain better understanding about C-SCAN Disk Scheduling Algorithm,

Watch this Video Lecture

 

Next Article- LOOK Disk Scheduling Algorithm

 

Get more notes and other study material of Operating System.

Watch video lectures by visiting our YouTube channel LearnVidFun.

SCAN Algorithm | Disk Scheduling Algorithms

Disk Scheduling Algorithms-

 

Before you go through this article, make sure that you have gone through the previous article on Disk Scheduling.

 

We have discussed-

  • Disk scheduling algorithms are used to schedule multiple requests for accessing the disk.
  • The purpose of disk scheduling algorithms is to reduce the total seek time.

 

Various disk scheduling algorithms are-

  1. FCFS Algorithm
  2. SSTF Algorithm
  3. SCAN Algorithm
  4. C-SCAN Algorithm
  5. LOOK Algorithm
  6. C-LOOK Algorithm

In this article, we will discuss the SCAN Disk Scheduling Algorithm.

 

SCAN Disk Scheduling Algorithm-

 

  • As the name suggests, this algorithm scans all the cylinders of the disk back and forth.
  • The head starts from one end of the disk and moves toward the other end, servicing all the requests in between.
  • After reaching the other end, the head reverses its direction and moves toward the starting end, servicing all the requests in between.
  • The same process repeats.

 

NOTE-

 

  • SCAN Algorithm is also called the Elevator Algorithm.
  • This is because its working resembles that of an elevator.

 

Also Read- FCFS Disk Scheduling Algorithm

 

Advantages-

 

  • It is simple, easy to understand and implement.
  • It does not lead to starvation.
  • It provides low variance in response time and waiting time.

 

Disadvantages-

 

  • It causes a long waiting time for the cylinders just visited by the head.
  • It causes the head to move to the end of the disk even if there are no requests to be serviced there.

 

Also Read- SSTF Disk Scheduling Algorithm

 

PRACTICE PROBLEM BASED ON SCAN DISK SCHEDULING ALGORITHM-

 

Problem-

 

Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The SCAN scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement (in number of cylinders) incurred while servicing these requests is _______.

 

Solution-

 

 

Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (199 – 183) + (199 – 41) + (41 – 14)

= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 158 + 27

= 331

 

Alternatively,

Total head movements incurred while servicing these requests

= (199 – 53) + (199 – 14)

= 146 + 185

= 331
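The arithmetic above can be cross-checked with a short Python sketch (a minimal model of SCAN for a head initially moving toward larger cylinder numbers; the function name scan_seek is my own):

```python
def scan_seek(head, requests, max_cyl):
    """Total head movement for SCAN: sweep up to the last cylinder,
    then reverse and sweep down to the lowest pending request."""
    lower = [r for r in requests if r < head]
    total = max_cyl - head             # sweep from the head to the last cylinder
    if lower:
        total += max_cyl - min(lower)  # reverse down to the lowest request
    return total

requests = [98, 183, 41, 122, 14, 124, 65, 67]
print(scan_seek(53, requests, 199))    # 331
```

The reverse pass stops at the lowest request (cylinder 14) rather than at cylinder 0, which is why SCAN ends up cheaper than C-SCAN on this queue.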

 

To gain better understanding about SCAN Disk Scheduling Algorithm,

Watch this Video Lecture

 

Next Article- C-SCAN Disk Scheduling Algorithm 

 

Get more notes and other study material of Operating System.

Watch video lectures by visiting our YouTube channel LearnVidFun.

SSTF Algorithm | Disk Scheduling Algorithms

Disk Scheduling Algorithms-

 

Before you go through this article, make sure that you have gone through the previous article on Disk Scheduling.

 

We have discussed-

  • Disk scheduling algorithms are used to schedule multiple requests for accessing the disk.
  • The purpose of disk scheduling algorithms is to reduce the total seek time.

 

Various disk scheduling algorithms are-

  1. FCFS Algorithm
  2. SSTF Algorithm
  3. SCAN Algorithm
  4. C-SCAN Algorithm
  5. LOOK Algorithm
  6. C-LOOK Algorithm

In this article, we will discuss the SSTF Disk Scheduling Algorithm.

 

SSTF Disk Scheduling Algorithm-

 

  • SSTF stands for Shortest Seek Time First.
  • This algorithm services next the request that requires the least number of head movements from the current position, regardless of direction.
  • Ties are broken in the direction of head movement.

 

Advantages-

 

  • It reduces the total seek time as compared to FCFS.
  • It provides increased throughput.
  • It provides less average response time and waiting time.

 

Disadvantages-

 

  • There is an overhead of finding the closest request.
  • Requests that are far from the head may starve.
  • It provides a high variance in response time and waiting time.
  • Frequently switching the direction of the head slows down the algorithm.

 

PRACTICE PROBLEMS BASED ON SSTF DISK SCHEDULING ALGORITHM-

 

Problem-01:

 

Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The SSTF scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement (in number of cylinders) incurred while servicing these requests is _______.

 

Solution-

 

 

Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (67 – 41) + (41 – 14) + (98 – 14) + (122 – 98) + (124 – 122) + (183 – 124)

= 12 + 2 + 26 + 27 + 84 + 24 + 2 + 59

= 236
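The service order above can be reproduced with a short Python sketch (a minimal SSTF simulation; the function name sstf_seek is my own, and ties are broken in the current direction of head movement, as described earlier):

```python
def sstf_seek(head, requests, direction=+1):
    """Total head movement for SSTF: repeatedly service the pending request
    closest to the head; on a tie, prefer the current direction of movement."""
    pending = list(requests)
    total = 0
    while pending:
        # sort key: distance first, then penalize requests against the direction
        nxt = min(pending, key=lambda r: (abs(r - head), (r - head) * direction < 0))
        if nxt != head:
            direction = +1 if nxt > head else -1
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

print(sstf_seek(53, [98, 183, 41, 122, 14, 124, 65, 67]))   # 236
```

The tie-break matters here: cylinders 41 and 65 are both 12 away from 53, and choosing 65 (the direction of movement) yields the 236-cylinder total computed above.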

 

Problem-02:

 

Consider a disk system with 100 cylinders. The requests to access the cylinders occur in following sequence-

4, 34, 10, 7, 19, 73, 2, 15, 6, 20

Assuming that the head is currently at cylinder 50, what is the time taken to satisfy all requests if it takes 1 ms to move from one cylinder to adjacent one and shortest seek time first policy is used?

  1. 95 ms
  2. 119 ms
  3. 233 ms
  4. 276 ms

 

Solution-

 

 

Total head movements incurred while servicing these requests

= (50 – 34) + (34 – 20) + (20 – 19) + (19 – 15) + (15 – 10) + (10 – 7) + (7 – 6) + (6 – 4) + (4 – 2) + (73 – 2)

= 16 + 14 + 1 + 4 + 5 + 3 + 1 + 2 + 2 + 71

= 119

 

Time taken for one head movement = 1 msec. So,

Time taken for 119 head movements

= 119 x 1 msec

= 119 msec
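The arithmetic can be double-checked by summing the head movements along the service order (the order below is taken from the solution above; this is only a sanity check of the addition):

```python
# head start position, then the SSTF service order from the solution
order = [50, 34, 20, 19, 15, 10, 7, 6, 4, 2, 73]
moves = sum(abs(b - a) for a, b in zip(order, order[1:]))
print(moves, "ms")   # 119 ms, at 1 ms per cylinder moved
```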

 

Thus, Option (B) is correct.

 

To gain better understanding about SSTF Disk Scheduling Algorithm,

Watch this Video Lecture

 

Next Article- SCAN Disk Scheduling Algorithm

 

Get more notes and other study material of Operating System.

Watch video lectures by visiting our YouTube channel LearnVidFun.

Disk Scheduling | Disk Scheduling Algorithms

Disk Scheduling-

 

Before you go through this article, make sure that you have gone through the previous article on Magnetic Disk.

 

Disk scheduling is a technique used by the operating system to schedule multiple requests for accessing the disk.

 

Disk Scheduling Algorithms-

 

  • The algorithms used for disk scheduling are called disk scheduling algorithms.
  • The purpose of disk scheduling algorithms is to reduce the total seek time.

 

Various disk scheduling algorithms are-

 

 

  1. FCFS Algorithm
  2. SSTF Algorithm
  3. SCAN Algorithm
  4. C-SCAN Algorithm
  5. LOOK Algorithm
  6. C-LOOK Algorithm

 

In this article, we will discuss the FCFS Disk Scheduling Algorithm.

 

FCFS Disk Scheduling Algorithm-

 

  • As the name suggests, this algorithm services the requests in the order in which they arrive in the disk queue.
  • It is the simplest disk scheduling algorithm.

 

Advantages-

 

  • It is simple, easy to understand and implement.
  • It does not cause starvation to any request.

 

Disadvantages-

 

  • It results in increased total seek time.
  • It is inefficient.

 

PRACTICE PROBLEM BASED ON FCFS DISK SCHEDULING ALGORITHM-

 

Problem-

 

Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The FCFS scheduling algorithm is used. The head is initially at cylinder number 53. The cylinders are numbered from 0 to 199. The total head movement (in number of cylinders) incurred while servicing these requests is _______.

 

Solution-

 

 

Total head movements incurred while servicing these requests

= (98 – 53) + (183 – 98) + (183 – 41) + (122 – 41) + (122 – 14) + (124 – 14) + (124 – 65) + (67 – 65)

= 45 + 85 + 142 + 81 + 108 + 110 + 59 + 2

= 632
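The computation above can be cross-checked with a short Python sketch (a minimal FCFS simulation; the function name fcfs_seek is my own):

```python
def fcfs_seek(head, requests):
    """Total head movement for FCFS: service the requests in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)   # seek distance to the next request
        head = r
    return total

print(fcfs_seek(53, [98, 183, 41, 122, 14, 124, 65, 67]))   # 632
```

Comparing the totals (FCFS 632, SSTF 236, SCAN 331, C-SCAN 386 on this same queue) makes the inefficiency of FCFS concrete.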

 

To gain better understanding about FCFS Disk Scheduling Algorithm,

Watch this Video Lecture

 

Next Article- SSTF Disk Scheduling Algorithm

 

Get more notes and other study material of Operating System.

Watch video lectures by visiting our YouTube channel LearnVidFun.

Process Synchronization | Practice Problems

Process Synchronization-

 

Before you go through this article, make sure that you have gone through the previous articles on Process Synchronization.

 

We have discussed the following synchronization mechanisms-

 

In this article, we will discuss practice problems based on synchronization mechanisms.

 

PRACTICE PROBLEMS BASED ON SYNCHRONIZATION MECHANISMS-

 

Problem-01:

 

The enter_CS( ) and leave_CS( ) functions to implement the critical section of a process are realized using the test-and-set instruction as follows-

 

void enter_CS(X)
{
   while (test-and-set(X));
}
void leave_CS(X)
{
   X = 0;
}

 

In the above solution, X is a memory location associated with the CS and is initialized to 0. Now, consider the following statements-

  1. The above solution to CS problem is deadlock-free
  2. The solution is starvation free
  3. The processes enter CS in FIFO order
  4. More than one process can enter CS at the same time.

Which of the above statements is true?

  1. I only
  2. I and II
  3. II and III
  4. IV only

 

Solution-

 

Clearly, the given mechanism is a test-and-set lock, which has the following characteristics-

  • It ensures mutual exclusion.
  • It ensures freedom from deadlock.
  • It may cause a process to starve, since bounded waiting is not guaranteed.
  • It does not guarantee that processes enter the critical section in FIFO order; otherwise there would be no starvation.

 

Thus, Option (A) is correct.

 

Problem-02:

 

Fetch_And_Add(X,i) is an atomic Read-Modify-Write instruction that reads the value of memory location X, increments it by the value i, and returns the old value of X. It is used in the pseudo code shown below to implement a busy-wait lock. L is an unsigned integer shared variable initialized to 0. The value 0 corresponds to the lock being available, while any non-zero value corresponds to the lock being not available.

 

AcquireLock(L)
{
   while (Fetch_And_Add(L,1))
   L = 1;
}

ReleaseLock(L)
{
   L = 0;
}

 

This implementation-

  1. fails as L can overflow
  2. fails as L can take on a non-zero value when the lock is actually available
  3. works correctly but may starve some processes
  4. works correctly without starvation

 

Solution-

 

The given synchronization mechanism has been implemented as-

 

 

Working-

 

This synchronization mechanism works as explained in the following scenes-

 

Scene-01:

 

  • A process P1 arrives.
  • It executes the Fetch_And_Add(L, 1) instruction.
  • Since the lock value is 0, it returns 0 to the while loop and sets the lock value to 0+1=1.
  • The returned value 0 breaks the while loop condition.
  • Process P1 enters the critical section and executes.

 

Scene-02:

 

  • Another process P2 arrives.
  • It executes the Fetch_And_Add(L, 1) instruction.
  • Since the lock value is now 1, it returns 1 to the while loop and sets the lock value to 1+1=2.
  • The returned value 1 does not break the while loop condition.
  • Process P2 executes the next instruction L=1, setting the lock value to 1, and checks the condition again.
  • The lock value keeps alternating between 1 and 2.
  • Process P2 is trapped inside the while loop.
  • The while loop keeps process P2 busy until the lock value becomes 0 and its condition breaks.

 

Scene-03:

 

  • Process P1 comes out of the critical section and sets the lock value to 0.
  • The while loop condition for P2 now breaks.
  • Process P2, which was waiting for the critical section, enters it.

 

Failure of the Mechanism-

 

  • The mechanism fails to provide proper synchronization among the processes.
  • This is explained below-

 

Explanation-

 

The occurrence of the following scenes leaves the lock marked busy even though no process is inside the critical section-

 

Scene-01:

 

  • A process P1 arrives.
  • It executes the Fetch_And_Add(L, 1) instruction.
  • Since the lock value is 0, it returns 0 to the while loop and sets the lock value to 0+1=1.
  • The returned value 0 breaks the while loop condition.
  • Process P1 enters the critical section and executes.

 

Scene-02:

 

  • Another process P2 arrives.
  • It executes the Fetch_And_Add(L, 1) instruction.
  • Since the lock value is now 1, it returns 1 to the while loop and sets the lock value to 1+1=2.
  • The returned value 1 does not break the while loop condition.
  • Now, just as process P2 is about to enter the body of the while loop, it gets preempted.

 

Scene-03:

 

  • Process P1 comes out of the critical section and sets the lock value to 0.

 

Scene-04:

 

  • Process P2 gets scheduled again and resumes its execution.
  • Before preemption, it had already evaluated the while loop condition.
  • So it begins execution from the next instruction.
  • It sets the lock value to 1, and this is where the blunder happens.
  • The lock is actually available (lock value = 0), but P2 itself sets the lock value to 1.
  • Then it checks the condition again, and now there is no one left to set the lock value back to zero.
  • Thus, process P2 gets trapped inside the while loop forever.
  • All future processes will also be trapped inside the while loop forever.

 

Thus,

  • Although mutual exclusion is guaranteed, the mechanism still fails.
  • This is because the lock value gets set to a non-zero value even when the lock is available.
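The failing interleaving can be replayed step by step in Python (Fetch_And_Add is emulated as an atomic operation on a one-element list, and the preemption points are hand-scheduled, so this is only a sketch of the scenario described above):

```python
def fetch_and_add(mem, i):
    """Emulates the atomic Fetch_And_Add: return the old value, then add i."""
    old = mem[0]
    mem[0] = old + i
    return old

L = [0]
assert fetch_and_add(L, 1) == 0   # P1: returns 0, loop breaks, P1 enters the CS (L = 1)
assert fetch_and_add(L, 1) == 1   # P2: returns 1, so it must run the loop body (L = 2) ...
                                  # ... but P2 is preempted right here
L[0] = 0                          # P1: ReleaseLock(L) -- the lock is now available
L[0] = 1                          # P2 resumes the loop body "L = 1": lock marked busy again
assert L[0] != 0                  # non-zero lock, yet no process is in the CS
print("the lock is stuck: every future Fetch_And_Add now returns non-zero")
```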

 

Also,

This synchronization mechanism can lead to overflow of the value ‘L’.

 

Explanation-

 

  • When every process gets preempted right after executing the while loop condition, the lock value keeps increasing with every process.
  • When the first process arrives, it returns the value 0, sets the lock value to 0+1=1, and gets preempted.
  • When the second process arrives, it returns the value 1, sets the lock value to 1+1=2, and gets preempted.
  • When the third process arrives, it returns the value 2, sets the lock value to 2+1=3, and gets preempted.
  • Thus, for a very large number of processes preempted in this manner, L will overflow.
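A quick sketch of this growth (each simulated process runs Fetch_And_Add once and is preempted before the loop body can reset L; the process count of 1000 is purely illustrative):

```python
def fetch_and_add(mem, i):
    """Emulates the atomic Fetch_And_Add: return the old value, then add i."""
    old = mem[0]
    mem[0] = old + i
    return old

L = [0]
for _ in range(1000):    # 1000 processes, each preempted right after the test
    fetch_and_add(L, 1)
print(L[0])              # grows by one per process; a fixed-width unsigned L
                         # would eventually wrap around to 0 this way
```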

 

Also,

This synchronization mechanism does not guarantee bounded waiting and may cause starvation.

 

Explanation-

 

  • There might exist an unlucky process which, when it arrives to execute the critical section, finds it busy.
  • So it keeps waiting in the while loop and eventually gets preempted.
  • When it gets rescheduled and comes to execute the critical section, it finds another process executing the critical section.
  • So, again, it keeps waiting in the while loop and eventually gets preempted.
  • This may happen any number of times, causing that unlucky process to starve.

 

Thus, Options (A) and (B) are correct.

 

Problem-03:

 

Consider the methods used by processes P1 and P2 for accessing their critical sections whenever needed, as given below. The initial values of shared Boolean variables S1 and S2 are randomly assigned.

 

Method used by P1-

while (S1 == S2);

Critical Section

S1 = S2

Method used by P2-

while (S1 != S2);

Critical Section

S2 = !S1

 

Which one of the following statements describes the properties achieved?

  1. Mutual exclusion but not progress
  2. Progress but not mutual exclusion
  3. Neither mutual exclusion nor progress
  4. Both mutual exclusion and progress

 

Solution-

 

The initial values of shared Boolean variables S1 and S2 are randomly assigned. The assigned values may be-

  • S1 = 0 and S2 = 0
  • S1 = 0 and S2 = 1
  • S1 = 1 and S2 = 0
  • S1 = 1 and S2 = 1

 

Case-01: If S1 = 0 and S2 = 0-

 

In this case,

  • Process P1 will be trapped inside an infinite while loop.
  • However, process P2 gets the chance to execute.
  • Process P2 breaks the while loop condition, executes the critical section, and then sets S2 = !S1 = 1.
  • Now, S1 = 0 and S2 = 1.
  • Now, process P2 cannot enter the critical section again, but process P1 can.
  • Process P1 breaks the while loop condition, executes the critical section, and then sets S1 = S2 = 1.
  • Now, S1 = 1 and S2 = 1.
  • Now, process P1 cannot enter the critical section again, but process P2 can.

 

Thus,

  • Processes P1 and P2 execute the critical section alternately, starting with process P2.
  • Mutual exclusion is guaranteed.
  • Progress is not guaranteed, because if one process never executes, the other process would never be able to execute again.
  • The processes must execute the critical section in strict alternation.

 

Case-02: If S1 = 0 and S2 = 1-

 

In this case,

  • Process P2 will be trapped inside an infinite while loop.
  • However, process P1 gets the chance to execute.
  • Process P1 breaks the while loop condition, executes the critical section, and then sets S1 = S2 = 1.
  • Now, S1 = 1 and S2 = 1.
  • Now, process P1 cannot enter the critical section again, but process P2 can.
  • Process P2 breaks the while loop condition, executes the critical section, and then sets S2 = !S1 = 0.
  • Now, S1 = 1 and S2 = 0.
  • Now, process P2 cannot enter the critical section again, but process P1 can.

 

Thus,

  • Processes P1 and P2 execute the critical section alternately, starting with process P1.
  • Mutual exclusion is guaranteed.
  • Progress is not guaranteed, because if one process never executes, the other process would never be able to execute again.
  • The processes must execute the critical section in strict alternation.

 

Case-03: If S1 = 1 and S2 = 0-

 

  • This case is the same as Case-02.

 

Case-04: If S1 = 1 and S2 = 1-

 

  • This case is the same as Case-01.

 

Thus, overall we can conclude-

  • Processes P1 and P2 execute the critical section alternately.
  • Mutual exclusion is guaranteed.
  • Progress is not guaranteed, because if one process never executes, the other process would never be able to execute again.
  • The processes must execute the critical section in strict alternation.
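The case analysis above can be condensed into a small Python simulation (the entry tests and exit assignments are taken directly from the two methods; the scheduler here simply lets whichever process can pass its while loop run next):

```python
def simulate(S1, S2, turns=6):
    """Record which process can enter the critical section at each step."""
    history = []
    for _ in range(turns):
        if S1 != S2:            # P1's "while (S1 == S2);" breaks
            history.append("P1")
            S1 = S2             # P1's exit code: S1 = S2
        else:                   # P2's "while (S1 != S2);" breaks
            history.append("P2")
            S2 = 1 - S1         # P2's exit code: S2 = !S1
    return history

print(simulate(0, 0))   # ['P2', 'P1', 'P2', 'P1', 'P2', 'P1']
print(simulate(0, 1))   # ['P1', 'P2', 'P1', 'P2', 'P1', 'P2']
```

At every step exactly one of the two entry conditions holds, which is why mutual exclusion is preserved but the processes are forced into strict alternation, sacrificing progress.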

 

Thus, Option (A) is correct.

 

To watch video solutions and practice other problems,

Watch this Video Lecture

 

Next Article- Introduction to Semaphores

 

Get more notes and other study material of Operating System.

Watch video lectures by visiting our YouTube channel LearnVidFun.