Synchronization and Critical Regions

  • Reentrant kernels require synchronization
    • Two kernel control paths that access the same data structure can corrupt the stored information if the order in which they suspend and resume is not controlled. For example, suppose a shared variable V counts the free items of a resource and currently holds 1:
      1. The first kernel control path, A, reads V and determines that there is just one available item.
      2. Another kernel control path, B, is activated and reads the same variable, which still contains the value 1.
      3. B decreases V and starts using the resource item.
      4. A resumes the execution; because it has already read the value of V, it assumes that it can decrease V and take the resource item, which B already uses.
      5. As a final result, V contains –1, and two kernel control paths use the same resource item with potentially disastrous effects.
    • A computation whose result depends on how two processes happen to be scheduled is a race condition: the outcome changes according to who finishes first. A minimal sketch of the race above is shown below.
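
A minimal userspace sketch of this race, assuming the two kernel control paths can be modeled as ordinary threads and V as a plain global counter (the thread names and the usleep() delay are only there to make the bad interleaving easy to reproduce):

```c
/* Userspace sketch of the race described above, not the kernel's actual code.
 * V counts available resource items; each "kernel control path" is a thread. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int V = 1;                   /* one resource item available */

static void *kernel_control_path(void *arg)
{
    if (V > 0) {                    /* steps 1-2: read V, see one free item */
        usleep(1000);               /* window in which the other path runs  */
        V--;                        /* steps 3-4: both paths decrement      */
        printf("%s took the item, V is now %d\n", (const char *)arg, V);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, kernel_control_path, "A");
    pthread_create(&b, NULL, kernel_control_path, "B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("final V = %d\n", V);    /* can end up at -1: the race */
    return 0;
}
```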

 

  • Atomic Operation: A single noninterruptible operation.
    • An atomic check-and-decrement would have prevented the race above, because B could not have interrupted A between the read and the update (see the sketch after this list)
    • However, most accesses to a data structure cannot be completed in a single operation; removing an arbitrary node from a singly linked list, for instance, takes O(n) steps
  • Critical Region: A section of code that operates on shared data and must be executed as a whole by one kernel control path before any other path is allowed to enter it
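
For comparison, a sketch of what an atomic operation buys us, using C11 atomics in userspace as a stand-in for the kernel's atomic primitives; take_item() is a made-up helper for the V example above:

```c
/* Sketch: if the check-and-decrement of V is one atomic operation, the race
 * above disappears.  Uses C11 atomics in userspace for illustration only. */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int V = 1;

/* Try to take one resource item; returns true on success. */
static bool take_item(void)
{
    int old = atomic_load(&V);
    while (old > 0) {
        /* compare_exchange retries if another path changed V in between, so
         * the "read, test, decrement" happens as one indivisible step.     */
        if (atomic_compare_exchange_weak(&V, &old, old - 1))
            return true;
    }
    return false;                   /* no items left */
}
```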

Synchronization Techniques

  • Preemptive Kernel: The scheduler may forcibly suspend the running process and switch to another at almost any point
  • Disabling preemption: on a uniprocessor system this is enough to protect a data structure, because the scheduler cannot take the CPU away until the kernel control path finishes with the data, so no other process can slip in mid-update
    • On multiprocessor systems this is not enough: other CPUs still have access to the same data and can modify it concurrently, so a race condition remains
  • Disabling all hardware interrupts before entering a critical region and re-enabling them after the code segment has finished also works on a uniprocessor, since not even an interrupt handler can interleave with the critical code. It is slow, though, and if the critical region is long the system can appear to freeze because interrupt handling is delayed.
    • On multiprocessor systems this method is also insufficient: disabling interrupts only affects the local CPU, so the other CPUs can still enter the critical region. A sketch of both techniques follows.
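
A kernel-style sketch of both techniques, assuming a Linux kernel build context (this is not standalone userspace code, and shared_count and the two functions are hypothetical examples):

```c
/* Kernel-style sketch of the two techniques above (only meaningful inside a
 * kernel build; shared_count and its update are hypothetical examples). */
#include <linux/preempt.h>
#include <linux/irqflags.h>

static int shared_count;

static void update_without_preemption(void)
{
    preempt_disable();          /* the scheduler cannot switch us out ...   */
    shared_count++;             /* ... so this section is safe on a UP box  */
    preempt_enable();
}

static void update_without_interrupts(void)
{
    unsigned long flags;

    local_irq_save(flags);      /* mask interrupts on this CPU only         */
    shared_count++;             /* interrupt handlers on this CPU cannot
                                 * run and touch shared_count here          */
    local_irq_restore(flags);   /* other CPUs were never affected, which is
                                 * why neither technique is enough on SMP   */
}
```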

Semaphores for Synchronization (uniprocessor and multiprocessor systems)

  • Semaphore: A counter associated with a data structure, checked by every kernel control path before it attempts access.  It consists of an integer counter, a list of processes waiting for access, and two atomic methods that decrement or increment the counter:
    down(); up();
    • down() decrements the counter; if the new value is negative, the caller is added to the wait list, suspended, and the scheduler runs another process
    • up() increments the counter; if processes are waiting on the semaphore (the counter is still not positive), one of them is reactivated
  • Each protected data structure has its own semaphore, initialized to 1 so that only one kernel control path can use it at a time
  • A kernel control path that wants to access the data structure calls down(); if the counter stays non-negative, access is granted, otherwise the path is suspended and something else is scheduled
  • When it is finished, the kernel control path calls up() so that the next waiting process can proceed (a userspace sketch of down()/up() follows)
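
A userspace sketch of the down()/up() behaviour just described, assuming a pthread mutex and condition variable can stand in for the kernel's wait list; the real kernel primitives differ in detail:

```c
/* Userspace sketch of down()/up(); the counter goes negative while processes
 * wait, and "wakeups" hands a wakeup from up() to exactly one sleeper. */
#include <pthread.h>

struct semaphore {
    int count;                      /* negative: -count processes waiting   */
    int wakeups;                    /* pending wakeups granted by up()      */
    pthread_mutex_t lock;           /* protects the two counters            */
    pthread_cond_t  cond;           /* stands in for the wait list          */
};

static void sem_init_one(struct semaphore *s)
{
    s->count = 1;                   /* one kernel control path may enter    */
    s->wakeups = 0;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

static void down(struct semaphore *s)   /* call before touching the data    */
{
    pthread_mutex_lock(&s->lock);
    if (--s->count < 0) {               /* busy: sleep until up() wakes us  */
        do {
            pthread_cond_wait(&s->cond, &s->lock);
        } while (s->wakeups < 1);
        s->wakeups--;
    }
    pthread_mutex_unlock(&s->lock);
}

static void up(struct semaphore *s)     /* call when access is finished     */
{
    pthread_mutex_lock(&s->lock);
    if (++s->count <= 0) {              /* someone is waiting: wake one     */
        s->wakeups++;
        pthread_cond_signal(&s->cond);
    }
    pthread_mutex_unlock(&s->lock);
}
```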

Spin Locks for Synchronization (better for multiprocessor systems)

  • Spin Lock: Similar to a semaphore, but without the list of waiting processes.  If a kernel control path finds the data structure "locked", it "spins" in a tight loop, repeatedly re-testing the lock, until the lock becomes available
    • On a uniprocessor system this is useless: the spinning kernel control path keeps the only CPU busy, so the path that originally locked the data structure can never run to unlock it, and the spin becomes an endless loop that hangs the system.  A minimal test-and-set sketch follows.
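
A minimal test-and-set spin lock in userspace C11, as a sketch of the idea rather than the kernel's actual spinlock implementation:

```c
/* Minimal test-and-set spin lock using C11 atomics (illustrative sketch). */
#include <stdatomic.h>

struct spin_lock {
    atomic_flag locked;             /* clear = unlocked, set = locked */
};

#define SPIN_LOCK_INIT { ATOMIC_FLAG_INIT }

static void spin_lock(struct spin_lock *l)
{
    /* Atomically set the flag; if it was already set, somebody else holds
     * the lock, so keep looping ("spinning") until it becomes free.       */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;                           /* busy-wait instead of sleeping */
}

static void spin_unlock(struct spin_lock *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```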

 

  • Deadlock: A situation in which two or more kernel control paths are each waiting for a lock held by one of the others, so none of them can ever proceed and the system hangs
    • Becomes more likely as the number of locks grows, because the interactions between them get too complex to reason about
    • Can be avoided by always requesting locks in a predefined global order (for example by a fixed priority or lock level), which makes a circular wait impossible; a small sketch follows
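
A small sketch of the lock-ordering rule, assuming pthread mutexes and using the lock's address as the predefined order (any agreed-upon ordering works):

```c
/* Sketch of the lock-ordering rule: when two locks are needed, every path
 * acquires them in the same global order, so no two paths can each hold one
 * lock while waiting for the other. */
#include <pthread.h>
#include <stdint.h>

static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    /* Fixed order: lower address first.  Any agreed-upon ordering (priority,
     * lock level, id) works, as long as every path follows it.             */
    if ((uintptr_t)a > (uintptr_t)b) {
        pthread_mutex_t *tmp = a;
        a = b;
        b = tmp;
    }
    pthread_mutex_lock(a);
    pthread_mutex_lock(b);
}

static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}
```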

 
