FALL 2000
Lecture Notes 6:



PROCESS MANAGEMENT 2

In a typical system, many processes are competing for relatively few resources. Lack of process synchronization can result in:

  • Deadlock
  • Starvation

In early operating systems this problem was solved by outside intervention, with a user or operator manually terminating a job.

DEADLOCK is more serious because it affects more than one job. Because resources are tied up, the entire system grinds to a halt. Think of traffic gridlock, when someone blocks an intersection.

Deadlocks were relatively rare in older batch systems because the job cards specified exact resource requirements. A job would not be moved to READY status unless its resources were available. Interactive systems have to be much more flexible - the user of the program actually determines what resources are required.

Deadlock: occurs when nonpreemptable resources are allocated to jobs that eventually require other nonpreemptable resources - resources that have been locked by another job.

Some examples:

Deadlocks on file requests: If jobs are allowed to request files and hold them for the duration, deadlock can occur. Suppose a company has two main application programs, purchasing and sales. When both processes are active, each needs to access two files, INVENTORY and SUPPLIER, to keep track of daily transactions. Suppose the following occurs:

  • The purchasing program accesses the SUPPLIER file to place an order.
  • The sales program accesses the INVENTORY file to reserve some parts needed.
  • The purchasing program doesn't release the SUPPLIER file, but requests the INVENTORY file to verify the amount on hand before placing an order.
  • Sales doesn't release the INVENTORY file, but requests the SUPPLIER file.

This is deadlock. A similar situation occurs in databases. In a database, subsections can be locked by the current process to prevent changes from being made by other processes. One process can lock a record needed by another, and the other process can lock a record needed by the first process - you have deadlock.

If processes are not allowed to lock records during transactions, the following could occur. (You are a student who has submitted a change of address to the university at the same time as grades are being entered.)

  • The grades process accesses your record in the university database and copies it to its work area.
  • The address modification process accesses your record and copies it to its work area.
  • The grades process changes its copy by entering your new grades.
  • The address process writes your new address into its copy.
  • The grades process finishes first and rewrites its version of your record back to the database.
  • Then the address process finishes and writes its updated record back to the database.

Consequently, you haven't attended school this semester.

This is an example of a race between processes.
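The lost update above can be simulated in a few lines (a sketch; the record fields and values are invented):

```python
# Simulated lost update: two processes copy the record, update their own
# copies, and write back; the second write-back clobbers the first change.
record = {"name": "Pat", "address": "Old Hall", "grades": []}

grades_copy = dict(record)       # grades process reads the record
address_copy = dict(record)      # address process reads the record

grades_copy["grades"] = ["A", "B"]      # grades process updates its copy
address_copy["address"] = "New Dorm"    # address process updates its copy

record = grades_copy             # grades process writes back first
record = address_copy            # address write-back erases the new grades

print(record)  # {'name': 'Pat', 'address': 'New Dorm', 'grades': []}
```

The final record has the new address but no grades - as far as the database is concerned, you didn't attend this semester.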

Deadlock in device allocation: can occur when several processes request and hold on to dedicated devices, and none of the jobs can continue because each is waiting for a resource.

Deadlock in spooling: Many users submit jobs to the printer buffer ("virtual device") at the same time, so that no single complete output file is in the buffer. The buffer fills: no more pages can be accepted, but none can be printed because no job's output is complete.

In general the conditions for deadlock are:

  • Mutual exclusion (i.e. allowing only one process to have access to a resource).
  • Resource holding (a process holds onto a resource and will not release it).
  • No preemption (no ability to temporarily reallocate resources).
  • Circular wait (each process involved in the deadlock is waiting for another to release its hold).

If any one condition can be removed, deadlock will not occur - but remember process racing: removing the protection may leave you with bad data.

Strategies for handling deadlock

There are basically three ways that the operating system can deal with deadlocks:

  • Prevent one of the four conditions from occurring.

There must always be some mutual exclusion, because there is only one CPU and dedicated devices can only be allocated to one user at a time. Spooling can possibly substitute for some dedicated devices, but then another form of deadlock (spooling deadlock) is likely to develop. Usually mutual exclusion can't be prevented.

The only solution to resource holding is to give every job everything it needs up front and not let another job into the system - i.e. no multiprogramming. This means resources stay idle, and needs must be known beforehand. It won't work for interactive systems.

The OS could be allowed to preempt, just as it does in time slicing. However, preemption in the middle of I/O operations would be much harder to do - it might leave half-written files, etc. How could the process control block (PCB) handle this?

Circular wait can be handled if the OS prevents the formation of a circle. Some solutions are available (e.g. requiring resources to be requested in a fixed order), but only for batch systems, since jobs must anticipate the order in which they will require resources.
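The fixed-order idea can be sketched as follows (the resource names and ranks are illustrative): every resource gets a rank, and every job must acquire its resources in ascending rank order, so a circle can never form.

```python
import threading

# Give every resource a fixed rank; all jobs acquire in ascending rank order,
# so no circular wait can form. (Names and ranks are invented.)
LOCK_RANK = {"INVENTORY": 1, "SUPPLIER": 2}
locks = {name: threading.Lock() for name in LOCK_RANK}

def acquire_in_order(names):
    for name in sorted(names, key=LOCK_RANK.get):
        locks[name].acquire()

def release_all(names):
    for name in names:
        locks[name].release()

finished = []
def job(label, names):
    acquire_in_order(names)   # both jobs grab INVENTORY first, then SUPPLIER
    finished.append(label)
    release_all(names)

t1 = threading.Thread(target=job, args=("purchasing", ["SUPPLIER", "INVENTORY"]))
t2 = threading.Thread(target=job, args=("sales", ["INVENTORY", "SUPPLIER"]))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(finished))   # ['purchasing', 'sales'] - both complete
```

Compare this with the earlier scenario: the requests are the same, but because both jobs take the locks in the same global order, neither can hold one while waiting on the other.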

  • Avoid the deadlock if it becomes probable.

Even if the conditions cannot be removed, the OS can avoid deadlock if it knows the sequence of requests in advance. The Banker's Algorithm (Dijkstra, 1965) has been applied to this problem. Essentially the algorithm is designed to prevent the system from moving from a safe state to an unsafe state. A safe state is one in which at least one job can finish because there are enough resources to satisfy its maximum needs. The system must not accept any request that would lead it into an unsafe state.

The Banker's Algorithm is not practical in interactive systems: the number of jobs and resources must remain constant, and if a piece of hardware breaks, the algorithm won't work.
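The safety check at the heart of the Banker's Algorithm can be sketched as follows (the matrix values are invented; each row is a job, each column a resource type):

```python
def is_safe(available, max_need, allocated):
    """Banker's safety check: is there an order in which every job finishes?"""
    work = list(available)
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, allocated)]
    finished = [False] * len(max_need)
    progress = True
    while progress:
        progress = False
        for i in range(len(max_need)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # job i can run to completion, then releases its allocation
                work = [w + a for w, a in zip(work, allocated[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Invented example: 3 jobs, one resource type, 3 units currently free.
print(is_safe([3], [[7], [4], [9]], [[4], [2], [1]]))  # True: order J1, J0, J2
print(is_safe([1], [[7], [4], [9]], [[5], [2], [2]]))  # False: no job can finish
```

Before granting a request, the OS would tentatively apply it and run this check; if the resulting state is unsafe, the request is denied.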

  • Detect the deadlock when it has occurred and recover from it gracefully.

There are several recovery algorithms, but they all have one feature in common - they all require at least one victim - an expendable job that is removed from the deadlock. Possible recovery methods are:

  • Terminate all jobs and restart them from the beginning (Ctrl-Alt-Del).
  • Terminate only the jobs involved in the deadlock.
  • Terminate jobs involved in the deadlock one at a time, checking to see if deadlock is solved after each removal.
  • Terminate a job not involved in the deadlock, make its resources available to the deadlocked jobs, and see if the deadlock resolves.
  • Stop new jobs from entering the system, allowing non-deadlocked jobs to run to completion. Doesn't always work.
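Detection typically builds a wait-for graph and looks for a cycle; victims are then chosen from the cycle. A minimal sketch, assuming each blocked job waits on exactly one other job (the job names are invented):

```python
def find_deadlock(wait_for):
    """Find a cycle in a wait-for graph mapping job -> job it waits on."""
    for start in wait_for:
        seen = []
        node = start
        while node in wait_for:
            if node in seen:
                return seen[seen.index(node):]   # the deadlocked cycle
            seen.append(node)
            node = wait_for[node]
    return None   # every wait chain ends at a running job: no deadlock

# J1 waits on J2, J2 waits on J1: a cycle. J3 waits on J1 but is outside it.
print(find_deadlock({"J1": "J2", "J2": "J1", "J3": "J1"}))  # ['J1', 'J2']
print(find_deadlock({"J1": "J2"}))                          # None
```

A recovery routine would pick one job from the returned cycle as the victim and terminate it, then re-run the check.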

STARVATION

Starvation: the result of conservative allocation of resources, where a single job never executes because it is kept waiting for resources that never become available.

See handout for the dining ("starving") philosophers problem.

 

CONCURRENT PROCESSES

Parallel processing: a situation in which two or more processors operate in unison (multiprocessing).

The Processor Manager has to coordinate the activity of each processor. Some systems allocate a CPU to each process or job. Others subdivide individual instructions so that parts can be processed simultaneously - this is concurrent processing.

Typical multiprocessing configurations

Master/Slave:

Single processor system with additional "slave" processors.

Master processor is responsible for entire system (files, devices, memory, processors). Has serious disadvantages:

  • Reliability is no higher than for a single-processor system.
  • Poor use of resources, because a slave can become idle while the master is busy.
  • More interrupts, because slave processors must interrupt the master every time they need OS services.

Loosely coupled configuration:

Several complete computer systems, each with its own memory, I/O devices, CPU and OS. Each processor controls its own resources. However, the processors can communicate with each other. When a job arrives, it is assigned to one processor; once allocated, the job remains with that processor. Each processor keeps a table showing which processor holds which jobs and resources.

Symmetric Configuration:

One main memory, one set of devices interacting with multiple, equal status processors. Must be well-synchronised - special software called "process synchronisation software" is used. Each processor uses the same scheduling algorithm. Any given job or task may be executed by several different processors during run time. This is concurrent processing.

Concurrent programming

In multiprocessing, one job may use several processors to execute sets of instructions in parallel.

Most programming languages are serial - instructions are executed one at a time.

Eg. Compute A = 3 * B * C + 4 / (D + E) ** (F - G)


STEP#   Operation       Result
1       F - G           Store difference in T1
2       D + E           Store sum in T2
3       (T2)**(T1)      Store power in T1
4       4/(T1)          Store quotient in T2
5       3 * B           Store product in T1
6       (T1) * C        Store product in T1
7       (T1)+(T2)       Store sum in A

However, this would be different with a programming language that allows for concurrent processing, using COBEGIN and COEND to bracket instructions that can be processed concurrently:

COBEGIN
    T1 = 3 * B
    T2 = D + E
    T3 = F - G
COEND
COBEGIN
    T4 = T1 * C
    T5 = T2 ** T3
COEND
A = T4 + 4 / T5
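The COBEGIN/COEND blocks can be mimicked with a thread pool (a sketch; the variable values are made up):

```python
from concurrent.futures import ThreadPoolExecutor

# Invented sample values for the variables in the expression
B, C, D, E, F, G = 2.0, 3.0, 1.0, 2.0, 5.0, 3.0

with ThreadPoolExecutor() as pool:
    # first COBEGIN block: three independent operations run in parallel
    t1, t2, t3 = pool.map(lambda f: f(), [lambda: 3 * B,
                                          lambda: D + E,
                                          lambda: F - G])
    # second COBEGIN block: two more, each depending only on the first block
    t4, t5 = pool.map(lambda f: f(), [lambda: t1 * C, lambda: t2 ** t3])

A = t4 + 4 / t5
print(A)   # 3*B*C + 4/(D+E)**(F-G) = 18 + 4/9
```

Note that the data dependencies force the two blocks to run one after the other; only the operations inside a block are independent.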

STEP#   Processor   Operation     Result
1       1           3 * B         Store in T1
        2           D + E         Store in T2
        3           F - G         Store in T3
2       1           (T1) * C      Store in T4
        2           (T2)**(T3)    Store in T5
3       1           4/(T5)        Store in T1
4       1           (T4)+(T1)     Store in A

This increases computational speed, but also increases complexity, especially for the programmer. In the first version of concurrent programming, the programmer had to explicitly state which instructions could be executed in parallel.

Newer compilers can detect which instructions can be executed in parallel. Concurrent processing can dramatically speed up array operations within loops, matrix multiplication, parallel searches, and sorting and merging.



Eg.   for (int i = 1; i <= 3; i++)
            A(i) = B(i) + C(i);

Each of three processors could compute one iteration.
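The same idea in runnable form (a sketch; the data values are invented): each addition is independent, so each can go to its own worker.

```python
from concurrent.futures import ThreadPoolExecutor

# Each iteration of A(i) = B(i) + C(i) is independent of the others,
# so each addition can run on a separate processor. (Values invented.)
B = [10, 20, 30]
C = [1, 2, 3]

with ThreadPoolExecutor(max_workers=3) as pool:
    A = list(pool.map(lambda pair: pair[0] + pair[1], zip(B, C)))

print(A)   # [11, 22, 33]
```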



UNIX

Process Management

  • UNIX uses a time slicing policy.
  • Picks process with highest priority to be run first.
  • Processes that have used a lot of CPU time get lower priority.
  • If several jobs have same priority, they are handled round robin.
  • Overall effect of this policy is to balance IO bound jobs with CPU bound jobs.
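The policy above can be caricatured in a few lines (all numbers are invented, not UNIX's actual formula): the scheduler runs the process with the lowest accumulated CPU usage, a CPU-heavy job accumulates usage fast, and usage decays each tick so I/O-bound jobs quickly regain high priority.

```python
# Toy model of UNIX-style decaying priorities. Each tick the scheduler runs
# the process with the least accumulated CPU usage (ties broken round-robin
# by name); usage decays so I/O-bound jobs recover their priority quickly.
usage = {"cpu_bound": 0, "io_bound": 0}

def pick(usage):
    return min(usage, key=lambda p: (usage[p], p))

schedule = []
for tick in range(6):
    p = pick(usage)
    schedule.append(p)
    usage[p] += 4 if p == "cpu_bound" else 1   # I/O job barely uses its slice
    usage = {name: u // 2 for name, u in usage.items()}  # decay each tick
print(schedule)   # the I/O-bound job is chosen twice as often
```

Even in this toy, the I/O-bound job gets the CPU more often, which is the balancing effect the notes describe.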

UNIX uses several tables to keep the system running:

Process table: resides in memory

User table: resides in memory only while process is running.

UNIX is a true multitasking system.

An unusual feature of UNIX is that it gives the user the capability of executing one program from another, using the fork command. fork gives the child process all the attributes of the parent, such as any open files, while the parent is preserved in its original form. The fork command splits the program into two copies, both running from the statement after the fork call. Each process has its own unique ID number. The child process runs asynchronously with the parent process.

The wait command can suspend the parent process until the child process is finished.
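In Python on a UNIX system, os.fork and os.waitpid mirror this directly (a sketch; os.fork is UNIX-only, and the exit status 7 is arbitrary):

```python
import os

# Sketch of fork/wait (UNIX-only): after fork() both processes continue from
# the next statement; fork returns 0 in the child and the child's PID in the
# parent. The status value 7 is arbitrary.
pid = os.fork()
if pid == 0:
    # child process: runs asynchronously with the parent
    os._exit(7)                          # terminate the child with status 7
else:
    # parent: waitpid suspends it until the child finishes
    _, status = os.waitpid(pid, 0)
    exit_code = os.WEXITSTATUS(status)
    print("child", pid, "exited with", exit_code)   # exit code 7
```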

 

 

MVS

Processor Management

Managed by the JES (Job Entry SubSystem).

The user must understand a great deal about processor management to write the Job Control Language (JCL) necessary to ensure that the job is handled correctly. The main functions of the JES are:

  • Read the job's JCL instructions and check them for accuracy.
  • If the JCL is bad, cancel the job; otherwise spool the JCL onto a DASD.
  • Turn control over to the "converter", a piece of software that converts the JCL to machine language.
  • Select which job will run, based on the system's priority scheme.
  • Spool any output the job produces to DASD as it runs; when the job has finished, return all its resources to the system.