Wednesday, September 25, 2013

PROCESS MANAGEMENT


Process

A process is a program in execution. The execution of a process must progress in a sequential fashion. A formal definition of a process follows.

A process is defined as an entity which represents the basic unit of work to be implemented in the system.

The components of a process are the following.


1 Object Program
Code to be executed.


2 Data
Data to be used for executing the program.
3 Resources
While executing, the program may require some resources such as memory, files, or I/O devices.

4 Status
Tracks the status of the process execution.

A process can run to completion only when all the resources it has requested have been allocated to it. Two or more processes may execute the same program, each using its own data and resources.

Program

A program by itself is not a process. It is a static entity made up of program statements, while a process is a dynamic entity. A program contains the instructions to be executed by the processor.

A program occupies a single space in main memory and remains there. A program does not perform any action by itself.

Process States
As a process executes, it changes state. The state of a process is defined as the current activity of the process.

A process can be in one of the following five states at a time.


1 New
The process is being created.

2 Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run.

3 Running
Process instructions are being executed (i.e., the process that is currently being executed).

4 Waiting
The process is waiting for some event to occur (such as the completion of an I/O operation).

5 Terminated
The process has finished execution.

Process States
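Where an operating system tracks these states in code, a minimal representation could look like the following C sketch. The names are illustrative assumptions, not taken from any particular kernel.

/* A minimal sketch of the five process states, assuming the
 * operating system tracks them as a simple enumeration. */
enum process_state {
    STATE_NEW,        /* the process is being created           */
    STATE_READY,      /* waiting to be assigned to a processor  */
    STATE_RUNNING,    /* instructions are being executed        */
    STATE_WAITING,    /* waiting for an event such as I/O       */
    STATE_TERMINATED  /* the process has finished execution     */
};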


Process Control Block, PCB

Each process is represented in the operating system by a process control block (PCB), also called a task control block. The PCB is the data structure used by the operating system to group together all the information it needs about a particular process.

The PCB contains many pieces of information associated with a specific process, which are described below.


1 Pointer
The pointer points to another process control block and is used for maintaining the scheduling list.

2 Process State
Process state may be new, ready, running, waiting and so on.

3 Program Counter
Program Counter indicates the address of the next instruction to be executed for this process.

4 CPU registers
CPU registers include general purpose registers, stack pointers, index registers, accumulators, etc. The number and type of registers depend entirely upon the computer architecture.

5 Memory management information
This information may include the value of base and limit registers, the page tables, or the segment tables depending on the memory system used by the operating system. This information is useful for deallocating the memory when the process terminates.

6 Accounting information
This information includes the amount of CPU and real time used, time limits, job or process numbers, account numbers etc.

Process Control Block




The process control block also includes CPU scheduling information, I/O and resource management information, file management information, etc. The PCB serves as the repository for any information that can vary from process to process. The loader/linker sets flags and registers when a process is created. If that process gets suspended, the contents of the registers are saved on a stack and the pointer to the particular stack frame is stored in the PCB. By this technique, the hardware state can be restored so that the process can be scheduled to run again.
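Putting the fields above together, a PCB can be pictured as a plain C structure. The following is a minimal sketch assuming a singly linked scheduling list and a fixed register count; the field names and sizes are illustrative, not taken from any real kernel.

#include <stdint.h>

#define NUM_REGS 16   /* assumed register count, architecture dependent */

enum process_state { STATE_NEW, STATE_READY, STATE_RUNNING,
                     STATE_WAITING, STATE_TERMINATED };

struct pcb {
    struct pcb        *next;                /* pointer for the scheduling list     */
    enum process_state state;               /* new, ready, running, waiting, ...   */
    uint64_t           program_counter;     /* address of the next instruction     */
    uint64_t           registers[NUM_REGS]; /* saved general purpose registers     */
    uint64_t           base_register;       /* memory management: base ...         */
    uint64_t           limit_register;      /* ... and limit of the address space  */
    uint64_t           cpu_time_used;       /* accounting: CPU time consumed       */
    int                pid;                 /* process / job number                */
};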


Process Scheduling

Definition

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.


Scheduling Queues

Scheduling queues are queues of processes or devices. When a process enters the system, it is put into a job queue, which consists of all processes in the system. The operating system also maintains other queues, such as device queues. A device queue is a queue of processes waiting for a particular I/O device; each device has its own device queue.

This figure shows the queuing diagram of process scheduling.

Each queue is represented by a rectangular box.

The circles represent the resources that serve the queues.

The arrows indicate the process flow in the system.




Queuing Diagram

Queues are of two types

Ready queue

Device queue

A newly arrived process is put in the ready queue. Processes wait in the ready queue until the CPU is allocated to them. Once the CPU is assigned to a process, that process executes. While the process is executing, any one of the following events can occur.

The process could issue an I/O request and then it would be placed in an I/O queue.

The process could create a new sub-process and wait for its termination.

The process could be removed forcibly from the CPU as a result of an interrupt and put back in the ready queue.

Two State Process Model

The two state process model consists of the running and not-running states, which are described below.


1 Running
When a new process is created by the operating system, it enters the system in the running state.

2 Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process, and the queue is typically implemented as a linked list. The dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute. A sketch of such a queue and dispatcher follows.
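As a rough illustration of the two state model, the following C sketch keeps not-running processes in a singly linked list and lets a hypothetical dispatcher pick the process at the head of the queue. The structure and function names are assumptions for illustration, not part of any real operating system.

#include <stdlib.h>

/* A not-running process waiting in the queue. */
struct proc {
    int          pid;
    struct proc *next;
};

/* The queue of not-running processes, implemented as a linked list. */
static struct proc *queue_head = NULL;
static struct proc *queue_tail = NULL;

/* Append a process to the tail of the not-running queue. */
void enqueue(struct proc *p)
{
    p->next = NULL;
    if (queue_tail)
        queue_tail->next = p;
    else
        queue_head = p;
    queue_tail = p;
}

/* Dispatcher: remove and return the process at the head of the queue
 * so it can be given the CPU; returns NULL if the queue is empty. */
struct proc *dispatch(void)
{
    struct proc *p = queue_head;
    if (p) {
        queue_head = p->next;
        if (!queue_head)
            queue_tail = NULL;
    }
    return p;
}

int main(void)
{
    struct proc a = { 1, NULL }, b = { 2, NULL };
    enqueue(&a);
    enqueue(&b);
    struct proc *next = dispatch();   /* picks process 1, the head of the queue */
    return next ? 0 : 1;
}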

Schedulers

Schedulers are special system software that handles process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types:

Long Term Scheduler

Short Term Scheduler

Medium Term Scheduler

Long Term Scheduler

It is also called the job scheduler. The long term scheduler determines which programs are admitted to the system for processing. The job scheduler selects processes from the job queue and loads them into memory for execution, making them available for CPU scheduling. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and processor bound jobs. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.

On some systems, the long term scheduler may be absent or minimal; time-sharing operating systems often have no long term scheduler. The long term scheduler comes into play when a process changes state from new to ready.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It handles the transition of a process from the ready state to the running state. The CPU scheduler selects from among the processes that are ready to execute and allocates the CPU to one of them.

The short term scheduler, also known as the dispatcher, executes most frequently and makes the fine grained decision of which process to execute next. The short term scheduler is faster than the long term scheduler.

Medium Term Scheduler




Medium term scheduling is part of swapping. It removes processes from memory and thus reduces the degree of multiprogramming. The medium term scheduler is in charge of handling the swapped-out processes.

Medium Term Scheduler

A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.


Context Switch

A context switch is the mechanism to store and restore the state or context of a CPU in the process control block so that a process's execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to executing another, the context switcher saves the contents of all processor registers for the process being removed from the CPU in its process descriptor. The context of a process is represented in its process control block.

Context switch time is pure overhead. Context switching can significantly affect performance, as modern computers have many general and status registers to be saved. Context switching times are highly dependent on hardware support. A context switch requires (n + m) × b × K time units to save the state of a processor with n general registers and m status registers, assuming b store operations are required per register and each store instruction requires K time units.
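For example, assuming a hypothetical processor with n = 16 general registers and m = 8 status registers, b = 1 store operation per register, and K = 2 time units per store instruction, saving the outgoing process's state would take (16 + 8) × 1 × 2 = 48 time units; restoring the incoming process's state costs roughly the same again.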

Context Switch


Some hardware systems employ two or more sets of processor registers to reduce context switching time. When a process is switched, the following information is stored:

Program Counter

Scheduling Information

Base and limit register value

Currently used registers

Changed State

I/O State

Accounting


Scheduling algorithms

We will discuss five major scheduling algorithms here, which are the following:

First Come First Serve (FCFS) Scheduling

Shortest-Job-First (SJF) Scheduling

Priority Scheduling

Round Robin(RR) Scheduling

Multilevel Queue Scheduling

First Come First Serve (FCFS)


Jobs are executed on a first come, first served basis.

Easy to understand and implement.

Poor in performance, as the average wait time is high.

First Come First Serve Scheduling Algorithm
The wait time of each process is as follows (wait time = service time - arrival time):

Process   Wait Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        8 - 2 = 6
P3        16 - 3 = 13

Average wait time: (0 + 4 + 6 + 13) / 4 = 5.75
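To make the computation concrete, here is a small C sketch that derives the FCFS wait times above. The arrival and burst times are assumptions reconstructed from the table (P0..P3 arrive at times 0..3; the last burst does not affect the result), not values stated explicitly in the post.

#include <stdio.h>

#define N 4

int main(void)
{
    /* Assumed arrival and burst times, reconstructed from the example table. */
    int arrival[N] = {0, 1, 2, 3};
    int burst[N]   = {5, 3, 8, 6};

    int start = 0, total_wait = 0;
    for (int i = 0; i < N; i++) {
        if (start < arrival[i])          /* CPU idles until the job arrives   */
            start = arrival[i];
        int wait = start - arrival[i];   /* wait = service time - arrival time */
        printf("P%d  %d - %d = %d\n", i, start, arrival[i], wait);
        total_wait += wait;
        start += burst[i];               /* next job starts when this one ends */
    }
    printf("Average wait time: %.2f\n", (double)total_wait / N);
    return 0;
}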

Shortest Job First (SJF)

Best approach to minimize waiting time.

Impossible to implement exactly in practice.

The processor would need to know in advance how much time each process will take.



Shortest Job First Scheduling Algorithm
The wait time of each process is as follows (wait time = service time - arrival time):

Process   Wait Time
P0        3 - 0 = 3
P1        0 - 0 = 0
P2        16 - 2 = 14
P3        8 - 3 = 5

Average wait time: (3 + 0 + 14 + 5) / 4 = 5.50
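The core of SJF is simply ordering the ready jobs by burst time and then applying the same wait-time calculation. The C sketch below assumes all jobs are ready at time 0 and uses illustrative burst times; it shows the ordering idea rather than reproducing the exact numbers in the table above, which come from a figure not reproduced here.

#include <stdio.h>
#include <stdlib.h>

/* A job with its burst time, assumed to be known in advance (the SJF assumption). */
struct job {
    const char *name;
    int         burst;
};

/* Comparator for qsort(): shortest burst first. */
static int by_burst(const void *a, const void *b)
{
    const struct job *x = a, *y = b;
    return x->burst - y->burst;
}

int main(void)
{
    /* Illustrative burst times only; all jobs assumed ready at time 0. */
    struct job jobs[] = { {"P0", 5}, {"P1", 3}, {"P2", 8}, {"P3", 6} };
    int n = sizeof jobs / sizeof jobs[0];

    qsort(jobs, n, sizeof jobs[0], by_burst);   /* shortest job first */

    int start = 0;
    for (int i = 0; i < n; i++) {
        printf("%s starts at %d, waits %d\n", jobs[i].name, start, start);
        start += jobs[i].burst;
    }
    return 0;
}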

Priority Based Scheduling

Each process is assigned a priority. The process with the highest priority is executed first, and so on.

Processes with the same priority are executed on a first come, first served basis.

Priority can be decided based on memory requirements, time requirements or any other resource requirement.

Priority Scheduling Algorithm
The wait time of each process is as follows (wait time = service time - arrival time):

Process   Wait Time
P0        0 - 0 = 0
P1        3 - 1 = 2
P2        8 - 2 = 6
P3        16 - 3 = 13

Average wait time: (0 + 2 + 6 + 13) / 4 = 5.25
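Structurally, non-preemptive priority scheduling looks like the SJF sketch with a different ordering key: jobs are sorted by an assigned priority rather than by burst time. The priorities and burst times below are assumptions for illustration (smaller number = higher priority is assumed), not the values behind the table above.

#include <stdio.h>
#include <stdlib.h>

struct job {
    const char *name;
    int         burst;
    int         priority;   /* smaller number = higher priority (assumed) */
};

/* Comparator for qsort(): highest priority (lowest number) first. */
static int by_priority(const void *a, const void *b)
{
    const struct job *x = a, *y = b;
    return x->priority - y->priority;
}

int main(void)
{
    /* Illustrative jobs with distinct priorities; all assumed ready at time 0. */
    struct job jobs[] = { {"P0", 5, 2}, {"P1", 3, 1}, {"P2", 8, 4}, {"P3", 6, 3} };
    int n = sizeof jobs / sizeof jobs[0];

    qsort(jobs, n, sizeof jobs[0], by_priority);

    int start = 0;
    for (int i = 0; i < n; i++) {
        printf("%s (priority %d) starts at %d, waits %d\n",
               jobs[i].name, jobs[i].priority, start, start);
        start += jobs[i].burst;
    }
    return 0;
}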

Round Robin Scheduling

Each process is given a fixed time to execute, called a quantum.

Once a process has executed for the given time period, it is preempted and another process executes for its time period.

Context switching is used to save the states of preempted processes.

Round Robin Scheduling Algorithm
The wait time of each process is as follows (wait time = total time spent waiting in the ready queue):

Process   Wait Time
P0        (0 - 0) + (12 - 3) = 9
P1        (3 - 1) = 2
P2        (6 - 2) + (15 - 9) = 10
P3        (9 - 3) + (18 - 12) = 12

Average wait time: (9 + 2 + 10 + 12) / 4 = 8.25
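A round robin scheduler can be sketched as a repeated walk over the ready processes, giving each at most one quantum per pass. The C program below assumes a quantum of 3, illustrative burst times, and all processes ready at time 0; since the real arrival/burst values behind the figure are unknown, its output is not expected to match the table above exactly.

#include <stdio.h>

#define N        4
#define QUANTUM  3

int main(void)
{
    /* Illustrative burst times; all processes assumed ready at time 0. */
    int burst[N]     = {5, 3, 8, 6};
    int remaining[N] = {5, 3, 8, 6};
    int finish[N]    = {0};

    int time = 0, done = 0;
    while (done < N) {
        for (int i = 0; i < N; i++) {          /* walk the ready queue in order  */
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            time         += slice;             /* run for one quantum (or less)  */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                finish[i] = time;
                done++;
            }
        }
    }

    double total_wait = 0;
    for (int i = 0; i < N; i++) {
        int wait = finish[i] - burst[i];       /* arrival assumed to be 0        */
        printf("P%d waits %d\n", i, wait);
        total_wait += wait;
    }
    printf("Average wait time: %.2f\n", total_wait / N);
    return 0;
}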

Multi Queue Scheduling

Multiple queues are maintained for processes.

Each queue can have its own scheduling algorithm.

Priorities are assigned to each queue.

Multi Queue Scheduling Algorithm
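As a rough sketch, a multilevel queue scheduler can be pictured as several ready queues checked in priority order, where each queue could internally use any of the algorithms above. The queue names and the simple highest-priority-first policy below are assumptions for illustration only.

#include <stdio.h>

#define LEVELS 3

/* One entry per queue level; index 0 is the highest priority queue.
 * A real scheduler would keep a full ready queue and policy per level. */
struct level {
    const char *name;       /* e.g. system, interactive, batch (assumed)     */
    int         has_ready;  /* 1 if this queue currently has a ready process */
};

/* Pick the first non-empty queue, highest priority first. */
int pick_level(const struct level levels[])
{
    for (int i = 0; i < LEVELS; i++)
        if (levels[i].has_ready)
            return i;
    return -1;                          /* nothing ready anywhere */
}

int main(void)
{
    struct level levels[LEVELS] = {
        { "system processes",      0 },
        { "interactive processes", 1 },
        { "batch processes",       1 },
    };
    int i = pick_level(levels);
    if (i >= 0)
        printf("dispatch from the %s queue\n", levels[i].name);
    return 0;
}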
