Process Management
A process is allocated resources (such as main memory) and is available for scheduling.
A process is not the same as a program. Each process has a state which includes:
- the CPU state (program counter and register contents),
- the contents of its memory and memory-management information,
- information about open files and pending I/O,
- may include accounting information and other information to be used in scheduling (priority, for example).
This state is recorded in a process control block (PCB); a sketch appears below.
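A minimal sketch, in C, of what a PCB might hold. The structure and field names here are hypothetical, for illustration only; no real kernel is being quoted:

    /* Hypothetical process control block (PCB); fields are illustrative. */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;            /* process id                    */
        enum proc_state state;          /* scheduling state              */
        unsigned long   pc;             /* saved program counter         */
        unsigned long   regs[16];       /* saved register contents       */
        void           *page_table;     /* memory-management information */
        int             open_files[20]; /* open file descriptors         */
        long            cpu_time_used;  /* accounting information        */
        int             priority;       /* used by the scheduler         */
    };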
Process creation -- Parent process creates (or requests OS to create) a child process. One parent can have several children.
- In UNIX, this is done by the fork system call. The child gets new memory containing a copy of the parent's memory (to allow communication to the child); this copying can be expensive. Both processes start execution at the instruction right after the fork call. fork is a C function that returns a value: the return code to the child is 0, and to the parent it is the child's process id (or -1 if there was an error).
- So the code typically tests the return code of the fork call; if the code is 0 (that is, this is the child executing), it executes an execve system call to load another file into memory and start its execution. (A minimal sketch of this pattern appears after this list.)
- Parent may either wait or continue (remember "&" at end of a command line? This tells the shell program whether or not to wait for child to complete.)
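A minimal sketch of this fork/execve/wait pattern, assuming a POSIX system (running /bin/ls is just an arbitrary example):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();                /* both processes continue here  */
        if (pid < 0) {                     /* -1: fork failed               */
            perror("fork");
            exit(1);
        } else if (pid == 0) {             /* 0: this is the child          */
            char *argv[] = { "ls", "-l", NULL };
            char *envp[] = { NULL };
            execve("/bin/ls", argv, envp); /* load another file and run it  */
            perror("execve");              /* reached only if execve fails  */
            exit(1);
        } else {                           /* parent: pid is the child's id */
            int status;
            waitpid(pid, &status, 0);      /* parent chooses to wait (no "&") */
            printf("child %d finished\n", (int)pid);
        }
        return 0;
    }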
Schedulers
- High-level scheduler (mostly for batch systems): selects from a queue of waiting jobs. Often tries to balance the job mix (e.g., a mix of I/O-bound and CPU-bound tasks). Often called the job scheduler.
- Low level scheduler: selects a process from the ready queue. Often called the dispatcher.
- Medium-level scheduler: improves performance by selecting processes to swap out, so that the memory allocated to them can be reallocated to other processes. Swapped-out processes must later be reinstated.
CPU burst -- "Typical" amount of execution time until a process must wait (e.g. it issues an I/O request).
    f |.
    r | .
    e |  ..
    q |    ...
      |       .  .  .
      +---------------------------------
        Execution time until interrupt
    (Most CPU bursts are short; long bursts are relatively rare.)
Context switch -- Switching the CPU from one process to another. Involves overhead. These occur frequently and hence should be very fast. Usually involves hardware support, e.g., a single instruction to save all register contents to memory (into the PCB). Some hardware provides several sets of registers, so that a switch need only change which set is active.
Scheduling Algorithms -- Several are presented below. Their evaluation involves several criteria:
- Low overhead (the faster the better)
- CPU utilization: generally, the higher the better (unless the CPU is kept busy running OS overhead code rather than user processes)
- Throughput (e.g., number of processes completed/hr)
- Turnaround time: the time from when a request is submitted until it completes, for one process. (Response time, the time until the first response is produced, is the analogous measure for interactive work.)
- Ready-queue waiting time: might consider only the time a process spends waiting for service in the ready queue, since the other time the process spends (executing, waiting for I/O) is beyond the control of the scheduler.
- First Come First Served (FCFS, also called FIFO)
- Advantage: simple
- Problem: short jobs may wait a long time: if a 1-millisecond job shows up right after a 1-hour job, the short job has a very long wait time (compared to its execution time).
- Shortest Job First (SJF)
- One can prove that SJF gives the minimum average waiting time, and hence the minimum average turnaround time (see the small comparison after this list).
- Problems:
- Often the scheduler cannot determine execution time beforehand. SJF is used in batch environments where the user is required to state a maximum execution time.
- May also lead to starvation (if short jobs keep arriving, long jobs may never get to run.)
- May be preemptive or nonpreemptive (should you replace currently executing process with newly arrived shorter process?)
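Deterministic modeling makes the FCFS-versus-SJF difference concrete. This sketch (with made-up burst times, in milliseconds) computes the average waiting time for arrival order versus shortest-first order:

    #include <stdio.h>
    #include <stdlib.h>

    /* Average waiting time when jobs are served in the order given. */
    static double avg_wait(const int burst[], int n) {
        long wait = 0, t = 0;
        for (int i = 0; i < n; i++) { wait += t; t += burst[i]; }
        return (double)wait / n;
    }

    static int by_length(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int burst[] = { 3600000, 1, 1, 1 };  /* a 1-hour job arrives first */
        int n = 4;
        printf("FCFS average wait: %.1f ms\n", avg_wait(burst, n));
        qsort(burst, n, sizeof burst[0], by_length); /* SJF: shortest first */
        printf("SJF  average wait: %.1f ms\n", avg_wait(burst, n));
        return 0;
    }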
- Priority Scheduling -- Ready queue maintained in priority order (a sketch follows this list), where the priority of a process is:
- determined by the OS (e.g., run system processes before user processes; base priority on the average length of recent CPU bursts, the number of open files, or memory size (to get memory hogs out as quickly as possible))
- purchased by user
- based on corporate policy (e.g. give high priority to some project viewed as very important to company)
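A minimal sketch of a ready queue kept in priority order, here as a sorted linked list (real dispatchers use faster structures; higher number means higher priority in this made-up convention):

    #include <stdio.h>

    struct proc { int pid, priority; struct proc *next; };

    /* Insert p behind all processes of higher or equal priority. */
    static void enqueue(struct proc **head, struct proc *p) {
        while (*head && (*head)->priority >= p->priority)
            head = &(*head)->next;
        p->next = *head;
        *head = p;
    }

    int main(void) {
        struct proc a = {1, 5, NULL}, b = {2, 9, NULL}, c = {3, 7, NULL};
        struct proc *ready = NULL;
        enqueue(&ready, &a); enqueue(&ready, &b); enqueue(&ready, &c);
        for (struct proc *p = ready; p; p = p->next)  /* dispatch order */
            printf("pid %d (priority %d)\n", p->pid, p->priority);
        return 0;
    }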
- Round Robin Scheduling -- Fixed time slice to each waiting process.
- Advantage: "smooth" performance (all users get "equal" treatment and -- at least sorta -- each process takes about the same multiple of real time to complete. The smaller the slice, the smoother, but at the price of more overhead.)
- Problem: the cost of context switches (ignored in the toy simulation below).
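A toy round-robin loop, assuming a fixed quantum and made-up burst lengths:

    #include <stdio.h>

    #define QUANTUM 4

    int main(void) {
        int burst[] = { 10, 5, 2 };            /* remaining CPU need per job */
        int n = 3, t = 0, done = 0;
        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0) continue;   /* job already finished */
                int run = burst[i] < QUANTUM ? burst[i] : QUANTUM;
                t += run;                      /* context-switch cost ignored */
                burst[i] -= run;
                if (burst[i] == 0) {
                    printf("job %d finishes at time %d\n", i, t);
                    done++;
                }
            }
        }
        return 0;
    }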
Considerations for schedulers:
- possibility of starvation
- possibility of deadlock (normally not a problem with dispatcher)
- overhead of context switching (what is the "optimal" size of the "quantum"?)
- runtime overhead of scheduling algorithm (much more of a concern for the dispatcher, less so for the job scheduler).
Other approaches:
- Multiple queues
high priority
^ +------------------+
| ----->| system processes |----->
| +------------------+
| +------------------+
| ----->| interactive proc |----->
| +------------------+
| +------------------+
| ----->| batch processes |----->
| +------------------+
low priority
- So system processes (which are "ready") are run before any interactive processes. And batch processes only run if there are no "ready" system or interactive processes. (This same thing can be done with a single queue kept in order by process type, so the use of multiple queues is in part an implementation technique which would be justified based on speed.)
- Direct assignment -- for example
- 50% of CPU goes to interactive processes,
- 20% of CPU goes to batch processes,
- 30% of CPU goes to real-time processes.
- Typically done with separate queues for each process type; choose which queue to draw from based on recent CPU utilization for each type (e.g., in the last 100 milliseconds, interactive processes have gotten a total of 50 milliseconds, real-time 30 milliseconds, and batch only 10 milliseconds, so pick the next process off of the ready queue for batch processes; a small sketch of this choice follows).
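A small sketch of that choice, using the numbers from the example above (the names and the exact decision rule are illustrative):

    #include <stdio.h>

    int main(void) {
        const char *name[] = { "interactive", "batch", "real-time" };
        double target[]  = { 0.50, 0.20, 0.30 };  /* policy shares        */
        double used_ms[] = { 50.0, 10.0, 30.0 };  /* last 100 ms of usage */
        double window = 100.0;
        int pick = 0;
        double best = -1.0;
        for (int i = 0; i < 3; i++) {
            /* the class furthest below its target share goes next */
            double deficit = target[i] - used_ms[i] / window;
            if (deficit > best) { best = deficit; pick = i; }
        }
        printf("dispatch next from the %s queue\n", name[pick]);
        return 0;
    }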
- Multilevel feedback queues
new +-----------+ ^ high priority
--------->| |-- quantum = 8 |
+-----------+ \ |
/------------------/ |
\ +-----------+ |
---| |-- quantum = 32 |
+-----------+ \ |
/------------------/ |
\ +-----------+ |
---| | preemptive FCFS |
+-----------+ |
- The idea is that all newly arriving processes get a quick slice of the CPU (we may allow a process to circulate through the highest-priority queue two or three times), so if the process is short, it finishes very quickly. But if the process requires more CPU time, it will not finish in these first few tries and will drop to a lower-priority queue. Very long-running processes eventually move to the bottom queue and run only when the higher-priority queues are empty. This tends to give quick response time to very short tasks and automatically gives low priority to long-running processes. (A sketch of the demotion rule appears below.)
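A minimal sketch of the demotion rule, following the picture above (one job in isolation; a real scheduler would interleave many jobs across the queues):

    #include <stdio.h>

    #define LEVELS 3

    int main(void) {
        int quantum[LEVELS] = { 8, 32, 1000000 };  /* bottom level ~ FCFS */
        int burst = 100, level = 0, t = 0;
        while (burst > 0) {
            int run = burst < quantum[level] ? burst : quantum[level];
            t += run;
            burst -= run;
            printf("ran %d ticks at level %d (t = %d)\n", run, level, t);
            if (run == quantum[level] && level < LEVELS - 1)
                level++;                /* used the full slice: demote */
        }
        return 0;
    }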
Evaluation of alternative scheduling concepts: Since OSs have been around for a long time now, much work has been done evaluating different ideas for scheduling. The techniques for doing this include:
- deterministic modeling by hand,
- queueing theory (an area of statistics): given arrival rates and service times, one can often compute expected wait times (a worked example follows this list).
- simulations (similar in part to what you are doing for OSP): can have some parts simulated (such as process arrivals or process memory addressing patterns) and try different scheduling algorithms. Need to capture "typical" set of processes for a "typical" customer.
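To make the queueing-theory bullet concrete: in the standard M/M/1 model (Poisson arrivals, exponential service times, one CPU), if jobs arrive at rate lambda = 2 per second and are served at rate mu = 3 per second, the expected time in the system is 1/(mu - lambda) = 1 second, and the expected wait in the ready queue alone is lambda/(mu * (mu - lambda)) = 2/3 second. (These are standard textbook formulas; the arrival and service rates here are made up for illustration.)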
Copyright ©2017, G. Hill Price