But the focus in this class is on necessary (well, largely necessary) OS services
Process Management
Task is a synonym for process. Job means (at least mostly) the same thing as well -- some jobs can have several associated tasks.
Standard process management tasks:
Memory Management
Memory hierarchy:

    +-----+     +-----------+     +------------------------+
    |cache|     | primary   |     | secondary              |
    |     |     | memory    |     | memory/storage         |
    +-----+     +-----------+     +------------------------+
    very fast   fast              slow
    very exp.   mod. exp.         mod. cheap
    very small  medium            huge

For an instruction to be executed or data to be read, it must be in cache.
Memory management tasks:
As an example of OS services, consider possible actions required to execute the C++ function call:
write( a, b, c );
(I am ignoring the compiler's task of translating this statement in a high-level language into machine code which can be efficiently executed on the target hardware -- except for the observation that in a standard C++ function call, 'a' may be the name of the file to which the values of the variables b and c are to be written. From this statement alone you cannot tell whether 'a' is a variable or a file -- that depends on how it is declared. The compiler handles this and generates appropriate code.)
The C++ compiler depends on a run-time support library. The routines necessary to translate from an internal representation (as floating point or integers, for example) to an external representation (as a string of ASCII characters), and to read data from files or write data to files, are contained in those libraries. Part of the translation of the C++ code into machine code requires the use of a linker, which links together the machine code directly translated from the user's program with the machine code in the run-time library. It must use the OS to locate that library code and extract the part of the library which this user program requires.
So some typical steps which must occur to "execute" this statement (after the compiler has translated it and the linker has located necessary library routines and the loader has placed the code in primary memory):
Consider the OLD MS-DOS:
    +-------------------------------------+
    |         application program         |
    +-------------------------------------+
        |             |              |
        V             |              |
    +---------------------------+    |
    | resident system programs  |    |
    +---------------------------+    |
        |        |                   |
        V        |                   |
    +--------------+                 |
    | MS-DOS device|                 |
    |   drivers    |                 |
    +--------------+                 |
        |        |                   |
        V        V                   V
    +-------------------------------------+
    |       ROM BIOS device drivers       |
    +-------------------------------------+

Because of this structure, the OS cannot "enforce" much of anything (since software can easily bypass the OS). Also, changes (particularly to the ROM BIOS) are much more difficult since it is not hidden from the application programs. If all calls had to go through OS routines, the ROM BIOS could be changed requiring only a change to a few OS routines (at least part of the time).
UNIX: much the same: what started out as a clean, simple, elegant design has become progressively more obtuse.
    +---------------------------------------------------------+
    |                          users                          |
    +---------------------------------------------------------+  < user interface
    | shells                                                  |
    | compilers/interpreters (other tools)                    |
    | system libraries                                        |    system call
    +---------------------------------------------------------+  < interface to
    | signals            file system        CPU scheduling    |    kernel
    | terminal handling  swapping           memory mgmt       |
    | character I/O system         block I/O system           |
    | terminal drivers             disk drivers               |    kernel
    +---------------------------------------------------------+  < interface to
    | term cntrls  | device cntrls  | mem cntrls              |    hardware
    | "terminal"   | disk & mouse   | physical mem.           |
    +---------------------------------------------------------+

Problem: the kernel has become too large, and thus hard to understand and hard to change (subtle interconnections).
Desired structure (modern microkernels): a layered approach (to better organize that amorphous blob called the kernel).
Six layers:
Key concept: much that is viewed as "better performance" is not based on technical decisions but rather on policy decisions. It is a policy decision to give interactive users better response time at the expense of more run-time overhead (due to more frequent context switching). Many OS design decisions are based on an idea of "fairness" but this is not a technical term -- it is a subjective human concept.
This is a much simplified presentation, but it should give a basic understanding of the type of system you are using at ODU. Basic system architecture:
Subnet A --+--------+--------+--------+--------+--------+--------+---- Ethernet
           |        |        |        |        |        |        |     Cable
           |        |        |        |        |        |        |
        +-----+  +-----+  +-----+  +-----+  +-----+  +-----+  +------+
        | WS 1|  | WS 2|  | WS 3|  | WS 4|  | WS 5|  | WS 6|  | File |
        |     |  |     |  |     |  |     |  |     |  |     |  |Server|-----+
        |     |  |     |  |     |  |     |  |     |  |     |  |      |     |
        +-----+  +-----+  +-----+  +-----+  +-----+  +-----+  +------+     |
           |                                                           +------+
           |                                                           | Disks|
           |                                                           |      |
           V                                                           +------+
   Access to other nets

Each workstation is a complete computer, with CPU and memory, but usually no disks (though some do have local disks for swap space and tmp space -- some programs need temporary file space). Each workstation "thinks" it has disk drives for the UNIX file system, but the disk drive controller (a piece of software that knows how to send and receive data from a disk) redirects all disk requests to the network, where they are picked up by the file server. The data are then sent to or retrieved from its disks, and the result is sent back to the requesting workstation, where the local disk controller tells the other parts of the OS running there that the disk transfer is complete. To the rest of the OS, whether the transfer went to a local disk or across the network to a file server (really just another workstation with real disk drives) is completely transparent.
Copyright ©2014, G. Hill Price