CS 771/871 Operating Systems
Workstation Model
- Set of workstations connected on a LAN
- Each workstation has an owner (permanent or shared)
- Secondary storage (figure 4-11)
  - Diskless (cheaper, symmetric, simpler software installation, quieter)
  - Local disks
    - Paging and temp files
    - Paging and temp files, plus binaries
    - Paging and temp files, binaries, plus file caching
    - Complete local file system (PC model)
Problems:
- Bursty nature of some computations
- Mobile computing
Utilizing Idle Workstations
RSH: spawn a process on another (idle) machine
- Finding an idle machine (and defining "idle"): location transparency
- Consistent process environment: context transparency
- Contention for CPU cycles: performance transparency
Find Idle Machines
Server-driven algorithms:
- Idle workstations register with a central registry
- Or broadcast their idleness, and others record it (distributed)
Acquiring an idle processor should be mutually exclusive.
Client-driven algorithms:
- Broadcasts need for processing power
- Negotiates functionality and performance
- A processor could respond with a delay proportional to its load.
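A minimal sketch of the server-driven registry in Python (class and method names are illustrative, and a thread lock stands in for whatever serialization the real registry service would use): `register` adds an idle workstation, and `claim` atomically removes one so two clients can never acquire the same machine.

```python
import threading

class Registry:
    """Central registry of idle workstations (illustrative sketch)."""

    def __init__(self):
        self._idle = set()
        self._lock = threading.Lock()

    def register(self, workstation):
        """An idle workstation checks itself in."""
        with self._lock:
            self._idle.add(workstation)

    def claim(self):
        """Atomically remove and return an idle workstation, or None.

        Holding the lock across test-and-remove is what makes
        acquisition mutually exclusive."""
        with self._lock:
            return self._idle.pop() if self._idle else None
```

In a real system the registry would be a network service and the lock would be replaced by serialization at the server, but the invariant is the same: test-and-remove must be atomic.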
Proper Processor Context
- Same file system view
- Same environment variables
- Same working directory
- Clock
- Remote I/O to the user
- How to use the local disk (if present)
Return of Owner
- Do nothing (owner perceives a slower machine)
- Kill the intruding process
  - file system inconsistencies possible
- Migrate the process to a new machine
  - including state info kept in the kernel
  - forward RPC replies/requests
  - migrate child processes
  - temp files
Processor Pool Model
Compute servers accessed through graphics terminals.
Processors are no longer "owned".
It is easier to add 20% more processors to the pool than to upgrade all workstations by 20%.
Pooling resources is more efficient, as shown by queueing analysis:
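The queueing result hinted at here is presumably the standard M/M/1 one: with Poisson arrivals at rate λ and service rate μ, the mean response time is T = 1/(μ − λ). Replacing n separate machines by one machine n times as fast, serving the combined arrival stream, divides T by n. A quick numeric check (the rates below are made-up values):

```python
# Mean response time of an M/M/1 queue: T = 1 / (mu - lam), valid for lam < mu.
def mm1_response(mu, lam):
    return 1.0 / (mu - lam)

n, mu, lam = 10, 5.0, 4.0               # 10 machines, each 80% utilized (assumed)
separate = mm1_response(mu, lam)        # each isolated machine: T = 1/(5-4) = 1.0
pooled = mm1_response(n * mu, n * lam)  # one machine 10x faster, 10x the arrivals
print(separate, pooled)                 # → 1.0 0.1 -- pooling wins by a factor of n
```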
Other Factors
The above result is very general and argues for centralization.
Processors are only idle if there is insufficient work in toto.
However:
- Some things don't scale well
(air conditioning, raw speed)
- Reliability and fault
tolerance
- Uniform delay (lack of jitter)
may be more important than absolute delay
- Limited inherent parallelism
Hybrid models: sufficient personal processing power for common tasks, with a processor pool for rarer, bursty computation tasks.
Processor Allocation Models
- Assume identical machines or homogeneous pools
- Migratory vs non-migratory strategies
- Objective function (what to optimize):
  - CPU utilization
  - response time
  - response ratio
  - network traffic
  - file caching
Processor Allocation: Design Issues
- Deterministic vs heuristic
- Centralized vs distributed
- Optimal vs sub-optimal
- Local vs global (transfer
policy)
- Sender vs receiver initiated
(location policy)
Processor Allocation: Implementation Issues
Performance depends on:
- Meaningful metrics (what is
load?)
- Global state dissemination
- Timeliness of global state
information
- Cost of migration
- Complexity of allocation
algorithm
- Performance of allocation
algorithm
- Stability
Graph Theoretic Deterministic Algorithm
Assumes:
- Number of processes fixed
- CPU and memory requirements
known a priori
- Average traffic between
processors known
- Number of processes greater
than number of processors
Create weighted graph with
processes as nodes and traffic between processes as
(weighted) edges.
Cut the graph into "n" partitions (one for each processor) so as to minimize the total weight of edges crossing between partitions (traffic between processors) while maintaining constraints on intra-processor resource usage.
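For tiny instances the minimum cut over balanced partitions can be found by brute force. A sketch in Python (the six-process edge set here is invented for illustration, not the fig 4-17 data):

```python
from itertools import combinations

def cut_weight(edges, assign):
    """Total weight of edges whose endpoints sit on different processors."""
    return sum(w for (a, b), w in edges.items() if assign[a] != assign[b])

def best_bipartition(nodes, edges, capacity):
    """Exhaustively try every split into two groups of `capacity` processes,
    keeping the split with the least inter-processor traffic."""
    best = None
    for group in combinations(nodes, capacity):
        assign = {n: (0 if n in group else 1) for n in nodes}
        w = cut_weight(edges, assign)
        if best is None or w < best[0]:
            best = (w, set(group))
    return best

nodes = list("ABCDEF")
edges = {("A", "B"): 3, ("B", "C"): 2, ("C", "D"): 8,
         ("D", "E"): 4, ("E", "F"): 5, ("A", "F"): 1, ("B", "E"): 2}
w, group = best_bipartition(nodes, edges, 3)
print(w, sorted(group))   # → 9 ['A', 'B', 'F']
```

Real allocators need heuristics (e.g. greedy merging along the heaviest edges), since the number of balanced partitions grows combinatorially with the number of processes.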
Graph Example
Using data for fig 4-17 (traffic between processes; blank = no traffic):

|   | A | B | C | D | E | F | G | H | I |
| A |   | 3 |   |   | 2 |   | 6 |   |   |
| B | 3 |   | 2 |   | 2 | 1 |   |   |   |
| C |   | 2 |   | 3 |   | 8 |   |   | 5 |
| D |   |   | 3 |   |   |   |   |   | 4 |
| E | 2 | 2 |   |   |   |   | 3 | 4 |   |
| F |   | 1 | 8 |   |   |   |   | 1 | 5 |
| G | 6 |   |   |   | 3 |   |   | 4 |   |
| H |   |   |   |   | 4 | 1 | 4 |   | 2 |
| I |   |   | 5 | 4 |   | 5 |   | 2 |   |
Sorting edges by weight:

| CF | AG | CI | FI | DI | EH | GH | AB | CD | EG | BC | AE | BE | HI | BF | FH |
| 8  | 6  | 5  | 5  | 4  | 4  | 4  | 3  | 3  | 3  | 2  | 2  | 2  | 2  | 1  | 1  |
Homework: What algorithms could you develop, subject to what constraints?
Up-Down Algorithm
Essentially a priority queue with dynamically set priorities based on needs and usage, with some hysteresis.
The coordinator keeps a usage table with one entry per user (workstation):
- Initially 0
- User accumulates penalty points for using other workstations (a function of time): punish gluttons
- Penalty points are removed while waiting on an unsatisfied request: reward patience
- Penalty points are removed over time after usage stops (history): time forgives gluttons.
- Unsatisfied requests are
queued in inverse order of penalty points.
Simulations show good performance over a variety of loads.
New users are favored over old.
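A sketch of the usage table in Python; the point values and the decision to let scores go negative are assumptions, since the notes give only the qualitative rules:

```python
class UpDown:
    """Coordinator usage table: one penalty-point entry per workstation."""
    USAGE_PENALTY = 2   # per tick while using another workstation (assumed value)
    WAIT_CREDIT = 1     # per tick spent waiting on an unsatisfied request
    DECAY = 1           # per idle tick after usage stops: time forgives gluttons

    def __init__(self, stations):
        self.points = {s: 0 for s in stations}   # initially 0

    def tick(self, using, waiting):
        """One accounting step; `using` and `waiting` are sets of workstations."""
        for s in self.points:
            if s in using:
                self.points[s] += self.USAGE_PENALTY   # punish gluttons
            elif s in waiting:
                self.points[s] -= self.WAIT_CREDIT     # reward patience
            else:
                self.points[s] -= self.DECAY           # history decays

    def next_to_serve(self, waiting):
        """Unsatisfied requests are served in inverse order of penalty points."""
        return min(waiting, key=lambda s: self.points[s])
```

A workstation that has been waiting (low score) beats one that has been consuming cycles elsewhere (high score), which is the hysteresis the notes describe.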
Hierarchical Algorithms
Distributed centralized:
- Divide processors into work groups
- Each group has a coordinator
- Work groups can be organized into larger work groups, ad infinitum
- The top level should be replicated (a committee)
- Requests for processors go to the immediate coordinator
- If not satisfiable, bumped up the hierarchy
MICROS algorithm
- A job requests all the processors it needs at the beginning (S)
- Coordinators keep track of the available resources under their control
- Allocate R >= S processors to the job if available
- If not available, propagate the request upwards
- If an upper-level coordinator believes it can satisfy the request, it splits it and propagates the pieces downward
- If the estimate is wrong and insufficient processors are available, start over.
Complicated by:
- simultaneous requests at different points in the hierarchy
- out-of-date information
- deadlocks
- race conditions
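The bubble-up behavior can be sketched with a small coordinator tree (class names and the bookkeeping are illustrative; a real MICROS coordinator works from cached, possibly stale estimates rather than exact counts):

```python
class Coordinator:
    """Node in the allocation hierarchy; leaves manage processors directly."""

    def __init__(self, children=None, free=0):
        self.children = children or []
        self.free = free                 # free processors managed directly
        self.parent = None
        for c in self.children:
            c.parent = self

    def available(self):
        """Free processors anywhere in this coordinator's subtree."""
        return self.free + sum(c.available() for c in self.children)

    def request(self, s):
        """Bump the request up the hierarchy until some ancestor can satisfy it."""
        if self.available() >= s:
            return self.allocate(s)      # split and propagate downward
        if self.parent is not None:
            return self.parent.request(s)
        return None                      # insufficient processors in toto

    def allocate(self, s):
        """Take up to s processors from this subtree; returns the number taken."""
        taken = min(self.free, s)
        self.free -= taken
        for c in self.children:
            if taken == s:
                break
            taken += c.allocate(s - taken)
        return taken
```

For example, a leaf holding 3 free processors that receives a request for 5 bumps it to its parent, which splits the allocation across siblings.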
Client vs Server Initiated
Client requests when needed:
- either polls or broadcasts
- What about global state? (the system has mean load X; how does my load compare to X?)
- If none found after suitable effort, do the work myself
- Under high loads, lots of futile requests
Server announces availability:
- either polls or broadcasts
- What is the definition of idle? (lightly loaded in a busy network)
- Network traffic devoted to balancing is lighter under heavy loads
Could build a hybrid, plus keep some memory of past history.
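A sender-initiated sketch with a probe limit (the threshold and limit are assumed values): an overloaded client probes a few machines and then gives up and does the work itself, rather than flooding a busy network with futile requests.

```python
import random

THRESHOLD = 2     # queue length above which a machine counts as loaded (assumed)
PROBE_LIMIT = 3   # stop after this many futile probes: do the work myself

def find_target(my_load, loads):
    """Return a lightly loaded machine to offload to, or None to run locally."""
    if my_load <= THRESHOLD:
        return None                         # not overloaded: no need to offload
    candidates = list(loads)
    random.shuffle(candidates)              # probe in random order
    for machine in candidates[:PROBE_LIMIT]:
        if loads[machine] < THRESHOLD:
            return machine
    return None                             # futile under high load: run locally
```

The probe limit is what keeps the scheme stable: under system-wide overload every probe fails, and capping them bounds the wasted network traffic.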
Bidding Algorithms
Uses a market model of supply and demand (efficient-market theory says it will reach an optimal equilibrium given instantaneous communication of state).
- Servers advertise services and
price
- Clients (customers) shop
around for best price giving required service
- Actual price is negotiated in
a bidding process to balance supply/demand
Can greed be programmed in?
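One round of the market can be sketched as a greedy matching (the pricing and matching rules here are assumptions; real bidding schemes iterate toward equilibrium): clients with the largest budgets choose first, each taking the cheapest server within its budget, and each processor is sold once.

```python
def match(bids, asks):
    """bids: client -> max price it will pay; asks: server -> asking price."""
    asks = dict(asks)                    # copy: each processor is sold once
    allocation = {}
    for client, limit in sorted(bids.items(), key=lambda kv: -kv[1]):
        affordable = [(price, server) for server, price in asks.items()
                      if price <= limit]
        if affordable:
            price, server = min(affordable)          # cheapest adequate server
            allocation[client] = (server, price)
            del asks[server]
    return allocation

print(match({"c1": 10, "c2": 4}, {"s1": 3, "s2": 8}))   # → {'c1': ('s1', 3)}
```

Note the greed in the sketch: the rich client takes the cheap processor, pricing the poor client out entirely.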
Scheduling Process Sets
Question: Assume a job requires "n" independent tasks. Argue why there would be no advantage to time slicing on a uniprocessor. Is this still true for a distributed system?
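One way to see the uniprocessor argument is to compare mean completion times for n equal tasks of length L, ignoring context-switch overhead (a toy calculation, not from the notes):

```python
def run_to_completion(n, L):
    """Tasks finish at L, 2L, ..., nL; mean completion time is (n + 1) * L / 2."""
    return sum(L * i for i in range(1, n + 1)) / n

def round_robin(n, L):
    """With fine-grained time slicing every task finishes together at n * L."""
    return n * L

print(run_to_completion(4, 1), round_robin(4, 1))   # → 2.5 4
```

Time slicing cannot reduce the total work, so it only delays completions (and in practice adds switching overhead).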
Co-scheduling a set of jobs that communicate can speed up a process set.
Copyright Chris Wild 1996.
For problems or questions regarding this web contact [Dr. Wild].
Last updated: September 26, 1996.