Virtual Memory
Objectives
· To describe the benefits of a virtual memory system
· To explain the concepts of demand paging, page-replacement algorithms, and allocation of page frames
· To discuss the principle of the working-set model
Background
Virtual memory – separation of user logical memory from physical memory:
· Only part of the program needs to be in memory for execution
· Logical address space can therefore be much larger than physical address space
· Allows address spaces to be shared by several processes
Virtual Memory Larger than Physical Memory
Shared Library Using Virtual Memory
Demand Paging
Bring a page into memory only when it is needed
Page Table When Some Pages Are Not in Main Memory
Page Fault
· If there is a reference to a page, the first reference to that page will trap to the operating system: page fault
· Get an empty frame
· Swap the page into the frame
· Reset the page table
· Set validation bit = v
· Restart the instruction that caused the page fault
Steps in Handling a Page Fault
Performance of Demand Paging
· Page Fault Rate: 0 ≤ p ≤ 1.0
o if p = 0, no page faults
o if p = 1, every reference is a fault
· Effective Access Time (EAT):
EAT = (1 – p) x memory access + p x (page fault overhead)
overhead = swap page out + swap page in + restart overhead
Example
· Memory access time = 200 nanoseconds
· Average page-fault overhead time = 8 milliseconds
· EAT = (1 – p) x 200 (nanoseconds) + p x 8 (milliseconds)
· If one access out of 1,000 causes a page fault, then
EAT = (0.999 x 200) + (0.001 x 8,000,000) ≈ 8,200 nanoseconds
· This is a slowdown by a factor of 40!!
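The arithmetic above can be checked mechanically; a minimal sketch (the function name is illustrative):

```c
/* Effective access time in nanoseconds, given the page-fault rate p,
 * the memory access time, and the page-fault overhead (both in ns). */
double eat_ns(double p, double mem_ns, double overhead_ns) {
    return (1.0 - p) * mem_ns + p * overhead_ns;
}
```

With p = 1/1000, a 200 ns memory access, and an 8 ms = 8,000,000 ns overhead, `eat_ns(0.001, 200.0, 8000000.0)` evaluates to 8,199.8 ns, i.e. roughly 8,200 ns and a slowdown of about a factor of 40.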
Copy-on-Write
· Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory. If either process modifies a shared page, only then is the page copied
· COW allows more efficient process creation, as only modified pages are copied
Before Process 1 Modifies Page C
After Process 1 Modifies Page C
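The before/after pictures can be mimicked with a toy reference-counted page. This is an illustrative sketch, not a real OS interface; all names are made up:

```c
#include <stdlib.h>
#include <string.h>

/* A toy copy-on-write page: contents plus a sharing count. */
typedef struct {
    int refcount;
    char data[64];
} CowPage;

typedef struct { CowPage *page; } Process;

/* fork-like sharing: the child simply points at the parent's page. */
void cow_share(Process *parent, Process *child) {
    child->page = parent->page;
    child->page->refcount++;
}

/* A write copies the page first only if someone else still shares it. */
void cow_write(Process *p, const char *text) {
    if (p->page->refcount > 1) {
        CowPage *copy = malloc(sizeof *copy);
        memcpy(copy, p->page, sizeof *copy);  /* duplicate contents */
        copy->refcount = 1;
        p->page->refcount--;                  /* detach from shared page */
        p->page = copy;
    }
    strncpy(p->page->data, text, sizeof p->page->data - 1);
    p->page->data[sizeof p->page->data - 1] = '\0';
}
```

Sharing a page and then writing from the child leaves the parent's copy untouched: the copy happens only at the first modification, as in the "After Process 1 Modifies Page C" picture.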
What happens if there is no free frame?
Page replacement – find some page in memory, but not really in use, and swap it out
Performance – want an algorithm that will result in the minimum number of page faults
Basic Page Replacement
· Find the location of the desired page on disk
· Find a free frame:
- If there is a free frame, use it
- If there is no free frame, use a page-replacement algorithm to select a victim frame
· Bring the desired page into the (newly) free frame; update the page and frame tables
· Restart the process
Page Replacement Algorithms
ü We want to have the lowest page-fault rate.
ü Evaluate algorithms by running them on a particular string of memory references and computing the number of page faults on that string.
Algorithms:
· FIFO: First-In-First-Out
· Optimal: replace the page that will not be used for the longest time in the future
· LRU: Least Recently Used
· LFU: Least Frequently Used
· MFU: Most Frequently Used
FIFO Page Replacement Example (# of page faults = 15)
FIFO Illustrating Belady’s Anomaly
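Both FIFO slides can be reproduced with a short simulator. The 15-fault figure assumes the usual textbook reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames (the figure itself is not reproduced here), and Belady’s anomaly shows up on the string 1,2,3,4,1,2,5,1,2,3,4,5, where adding a fourth frame raises the fault count from 9 to 10:

```c
/* Count page faults under FIFO replacement for a reference string. */
int fifo_faults(const int *refs, int n, int nframes) {
    int frames[16];                    /* assumes nframes <= 16 */
    int used = 0, oldest = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) {
            frames[used++] = refs[i];      /* a free frame is available */
        } else {
            frames[oldest] = refs[i];      /* evict the oldest arrival */
            oldest = (oldest + 1) % nframes;
        }
    }
    return faults;
}
```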
Optimal Page Replacement Example (# of page faults = 9)
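The optimal policy can be simulated by scanning the remainder of the reference string for each resident page; the 9-fault figure again assumes the textbook string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames:

```c
/* Count faults under the optimal (Belady) policy: on a fault with no
 * free frame, evict the resident page whose next use is farthest in
 * the future (or that is never used again). */
int opt_faults(const int *refs, int n, int nframes) {
    int frames[16];                    /* assumes nframes <= 16 */
    int used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) { frames[used++] = refs[i]; continue; }
        int victim = 0, farthest = -1;
        for (int f = 0; f < used; f++) {
            int next = n;              /* n means "never referenced again" */
            for (int j = i + 1; j < n; j++)
                if (refs[j] == frames[f]) { next = j; break; }
            if (next > farthest) { farthest = next; victim = f; }
        }
        frames[victim] = refs[i];
    }
    return faults;
}
```

Optimal is useful as a lower bound for comparing the other algorithms, even though it cannot be implemented (it needs future knowledge).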
LRU Page Replacement Example (# of page faults = 12)
Stack implementation of LRU – keep a stack of page numbers:
· Page referenced: move it to the top
· No search for replacement; just replace the page at the bottom of the stack.
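The stack scheme can be sketched with an array whose slot 0 is the bottom: a hit moves the page to the top, and eviction always takes the bottom, with no search. The 12-fault figure assumes the same textbook reference string as the FIFO example, with 3 frames:

```c
/* LRU faults via the stack scheme: slot 0 is the bottom (least
 * recently used), slot top-1 is the most recently used page. */
int lru_faults(const int *refs, int n, int nframes) {
    int stack[16];                     /* assumes nframes <= 16 */
    int top = 0, faults = 0;           /* top = number of resident pages */
    for (int i = 0; i < n; i++) {
        int pos = -1;
        for (int s = 0; s < top; s++)
            if (stack[s] == refs[i]) { pos = s; break; }
        if (pos >= 0) {                /* hit: move the page to the top */
            int page = stack[pos];
            for (int s = pos; s < top - 1; s++) stack[s] = stack[s + 1];
            stack[top - 1] = page;
            continue;
        }
        faults++;
        if (top == nframes) {          /* full: drop the bottom, no search */
            for (int s = 0; s < top - 1; s++) stack[s] = stack[s + 1];
            top--;
        }
        stack[top++] = refs[i];
    }
    return faults;
}
```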
Counting Algorithms
Keep a counter of the number of references that have been made to each page.
· LFU Algorithm: replaces the page with the smallest count
· MFU Algorithm: replaces the page with the largest count, based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
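A minimal LFU sketch following the description above. The counts persist across the whole run, and the tie-break of evicting the first smallest-count slot is an assumption (a real implementation might break ties by age):

```c
#define MAXPAGE 64

/* LFU: on a fault with no free frame, evict the resident page with
 * the smallest reference count. */
int lfu_faults(const int *refs, int n, int nframes) {
    int frames[16], used = 0, faults = 0;   /* assumes nframes <= 16 */
    int count[MAXPAGE] = {0};               /* per-page reference counts */
    for (int i = 0; i < n; i++) {
        int page = refs[i];
        count[page]++;
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frames[f] == page) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) { frames[used++] = page; continue; }
        int victim = 0;                     /* slot with the smallest count */
        for (int f = 1; f < used; f++)
            if (count[frames[f]] < count[frames[victim]]) victim = f;
        frames[victim] = page;
    }
    return faults;
}
```

MFU would be the same loop with the comparison reversed.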
Frame Allocation
· Each process needs a minimum number of pages
· Two major allocation schemes:
o fixed equal allocation
o priority allocation
Priority Allocation
· Use a proportional allocation scheme using priorities rather than size
· If process Pi generates a page fault, select for replacement a frame from a process with a lower priority number
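The priority-based proportional scheme amounts to giving process Pi about (priority_i / Σ priority_j) x m of the m available frames. A sketch; the helper name, the minimum-frames guarantee, and the leftover-frame rule are illustrative assumptions:

```c
/* Allocate m frames to nproc processes in proportion to priority,
 * guaranteeing each process a minimum number of frames. */
void allocate_frames(const int *priority, int nproc, int m,
                     int min_frames, int *alloc) {
    int total = 0;
    for (int i = 0; i < nproc; i++) total += priority[i];
    int given = 0;
    for (int i = 0; i < nproc; i++) {
        alloc[i] = (priority[i] * m) / total;   /* proportional share */
        if (alloc[i] < min_frames) alloc[i] = min_frames;
        given += alloc[i];
    }
    /* hand any frames left by integer rounding to the highest priority */
    int best = 0;
    for (int i = 1; i < nproc; i++)
        if (priority[i] > priority[best]) best = i;
    alloc[best] += m - given;
}
```

For example, with priorities 1, 2, 5 and m = 64 frames, the shares come out as 8, 16, and 40 frames.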
Global vs. Local Allocation
· Global replacement – a process selects a replacement frame from the set of all frames; one process can take a frame from another
· Local replacement – each process selects from only its own set of allocated frames
Thrashing
· If a process does not have “enough” pages, the page-fault rate is very high. This leads to:
o low CPU utilization
o the operating system thinking that it needs to increase the degree of multiprogramming
o another process being added to the system
· Thrashing ≡ a process is busy swapping pages in and out
Demand Paging and Thrashing
· Why does demand paging work?
Locality model:
§ Process migrates from one locality to another
§ Localities may overlap
· Why does thrashing occur?
Σ (size of localities) > total memory size
Working-Set Model
· Δ ≡ working-set window ≡ a fixed number of page references
Example: 10,000 instructions
· WSSi (working set of process Pi) = total number of pages referenced in the most recent Δ (varies in time)
o if Δ too small, it will not encompass the entire locality
o if Δ too large, it will encompass several localities
o if Δ = ∞ ⇒ it will encompass the entire program
· D = Σ WSSi ≡ total demand frames
· if D > m (the total number of frames) ⇒ thrashing
· Policy: if D > m, then suspend one of the processes
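WSSi follows directly from the definition: count the distinct pages among the most recent Δ references. A minimal sketch (the reference string used in the test is made up for illustration):

```c
/* Working-set size at time t: the number of distinct pages among the
 * most recent delta references, refs[t-delta+1 .. t]. */
int wss(const int *refs, int t, int delta) {
    int start = t - delta + 1;
    if (start < 0) start = 0;          /* fewer than delta refs so far */
    int distinct = 0;
    for (int i = start; i <= t; i++) {
        int seen = 0;
        for (int j = start; j < i; j++)
            if (refs[j] == refs[i]) { seen = 1; break; }
        if (!seen) distinct++;         /* first occurrence in the window */
    }
    return distinct;
}
```

Summing this value over all processes gives D, the total demand for frames, which the policy above compares against m.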
Page-Fault Frequency Scheme
· Establish an “acceptable” page-fault rate
ü If the actual rate is too low, the process loses frames
ü If the actual rate is too high, the process gains frames
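The scheme is essentially a feedback loop on the per-process frame count; a minimal sketch with made-up thresholds (no real OS uses exactly these numbers or this one-frame step):

```c
/* Adjust a process's frame allocation from its measured fault rate:
 * below the lower bound it has frames to spare; above the upper bound
 * it needs more. Thresholds and step size are illustrative. */
int adjust_frames(int frames, double fault_rate,
                  double lower, double upper) {
    if (fault_rate < lower && frames > 1)
        return frames - 1;      /* rate too low: take a frame away */
    if (fault_rate > upper)
        return frames + 1;      /* rate too high: grant another frame */
    return frames;              /* within the acceptable band */
}
```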
Other Issues
Ø Prepaging
· To reduce the large number of page faults that occur at process startup
· Prepage all or some of the pages a process will need, before they are referenced
Ø Page Size
Page size selection must take into consideration:
· fragmentation
· table size
· I/O overhead
· locality
Ø Program structure
int data[128][128];
Each row is stored in one page.
Program 1:
for (j = 0; j < 128; j++)
    for (i = 0; i < 128; i++)
        data[i][j] = 0;
128 x 128 = 16,384 page faults (the inner loop walks down a column, so successive accesses touch different rows, hence different pages)
Program 2:
for (i = 0; i < 128; i++)
    for (j = 0; j < 128; j++)
        data[i][j] = 0;
128 page faults (each row's page is faulted in once and then filled)
Ø I/O Interlock
Pages must sometimes be locked into memory.
Consider I/O – pages that are used for copying a file from a device must be locked from being selected for eviction by a page-replacement algorithm.