1) Programs can be bigger than physical memory, since only a portion of a program needs to be in physical memory at any time.
2) A higher degree of multiprogramming is possible, since only portions of programs are in memory.
Operating System's goals are to make virtual memory efficient and transparent to the user.
Demand paging is a common way for OSs to implement virtual memory. Demand paging ("lazy pager") only brings a page into physical memory when it is needed. A "Loaded bit" is used in a page table entry to indicate if the page is in memory or only on disk.
A page fault occurs when the CPU generates a logical address for a page that is not in physical memory. The MMU will cause a page-fault trap (interrupt) to the OS.
1) Check page table to see if the page is valid (exists in logical address space). If it is invalid, terminate the process; otherwise continue.
2) Find a free frame in physical memory (take one from the free-frame list or replace a page currently in memory).
3) Schedule a disk read operation to bring the page into the free page frame. (We might first need to schedule a previous disk write operation to update the virtual memory copy of a "dirty" page that we are replacing.)
4) Since disk operations are extremely slow, the OS context switches to another ready process selected from the ready queue.
5) After the disk (a DMA device) reads the page into memory, it raises an I/O completion interrupt. The OS then updates the PCB and page table for the process to indicate that the page is now in memory and the process is ready to run.
6) When the process is selected by the short-term scheduler to run, it repeats the instruction that caused the page fault. The memory reference that caused the page fault will now succeed.
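The six steps above can be sketched in Python. All names here (`handle_page_fault`, the dictionary-based page table, the `Disk` stub) are hypothetical illustrations, not real kernel interfaces:

```python
class Disk:
    """Stand-in for the paging device; a real read is a DMA transfer."""
    def read(self, page, frame):
        pass  # pretend the page's contents now occupy `frame`

def handle_page_fault(page_table, page, free_frames, disk):
    entry = page_table.get(page)
    # 1) Invalid reference (page not in the logical address space)?
    if entry is None:
        raise MemoryError("invalid page: terminate the process")
    # 2) Find a free frame (assume the free-frame list is non-empty here;
    #    otherwise a page-replacement algorithm would pick a victim).
    frame = free_frames.pop()
    # 3) Read the page from disk into the frame. (Steps 4-5: the real OS
    #    context switches away and finishes below on the I/O interrupt.)
    disk.read(page, frame)
    # 5) Mark the page as loaded so the retried reference succeeds.
    entry["loaded"] = True
    entry["frame"] = frame
    # 6) The faulting instruction is restarted by the hardware.
    return frame
```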
Performance of Demand Paging
To achieve acceptable performance degradation (5-10%) of our virtual memory, we must have a very low page fault rate (probability that a page fault will occur on a memory reference).
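A back-of-the-envelope calculation shows why. With an illustrative 100 ns memory access time and an 8 ms page-fault service time (these numbers are assumptions, not from the notes), the effective access time is dominated by the fault term unless the fault rate p is tiny:

```python
def effective_access_time(p, mem_ns=100, fault_ns=8_000_000):
    """EAT = (1 - p) * memory access time + p * page-fault service time.
    The 100 ns / 8 ms figures are illustrative defaults."""
    return (1 - p) * mem_ns + p * fault_ns
```

With these numbers, keeping degradation under 10% (EAT at most 110 ns) requires p to be about 1.25e-6, i.e., fewer than one page fault per roughly 800,000 memory references.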
When does a CPU perform a memory reference?
1) Where is the page table located?
If it is in memory, then each memory reference in the program results in two memory accesses: one for the page-table entry, and another to perform the desired memory access.
Solution: TLB (Translation-lookaside Buffer) - small, fully-associative cache to hold PT entries
Ideally, when the CPU generates a memory reference, the PT entry is found in the TLB, the page is in memory, and the block with the page is in the cache, so NO memory accesses are needed.
However, each CPU memory reference involves two cache lookups and these cache lookups must be done sequentially, i.e., first check TLB to get physical frame # used to build the physical address, then use the physical address to check the tag of the L1 cache.
Alternatively, the L1 cache can contain virtual addresses. This allows the TLB and cache access to be done in parallel. If the cache hits, the result of the TLB is not used. If the cache misses, then the address translation is under way and used by the L2 cache.
2) Ways to handle large page tables:
Page table for each process can be large
e.g., 32-bit address, 4 KB (2^12-byte) pages, byte-addressable memory, 4-byte PT entry
1 M (2^20) page table entries, or 4 MB for the whole page table with 4-byte page table entries
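The arithmetic can be checked directly:

```python
# Page-table size for the 32-bit example above.
page_offset_bits = 12                 # 4 KB = 2^12-byte pages
vpn_bits = 32 - page_offset_bits      # 20 bits of virtual page number
entries = 2 ** vpn_bits               # 2^20 = 1 M page-table entries
table_bytes = entries * 4             # 4-byte entries -> 4 MB per process
```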
a) two-level page table - the first level (the "directory") acts as an index into the page table which is scattered across several pages. Consider a 32-bit example with 4KB pages and 4 byte page table entries.
b) inverted page table - use a hash table of what's actually in the physical memory frames to reduce the size of the necessary data structures
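For the two-level scheme in (a), the 20-bit virtual page number is split 10/10, since one 4 KB page holds exactly 1024 four-byte page-table entries. A sketch (the function name is made up for illustration):

```python
def split_va(va):
    """Split a 32-bit virtual address for a two-level page table."""
    offset = va & 0xFFF                # low 12 bits: byte within the page
    table_index = (va >> 12) & 0x3FF   # next 10 bits: entry within a PT page
    dir_index = (va >> 22) & 0x3FF     # top 10 bits: entry in the directory
    return dir_index, table_index, offset
```

For example, `split_va(0x00403004)` yields directory entry 1, table entry 3, and offset 4.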
Design issues for Paging Systems
1) Want as many (partial) processes in memory (high degree of multiprogramming) as possible for better CPU & I/O utilization ==> allocate as few page frames as possible to each process
2) Want as low a page-fault rate as possible ==> allocate enough page frames to hold all of a process's current working set (which is dynamic, since a process changes locality)
Thrashing occurs when processes spend more time waiting on page faults than doing useful work.
Operating systems need to have
1) frame-allocation algorithm to decide how many frames to allocate to each process
2) page-replacement algorithm to select a page to be removed from memory in order to free up a page frame on a page fault
Page-Replacement Algorithm - selects page to replace when a page fault occurs
Reasonable page-replacement algorithms must
1) not be too expensive to implement w.r.t. time, hardware, or memory
2) have good phase transition - "forget"/replace pages of the old locality of reference when the program moves on to a new locality of reference.
Possible Page-Replacement Algorithms:
B. Optimal Page Replacement Algorithm - select page that's not needed for the longest time in the future (impossible to actually implement)
Example: Allocated = 3 frames. On a 12-reference trace, the optimal algorithm incurs 7 page faults, so Page fault rate = 7/12.
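Although the optimal policy cannot be implemented online, it can be simulated after the fact once the whole reference string is known. A sketch:

```python
def opt_faults(refs, frames):
    """Count page faults under the optimal (Belady) policy:
    on a fault with memory full, evict the resident page whose
    next use lies farthest in the future (or never recurs)."""
    mem, faults = [], 0
    for i, page in enumerate(refs):
        if page in mem:
            continue                      # hit: nothing to do
        faults += 1
        if len(mem) < frames:
            mem.append(page)              # still a free frame
        else:
            future = refs[i + 1:]
            victim = max(mem, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
            mem[mem.index(victim)] = page
    return faults
```

On the trace 1,2,3,4,1,2,5,1,2,3,4,5 with 3 frames this yields 7 faults, which is the minimum any policy can achieve on that trace.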
C. LRU (Least Recently Used) - uses principle of locality to approximate the optimal algorithm
Implementation of LRU Algorithm
What information would we need to keep track of to implement LRU?
1) a counter in each page-table entry indicating the last time-of-use, or
2) stack - (like above tables) indicating the order of usage.
Either of these faithful implementations of LRU would be expensive! Both would require a substantial amount of work on every memory access.
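The stack approach is easy to simulate in software, where we only reorder on simulated references rather than on every real memory access. An `OrderedDict` plays the role of the usage stack:

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under true LRU: the ordering of `mem`
    is updated on every reference, most-recently-used last."""
    mem, faults = OrderedDict(), 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)        # hit: becomes most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict least recently used
            mem[page] = True
    return faults
```

This per-reference reordering is exactly the work that makes a faithful hardware LRU impractical.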
Approximations to LRU - that are less expensive to implement
Usually done in software with little hardware support, except for reference (R) bits that are maintained by hardware for each entry in the page tables. (The R-bits are maintained in the TLB)
A. Counter/History Bits - approximate time stamp
Periodically, say every 20 milliseconds, a timer interrupt lets the OS shift each page's R-bit into its counter/history bits.
On a page fault, select the page with the smallest counter/history value to replace. For example, consider the following collection of counter/history bits.
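This scheme is often called "aging". A sketch with 8-bit counters; the dictionaries standing in for per-page counters and hardware R-bits are hypothetical:

```python
def age_tick(counters, r_bits):
    """Per clock tick: shift each page's counter right one bit and
    put its reference bit into the top bit, then clear the R-bit
    (as the hardware/OS would). Recently used pages end up with
    the largest counter values."""
    for page in counters:
        counters[page] = (counters[page] >> 1) | (r_bits.get(page, 0) << 7)
        r_bits[page] = 0
    return counters

def victim(counters):
    """On a page fault, replace the page with the smallest counter."""
    return min(counters, key=counters.get)
```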
B. Second-Chance/Clock Replacement - only store one counter/history bit per page-table entry
For the pages in memory, maintain a circular FIFO queue of pages
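The clock sweep can be sketched as follows (list-of-pages and R-bit dictionary are illustrative stand-ins for the real frame table):

```python
def clock_victim(pages, r_bits, hand):
    """Sweep the circular FIFO queue from `hand`. A page with R=1
    gets its bit cleared and a second chance; the first page found
    with R=0 is the victim. Returns (victim, new hand position)."""
    while True:
        page = pages[hand]
        if r_bits[page]:
            r_bits[page] = 0                  # second chance
            hand = (hand + 1) % len(pages)
        else:
            return page, (hand + 1) % len(pages)
```

In the worst case (every R-bit set) the hand sweeps the whole circle once, clearing bits, and the algorithm degenerates to FIFO.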
C. Not Recently Used (NRU) / Enhanced Second-Chance Algorithm - in addition to a single reference bit, use the modified/dirty bit to decide which page to replace.
On a page fault, pages in memory fall into the following four categories based on their reference and modify bits.
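Encoding each page's (R, M) pair as a class number 0-3, victim selection is just a minimum over classes (a sketch with a hypothetical page-to-bits mapping):

```python
def nru_victim(pages):
    """pages maps page name -> (r_bit, m_bit).
    class 0: (0,0) not referenced, clean  - best victim
    class 1: (0,1) not referenced, dirty
    class 2: (1,0) referenced, clean
    class 3: (1,1) referenced, dirty      - worst victim
    Replace a page from the lowest non-empty class."""
    return min(pages, key=lambda p: pages[p][0] * 2 + pages[p][1])
```

Preferring class 1 over class 2 expresses a judgment: better to pay a disk write for a page that is out of the locality than to evict a page still being referenced.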
Frame-Allocation Algorithms - to decide how many frames to allocate to each process
Goal: Allocate enough frames to keep all pages used in the current locality of references without allocating too many page frames.
The OS might maintain a pool of free frames in a free-frame list.
1) local page replacement - When process A page faults, consider only replacing pages allocated to process A. Processes are allocated a fixed fraction of the page frames.
2) global-page replacement - When a page fault occurs, consider replacing pages from any process in memory. Page frames are dynamically allocated among processes, i.e., # of page frames of a process varies.
Advantage: as a process' working set grows and shrinks it can be allocated more or less page frames accordingly.
Disadvantage: a process might be replacing a page in the locality of reference for another process. Thrashing is more of a problem in systems that allow global-page replacement.
Implementation of global-page replacement
a) Page-Fault Frequency - OS monitors page-fault rate of each process to decide if it has too many or not enough page frames allocated
Additionally, free frames could be allocated from the free-frame list or taken from processes that have too many page frames. If the free-frame list is empty and no process can spare any frames, then a process might be swapped out.
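A page-fault-frequency controller can be sketched as follows; the upper/lower thresholds are illustrative, not standard values:

```python
def adjust_frames(fault_rate, frames, upper=0.05, lower=0.01):
    """Compare a process's measured page-fault rate against two
    bounds and adjust its frame allocation accordingly."""
    if fault_rate > upper:
        return frames + 1          # thrashing risk: grant another frame
    if fault_rate < lower:
        return max(1, frames - 1)  # over-allocated: reclaim a frame
    return frames                  # within bounds: leave allocation alone
```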