FALL 2000
Lecture Notes 10:
MEMORY MANAGEMENT 2
PAGED MEMORY ALLOCATION
This memory management technique allows processes to be stored non-contiguously in memory by dividing each incoming job into pages of equal size. The corresponding sections of memory are called page frames. The scheme works best when the disk's divisions are the same size as the page frames.
Pages do not have to be loaded in adjacent memory blocks. The advantages are:
- Empty page frames can be used by any page of any job.
- There is no external fragmentation, and only a little internal fragmentation on each job's last page.
However, the size and complexity of the overhead increase because the OS has to keep track of every job's pages.
Assume that the page size is 100 lines of instructions. If a program is 350 lines long, it will use four pages; the last page will not be fully used.
The Memory Manager uses three tables to keep track of page utilization:
- The Job Table, which contains two entries for each active job: the size of the job and the memory location where its Page Map Table is stored.
- The Page Map Table, which contains vital information for each page: the page number and its corresponding page frame memory address.
- The Memory Map Table, which has one entry for each page frame, listing its memory location and its status.
At compilation time, every job is divided up into pages. In the 350-line example above, the job occupies four pages, the last one not completely full.
To locate a line within a job, we need its page # and offset (or displacement). Page numbering starts at 0, so the first page contains lines 0 through 99. The offset of a line is its distance from the beginning of its page. For example, line 214 of the 350-line job above is on page 2, displacement 14.
(214 / 100 gives quotient 2 = page number, remainder 14 = offset.)
This procedure gives the location of a line with respect to the job's pages, but the actual memory location could be anywhere. The OS must correlate each of the job's pages with a page frame number via the Page Map Table. Suppose that in the example above the PMT looked like this:
Job page # | Page frame #
0          | 8
1          | 10
2          | 5
3          | 11
To locate page 2, offset 14, the OS calculates the following (in reality it would work not with lines but with words or bytes):
Page # = integer quotient of the job-space address divided by the page size.
Offset = remainder of that division.
It then locates the page frame number for the required page (page frame 5 in the case above) and gets the address of the beginning of that page frame by multiplying the page frame number by the page size. Adding the displacement to the resulting address tells you exactly where line 214 is located in memory.
Another example:
Suppose an assembly language program instructs the system to load a value found at line 518. The page frames are 512 bytes each, so the job must occupy at least two pages. Page 0 is in page frame # 5 and page 1 is in page frame # 3 (information stored in the PMT).
Compute the page # and offset: 518 / 512 = 1 remainder 6, so page 1, offset 6.
Compute the starting address of page frame 3 by multiplying the page frame # by the page frame size (3 * 512 = 1536).
Calculate the exact address of line 518 by adding the offset to the beginning of the page frame (1536 + 6 = 1542).
Memory address 1542 holds the value that should be loaded into the register.
This process is called address resolution.
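Both worked examples follow the same arithmetic, which can be sketched in Python. The dict-based PMT here is an illustrative stand-in for the real table, using the frame numbers from the 512-byte example:

```python
# Sketch of paged address resolution. PAGE_SIZE and the PMT contents
# come from the 512-byte example above (page 0 -> frame 5, page 1 -> frame 3).
PAGE_SIZE = 512
pmt = {0: 5, 1: 3}  # job page number -> page frame number (illustrative table)

def resolve(job_address: int) -> int:
    """Translate a job-space address to an absolute memory address."""
    page = job_address // PAGE_SIZE    # integer quotient = page number
    offset = job_address % PAGE_SIZE   # remainder = offset (displacement)
    frame = pmt[page]                  # look up the page frame in the PMT
    return frame * PAGE_SIZE + offset  # frame start address + displacement

print(resolve(518))  # page 1, offset 6 -> frame 3 -> 3*512 + 6 = 1542
```

The same function handles the first example by changing PAGE_SIZE to 100 and loading the four-entry PMT shown earlier.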
Paging schemes:
- Allow non-contiguous allocation, which uses memory more efficiently.
- Cause an increase in overhead and some internal fragmentation on the last page.
- The key to success is the sizing of the page: too small and the PMT becomes very large; too big and there is more internal fragmentation.
DEMAND PAGING
Demand paging removes the restriction of having the entire job in memory from the beginning to the end of its processing. Jobs are still divided into equal-sized pages that initially reside on secondary storage. When a job begins to run, its pages are brought into memory only as needed.
Often, while one module of a program is being processed, the others sit idle. Not all pages are needed at once, e.g.:
- Error handling modules are only needed when an error is detected.
- Many modules are mutually exclusive (e.g. input vs. processing vs. output).
- In menu-driven programs, a menu selection causes one module to run, and while it is running, all the others remain idle.
The important innovation of demand paging is that it makes virtual memory (discussed later) available, allowing users to run jobs that are bigger than main memory.
The key to successful implementation is high-speed direct-access storage. How and when pages are swapped depends on predefined policies. The OS relies on the Job Table, the Page Map Table, and the Memory Map Table to implement the algorithm. The tables are the same as for regular paged memory, except that the PMT has three new fields for each page:
- An indication of whether the page being requested is currently in memory. This saves the system the time of re-loading a page that is already in memory.
- An indication of whether the page's contents have been changed. This saves time when returning pages to secondary storage: pages that haven't been changed don't need to be re-written.
- An indication of whether the page has been referenced recently. This is used to show which pages are active; certain policy schemes use it to decide which pages should remain in main memory and which should be swapped out when the system needs to make room for other requested pages.
Job      | Page fr #
OS       | 0
OS       | 1
OS       | 2
OS       | 3
Job 3 P0 | 4
Job 1 P0 | 5
Job 4 P0 | 6
Job 1 P2 | 7
Job 3 P1 | 8
Job 1 P1 | 9
Job 2 P0 | 10
Job 3 P2 | 11
Job 1 P3 | 12
Job 4 P1 | 13
Job 2 P1 | 14
Job 4 P2 | 15
Jobs 1, 2, and 3 are completely in memory. The PMT for Job 4 looks like this:
Page | Status | Frame #
0    | Y      | 6
1    | Y      | 13
2    | Y      | 15
3    | N      |
4    | N      |
5    | N      |
What happens when Job 4 requests that page 3 be brought into memory?
First of all, the OS must check whether the required page is already in memory, using the PMT. If it is not, as is the case for page 3 of Job 4, a page interrupt is generated, and a section of the operating system called the page interrupt handler takes over. The handler first tests whether there are any empty page frames. If they are all busy, the page interrupt handler must decide which page to swap out, based on predefined policies.
Page interrupt handler algorithm:
If there is no free page frame
Then
    Select the page to be swapped out according to the policy
    Update that job's PMT
    If the page's contents were modified
    Then re-write the removed page to disk
Use the page # to get the disk address of the requested page
Read the page into memory
Update the job's PMT and the Memory Map Table
Restart the interrupted job.
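The steps above can be sketched in Python. The table layout, field names, and the placeholder victim-selection policy below are all illustrative assumptions, not any real OS's structures:

```python
# Illustrative in-memory tables for a two-page job; field names are made up.
pmt = {  # page -> {"in_memory": bool, "modified": bool, "frame": frame # or None}
    0: {"in_memory": True,  "modified": True,  "frame": 6},
    1: {"in_memory": False, "modified": False, "frame": None},
}
free_frames = []   # no empty page frames, so a swap will be forced
disk_writes = []   # records which pages get re-written to secondary storage

def handle_page_interrupt(page):
    """Bring `page` into memory, swapping out a victim if necessary."""
    if pmt[page]["in_memory"]:
        return pmt[page]["frame"]          # already resident: nothing to do
    if free_frames:
        frame = free_frames.pop()          # use an empty page frame
    else:
        # Placeholder policy: take the first resident page (a real OS
        # would use FIFO, LRU, or another predefined policy here).
        victim = next(p for p, e in pmt.items() if e["in_memory"])
        if pmt[victim]["modified"]:
            disk_writes.append(victim)     # re-write only if contents changed
        frame = pmt[victim]["frame"]
        pmt[victim].update(in_memory=False, frame=None)  # update victim's entry
    pmt[page].update(in_memory=True, frame=frame)  # update requested page's entry
    return frame

handle_page_interrupt(1)  # page 1 is not resident; page 0 (modified) is swapped out
```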
Demand paging is a wonderful solution to inefficient use of memory; however, it is not without its own problems. If page swapping is excessive, operation becomes inefficient. This is called thrashing: the OS expends a great deal of effort and management but accomplishes very little, because pages are removed from memory and called back shortly afterwards. It can happen when the system is very full and a large number of jobs are competing for a few free frames. It can also happen within a job, for instance when a looping structure spans two pages: if the loop is to pass through 100 times and there is only one free page frame, 100 swaps would have to be done. A failure to find a page in memory is called a page fault.
Before the Memory Manager can determine which pages are to be swapped out, it needs specific information about each page in memory, which it gets from the PMT. In demand paging, the PMT looks like this:
Page | Status | Referenced | Modified | Page frame
0    | 1      | 1          | 1        | 5
1    | 1      | 0          | 0        | 9
2    | 1      | 0          | 0        | 7
3    | 1      | 1          | 0        | 12
The status bit indicates whether the page is in memory.
The referenced bit indicates whether the page has been used recently.
The modified bit indicates whether the page's contents have been changed; it is used to determine whether the page must be re-written to secondary storage.
All candidates for swapping must have a 1 in the status column.
FIFO just looks at the modified bit and the status bit, but LRU looks at all three bits.
Looking at just the referenced and modified bits, there are four possible cases:
Case | Modified | Referenced
1    | 0        | 0
2    | 0        | 1
3    | 1        | 0
4    | 1        | 1
LRU would choose Case 1 to swap out, because that page has not been referenced recently and does not need re-writing. After that, the next choice is Case 3: not referenced but modified. (How can a page be modified but not referenced? The referenced bit is set to 1 every time the page is referenced, but the OS resets all the referenced bits back to 0 at regular intervals, suddenly making every page look vulnerable.)
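The preference order described above can be sketched as a victim-selection function. The rows mirror the sample PMT, and all field names are illustrative:

```python
# Rank resident pages by (referenced, modified): Case 1 (0, 0) is the cheapest
# victim -- not recently used, and no disk re-write is needed.
pages = [
    {"page": 0, "status": 1, "referenced": 1, "modified": 1},
    {"page": 1, "status": 1, "referenced": 0, "modified": 0},
    {"page": 2, "status": 1, "referenced": 0, "modified": 0},
    {"page": 3, "status": 1, "referenced": 1, "modified": 0},
]

def pick_victim(pages):
    # Only pages with status 1 (in memory) are candidates for swapping.
    candidates = [p for p in pages if p["status"] == 1]
    # Prefer referenced == 0 first, then modified == 0 (avoids a re-write).
    return min(candidates, key=lambda p: (p["referenced"], p["modified"]))["page"]

print(pick_victim(pages))  # page 1: the first (referenced=0, modified=0) entry
```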
NOW THE POLICIES
Just in case you thought we were not going to deal with them.
The page replacement policy is crucial to the efficiency of the system, and the algorithm must be selected carefully. There are in fact entire books written on this topic alone. (Probably excellent for getting to sleep.)
The two best known are FIFO (first in, first out) and LRU (least recently used).
FIFO is based on the idea that the best page to remove is the one that has been in memory the longest. LRU chooses the page least recently used.
FIFO policy
A job requests that its pages be processed in the following order:
A,B,A,C,A,B,D,B,A,C,D.
This is a four-page job, and main memory has two available page frames. The job will issue 9 page interrupts during its 11 page requests (a page interrupt is generated whenever a page is brought into memory, whether another page is swapped out or not). The failure rate is 9/11, or 82%; a failure rate this high would be unacceptable. It is caused by the limited amount of memory and the order in which the program requests its pages. The page order can't be changed by the system, although the amount of memory can. However, adding memory does not always help: under FIFO, increasing the number of page frames can, in rare cases, actually increase the number of page interrupts. This is the FIFO anomaly (also known as Belady's anomaly).
Requests   | A | B | A | C | A | B | D | B | A | C | D
Interrupts | * | * |   | * | * | * | * |   | * | * | *
Page F. 1  | A | A | A | C | C | B | B | B | A | A | D
Page F. 2  |   | B | B | B | A | A | D | D | D | C | C
Time       | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11
LRU Policy
This policy swaps out the pages that show the least recent activity. The theory is that these pages are the least likely to be used again, whereas a page that has just been used is likely to be used again soon.
Requests   | A | B | A | C | A | B | D | B | A | C | D
Interrupts | * | * |   | * |   | * | * |   | * | * | *
Page F. 1  | A | A | A | A | A | A | D | D | A | A | D
Page F. 2  |   | B | B | C | C | B | B | B | B | C | C
Time       | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11
There is only one fewer interrupt with this algorithm; the failure rate is 8/11, or 73%.
Worked example: redo the two traces above with 3 page frames available. What is the failure rate with FIFO and with LRU?
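The two traces above (and the three-frame exercise) can be checked with a small simulator. This is an illustrative sketch, not production code:

```python
from collections import OrderedDict

def simulate(requests, n_frames, policy):
    """Count page interrupts for 'fifo' or 'lru' with n_frames page frames."""
    frames = OrderedDict()  # page -> None; order = arrival (FIFO) or recency (LRU)
    interrupts = 0
    for page in requests:
        if page in frames:
            if policy == "lru":
                frames.move_to_end(page)   # a hit refreshes recency under LRU
            continue
        interrupts += 1                    # every load generates a page interrupt
        if len(frames) == n_frames:
            frames.popitem(last=False)     # evict oldest (FIFO) / least recent (LRU)
        frames[page] = None
    return interrupts

requests = "ABACABDBACD"
print(simulate(requests, 2, "fifo"))  # 9 interrupts -> 9/11, about 82%
print(simulate(requests, 2, "lru"))   # 8 interrupts -> 8/11, about 73%
```

Changing `2` to `3` in the calls answers the worked example.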
It's hard to judge the relative merits of the two policies on the basis of one example, but the following is true: with LRU, increasing main memory always causes the number of page interrupts to decrease or stay the same, whereas FIFO, in rare cases, can actually cause an increase in interrupts. FIFO is rarely used for this reason.
Demand paging has two distinct advantages:
- a job is no longer constrained to the size of physical memory
- it uses memory more efficiently, because sections of a job that are seldom or never used are not loaded into memory unless specifically requested.
Its disadvantage is the overhead caused by the tables and the page interrupts.
SEGMENTED MEMORY ALLOCATION
The concept of segmentation is based on the common practice of structuring programs into modules. Each job is divided into several segments of different sizes, one for each module, containing the pieces that perform the same function. Compare this with paging, which splits the job into equal-sized pages regardless of what the code does.
A second difference is that memory is divided into differently sized segments in a dynamic manner, as with dynamic partitions.
A Segment Map Table is set up for each job. It contains each segment's number, size, status, access rights, and location in memory. Referenced and modified bits are used as in demand paging.
Memory manager must keep track of segments. This is done with three tables that combine the
characteristics of demand paging and dynamic partitions. They are:
- The Job Table, which lists every job in the system.
- The Segment Map Table, which lists details about each segment (there is an SMT for each job).
- The Memory Map Table, which monitors the allocation of main memory.
Segments do not have to be stored contiguously. Specific locations are accessed the same way as in paging, but because segments are of different sizes, the offset must be checked to make sure it is not out of range for that particular segment.
The disadvantage of this model is that it sees the return of external fragmentation; compaction is sometimes used to deal with it.
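The range check described above can be sketched as follows. The SMT contents (segment sizes and base addresses) are made-up illustrative values:

```python
# Hypothetical Segment Map Table: segment number -> (size, base memory address).
smt = {0: (350, 4000), 1: (200, 7000), 2: (100, 2000)}

def resolve_segmented(segment: int, offset: int) -> int:
    """Translate (segment, offset) to a memory address, checking bounds."""
    size, base = smt[segment]
    if offset >= size:
        # Segments vary in size, so unlike paging the offset must be
        # range-checked against this particular segment's size.
        raise ValueError("offset out of range for segment")
    return base + offset

print(resolve_segmented(1, 14))  # segment 1 starts at 7000 -> 7014
```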
SEGMENTED/DEMAND PAGING
This scheme offers the logical benefits of segmented memory with the physical benefits of paging. Segments are divided into equal-sized pages, which removes the disadvantages of segmentation. The usual tables are kept as for segments above, except that for each segment there is a Page Map Table. The tables relate to each other as follows:
- The Job Table has a pointer to a Segment Map Table for each job.
- Each segment of a job has a pointer to a Page Map Table showing the page frame numbers of each of its pages.
- Using the Memory Map, the page frame number of a particular segment can be located in main
memory.
The advantage of this model is increased speed, because it incorporates the best of both models. The disadvantage is the high overhead of such a system.
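The table chain above (Job Table to SMT to PMT) can be sketched as a two-level lookup. The table contents here are illustrative:

```python
PAGE_SIZE = 100
# Hypothetical tables for one job: the SMT points at one PMT per segment,
# and each PMT maps that segment's page numbers to page frame numbers.
smt = {0: {0: 8, 1: 10}, 1: {0: 5}}  # segment -> its Page Map Table

def resolve_segment_page(segment: int, offset: int) -> int:
    """Translate (segment, offset) to a memory address via SMT and PMT."""
    pmt = smt[segment]                 # SMT entry points to the segment's PMT
    page = offset // PAGE_SIZE         # page within the segment
    displacement = offset % PAGE_SIZE  # offset within that page
    frame = pmt[page]                  # PMT gives the page frame number
    return frame * PAGE_SIZE + displacement

print(resolve_segment_page(0, 114))  # segment 0, page 1, displacement 14 -> 1014
```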
VIRTUAL MEMORY
Demand paging removes the restriction based on maximum program size. The capability of
moving pages at will between main memory and secondary storage introduced the idea of
virtual memory.
Until the implementation of virtual memory, programmers had to write "tight"
programs to fit the available memory, making their code impossible for anyone but the creators
to understand and maintain.
During the second generation, programmers started dividing their programs into sections called overlays. A program could begin with only its first overlay loaded into memory. Although the OS did the swapping of overlays, the programmer did the sectioning. Programs written to support overlaying gave rise to the ideas of demand paging and segmentation.
Virtual memory works well in a multiprogramming environment because programs spend much of their time waiting: they wait for I/O, they wait for page interrupts, they wait for their time slices to finish.
Virtual memory management has several advantages:
- A job is no longer restricted by the amount of free space in memory
- Memory is used more efficiently because the only sections of a job stored in memory are those needed immediately.
- A virtually unlimited amount of multiprogramming is permitted.
- It eliminates external fragmentation and reduces internal fragmentation
The disadvantages are:
- increased hardware costs
- increased overhead for handling page interrupts
- increased software complexity.
WINNT MEMORY MANAGEMENT
In WINNT, the operating system itself resides mostly in virtual memory, except for the non-paged pool, which, in fact, handles the paging!
The WINNT Virtual memory Manager allows processes to share memory efficiently and provides
certain services that a process can use to manage its virtual memory in the following ways:
- Allocate memory in two stages: first by reserving memory, then by committing it as needed. This two-step method allows a process to reserve a large section of virtual memory without being charged for it until it is actually needed.
- Read and/or write protection for virtual memory, allowing processes to share memory when
needed.
- Lock virtual pages in physical memory. This makes sure an important page will not be removed
from memory while a process is using it.
- Object-based memory protection: each time a process opens a section object (a block of memory that can be shared by two or more processes), the security reference monitor checks whether the process is allowed to access the object.
WINNT VMM
The VMM can manage up to 4 GB of memory; this amount is gained through virtual memory.
Certain things are never removed from main memory - the resident operating system code that
manages virtual memory and sections of the kernel that require maximum performance, such as the
code that dispatches threads for execution.
The paging system is not portable and is implemented and modified for each unique hardware
platform.
Paging policies in a virtual memory system dictate how and when paging is done.
The VM Manager uses a demand paging policy with locality of reference, called clustering, designed to reduce the number of page faults.
A set of rules determines where a virtual page will be loaded in memory.
Any page can be in one of five states: valid, zeroed, free, standby, or modified. Valid and modified pages are currently in use. The page frame database links together the pages that are in the same state. Whenever the number of available pages drops to a pre-specified minimum, the modified page writer writes the modified pages to disk.
The VM manager uses a FIFO page swap policy, but the policy is local to a process. When a page
fault occurs, only pages related to that process can be freed.
UNIX MEMORY MANAGEMENT
The UNIX kernel, which resides permanently in memory, implements system calls to set up memory boundaries so that several processes can coexist in memory at the same time. The remaining sections of the OS are demand-paged into memory. UNIX uses the LRU algorithm.
MVS MEMORY MANAGEMENT
MVS (Multiple Virtual Storage) allocates memory using a segmented/demand-paging scheme.
Each user is assigned a virtual storage address space consisting of 256 segments of 64K each. The address space is considered one unit even though part of it is in main memory and part in secondary storage. Almost 2000 users can be accommodated with virtual storage in a typical system. When a program is loaded into virtual memory, it is loaded into contiguous locations within the virtual address space, but not necessarily in main storage (memory).