Memory Management

Virtual Memory: Maps the memory addresses used by a program, called virtual addresses, to physical addresses in the computer's memory.

  • While virtual addresses are linear, physical addresses do not have to be

  • Acts as a logical layer between app memory requests and the memory management unit
    • Allows several processes to be executed concurrently
    • Makes it possible to run apps that require more memory than is physically available
    • Each process gets its own share of physical memory
    • Programs can be located in any part of physical memory
      • Machine-independent code (no need to care about the organization of physical memory)
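To make the mapping concrete, here is a toy user-space sketch of what the MMU and page tables (described in the next section) do, assuming 4 KB pages; the four-entry PAGE_TABLE, the translate() helper, and the example address are all made up for illustration.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model: with 4 KB pages, the low 12 bits of a virtual address are
     * the offset inside a page and the remaining bits select a page-table
     * entry.  The page table below is a made-up mapping, not real hardware. */
    #define PAGE_SIZE  4096u
    #define PAGE_SHIFT 12

    /* hypothetical page table: virtual page number -> physical frame number */
    static const uint64_t PAGE_TABLE[] = { 7, 3, 42, 5 };

    static uint64_t translate(uint64_t vaddr)
    {
        uint64_t vpn    = vaddr >> PAGE_SHIFT;     /* virtual page number */
        uint64_t offset = vaddr & (PAGE_SIZE - 1); /* offset within page  */
        return (PAGE_TABLE[vpn] << PAGE_SHIFT) | offset;
    }

    int main(void)
    {
        uint64_t vaddr = 0x2ABC; /* page 2, offset 0xABC */
        printf("virtual 0x%llx -> physical 0x%llx\n",
               (unsigned long long)vaddr,
               (unsigned long long)translate(vaddr)); /* 0x2abc -> 0x2aabc */
        return 0;
    }

Real page tables are multi-level and live in kernel memory, but the principle of page number plus offset is the same.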

Virtual Address Space Handling

  • Virtual (Logical) Address Space: The address space that a process is allowed to reference
    • The kernel and the memory management unit find the corresponding physical addresses
  • Modern CPUs automatically resolve virtual addresses into physical ones
    • RAM is partitioned into page frames of 4 or 8 KB in size
    • Page tables are used to map virtual to physical addresses (as in the toy translation sketch above)
  • Memory Area Descriptors: Used to describe a process's virtual address space
    • Created when the process starts executing
    • Executable code, initialized data, uninitialized data, the program stack, shared libraries, and heap data are all memory areas a process needs (the /proc/self/maps sketch after this list prints them for a running process)
  • Unix uses Demand Paging
    1. A process can start executing without any of its pages in physical memory
    2. At some point it attempts to access a page that is not present
    3. The MMU generates a page-fault exception
    4. The exception handler finds the affected memory region
      1. Allocates a free page frame and initializes it with the proper data
  • This saves memory, as only the pages that are actually needed will be allocated
  • Allows copy-on-write for child processes
    • A child process gets direct, read-only access to the parent's pages; only when it needs to alter data is an exception raised and a new page allocated for the child process (see the fork() sketch after this list)
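Those memory areas can be inspected directly on Linux: each line of /proc/self/maps corresponds to one memory area descriptor (code, data, heap, stack, mapped libraries). A minimal sketch that prints them for the running process:

    #include <stdio.h>

    /* Dump this process's own memory areas as the Linux kernel reports
     * them in /proc/self/maps: one line per area, with its virtual
     * address range, permissions, and backing file (if any). */
    int main(void)
    {
        char line[512];
        FILE *maps = fopen("/proc/self/maps", "r");
        if (!maps) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof line, maps))
            fputs(line, stdout);
        fclose(maps);
        return 0;
    }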
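And here is a rough sketch of demand paging plus copy-on-write in action, using fork(): parent and child initially share the same read-only physical pages, the child's write triggers a page fault, and the fault handler gives the child its own copy. The variable name value is arbitrary.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        static int value = 1;          /* lives in the data segment */

        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return EXIT_FAILURE;
        }

        if (pid == 0) {                /* child */
            value = 99;                /* write fault -> private copy of the page */
            printf("child : value=%d at %p\n", value, (void *)&value);
            return EXIT_SUCCESS;
        }

        wait(NULL);                    /* parent waits for the child */
        printf("parent: value=%d at %p\n", value, (void *)&value);
        return EXIT_SUCCESS;
    }

Both processes print the same virtual address, yet the parent still sees 1: same virtual address, different physical pages.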

 

Sectioning RAM

  1. Several megabytes are reserved for kernel code and static kernel data structures
  2. The rest is handled by the virtual memory system, which uses it to:
    1. Satisfy kernel requests for buffers, descriptors, and other dynamic kernel data structures
    2. Satisfy process requests for generic memory areas and for memory mapping of files
    3. Get better performance from disks and other buffered devices by means of caches
  3. Memory Fragmentation: If the free memory available is not contiguous enough, memory requests may fail even when enough total memory is free; bad page-frame reclaiming algorithms make this worse
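A toy simulation of the problem, using a hypothetical 16-frame "RAM" in which every other frame is in use: half of memory is free, yet no request for more than one contiguous frame could be satisfied.

    #include <stdio.h>

    #define NFRAMES 16

    /* Model external fragmentation: count total free frames and the
     * largest run of contiguous free frames. */
    int main(void)
    {
        int used[NFRAMES];
        int free_total = 0, run = 0, longest = 0;

        for (int i = 0; i < NFRAMES; i++)
            used[i] = i % 2;                   /* frames 1, 3, 5, ... are taken */

        for (int i = 0; i < NFRAMES; i++) {
            if (!used[i]) {
                free_total++;
                run++;
                if (run > longest)
                    longest = run;
            } else {
                run = 0;
            }
        }

        printf("free frames: %d, largest contiguous block: %d frame(s)\n",
               free_total, longest);           /* 8 free, largest block 1 */
        return 0;
    }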

 

Caching

  • Disks are much slower than RAM. Therefore, writing to disk is delayed as long as possible, and the data used by a process is kept in RAM even after the process terminates; the data waiting to be written out is called “dirty buffers”

    • This is done because it is quite likely that a new process will need data that belonged to a previously terminated process
    • All operating systems periodically write dirty buffers to disk
    • sync() can force a write to disk
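A minimal user-space sketch of pushing dirty buffers out to disk; the file name example.txt is arbitrary. fsync() flushes the dirty buffers of one file, while sync() asks the kernel to start writing out everything that is dirty.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        if (write(fd, "hello\n", 6) != 6)      /* data lands in dirty buffers */
            perror("write");

        if (fsync(fd) < 0)                     /* flush this file's buffers */
            perror("fsync");
        close(fd);

        sync();                                /* schedule write-out of all dirty buffers */
        return 0;
    }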

 

Allocating Memory (Kernel Memory Allocator, KMA): The subsystem that tries to satisfy memory requests from all parts of the system.

  1. Needs to be fast
  2. Minimize wasted memory
  3. Reduce memory fragmentation
  4. Not get in the way of other memory subsystems
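For a flavour of how kernel code talks to the KMA, here is a minimal Linux module sketch built around kmalloc()/kfree(); the module name kma_demo and the 128-byte request are made up, but kmalloc(size, GFP_KERNEL) is the standard interface.

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/slab.h>

    static char *buf;

    static int __init kma_demo_init(void)
    {
        /* Ask the kernel memory allocator for 128 bytes; GFP_KERNEL means
         * the caller is allowed to sleep while the request is satisfied. */
        buf = kmalloc(128, GFP_KERNEL);
        if (!buf)
            return -ENOMEM;
        pr_info("kma_demo: got 128 bytes at %p\n", buf);
        return 0;
    }

    static void __exit kma_demo_exit(void)
    {
        kfree(buf);                 /* always return memory to the allocator */
    }

    module_init(kma_demo_init);
    module_exit(kma_demo_exit);
    MODULE_LICENSE("GPL");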

 

 
