Memory Management in Modern OSes: Paging and Caches
Modern operating systems rely on two ideas to manage memory: paging and caching. Paging divides a program's memory into small fixed-size blocks called pages and maps them onto physical memory. Caches sit closer to the CPU and keep recently used data ready. Together, paging and caches help keep programs safe, fast, and responsive.

Paging is simple in concept. A process sees a virtual address space split into pages. The OS maintains a page table that translates each virtual page number to a physical frame in RAM. Each page table entry carries the frame number plus flags such as read/write permission and whether the page is valid for the current process. The hardware uses a translation lookaside buffer, or TLB, to speed up these translations. When the CPU accesses memory, the TLB check is quick; on a TLB miss, a slower page table walk happens and the resulting translation is cached in the TLB. If the page itself is not in RAM, a page fault occurs: the operating system loads the needed page from disk, updates the page table, and restarts the access. ...