Date of Award
Winter 12-2024
Degree Name
Master of Science (MS)
Degree Type
Thesis
Abstract
Modern computing systems with hierarchical memory structures, such as multiple cache levels, main memory, and external storage, pose significant challenges in optimizing memory usage for algorithm efficiency. Traditional models like the Disk Access Model (DAM) and advancements such as cache-oblivious algorithms have focused on minimizing memory transfers without requiring explicit knowledge of memory hierarchy parameters. However, these approaches assume fixed memory sizes and exclusive cache access, limiting their applicability in real-world, shared-memory environments where memory allocations fluctuate. To address these limitations, cache-adaptive algorithms were developed to dynamically adjust to changing memory profiles, enabling near-optimal performance even in multi-process systems. While substantial progress has been made in analyzing the memory performance of sequential algorithms, modern multi-processor systems with private or shared caches necessitate extending these techniques to parallel systems, where optimizing memory usage and achieving efficient performance remain critical challenges.
This thesis focuses primarily on designing a parallel scheduler that enhances cache adaptivity in parallel systems. First, we formalize the concept of parallel cache adaptivity by extending its definition from sequential to parallel systems. This includes examining how algorithms can respond to dynamic memory changes and variability in cache sizes across multiple processors. We then introduce a scheduler that leverages work-stealing strategies while respecting the constraints of cache adaptivity. Finally, we validate our approach by applying this scheduler to specific algorithms, demonstrating its ability to support optimal cache usage and efficient computation in parallel settings.
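To make the work-stealing strategy mentioned above concrete, the following is a minimal, single-threaded sketch of classic randomized work stealing: each worker owns a deque, pushes and pops tasks at the bottom (LIFO), and, when idle, steals the oldest task from the top of a random victim's deque. All names here (`Worker`, `run`) are illustrative assumptions; this is not the scheduler developed in the thesis, which additionally respects cache-adaptivity constraints.

```python
import collections
import random

class Worker:
    """One worker with its own double-ended task queue (illustrative sketch)."""

    def __init__(self, wid):
        self.wid = wid
        self.deque = collections.deque()

    def push(self, task):
        self.deque.append(task)  # local push at the bottom

    def pop(self):
        # Local pop from the bottom (LIFO), or None when empty.
        return self.deque.pop() if self.deque else None

    def steal_from(self, victim):
        # Steal the oldest task from the top of the victim's deque (FIFO).
        return victim.deque.popleft() if victim.deque else None

def run(workers, rng):
    """Round-robin simulation of work stealing on a single thread.

    A task is a pair (name, children); executing it records the name and
    pushes its children onto the executing worker's local deque.
    """
    executed = []
    idle = 0
    while idle < len(workers):
        idle = 0
        for w in workers:
            task = w.pop()
            if task is None:
                victim = rng.choice([v for v in workers if v is not w])
                task = w.steal_from(victim)
            if task is None:
                idle += 1
                continue
            name, children = task
            executed.append((w.wid, name))
            for child in children:
                w.push(child)
    return executed
```

For example, seeding worker 0 with a root task that spawns two children lets an idle worker 1 steal one of them, so all three tasks complete across both workers.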
Language
English (en)
Chair
Kunal Agrawal
Committee Members
Kunal Agrawal, Sanjoy Baruah, Christopher Gill, I-Ting Angelina Lee