Towards Workload-Aware Page Cache Replacement Policies for Hybrid Memories
- 5 October 2015
- conference paper
- Published by Association for Computing Machinery (ACM) in Proceedings of the 2015 International Symposium on Memory Systems
- pp. 206-219
- https://doi.org/10.1145/2818950.2818978
Abstract
Die-stacked DRAM is an emerging technology that is expected to be integrated with off-package memories in future systems, resulting in a hybrid memory system. A large body of recent research has investigated the use of die-stacked dynamic random-access memory (DRAM) as a hardware-managed last-level cache. This approach comes at the cost of managing large tag arrays, increased hit latencies, and potentially significant increases in hardware verification costs. An alternative approach is for the operating system (OS) to manage the die-stacked DRAM as a page cache for off-package memories. However, recent work on OS-managed page caches focuses on FIFO replacement and related variants as the baseline management policy. In this paper, we take a step back and re-evaluate classical OS page replacement policies for hybrid memories. We find that across different die-stacked DRAM sizes, the best management policy depends on cache size and application, and the choice can result in as much as a 13X performance difference. Furthermore, within a single application run, the best policy varies over time. We also evaluate co-scheduled workload pairs and find that the best policy varies by workload pair and cache configuration, and that the best-performing policy is typically also the most fair. These results motivate our continued investigation into workload-aware and cache-configuration-aware page cache management policies.
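The performance gap between replacement policies that the abstract describes can be illustrated with a minimal sketch comparing FIFO (the baseline policy in prior work) against LRU, one of the classical policies the paper revisits. The trace below is a hypothetical reuse-heavy access pattern, not data from the paper:

```python
from collections import OrderedDict, deque

def fifo_hits(trace, capacity):
    """Simulate a FIFO page cache and return the hit count."""
    cache, order = set(), deque()
    hits = 0
    for page in trace:
        if page in cache:
            hits += 1
        else:
            if len(cache) == capacity:
                cache.discard(order.popleft())  # evict oldest insertion
            cache.add(page)
            order.append(page)
    return hits

def lru_hits(trace, capacity):
    """Simulate an LRU page cache and return the hit count."""
    cache = OrderedDict()
    hits = 0
    for page in trace:
        if page in cache:
            cache.move_to_end(page)  # refresh recency on a hit
            hits += 1
        else:
            if len(cache) == capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[page] = True
    return hits

# Hypothetical trace: pages 0 and 1 are hot, the rest stream through once.
trace = [0, 1, 2, 0, 1, 3, 0, 1, 4, 0, 1, 5]
print(fifo_hits(trace, 3), lru_hits(trace, 3))  # → 4 6
```

On this trace LRU keeps the two hot pages resident while FIFO repeatedly evicts them, which mirrors the paper's observation that the best policy depends on the access pattern and cache size.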