CHOP: Adaptive filter-based DRAM caching for CMP server platforms
- 1 January 2010
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE) in HPCA - 16 2010 The Sixteenth International Symposium on High-Performance Computer Architecture
Abstract
As manycore architectures enable a large number of cores on the die, a key challenge that emerges is the availability of memory bandwidth with conventional DRAM solutions. To address this challenge, integrating large DRAM caches that provide as much as 5× higher bandwidth and as little as one-third the latency of conventional DRAM is very promising. However, organizing and implementing a large DRAM cache is challenging because of two primary tradeoffs: (a) DRAM caches at cache-line granularity require an on-chip tag area so large as to be undesirable, and (b) DRAM caches at larger page granularity require too much bandwidth, because the miss rate does not drop enough to offset the bandwidth increase. In this paper, we propose CHOP (Caching HOt Pages) in DRAM caches to address these challenges. We study several filter-based DRAM caching techniques: (a) a filter cache (CHOP-FC) that profiles pages and determines the hot subset of pages to allocate into the DRAM cache, (b) a memory-based filter cache (CHOP-MFC) that spills and fills filter state to improve the accuracy and reduce the size of the filter cache, and (c) an adaptive DRAM caching technique (CHOP-AFC) that determines when the filter cache should be enabled and disabled for DRAM caching. We conduct detailed simulations with server workloads to show that our filter-based DRAM caching techniques achieve the following: (a) on average over 30% performance improvement over previous solutions, (b) several orders of magnitude lower area overhead than the tag space required for cache-line-based DRAM caches, and (c) significantly lower memory bandwidth consumption compared to page-granular DRAM caches.
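The core CHOP-FC idea described above, profiling page accesses in a small filter cache and promoting only hot pages into the DRAM cache, can be illustrated with a minimal sketch. The class name, thresholds, and filter-cache size below are illustrative assumptions, not the paper's actual parameters or structures:

```python
# Minimal sketch of the hot-page filtering idea behind CHOP-FC: count
# accesses per page in a small filter cache; once a page's counter crosses
# a "hot" threshold, it becomes a candidate for allocation into the DRAM
# cache. All names and constants here are illustrative assumptions.

PAGE_SIZE = 4096        # assumed page granularity (bytes)
HOT_THRESHOLD = 8       # assumed access-count threshold for "hot"
FILTER_ENTRIES = 4      # assumed (tiny) filter-cache capacity

class HotPageFilter:
    def __init__(self, entries=FILTER_ENTRIES, threshold=HOT_THRESHOLD):
        self.entries = entries
        self.threshold = threshold
        self.counters = {}       # page number -> access count, in LRU order
        self.hot_pages = set()   # pages promoted into the DRAM cache

    def access(self, addr):
        """Record one memory access; return True if the page is (now) hot."""
        page = addr // PAGE_SIZE
        if page in self.hot_pages:
            return True
        count = self.counters.pop(page, 0) + 1   # pop + reinsert refreshes LRU order
        if count >= self.threshold:
            self.hot_pages.add(page)             # promote: allocate in DRAM cache
            return True
        self.counters[page] = count
        if len(self.counters) > self.entries:    # evict the coldest (LRU) entry
            self.counters.pop(next(iter(self.counters)))
        return False

f = HotPageFilter()
# Repeated accesses to one page cross the threshold and mark it hot;
# a page touched once is only being profiled, not cached.
for _ in range(8):
    hot = f.access(0x1000)
print(hot)               # True: page promoted on the 8th access
print(f.access(0x9000))  # False: cold page stays out of the DRAM cache
```

CHOP-MFC would additionally spill these counters to memory and refill them on demand, so the on-chip filter can stay small without losing profiling history; CHOP-AFC would toggle the whole filter on and off based on observed memory-bandwidth pressure.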