Transparent Hardware Management of Stacked DRAM as Part of Memory
- 1 December 2014
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE) in 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture
- No. 10724451, pp. 13-24
- https://doi.org/10.1109/micro.2014.56
Abstract
Recent technology advancements allow for the integration of large memory structures on-die or as die-stacked DRAM. Such structures provide higher bandwidth and faster access time than off-chip memory. Prior work has investigated using this large integrated memory either as a cache or as part of a heterogeneous memory system managed by the OS. Using the memory as a cache wastes a large fraction of total memory capacity, especially in systems where stacked memory can be as large as off-chip memory. An OS-managed heterogeneous memory system, on the other hand, requires costly usage-monitoring hardware to migrate frequently used pages, and is often unable to capture pages that are highly utilized only for short periods of time. This paper proposes a practical, low-cost architectural solution that efficiently uses large fast memory as Part-of-Memory (PoM), seamlessly and without OS involvement. Our PoM architecture manages the two different types of memory (slow and fast) as a single combined physical address space. To achieve this, PoM implements the ability to dynamically remap regions of memory based on their access patterns and expected performance benefits. Our proposed PoM architecture improves performance by 18.4% over static mapping and by 10.5% over an ideal OS-based dynamic remapping policy.
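The hardware remapping idea the abstract describes can be illustrated with a minimal sketch. This is not the paper's actual mechanism: the region size, counter threshold, victim-selection policy, and all names below are illustrative assumptions, standing in for the access-pattern tracking and segment-swapping that a real PoM remapping table would implement in hardware.

```python
# Hypothetical sketch of PoM-style remapping: a remap table records which
# physical segment each memory region currently occupies, and per-region
# access counters promote a hot slow-memory region into fast memory by
# swapping it with the least-accessed fast-memory resident.
# FAST_REGIONS and SWAP_THRESHOLD are illustrative, not from the paper.

FAST_REGIONS = 4       # capacity of fast (stacked) memory, in regions
SWAP_THRESHOLD = 32    # accesses before a slow region is promoted

class RemapTable:
    def __init__(self, total_regions):
        # Identity mapping at start: region i lives in segment i;
        # segments [0, FAST_REGIONS) are in fast memory.
        self.segment = list(range(total_regions))
        self.counter = [0] * total_regions

    def in_fast(self, region):
        return self.segment[region] < FAST_REGIONS

    def access(self, region):
        """Record an access; remap the region if it is hot but slow."""
        self.counter[region] += 1
        if not self.in_fast(region) and self.counter[region] >= SWAP_THRESHOLD:
            # Evict the coldest region currently resident in fast memory.
            victim = min(
                (r for r in range(len(self.segment)) if self.in_fast(r)),
                key=lambda r: self.counter[r],
            )
            # Swap segments: hot region moves to fast memory, victim to slow.
            self.segment[region], self.segment[victim] = (
                self.segment[victim], self.segment[region])
            self.counter[region] = 0
        return self.segment[region]
```

For example, with 8 regions, repeated accesses to region 6 (initially in slow memory) eventually swap it into fast memory in place of a cold resident; the single remap table keeps the combined space addressable throughout, which is the property that lets PoM count fast memory as part of total capacity rather than as a cache copy.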