Results: 8 (searched for: doi:10.1145/3140659.3080245)

Radiant: Efficient Page Table Management for Tiered Memory Systems
Sandeep Kumar, Aravinda Prasad, Smruti R. Sarangi, Sreenivas Subramoney
Proceedings of the 2021 ACM SIGPLAN International Symposium on Memory Management; https://doi.org/10.1145/3459898.3463907

Abstract:
Modern enterprise servers are increasingly embracing tiered memory systems with a combination of low-latency DRAM and large-capacity but high-latency non-volatile main memory (NVMM) such as Intel's Optane DC PMM. Prior works have focused on the efficient placement and migration of data on a tiered memory system, but have not studied the optimal placement of page tables. Explicit and efficient placement of page tables is crucial for large-memory-footprint applications with high TLB miss rates, because they incur dramatically higher page walk latency when page table pages are placed in NVMM. We show that (i) page table pages can end up on NVMM even when enough DRAM memory is available and (ii) page table pages that spill over to NVMM due to DRAM memory pressure are not migrated back later when memory becomes available in DRAM. We study the performance impact of page table placement in a tiered memory system and propose Radiant, an efficient and transparent page table management technique that (i) applies different placement policies for data and page table pages, (ii) introduces a differentiating policy for page table pages by placing a small but critical part of the page table in DRAM, and (iii) dynamically and judiciously manages the rest of the page table by transparently migrating page table pages between DRAM and NVMM. Our implementation on a real system equipped with Intel's Optane NVMM running Linux reduces page table walk cycles by 12% and total cycles by 20% on average. This improves runtime by 20% on average for a set of synthetic and real-world large-memory-footprint applications when compared with various default Linux kernel techniques.
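The three placement policies in the abstract can be illustrated with a short sketch. The C program below is a toy model of ours, not the paper's Linux implementation: the level encoding, the walk_hits counter, the single-figure DRAM budget, and the hotness threshold are all invented for illustration.

#include <stdio.h>

enum tier { TIER_DRAM, TIER_NVMM };
enum pt_level { PTE = 1, PMD, PUD, PGD };      /* x86-64 4-level paging */

struct pt_page {
    enum pt_level level;
    unsigned long walk_hits;     /* page-walk accesses since last scan */
    enum tier tier;
};

static unsigned long dram_free_pages = 2;      /* toy DRAM budget */

/* Policies (i)+(ii): the small upper levels (PGD/PUD/PMD) are on the
 * critical path of every walk, so pin them in DRAM; leaf PTE pages take
 * DRAM only while it is free and otherwise spill to NVMM. */
static enum tier place(struct pt_page *p)
{
    if (p->level > PTE)
        return p->tier = TIER_DRAM;            /* pinned upper level */
    if (dram_free_pages > 0) {
        dram_free_pages--;
        return p->tier = TIER_DRAM;            /* leaf fits in DRAM */
    }
    return p->tier = TIER_NVMM;                /* leaf spills to NVMM */
}

/* Policy (iii): periodically promote hot spilled leaves once DRAM frees
 * up, the case the abstract says stock Linux never handles. */
static void migrate_scan(struct pt_page *pages, int n, unsigned long hot)
{
    for (int i = 0; i < n; i++) {
        if (pages[i].tier == TIER_NVMM && pages[i].walk_hits >= hot &&
            dram_free_pages > 0) {
            dram_free_pages--;
            pages[i].tier = TIER_DRAM;         /* copy + remap in a real OS */
        }
        pages[i].walk_hits = 0;                /* age counters each scan */
    }
}

int main(void)
{
    struct pt_page pages[] = {
        { PGD, 0, TIER_NVMM }, { PMD, 0, TIER_NVMM },
        { PTE, 0, TIER_NVMM }, { PTE, 0, TIER_NVMM }, { PTE, 0, TIER_NVMM },
    };
    int n = (int)(sizeof pages / sizeof pages[0]);

    for (int i = 0; i < n; i++)
        place(&pages[i]);

    pages[4].walk_hits = 100;                  /* spilled leaf turns hot */
    dram_free_pages = 1;                       /* memory pressure eases */
    migrate_scan(pages, n, 50);

    for (int i = 0; i < n; i++)
        printf("page %d (level %d) -> %s\n", i, pages[i].level,
               pages[i].tier == TIER_DRAM ? "DRAM" : "NVMM");
    return 0;
}

The point of the split is visible in place(): upper-level pages never take the NVMM path, so the first loads of every page walk stay in DRAM even under memory pressure.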
…, Hao Huang
Published: 11 February 2021
Frontiers of Computer Science, Volume 15, pp 1-20; https://doi.org/10.1007/s11704-020-9395-3

The publisher has not yet granted permission to display this abstract.
Page Migration Support for Disaggregated Non-Volatile Memories
Vamsee Reddy Kommareddy, Simon David Hammond, Clayton Hughes, Ahmad Samih, Amro Awad
Proceedings of the International Symposium on Memory Systems; https://doi.org/10.1145/3357526.3357543

Abstract:
As demands for memory-intensive applications continue to grow, the memory capacity of each computing node is expected to grow at a similar pace. In high-performance computing (HPC) systems, the memory capacity per compute node is sized for the most demanding application likely to run on the system, and hence the average capacity per node in future HPC systems is expected to grow significantly. However, since HPC systems run many applications with different capacity demands, a large percentage of the overall memory capacity will likely be underutilized; each memory module can be thought of as private memory for its corresponding computing node. Thus, as HPC systems move toward the exascale era, better utilization of memory is strongly desired. Moreover, upgrading a memory system requires significant effort. Fortunately, disaggregated memory systems promise much better utilization by defining regions of global memory, typically referred to as memory blades, which can be accessed by all computing nodes in the system. Disaggregated memory systems are expected to be built using dense, power-efficient memory technologies, so emerging non-volatile memories (NVMs) are positioning themselves as the main building blocks for such systems. However, NVMs are slower than DRAM. It is therefore expected that each computing node will have a small local memory based on either HBM or DRAM, while a large shared NVM memory will be accessible by all nodes. Managing such a system with global and local memory requires a novel hardware/software co-design that initiates page migration between global and local memory to maximize performance while enabling access to a huge shared memory. In this paper, we provide support for migrating pages and investigate the memory management and major system-level aspects that can affect design decisions in disaggregated NVM systems.
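At its core, the hardware/software co-design the abstract calls for is a promotion/demotion decision between a node's small local memory and the shared NVM blade. The C sketch below is a speculative illustration under our own assumptions (the slot count, hotness threshold, and access trace are invented), not the paper's mechanism.

#include <stdio.h>

#define LOCAL_SLOTS   2        /* toy local-memory capacity, in pages */
#define HOT_THRESHOLD 4        /* accesses before a page is promoted  */

struct page {
    int id;
    int hits;                  /* recent access count                  */
    int in_local;              /* 1 = node-local memory, 0 = NVM blade */
};

static int local_used;

static struct page *coldest_local(struct page *pages, int n)
{
    struct page *victim = NULL;
    for (int i = 0; i < n; i++)
        if (pages[i].in_local &&
            (!victim || pages[i].hits < victim->hits))
            victim = &pages[i];
    return victim;
}

/* Called on every access; returns where the access was served from. */
static const char *touch(struct page *p, struct page *pages, int n)
{
    p->hits++;
    if (!p->in_local && p->hits >= HOT_THRESHOLD) {
        if (local_used == LOCAL_SLOTS) {
            struct page *v = coldest_local(pages, n);
            v->in_local = 0;           /* demote coldest page to NVM */
            v->hits = 0;
            local_used--;
        }
        p->in_local = 1;               /* promote hot page to local  */
        local_used++;
    }
    return p->in_local ? "local DRAM/HBM" : "shared NVM blade";
}

int main(void)
{
    struct page pages[3] = { {0, 0, 0}, {1, 0, 0}, {2, 0, 0} };
    int trace[] = { 0,0,0,0, 1,1,1,1, 2,2,2,2, 0 };   /* toy access trace */

    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++) {
        struct page *p = &pages[trace[i]];
        printf("access page %d -> %s\n", p->id, touch(p, pages, 3));
    }
    return 0;
}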
Morphable DRAM Cache Design for Hybrid Memory Systems
Sanghoon Cha, Bokyeong Kim, Chang Hyun Park, Jaehyuk Huh
ACM Transactions on Architecture and Code Optimization, Volume 16, pp 1-24; https://doi.org/10.1145/3338505

Abstract:
DRAM caches have emerged as an efficient new layer in the memory hierarchy to address the increasing diversity of memory components. When a small amount of fast memory is combined with slow but large memory, a cache-based organization of the fast memory can provide a software-transparent solution for hybrid memory systems. In such DRAM cache designs, effectiveness is affected by the bandwidth and latency of both the fast and the slow memory. To quantitatively assess the effect of memory configurations and application patterns on DRAM cache designs, this article first investigates how three prior approaches perform under six hybrid memory scenarios. From the investigation, we observe that no single DRAM cache organization always outperforms the others across the diverse hybrid memory configurations and memory access patterns. Based on this observation, this article proposes a reconfigurable DRAM cache design that can adapt to different hybrid memory combinations and workload patterns. Unlike the fixed tag and data arrays of conventional on-chip SRAM caches, this study advocates exploiting the flexibility of DRAM caches, which can store tags and data in DRAM in an arbitrary layout. Using a sample-based mechanism, the proposed DRAM cache controller dynamically finds the best organization among three candidates and applies it by reconfiguring the tag and data layout in the DRAM cache. Our evaluation shows that the proposed morphable DRAM cache can outperform fixed DRAM cache configurations across six hybrid memory configurations.
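The sample-based mechanism can be pictured as scoring each candidate organization against the same sampled access trace and reconfiguring to the cheapest one. In the C sketch below, the three candidate names, per-access cycle costs, and sampled hit rates are made-up stand-ins for the paper's three prior designs and its measured memory parameters.

#include <stdio.h>

enum org { ORG_DIRECT, ORG_ASSOC, ORG_BYPASS, NUM_ORGS };

static const char *org_name[NUM_ORGS] = {
    "direct-mapped (latency-optimized)",
    "set-associative (hit-rate-optimized)",
    "bypass-heavy (bandwidth-optimized)",
};

/* Hypothetical per-access cost in cycles for (organization, outcome).
 * A real controller would derive these from the fast and slow memory's
 * measured latency and bandwidth. */
static const int hit_cost[NUM_ORGS]  = { 60,  80,  70 };
static const int miss_cost[NUM_ORGS] = { 300, 260, 220 };

/* Score one organization over a sampled trace; hit rate is per mille. */
static long score(int o, int samples, int hit_rate_per_mille)
{
    long hits   = (long)samples * hit_rate_per_mille / 1000;
    long misses = samples - hits;
    return hits * hit_cost[o] + misses * miss_cost[o];
}

int main(void)
{
    /* Hit rates each organization achieved on the same sampled trace
     * (invented numbers standing in for the sampling hardware). */
    int hit_rate[NUM_ORGS] = { 620, 810, 540 };
    int samples = 10000;
    int best = 0;
    long best_cycles = -1;

    for (int o = 0; o < NUM_ORGS; o++) {
        long c = score(o, samples, hit_rate[o]);
        printf("%-40s %ld cycles on sample\n", org_name[o], c);
        if (best_cycles < 0 || c < best_cycles) {
            best_cycles = c;
            best = o;
        }
    }
    printf("reconfigure tag/data layout to: %s\n", org_name[best]);
    return 0;
}

With these invented numbers the hit-rate-optimized organization wins; shifting the cost table toward a slower backing memory would tip the choice, which is exactly the sensitivity to memory configuration that the reconfigurable design exploits.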
FlatFlash: Exploiting the Byte-Accessibility of SSDs within a Unified Memory-Storage Hierarchy
Ahmed Abulila, Vikram Sharma Mailthody, Zaid Qureshi, Jian Huang, Nam Sung Kim, Jinjun Xiong, Wen-Mei Hwu
Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, pp 971-985; https://doi.org/10.1145/3297858.3304061

HeteroOS: OS Design for Heterogeneous Memory Management in Datacenter
ACM SIGOPS Operating Systems Review, Volume 52, pp 13-26; https://doi.org/10.1145/3273982.3273985

Abstract:
Heterogeneous memory management combined with server virtualization in datacenters is expected to increase software and OS management complexity. State-of-the-art solutions rely exclusively on the hypervisor (VMM) for expensive page hotness tracking and migrations, limiting the benefits from heterogeneity. To address this, we design HeteroOS, a novel application-transparent OS-level solution for managing memory heterogeneity in virtualized systems. The HeteroOS design first makes the guest OSes heterogeneity-aware and then extracts rich OS-level information about applications' memory usage to place data in the 'right' memory, avoiding page migrations. When such proactive placements are not possible, HeteroOS combines the guest OSes' information about applications with the VMM's hardware control to track hotness and migrate only performance-critical pages. Finally, HeteroOS also enables efficient heterogeneous memory sharing across multiple guest VMs. Evaluation of HeteroOS with memory-, storage-, and network-intensive datacenter applications shows up to 2x performance improvement compared to the state-of-the-art VMM-exclusive approach.
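The two-step policy described in the abstract (proactive placement from guest-OS semantics first, hotness tracking and migration only as a fallback) can be sketched as follows. The hint names, tiers, and threshold below are hypothetical stand-ins of ours, not HeteroOS code.

#include <stdio.h>

enum tier { FAST_MEM, SLOW_MEM };       /* e.g., DRAM vs. slower NVM tier */

/* OS-level information about how an allocation will be used; a guest OS
 * knows this at allocation time (anonymous heap, streaming I/O buffer,
 * cold page cache), while a VMM alone does not. */
enum usage_hint { HINT_HOT_ANON, HINT_STREAM_IO, HINT_COLD_CACHE, HINT_UNKNOWN };

struct vpage {
    enum usage_hint hint;
    unsigned long hotness;      /* access count, tracked only if needed */
    enum tier tier;
};

/* Step 1: proactive placement from guest-OS semantics, avoiding later
 * migrations whenever the hint is decisive. Returns 0 if the page still
 * needs hotness tracking. */
static int place_proactively(struct vpage *p)
{
    switch (p->hint) {
    case HINT_HOT_ANON:   p->tier = FAST_MEM; return 1;
    case HINT_STREAM_IO:                        /* touched once, then done */
    case HINT_COLD_CACHE: p->tier = SLOW_MEM; return 1;
    default:              p->tier = SLOW_MEM; return 0;
    }
}

/* Step 2: fallback for unknown pages only, standing in for the costly
 * VMM-side hotness tracking the paper tries to minimize. */
static void track_and_migrate(struct vpage *p, unsigned long threshold)
{
    if (p->tier == SLOW_MEM && p->hotness >= threshold)
        p->tier = FAST_MEM;     /* migrate performance-critical page */
}

int main(void)
{
    struct vpage pages[] = {
        { HINT_HOT_ANON,   0, SLOW_MEM },
        { HINT_COLD_CACHE, 0, SLOW_MEM },
        { HINT_UNKNOWN,    0, SLOW_MEM },
    };

    for (int i = 0; i < 3; i++)
        if (!place_proactively(&pages[i]))
            printf("page %d: no hint, enabling hotness tracking\n", i);

    pages[2].hotness = 1000;                /* the unknown page turns hot */
    track_and_migrate(&pages[2], 100);

    for (int i = 0; i < 3; i++)
        printf("page %d -> %s\n", i,
               pages[i].tier == FAST_MEM ? "fast tier" : "slow tier");
    return 0;
}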