Vamsee Reddy Kommareddy (University of Central Florida, United States), Jagadish Kotra (Advanced Micro Devices, United States), Clayton Hughes (Sandia National Labs), Simon David Hammond, Amro Awad (University of Central Florida, United States)
The International Symposium on Memory Systems; https://doi.org/10.1145/3422575.3422804

Abstract:
With many recent advances in interconnect technologies and memory interfaces, disaggregated memory systems are approaching industrial adoption. For instance, the recent Gen-Z consortium focuses on a new memory-semantic protocol that enables fabric-attached memories (FAMs), where memory and other compute units can be attached directly to fabric interconnects. Decoupling memory from compute units becomes feasible as data-transfer rates increase with the emergence of novel interconnect technologies, such as silicon photonic interconnects. Disaggregated memories not only enable more efficient use of capacity (minimizing under-utilization) but also allow easy integration of evolving technologies. Additionally, they simplify the programming model while allowing efficient sharing of data. However, the latency of accessing data in these fabric-attached disaggregated memories depends on the latency imposed by the fabric interfaces. To reduce memory access latency and improve the performance of FAM systems, in this paper we explore techniques to prefetch data from FAMs to the local memory present in the node (PreFAM). Because memory access latency is high in FAMs, prefetching a single cache block (64 bytes) from FAM can be inefficient: the likelihood of issuing a demand request to the same FAM location before the prefetch completes is high. Hence, we explore predicting and prefetching FAM blocks at a distance, i.e., prefetching blocks that will be accessed in the future but not immediately. We show that, with prefetching, the performance of FAM architectures increases by 38.84% and memory access latency improves by 39.6%, with only a 17.65% increase in the number of accesses to the FAM, on average. Further, prefetching at a distance yields a performance improvement of 72.23%.
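The abstract's core idea is prefetching "at a distance": because FAM access latency is high, fetching the very next block often loses the race against the demand request, so the prefetcher should target blocks further ahead in the access stream. The listing does not describe PreFAM's actual predictor; the sketch below is a minimal, hypothetical illustration using a simple stride predictor, where `distance` controls how many strides ahead of the current access the prefetch is issued.

```python
# Illustrative sketch of distance prefetching (hypothetical; the paper's
# actual PreFAM predictor is not described in this listing).

BLOCK = 64  # cache-block size in bytes, as in the abstract

class DistancePrefetcher:
    def __init__(self, distance):
        self.distance = distance   # how many strides ahead to prefetch
        self.last_block = None
        self.last_stride = None

    def observe(self, addr):
        """Record a demand access; return a block address to prefetch, or None."""
        block = addr // BLOCK
        prefetch = None
        if self.last_block is not None:
            stride = block - self.last_block
            # Only prefetch once the stride repeats (a confirmed pattern),
            # and jump `distance` strides ahead instead of just one.
            if stride != 0 and stride == self.last_stride:
                prefetch = (block + stride * self.distance) * BLOCK
            self.last_stride = stride
        self.last_block = block
        return prefetch

pf = DistancePrefetcher(distance=4)
for a in (0, 64, 128, 192):
    hint = pf.observe(a)
# On the final access (block 3, confirmed stride of 1 block), the
# prefetcher targets block 3 + 1*4 = 7, i.e. address 448.
```

With `distance=1` this degenerates to next-block stride prefetching; a larger distance gives the slow FAM fetch more time to complete before the demand access to that block arrives, which is the trade-off the abstract quantifies.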