Results: 6

(searched for: doi:10.1145/3352460.3358325)
Saba Jamilan, Tanvir Ahmed Khan, Grant Ayers, Baris Kasikci, Heiner Litz
Proceedings of the Seventeenth European Conference on Computer Systems; https://doi.org/10.1145/3492321.3519583

Abstract:
Prefetching, which predicts future memory accesses and preloads them from main memory, is a widely adopted technique to overcome the processor-memory performance gap. Unfortunately, hardware prefetchers implemented in today's processors cannot identify the complex and irregular memory access patterns exhibited by modern data-driven applications, and hence developers need to rely on software prefetching techniques. We investigate the challenges of enabling effective, automated software data prefetching. Our investigation reveals that the state-of-the-art compiler-based prefetching mechanism falls short of achieving high performance due to its static nature. Based on this insight, we design APT-GET, a novel profile-guided technique that ensures prefetch timeliness by leveraging dynamic execution time information. APT-GET leverages efficient hardware support, such as Intel's Last Branch Record (LBR), to collect application execution profiles with negligible overhead and characterize the execution time of loads. APT-GET then introduces a novel analytical model to find the optimal prefetch distance and prefetch injection site based on the collected profile, enabling timely prefetches. We study APT-GET in the context of 10 real-world applications and demonstrate that it achieves a speedup of up to 1.98× and 1.30× on average. By ensuring prefetch timeliness, APT-GET improves performance by 1.25× over the state-of-the-art software data prefetching mechanism.
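The key knob here is the prefetch distance: how many iterations ahead of the demand access a software prefetch must be issued so that the data arrives in time. The following is a minimal sketch of what profile-guided software prefetching looks like for an indirect access pattern, using the GCC/Clang __builtin_prefetch intrinsic; the constant PREFETCH_DIST is a placeholder for the profile-derived distance, not a value taken from the paper.

```c
#include <stddef.h>

/* Placeholder for the prefetch distance; APT-GET derives this value from
 * an LBR-based execution profile, here it is just an assumed constant. */
#define PREFETCH_DIST 16

/* Sum values reached through an index array: an irregular access pattern
 * that hardware prefetchers typically cannot predict. */
long sum_indirect(const long *values, const int *index, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DIST < n) {
            /* Issue the prefetch PREFETCH_DIST iterations ahead of the
             * demand load (read access, low temporal-locality hint). */
            __builtin_prefetch(&values[index[i + PREFETCH_DIST]], 0, 1);
        }
        sum += values[index[i]];
    }
    return sum;
}
```

Too small a distance and the prefetch is not timely; too large and the line may be evicted before use, which is the trade-off the paper's analytical model resolves.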
Heiner Litz, Grant Ayers, Parthasarathy Ranganathan
Proceedings of the 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems; https://doi.org/10.1145/3503222.3507745

Abstract:
The high access latency of DRAM continues to be a performance challenge for contemporary microprocessor systems. Prefetching is a well-established technique to address this problem; however, existing implemented designs fail to provide any performance benefit in the presence of irregular memory access patterns. The hardware complexity of prior techniques that can predict irregular memory accesses, such as runahead execution, has proven untenable for implementation in real hardware. We propose a lightweight mechanism to hide the high latency of irregular memory access patterns by leveraging criticality-based scheduling. In particular, our technique executes delinquent loads and their load slices as early as possible, hiding a significant fraction of their latency. Furthermore, we observe that the latency induced by branch mispredictions and other high-latency instructions can be hidden with a similar approach. Our proposal requires only minimal hardware modifications, performing memory access classification, load and branch slice extraction, and priority analysis exclusively in software. As a result, our technique is feasible to implement, introducing only a simple new instruction prefix while requiring minimal modifications to the instruction scheduler. Our technique increases the IPC of memory-latency-bound applications by up to 38% and by 8.4% on average.
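To make the terminology concrete, the source-level analogue below sketches what a "load slice" is: the short dependence chain that computes the address of a delinquent (frequently cache-missing) load. The structure and names are illustrative only; the paper's actual mechanism is a scheduling priority conveyed to the hardware through an instruction prefix, with classification and slice extraction done in software, not a source-code transformation.

```c
/* Illustration of a load slice, not the proposed hardware mechanism. */

struct node { int key; struct node *next; };

int process(struct node *head, const int *table)
{
    int acc = 0;
    for (struct node *p = head; p != NULL; p = p->next) {
        /* Load slice of the delinquent load: p->key feeds the address
         * table[idx]. Scheduling this short chain as early as possible
         * lets the miss overlap with the remaining, independent work. */
        int idx = p->key;
        int val = table[idx];   /* delinquent (often-missing) load */

        acc += val * 3 + 1;     /* independent follow-on work */
    }
    return acc;
}
```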
Mao Ni, Lan Chen, Xiaoran Hao, Chenji Liu, Yiheng Zhang, Lei Pan
IEICE Electronics Express, Volume 19, pp 20210499-20210499; https://doi.org/10.1587/elex.19.20210499

Abstract:
A large cache is beneficial to CPU performance, especially for IoT applications with huge amounts of data. However, a large SRAM cache also increases chip area and energy, which is unfriendly to resource-limited IoT terminals. STT-RAM, with its high storage density and near-zero leakage, is regarded as an ideal technology to replace SRAM. Prefetching is a vital method to hide the access latency of off-chip memory. Nevertheless, traditional prefetchers designed for SRAM are inadequate for MRAM caches with read-write asymmetry: aggressive prefetching into an STT-RAM cache causes cache congestion and higher dynamic energy because of STT-RAM's long write latency and high write energy. In response to these problems, this paper proposes WANCP (Write-awareness Adaptive Non-volatile Cache Prefetch), which adaptively adjusts prefetch aggressiveness according to the saturation of the MSHRs (Miss-Status Handling Registers) in the L2 cache. Experiments show that, for applications that are sensitive to L2 cache capacity, CPU performance with an STT-RAM L2 cache improves by up to 33.2% and 10.9% on average compared to a same-sized SRAM L2 cache. The proposed WANCP further improves CPU performance by 0.4% on average and reduces prefetch energy by 9.4% on average.
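A rough sketch of the kind of feedback loop described here: prefetch aggressiveness is throttled as the L2 MSHRs fill up, so prefetch traffic does not pile onto the already long STT-RAM write latency. The thresholds, degrees, and function names below are illustrative assumptions, not values from the paper.

```c
/* Illustrative MSHR-occupancy-based throttling (assumed policy). */

#define MAX_PREFETCH_DEGREE 4

/* Map MSHR occupancy to a prefetch degree: the fuller the MSHRs, the
 * fewer prefetches are issued, so long-latency STT-RAM writes are not
 * made worse by speculative traffic. Assumes mshr_total > 0. */
int prefetch_degree(unsigned mshr_used, unsigned mshr_total)
{
    unsigned occupancy_pct = (100u * mshr_used) / mshr_total;

    if (occupancy_pct >= 90) return 0;                   /* saturated: stop */
    if (occupancy_pct >= 70) return 1;                   /* conservative    */
    if (occupancy_pct >= 40) return MAX_PREFETCH_DEGREE / 2;
    return MAX_PREFETCH_DEGREE;                          /* aggressive      */
}
```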
Rahul Bera, Konstantinos Kanellopoulos, Anant Nori, Taha Shahroodi, Sreenivas Subramoney, Onur Mutlu
MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture; https://doi.org/10.1145/3466752.3480114

Abstract:
Past research has proposed numerous hardware prefetching techniques, most of which rely on exploiting one specific type of program context information (e.g., program counter, cacheline address, or delta between cacheline addresses) to predict future memory accesses. These techniques either completely neglect a prefetcher’s undesirable effects (e.g., memory bandwidth usage) on the overall system, or incorporate system-level feedback as an afterthought to a system-unaware prefetch algorithm. We show that prior prefetchers often lose their performance benefit over a wide range of workloads and system configurations due to their inherent inability to take multiple different types of program context and system-level feedback information into account while prefetching. In this paper, we make a case for designing a holistic prefetch algorithm that learns to prefetch using multiple different types of program context and system-level feedback information inherent to its design. To this end, we propose Pythia, which formulates the prefetcher as a reinforcement learning agent. For every demand request, Pythia observes multiple different types of program context information to make a prefetch decision. For every prefetch decision, Pythia receives a numerical reward that evaluates prefetch quality under the current memory bandwidth usage. Pythia uses this reward to reinforce the correlation between program context information and prefetch decision to generate highly accurate, timely, and system-aware prefetch requests in the future. Our extensive evaluations using simulation and hardware synthesis show that Pythia outperforms two state-of-the-art prefetchers (MLOP and Bingo) by 3.4% and 3.8% in single-core, 7.7% and 9.6% in twelve-core, and 16.9% and 20.2% in bandwidth-constrained core configurations, while incurring only 1.03% area overhead over a desktop-class processor and no software changes in workloads. The source code of Pythia can be freely downloaded from https://github.com/CMU-SAFARI/Pythia.
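As a concrete, much-simplified analogue of a reinforcement-learning prefetcher, the sketch below uses a tabular Q-learning update: the state stands for hashed program context, the action is a candidate prefetch offset, and the reward would score prefetch accuracy, timeliness, and bandwidth pressure. The table sizes, hyperparameters, and reward scheme are assumptions made for illustration; Pythia's actual state, action, and reward design is described in the paper and its released source code.

```c
/* Toy tabular Q-learning agent in the spirit of an RL-based prefetcher. */

#define NUM_STATES   1024        /* hashed program-context signatures */
#define NUM_ACTIONS  16          /* candidate prefetch deltas         */

static float Q[NUM_STATES][NUM_ACTIONS];
static const float ALPHA = 0.1f;   /* learning rate   */
static const float GAMMA = 0.9f;   /* discount factor */

/* Pick the best-scoring prefetch delta for the current context. */
int choose_action(int state)
{
    int best = 0;
    for (int a = 1; a < NUM_ACTIONS; a++)
        if (Q[state][a] > Q[state][best])
            best = a;
    return best;
}

/* Reinforce the (state, action) pair once the prefetch outcome is known,
 * e.g. positive reward for an accurate and timely prefetch, smaller or
 * negative reward when bandwidth is scarce or the prefetch was useless. */
void update(int state, int action, float reward, int next_state)
{
    float best_next = Q[next_state][0];
    for (int a = 1; a < NUM_ACTIONS; a++)
        if (Q[next_state][a] > best_next)
            best_next = Q[next_state][a];

    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action]);
}
```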
Mohammad Sadrosadati, Amirhossein Mirhosseini, Ali Hajiabadi, Seyed Borna Ehsani, Hajar Falahati, Hamid Sarbazi-Azad, Mario Drumond, Babak Falsafi, Rachata Ausavarungnirun, Onur Mutlu
ACM Transactions on Computer Systems, Volume 37, pp 1-36; https://doi.org/10.1145/3419973

Abstract:
Graphics Processing Units (GPUs) employ large register files to accommodate all active threads and accelerate context switching. Unfortunately, register files are a scalability bottleneck for future GPUs due to long access latency, high power consumption, and large silicon area provisioning. Prior work proposes a hierarchical register file that reduces register file power consumption by caching registers in a smaller register file cache. Unfortunately, this approach does not improve register access latency due to the low hit rate in the register file cache. In this article, we propose the Latency-Tolerant Register File (LTRF) architecture to achieve low latency in a two-level hierarchical structure while keeping power consumption low. We observe that compile-time interval analysis enables us to divide GPU program execution into intervals with an accurate estimate of a warp's aggregate register working-set within each interval. The key idea of LTRF is to prefetch the estimated register working-set from the main register file into the register file cache under software control at the beginning of each interval, and to overlap the prefetch latency with the execution of other warps. We observe that register bank conflicts while prefetching the registers can greatly reduce the effectiveness of LTRF. Therefore, we devise a compile-time register renumbering technique to reduce the likelihood of register bank conflicts. Our experimental results show that LTRF enables high-capacity yet long-latency main GPU register files, paving the way for various optimizations. As an example optimization, we implement the main register file with emerging high-density, high-latency memory technologies, enabling 8× larger capacity and improving overall GPU performance by 34%.
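The bank-conflict issue can be illustrated with a simple model: if the main register file is interleaved across banks by register number, two registers of the same interval's working set that map to the same bank must be prefetched serially. The sketch below, with an assumed bank count and modulo mapping, counts such conflicts, which is the quantity a compile-time register renumbering pass would try to minimize.

```c
/* Simplified bank model; NUM_BANKS and the modulo mapping are assumptions. */

#define NUM_BANKS 8

int bank_of(int reg_id)
{
    return reg_id % NUM_BANKS;
}

/* Count extra serialized accesses within one interval's register
 * working set: each register beyond the first in a bank conflicts. */
int count_conflicts(const int *working_set, int n)
{
    int per_bank[NUM_BANKS] = {0};
    int conflicts = 0;

    for (int i = 0; i < n; i++)
        per_bank[bank_of(working_set[i])]++;

    for (int b = 0; b < NUM_BANKS; b++)
        if (per_bank[b] > 1)
            conflicts += per_bank[b] - 1;

    return conflicts;
}
```

Renumbering the working-set registers round-robin across banks drives this count toward zero, letting the per-interval prefetch proceed without serialization.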