MRPB: Memory request prioritization for massively parallel processors
- 1 February 2014
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
Massively parallel, throughput-oriented systems such as graphics processing units (GPUs) offer high performance for a broad range of programs. They are, however, complex to program, especially because of their intricate memory hierarchies with multiple address spaces. In response, modern GPUs have widely adopted caches, hoping to provide smoother reductions in memory access traffic and latency. Unfortunately, GPU caches often have mixed or unpredictable performance impact due to cache contention that results from the high thread counts in GPUs. We propose the memory request prioritization buffer (MRPB) to ease GPU programming and improve GPU performance. This hardware structure improves the caching efficiency of massively parallel workloads by applying two prioritization methods, request reordering and cache bypassing, to memory requests before they access a cache. MRPB then releases requests into the cache in a more cache-friendly order. The result is drastically reduced cache contention and improved use of the limited per-thread cache capacity. For a simulated 16KB L1 cache, MRPB improves the average performance of the entire PolyBench and Rodinia suites by 2.65× and 1.27× respectively, outperforming a state-of-the-art GPU cache management technique.
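The two prioritization methods named in the abstract can be illustrated with a minimal software sketch. This is not the paper's hardware design; the class name, queue-per-warp layout, and depth parameters below are illustrative assumptions. The idea shown: buffering requests in per-warp queues and draining one queue at a time groups requests that are likely to share cache lines (reordering), while requests that arrive at a full queue skip the cache entirely (bypassing).

```python
from collections import deque

class MRPBSketch:
    """Toy model of a memory request prioritization buffer.

    Illustrative only -- the real MRPB is a hardware structure in front
    of the GPU L1 cache; queue count and depth here are arbitrary.
    """

    def __init__(self, num_queues=4, queue_depth=8):
        self.queues = [deque() for _ in range(num_queues)]
        self.queue_depth = queue_depth

    def enqueue(self, warp_id, addr):
        """Buffer a request, or bypass the cache if its queue is full."""
        q = self.queues[warp_id % len(self.queues)]
        if len(q) >= self.queue_depth:
            return ("bypass", addr)   # full queue: go straight to memory
        q.append(addr)
        return ("buffered", addr)

    def drain(self):
        """Release requests one queue at a time (cache-friendly order)."""
        order = []
        for q in self.queues:
            while q:
                order.append(q.popleft())
        return order

# Interleaved requests from two warps drain grouped by warp:
mrpb = MRPBSketch(num_queues=2, queue_depth=4)
for warp, addr in [(0, 0x00), (1, 0x100), (0, 0x04), (1, 0x104)]:
    mrpb.enqueue(warp, addr)
print(mrpb.drain())  # [0, 4, 256, 260] -- warp 0's requests first
```

Draining whole queues in turn means each warp's spatially close accesses reach the cache back-to-back instead of interleaved with other warps' working sets, which is the contention reduction the abstract describes.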