LARA: Locality-aware resource allocation to improve GPU memory-access time
- 15 May 2021
- journal article
- research article
- Published by Springer Science and Business Media LLC in The Journal of Supercomputing
- Vol. 77 (12), 14438-14460
- https://doi.org/10.1007/s11227-021-03854-w
Abstract
No abstract available
This publication has 48 references indexed in Scilit:
- Mitigating Prefetcher-Caused Pollution Using Informed Caching Policies for Prefetched Blocks. ACM Transactions on Architecture and Code Optimization, 2015
- Eliminating Intra-Warp Conflict Misses in GPU. Published by EDAA, 2015
- Adaptive virtual channel partitioning for network-on-chip in heterogeneous architectures. ACM Transactions on Design Automation of Electronic Systems, 2013
- Designing on-chip networks for throughput accelerators. ACM Transactions on Architecture and Code Optimization, 2013
- Prefetch-aware shared resource management for multi-core systems. ACM SIGARCH Computer Architecture News, 2011
- Bypass and insertion algorithms for exclusive last-level caches. ACM SIGARCH Computer Architecture News, 2011
- High performance cache replacement using re-reference interval prediction (RRIP). ACM SIGARCH Computer Architecture News, 2010
- Managing contention for shared resources on multicore processors. Communications of the ACM, 2010
- Adaptive insertion policies for high performance caching. ACM SIGARCH Computer Architecture News, 2007
- Memory access scheduling. ACM SIGARCH Computer Architecture News, 2000