Thread Cluster Memory Scheduling: Exploiting Differences in Memory Access Behavior
- 1 December 2010
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
In a modern chip-multiprocessor system, memory is a shared resource among multiple concurrently executing threads. The memory scheduling algorithm should resolve memory contention by arbitrating memory access in such a way that competing threads progress at a relatively fast and even pace, resulting in high system throughput and fairness. Previously proposed memory scheduling algorithms are predominantly optimized for only one of these objectives: no scheduling algorithm provides the best system throughput and best fairness at the same time. This paper presents a new memory scheduling algorithm that addresses system throughput and fairness separately with the goal of achieving the best of both. The main idea is to divide threads into two separate clusters and employ different memory request scheduling policies in each cluster. Our proposal, Thread Cluster Memory scheduling (TCM), dynamically groups threads with similar memory access behavior into either the latency-sensitive (memory-non-intensive) or the bandwidth-sensitive (memory-intensive) cluster. TCM introduces three major ideas for prioritization: 1) we prioritize the latency-sensitive cluster over the bandwidth-sensitive cluster to improve system throughput, 2) we introduce a ``niceness'' metric that captures a thread's propensity to interfere with other threads, 3) we use niceness to periodically shuffle the priority order of the threads in the bandwidth-sensitive cluster to provide fair access to each thread in a way that reduces inter-thread interference. On the one hand, prioritizing memory-non-intensive threads significantly improves system throughput without degrading fairness, because such ``light'' threads only use a small fraction of the total available memory bandwidth. On the other hand, shuffling the priority order of memory-intensive threads improves fairness because it ensures no thread is disproportionately slowed down or starved. 
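The clustering-and-prioritization scheme described above can be sketched in code. This is an illustrative simplification, not the paper's implementation: the `Thread` fields, the use of MPKI as the intensity measure, the `cluster_threshold` value, and the plain random shuffle (standing in for the paper's niceness-guided insertion shuffle) are all assumptions made for the sketch.

```python
import random
from dataclasses import dataclass

@dataclass
class Thread:
    tid: int
    mpki: float  # memory intensity: misses per kilo-instruction (assumed proxy)
    blp: float   # bank-level parallelism, normalized 0..1 (assumed field)
    rbl: float   # row-buffer locality, normalized 0..1 (assumed field)

def cluster(threads, cluster_threshold=0.25):
    """Group the least memory-intensive threads into the latency-sensitive
    cluster until they account for roughly cluster_threshold of total
    intensity; the rest form the bandwidth-sensitive cluster."""
    total = sum(t.mpki for t in threads)
    latency, bandwidth, acc = [], [], 0.0
    for t in sorted(threads, key=lambda t: t.mpki):
        if acc + t.mpki <= cluster_threshold * total:
            latency.append(t)
            acc += t.mpki
        else:
            bandwidth.append(t)
    return latency, bandwidth

def niceness(t):
    # A thread is "nicer" if it has high bank-level parallelism and low
    # row-buffer locality: it interferes less with other threads. The
    # paper ranks threads on these metrics; raw values are used here.
    return t.blp - t.rbl

def priority_order(threads, rng=random):
    latency, bandwidth = cluster(threads)
    # Latency-sensitive threads are always prioritized over the
    # bandwidth-sensitive cluster (least intensive first).
    order = sorted(latency, key=lambda t: t.mpki)
    # Bandwidth-sensitive threads are periodically shuffled so no thread
    # is persistently deprioritized; a niceness-sorted list followed by a
    # random shuffle stands in for the paper's insertion-shuffle.
    bw = sorted(bandwidth, key=niceness, reverse=True)
    rng.shuffle(bw)
    return order + bw
```

In the real scheduler this re-clustering and re-shuffling would happen periodically (each quantum), using bandwidth usage measured by the memory controller rather than a static MPKI value.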
We evaluate TCM on a wide variety of multiprogrammed workloads and compare its performance to four previously proposed scheduling algorithms, finding that TCM achieves both the best system throughput and fairness. Averaged over 96 workloads on a 24-core system with 4 memory channels, TCM improves system throughput and reduces maximum slowdown by 4.6%/38.6% compared to ATLAS (previous work providing the best system throughput) and 7.6%/4.6% compared to PAR-BS (previous work providing the best fairness).