Exploiting Concurrent GPU Operations for Efficient Work Stealing on Multi-GPUs
- 1 October 2012
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
The race for Exascale computing has naturally led current technologies to converge on multi-CPU/multi-GPU computers, based on thousands of CPUs and GPUs interconnected by PCI-Express buses or interconnection networks. To exploit this high computing power, programmers have to solve the problem of scheduling parallel programs on hybrid architectures. And, since the performance of a GPU increases at a much faster rate than the throughput of a PCI bus, data transfers must be managed efficiently by the scheduler. This paper targets multi-GPU compute nodes, where several GPUs are connected to the same machine. To overcome the data transfer limitations on such platforms, the available software usually computes, before the execution, a mapping of the tasks that respects their dependencies and minimizes the global data transfers. Such an approach is too rigid, and it cannot adapt the execution to variations of the system or of the application's load. We propose a solution that is orthogonal to the above-mentioned approaches: extensions of the Xkaapi software stack that make it possible to exploit the full performance of a multi-GPU system through asynchronous GPU tasks. Xkaapi schedules tasks using a standard work-stealing algorithm, and the runtime efficiently exploits concurrent GPU operations. The runtime extensions make it possible to overlap data transfers with task execution on the current generation of GPUs. We demonstrate that this overlapping capability is at least as important as computing a scheduling decision for reducing the completion time of a parallel program. Our experiments on two dense linear algebra problems (matrix product and Cholesky factorization) show that our solution is highly competitive with other software based on static scheduling. Moreover, we are able to sustain peak performance (approx. 310 GFlop/s) on DGEMM, even for matrices that cannot be stored entirely in one GPU's memory. With eight GPUs, we achieve a speed-up of 6.74 with respect to a single GPU. The performance of our Cholesky factorization, which has more complex dependencies between tasks, outperforms the state-of-the-art single-GPU MAGMA code.
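The core mechanism the abstract relies on, overlapping data transfers with kernel execution through concurrent GPU operations, maps onto standard CUDA streams. The sketch below is not the Xkaapi runtime itself; it is a minimal, self-contained illustration of the overlap pattern, using two streams, pinned host memory, and double buffering. The kernel name `scale`, the buffer names, and the tile sizes are all illustrative assumptions.

```cuda
// Minimal sketch of transfer/compute overlap with CUDA streams (not Xkaapi code).
// Two streams alternate over tiles: while one stream computes on tile t, the
// other can upload tile t+1, letting the copy and compute engines run concurrently.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *d, int n, float alpha) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= alpha;   // stand-in for a real task (e.g., a DGEMM tile)
}

int main() {
    const int nTiles = 8, tile = 1 << 20;            // 8 tiles of 1M floats (illustrative)
    float *h;                                        // pinned host memory: required for async copies
    cudaMallocHost(&h, (size_t)nTiles * tile * sizeof(float));
    for (int i = 0; i < nTiles * tile; ++i) h[i] = 1.0f;

    float *d[2];
    cudaStream_t s[2];
    for (int b = 0; b < 2; ++b) {                    // double buffering: two device buffers, two streams
        cudaMalloc(&d[b], tile * sizeof(float));
        cudaStreamCreate(&s[b]);
    }

    for (int t = 0; t < nTiles; ++t) {
        int b = t & 1;                               // alternate buffer/stream by tile parity
        cudaMemcpyAsync(d[b], h + (size_t)t * tile, tile * sizeof(float),
                        cudaMemcpyHostToDevice, s[b]);
        scale<<<(tile + 255) / 256, 256, 0, s[b]>>>(d[b], tile, 2.0f);
        cudaMemcpyAsync(h + (size_t)t * tile, d[b], tile * sizeof(float),
                        cudaMemcpyDeviceToHost, s[b]);
    }
    cudaDeviceSynchronize();
    printf("first element after scaling: %f\n", h[0]);  // expect 2.0

    for (int b = 0; b < 2; ++b) { cudaFree(d[b]); cudaStreamDestroy(s[b]); }
    cudaFreeHost(h);
    return 0;
}
```

Two streams and two buffers are the smallest configuration in which a transfer on one stream can proceed while a kernel runs on the other; a full runtime such as the one the paper describes generalizes this to many in-flight tasks across several GPUs.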
This publication has 11 references indexed in Scilit:
- Productive Programming of GPU Clusters with OmpSs. Published by Institute of Electrical and Electronics Engineers (IEEE), 2012
- libKOMP, an Efficient OpenMP Runtime System for Both Fork-Join and Data Flow Paradigms. Lecture Notes in Computer Science, 2012
- A Class of Hybrid LAPACK Algorithms for Multicore and GPU Architectures. Published by Institute of Electrical and Electronics Engineers (IEEE), 2011
- QR Factorization on a Multicore Node Enhanced with Multiple GPU Accelerators. Published by Institute of Electrical and Electronics Engineers (IEEE), 2011
- StarPU: a unified platform for task scheduling on heterogeneous multicore architectures. Concurrency and Computation: Practice and Experience, 2010
- Towards dense linear algebra for hybrid GPU accelerated manycore systems. Parallel Computing, 2010
- An Extension of the StarSs Programming Model for Platforms with Multiple GPUs. Lecture Notes in Computer Science, 2009
- A class of parallel tiled linear algebra algorithms for multicore architectures. Parallel Computing, 2008
- KAAPI. Published by Association for Computing Machinery (ACM), 2007
- The implementation of the Cilk-5 multithreaded language. Published by Association for Computing Machinery (ACM), 1998