GPU ACCELERATION OF NUMERICAL WEATHER PREDICTION
- 1 December 2008
- research article
- Published by World Scientific Pub Co Pte Ltd in Parallel Processing Letters
- Vol. 18 (04), 531-548
- https://doi.org/10.1142/s0129626408003557
Abstract
Weather and climate prediction software has enjoyed the benefits of exponentially increasing processor power for almost 50 years. Even with the advent of large-scale parallelism in weather models, much of the performance increase has come from rising processor speed rather than increased parallelism. This free ride is nearly over. Recent results also indicate that simply increasing the use of large-scale parallelism will prove ineffective for many scenarios where strong scaling is required. We present an alternative method of scaling model performance by exploiting emerging architectures using the fine-grain parallelism once exploited on vector machines. The paper shows the promise of this approach by demonstrating a nearly 10× speedup for a computationally intensive portion of the Weather Research and Forecast (WRF) model on a variety of NVIDIA Graphics Processing Units (GPUs). This change alone speeds up the whole weather model by 1.23×.
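The relationship between the 10× kernel speedup and the 1.23× whole-model speedup is an instance of Amdahl's law. A minimal sketch, assuming the abstract's two figures and inferring the accelerated fraction (roughly 21% of runtime) from them — the fraction itself is not stated in the abstract:

```python
# Amdahl's-law sketch relating a per-kernel speedup to whole-program
# speedup. The ~21% fraction is inferred from the abstract's numbers
# (10x kernel, 1.23x overall), not stated in the source.

def overall_speedup(fraction, kernel_speedup):
    """Whole-program speedup when `fraction` of the original runtime
    is accelerated by `kernel_speedup` (Amdahl's law)."""
    return 1.0 / ((1.0 - fraction) + fraction / kernel_speedup)

def accelerated_fraction(overall, kernel_speedup):
    """Invert Amdahl's law: what fraction of runtime must the
    accelerated portion have occupied to yield `overall` speedup?"""
    return (1.0 - 1.0 / overall) / (1.0 - 1.0 / kernel_speedup)

f = accelerated_fraction(1.23, 10.0)
print(f"accelerated fraction ~ {f:.3f}")        # roughly 0.21
print(f"overall speedup ~ {overall_speedup(f, 10.0):.2f}x")
```

This also illustrates why the authors pursue fine-grain parallelism within kernels: with only ~21% of runtime accelerated, even an infinite kernel speedup would cap the whole-model gain near 1.26×, so broader coverage of the model matters as much as per-kernel performance.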