Performance evaluation of explicit finite difference algorithms with varying amounts of computational and memory intensity


Bibliographic Details
Published in: Journal of Computational Science, Vol. 36, p. 100565
Main Authors: Jammy, Satya P., Jacobs, Christian T., Sandham, Neil D.
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.09.2019
ISSN: 1877-7503, 1877-7511
DOI: 10.1016/j.jocs.2016.10.015


More Information
Summary:
• Architectures designed for exascale performance motivate novel algorithmic changes.
• Algorithms of varying degrees of memory and computational intensity are evaluated.
• Automated code generation facilitates such algorithmic changes.
• Storing some of the evaluated derivatives as local variables is shown to be optimal.
• The optimal algorithm is about two times faster than the baseline algorithm.

Future architectures designed to deliver exascale performance motivate the need for novel algorithmic changes in order to fully exploit their capabilities. In this paper, the performance of several numerical algorithms, characterised by varying degrees of memory and computational intensity, is evaluated in the context of finite difference methods for fluid dynamics problems. It is shown that, by storing some of the evaluated derivatives as single thread- or process-local variables in memory, or by recomputing the derivatives on-the-fly, a speed-up of ∼2 can be obtained compared to traditional algorithms that store all derivatives in global arrays.