

Talks and Poster Presentations (with Proceedings-Entry):

S. Hunold, A. Carpen-Amarie, J. Träff:
"Reproducible MPI Micro-Benchmarking Isn´t As Easy As You Think (Best Paper Award)";
Talk: 21st European MPI Users' Group Meeting, EuroMPI/ASIA 2014, Kyoto, Japan; 2014-09-09 - 2014-09-12; in: "Proceedings of the 21st European MPI Users' Group Meeting", J. Dongarra, Y. Ishikawa, A. Hori (ed.); ACM, New York, NY, USA (2014), ISBN: 978-1-4503-2875-3; 69 - 76.



English abstract:
The Message Passing Interface (MPI) is the prevalent programming model for supercomputers. Optimizing the performance of individual MPI functions is therefore of great interest to the HPC community. However, a fair comparison of different algorithms and implementations requires a statistically sound analysis. It is often overlooked that the time to complete an MPI communication function depends not only on internal factors, such as the algorithm, but also on external factors, such as system noise. Most noise produced by the system is uncontrollable without changing the software stack, e.g., the memory allocation method used by the operating system. Possibly controllable factors have not yet been identified as such in this context. We investigate several possible factors, which have been discovered in other micro-benchmarks, to determine whether they have a significant effect on the execution time of MPI functions. We experimentally and statistically show that results obtained with other common benchmarking methods for MPI functions can be misleading when comparing alternatives. To overcome these issues, we explain how to carefully design MPI micro-benchmarking experiments and how to make a fair, statistically sound comparison of MPI implementations.
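To illustrate the kind of statistically sound comparison the abstract argues for, here is a minimal sketch (not the authors' actual methodology) in Python: rather than comparing single mean run-times, it compares the medians of repeated measurements and assesses significance with a permutation test. The run-time samples are hypothetical; in a real experiment they would come from many synchronized repetitions of an MPI call under each implementation.

```python
import random
import statistics

def median_diff(a, b):
    """Difference of sample medians, the test statistic used below."""
    return statistics.median(a) - statistics.median(b)

def permutation_test(a, b, trials=10000, seed=0):
    """Two-sided permutation test on the difference of medians.

    Repeatedly re-shuffles the pooled samples and counts how often a
    random split yields a median difference at least as extreme as the
    observed one; the fraction of such splits estimates the p-value.
    """
    rng = random.Random(seed)
    observed = abs(median_diff(a, b))
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        if abs(median_diff(pooled[:len(a)], pooled[len(a):])) >= observed:
            hits += 1
    return hits / trials

# Hypothetical run-times in seconds of the same MPI operation under two
# implementations (labels and values are illustrative only).
impl_a = [1.02, 0.98, 1.05, 1.01, 0.99, 1.03, 1.00, 1.04]
impl_b = [0.91, 0.93, 0.90, 0.95, 0.92, 0.94, 0.89, 0.96]

p = permutation_test(impl_a, impl_b)
print(f"median A = {statistics.median(impl_a):.3f} s, "
      f"median B = {statistics.median(impl_b):.3f} s, p = {p:.4f}")
```

A nonparametric test like this avoids assuming normally distributed run-times, which, as the abstract notes, system noise tends to violate.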

Keywords:
MPI, benchmarking, statistical analysis


"Official" electronic version of the publication (accessed through its Digital Object Identifier - DOI)
http://dx.doi.org/10.1145/2642769.2642785



Related Projects:
Project Head Jesper Larsson Träff:
MPI

Project Head Jesper Larsson Träff:
ReproPC