

Journal article:

Q. Kang, J. Träff, R. Al-Bahrani, A. Agrawal, A. Choudhary, W. Liao:
"Scalable Algorithms for MPI Intergroup Allgather and Allgatherv";
Parallel Computing, Volume 85 (2019), pp. 220-230.



Abstract (English):
MPI intergroup collective communication defines message transfer patterns between two disjoint groups of MPI processes. Such patterns occur in coupled applications, and in modern scientific application workflows, mostly with large data sizes. However, current implementations in MPI production libraries adopt the "root gathering algorithm", which does not achieve optimal communication transfer time. In this paper, we propose algorithms for the intergroup Allgather and Allgatherv communication operations under single-port communication constraints. We implement the new algorithms using MPI point-to-point and standard intra-communicator collective communication functions. We evaluate their performance on the Cori supercomputer at NERSC. Using message sizes per compute node ranging from 64KBytes to 8MBytes, our experiments show significant performance improvements of up to 23.67 times on 256 compute nodes compared with the implementations of production MPI libraries.
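For context, the sketch below is not taken from the paper; it only shows the standard MPI call path that intergroup Allgather targets: two disjoint process groups joined by an inter-communicator, on which MPI_Allgather exchanges each group's data with the other group. It is the library implementation behind this call that the paper replaces with faster algorithms. The group split, leader ranks, and integer payload are illustrative choices.

/* Minimal sketch (not the paper's algorithm): an intergroup Allgather
 * expressed as MPI_Allgather on an MPI inter-communicator.
 * Build and run, e.g.: mpicc intergroup_allgather.c && mpiexec -n 4 ./a.out
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Split MPI_COMM_WORLD into two disjoint groups (colors 0 and 1). */
    int color = (world_rank < world_size / 2) ? 0 : 1;
    MPI_Comm local_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &local_comm);

    /* Build the inter-communicator; the group leaders are world ranks
     * 0 and world_size/2. */
    int remote_leader = (color == 0) ? world_size / 2 : 0;
    MPI_Comm intercomm;
    MPI_Intercomm_create(local_comm, 0, MPI_COMM_WORLD, remote_leader,
                         /* tag = */ 0, &intercomm);

    /* Intergroup Allgather: every process receives one element from each
     * process of the *remote* group. */
    int remote_size;
    MPI_Comm_remote_size(intercomm, &remote_size);

    int sendval = world_rank;
    int *recvbuf = malloc(remote_size * sizeof(int));
    MPI_Allgather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT, intercomm);

    printf("world rank %d (group %d) received %d values from the other group\n",
           world_rank, color, remote_size);

    free(recvbuf);
    MPI_Comm_free(&intercomm);
    MPI_Comm_free(&local_comm);
    MPI_Finalize();
    return 0;
}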

Keywords:
Intergroup collective communication, All-to-all broadcast, Allgather, Allgatherv


"Offizielle" elektronische Version der Publikation (entsprechend ihrem Digital Object Identifier - DOI)
http://dx.doi.org/10.1016/j.parco.2019.04.015


Generated from the publication database of TU Wien (Technische Universität Wien).