Publications in Scientific Journals:

J. Träff:
"Alternative, uniformly expressive and more scalable interfaces for collective communication in MPI";
Parallel Computing (invited), Volume 38 (2012), Issues 1–2, pp. 26–36.

English abstract:
In both the regular and the irregular MPI (Message-Passing Interface) collective communication and reduction interfaces there is a correspondence between the argument lists and certain MPI derived datatypes. As a means to address and alleviate well-known memory and performance scalability problems in the irregular (or vector) collective interface definitions of MPI, we propose to push this correspondence to its natural limit and replace the interfaces of the MPI collectives with a different set of interfaces that specify all data sizes and displacements solely by means of derived datatypes. This reduces the number of collective (communication and reduction) interfaces from 16 to 10, significantly generalizes the operations, unifies the regular and irregular collective interfaces, makes it possible to decouple certain algorithmic decisions from the collective operation, and moves the interface scalability issue from the collective interfaces to the MPI derived datatypes. To complete the proposal, we discuss the memory scalability of the derived datatypes and suggest a number of alternative datatypes for MPI, some of which should be of independent interest. A running example illustrates the benefits of this alternative set of collective interfaces. Implementation issues are discussed, showing that an implementation can be undertaken within any reasonable MPI library implementation.
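The correspondence the abstract refers to can be made concrete with a small sketch. In an irregular collective such as MPI_Gatherv, per-rank `recvcounts[]` and `displs[]` arrays (of size O(P) for P processes) describe a memory layout that an indexed derived datatype can capture as (displacement, blocklength) pairs. The following Python sketch is illustrative only (the function names are hypothetical, not the paper's proposed interface nor the MPI API) and shows the counts/displacements-to-typemap correspondence:

```python
# Hypothetical sketch: the per-rank counts[]/displs[] argument lists of an
# irregular MPI collective (e.g. MPI_Gatherv) describe the same memory layout
# as a single indexed derived datatype, i.e. a list of
# (displacement, blocklength) pairs. Names here are illustrative, not MPI calls.

def counts_displs_to_indexed(counts, displs):
    """Fold per-rank counts/displacements into an indexed-datatype typemap.

    Rank i's contribution (counts[i] elements at offset displs[i]) becomes
    one (displacement, blocklength) block; zero-length blocks are kept so
    the block index still identifies the contributing rank.
    """
    return [(d, c) for c, d in zip(counts, displs)]

def elements_described(buffer, typemap):
    """Collect the buffer elements that an indexed typemap selects."""
    out = []
    for displ, blocklen in typemap:
        out.extend(buffer[displ:displ + blocklen])
    return out

# Example: 3 ranks contribute 2, 0 and 3 elements at offsets 0, 2 and 4.
counts = [2, 0, 3]
displs = [0, 2, 4]
typemap = counts_displs_to_indexed(counts, displs)
print(typemap)  # [(0, 2), (2, 0), (4, 3)]

recvbuf = [10, 11, 99, 99, 20, 21, 22]
print(elements_described(recvbuf, typemap))  # [10, 11, 20, 21, 22]
```

Replacing the argument-list arrays by one datatype handle is what moves the O(P) interface-scalability burden from every collective call into the datatype machinery, where (as the abstract notes) more compact representations can then be sought.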

Keywords: Message-Passing Interface (MPI); collective communication; derived datatypes; scalability


Created from the Publication Database of the Vienna University of Technology.