

Scientific Reports:

J. Träff:
"Decomposing Collectives for Exploiting Multi-lane Communication";
Report, CoRR - Computing Research Repository, No. arXiv:1910.13373, 2019; 77 pages.



English abstract:
Many modern, high-performance systems increase the cumulative node bandwidth by offering more than a single communication network and/or by providing multiple connections to the network. Efficient algorithms and implementations for collective operations, as found in, e.g., MPI, must be explicitly designed for such multi-lane capabilities. We discuss a model for the design of multi-lane algorithms and, in particular, give a recipe for converting any standard, one-ported (pipelined) communication tree algorithm into a multi-lane algorithm that can effectively use k lanes simultaneously.
We first examine the problem from the perspective of self-consistent performance guidelines and give simple, full-lane mock-up implementations of the MPI broadcast, reduction, gather, scatter, allgather, and alltoall operations using only similar operations of the given MPI library itself. Contrary to expectation, the mock-up implementations in many cases show surprising performance improvements over different MPI libraries on a small 36-node, dual-socket, dual-lane Intel OmniPath cluster, indicating severe problems with the native MPI library implementations. Our full-lane implementations are in many cases considerably more than a factor of two faster than the corresponding MPI collectives. We see similar results on the larger Vienna Scientific Cluster, VSC-3. These experiments indicate considerable room for improvement of the MPI collectives in current libraries, including more efficient use of multi-lane communication.
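To make the idea of a full-lane mock-up concrete, the following sketch shows one possible lane decomposition of a broadcast, built only from the MPI library's own collectives: the payload is split over the k processes (one per lane) on the root's node, each block is broadcast concurrently along a per-lane communicator spanning the nodes, and every node reassembles the full buffer locally. This is an illustrative sketch, not the report's actual implementation; it assumes the broadcast root is world rank 0, node-contiguous rank placement, the same number of processes (= lanes) on every node, and a message size divisible by the number of lanes.

    #include <mpi.h>
    #include <stdlib.h>

    /* Mock-up multi-lane broadcast: scatter over the root node's lanes,
     * broadcast each block along its lane across nodes, then allgather
     * within every node to reassemble the full buffer.
     * Assumptions (illustration only): root is world rank 0, ranks are
     * placed node-contiguously, every node has the same number of
     * processes (= lanes), and count is divisible by that number. */
    void multilane_bcast(double *buffer, int count, MPI_Comm comm)
    {
        int rank, localrank, noderank, lanes;
        MPI_Comm nodecomm, lanecomm;

        MPI_Comm_rank(comm, &rank);

        /* Processes on the same node share that node's lanes. */
        MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, rank,
                            MPI_INFO_NULL, &nodecomm);
        MPI_Comm_rank(nodecomm, &localrank);
        MPI_Comm_size(nodecomm, &lanes);

        /* One communicator per lane: same local rank across all nodes. */
        MPI_Comm_split(comm, localrank, rank, &lanecomm);
        MPI_Comm_rank(lanecomm, &noderank); /* this node's position in the lane */

        int blockcount = count / lanes;     /* assumed divisible */
        double *block = malloc((size_t)blockcount * sizeof(double));

        /* Step 1: on the root's node only, scatter the payload over the lanes. */
        if (noderank == 0)
            MPI_Scatter(buffer, blockcount, MPI_DOUBLE,
                        block, blockcount, MPI_DOUBLE, 0, nodecomm);

        /* Step 2: broadcast each block along its lane; all lanes run concurrently. */
        MPI_Bcast(block, blockcount, MPI_DOUBLE, 0, lanecomm);

        /* Step 3: reassemble the full buffer within every node. */
        MPI_Allgather(block, blockcount, MPI_DOUBLE,
                      buffer, blockcount, MPI_DOUBLE, nodecomm);

        free(block);
        MPI_Comm_free(&nodecomm);
        MPI_Comm_free(&lanecomm);
    }

For brevity the communicators are created and freed inside the call; a real implementation would set them up once and reuse them. The point of such a mock-up, in the spirit of self-consistent performance guidelines, is that a native MPI_Bcast should never be slower than this composition of the library's own collectives; the measurements reported in the abstract indicate that this often fails to hold.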

Created from the Publication Database of the Vienna University of Technology.