Doctoral Theses (authored and supervised):
"Variations on Task Scheduling for Shared Memory Systems";
Supervisor, Reviewers: J. Träff, K. Agrawal, M. Papatriantafilou;
Institut für Informationssysteme, Research Group Parallel Computing,
oral examination: 2014-06-26.
This thesis provides an in-depth discussion of task scheduling for shared memory systems. The topic is approached in a vertical manner, starting with high-level programming model aspects and moving down to low-level implementation details. On the programming model level, existing task-parallel programming models are discussed, as well as programming patterns that can be used on top of task-parallel models. This is supplemented by extensions to these models and patterns developed in the context of this work. One such extension is strategy scheduling, which allows for tighter interaction between a task-parallel program and the scheduling system. Strategy scheduling is intended to bridge the gap between specialized scheduling systems, optimized for specific applications, and generic task schedulers.
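To make the idea of strategy scheduling concrete, the following is a hypothetical sketch: an application supplies a strategy that maps task data to a priority, which a simple scheduler then uses to order execution. All names (`StrategyScheduler`, `spawn`) are illustrative assumptions for this sketch, not Pheet's actual API.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Illustrative sketch only: a user-supplied strategy computes a priority
// for each task, letting the application influence scheduling decisions
// without replacing the scheduler itself.
struct Task {
    int priority;                  // assigned by the strategy
    std::function<void()> run;
};

struct ByPriority {
    bool operator()(const Task& a, const Task& b) const {
        return a.priority < b.priority;  // higher priority runs first
    }
};

class StrategyScheduler {
    std::priority_queue<Task, std::vector<Task>, ByPriority> ready_;
public:
    // The strategy is any callable mapping task data to a priority.
    template <typename Strategy>
    void spawn(int data, Strategy strategy, std::function<void()> body) {
        ready_.push(Task{strategy(data), std::move(body)});
    }
    void run_all() {
        while (!ready_.empty()) {
            Task t = ready_.top();
            ready_.pop();
            t.run();
        }
    }
};
```

A generic scheduler would ignore the strategy and pick tasks in an arbitrary order; the strategy hook is what lets an application-specific policy steer a generic runtime.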
Theoretically and practically efficient run-time systems are required to support task scheduling. While work-stealing has been proven to be an efficient approach for general task scheduling, more complex techniques are required to support the presented extensions to the task-parallel model. The main goal here is to provide good load balancing while at the same time reducing communication, in order to increase the scalability of applications. Another concern in task scheduling is memory usage. For a large class of task-parallel applications, efficient greedy schedules exist that use no more space per processor than a space-efficient execution on a single processor.
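The core of work-stealing is a per-worker deque: the owner pushes and pops tasks at the bottom (LIFO, preserving locality), while idle workers steal from the top (taking the oldest, typically largest, tasks). A deliberately simplified, single-threaded sketch of that data structure:

```cpp
#include <cassert>
#include <deque>
#include <optional>
#include <utility>

// Single-threaded sketch of a work-stealing deque. A real implementation
// (e.g. a Chase-Lev deque) needs atomic operations to let the owner and
// thieves operate concurrently; those are omitted here for clarity.
template <typename T>
class WorkStealingDeque {
    std::deque<T> d_;
public:
    void push_bottom(T t) { d_.push_back(std::move(t)); }

    std::optional<T> pop_bottom() {      // owner's fast path: newest task
        if (d_.empty()) return std::nullopt;
        T t = std::move(d_.back());
        d_.pop_back();
        return t;
    }

    std::optional<T> steal_top() {       // thief's path: oldest task
        if (d_.empty()) return std::nullopt;
        T t = std::move(d_.front());
        d_.pop_front();
        return t;
    }
};
```

Because thieves only touch the top while the owner works at the bottom, contention (and thus communication) between workers stays low, which is the property the load-balancing goal above relies on.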
Both the run-time system and the programming patterns require supporting concurrent data structures. Such data structures have been developed in the context of this work and are discussed in detail. To enable good scalability, the concurrent data structures in this work all provide strong progress guarantees. Most are lock-free, which guarantees that at least one participant makes progress in a finite number of steps. Some data structures presented in this work are wait-free, guaranteeing that every participant makes progress in a bounded number of steps.
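The flavor of a lock-free progress guarantee can be illustrated with the classic Treiber stack (shown here only as a textbook example, not one of the data structures developed in the thesis): a compare-and-swap that fails does so only because another thread's operation succeeded, so the system as a whole always makes progress.

```cpp
#include <atomic>
#include <cassert>
#include <optional>
#include <utility>

// Classic lock-free Treiber stack, for illustration only. A failed
// compare_exchange means another thread changed the head, i.e. that
// thread made progress; hence at least one participant always advances.
// Memory reclamation is omitted for brevity (the delete in pop() is
// only safe without concurrent readers).
template <typename T>
class TreiberStack {
    struct Node { T value; Node* next; };
    std::atomic<Node*> head_{nullptr};
public:
    void push(T v) {
        Node* n = new Node{std::move(v), head_.load()};
        // Retry until the new node is installed as the head.
        while (!head_.compare_exchange_weak(n->next, n)) {}
    }
    std::optional<T> pop() {
        Node* n = head_.load();
        // Retry until the current head is swung to its successor.
        while (n && !head_.compare_exchange_weak(n, n->next)) {}
        if (!n) return std::nullopt;
        T v = std::move(n->value);
        delete n;
        return v;
    }
};
```

A wait-free structure strengthens this further: every operation completes in a bounded number of its own steps, regardless of what other threads do, which typically requires helping mechanisms beyond this simple retry loop.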
The techniques presented in this work have been used to create a task scheduling framework called Pheet. Pheet was originally designed as a vehicle for evaluation of new scheduling techniques, and therefore has a highly flexible plug-in architecture based on C++ template
meta-programming. This makes it possible to replace any single component of the task scheduling system while keeping the rest of a configuration the same, enabling direct (performance) comparisons between different implementations of a specific component. Pheet is accompanied by the Pheet benchmark suite, which contains a variety of task-parallel microbenchmarks and is used to evaluate the performance of Pheet components. While Pheet was originally developed as a tool for research and teaching on task-parallel programming, it has developed into a fully fledged scheduling framework, which has been released as open source software.
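The template-based plug-in idea can be sketched as follows: the scheduler type is parameterized by its components, so one policy can be swapped at compile time while everything else in the configuration stays fixed. The names below (`FifoPolicy`, `StealPolicy`, `Scheduler`) are hypothetical stand-ins, not Pheet's actual component interfaces.

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in policies. In a real framework each policy would
// implement a full component interface (e.g. a task storage), not just
// a name; the compile-time swapping mechanism is the point here.
struct FifoPolicy  { static const char* name() { return "fifo"; } };
struct StealPolicy { static const char* name() { return "steal"; } };

// The scheduler is parameterized by its component; swapping the template
// argument exchanges the implementation with zero run-time dispatch cost.
template <typename Policy>
struct Scheduler {
    std::string describe() const {
        return std::string("scheduler<") + Policy::name() + ">";
    }
};
```

Because components are selected at compile time, the comparison between two implementations of the same component measures only the component itself, with no virtual-call overhead blurring the results.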
Created from the Publication Database of the Vienna University of Technology.