Diploma and Master's Theses (own and supervised):

J. Gruber:
"KLSM: A Relaxed Lock-Free Priority Queue";
Supervisor(s): J. Träff; Institute of Information Systems, Parallel Computing Group, 2015; final examination: 29.02.2016.



Abstract (English):
Priority queues are abstract data structures which store a set of key/value pairs and allow efficient access to the item with the minimal (maximal) key. Such queues are an important element in various areas of computer science, such as algorithmics (e.g. Dijkstra's shortest path algorithm) and operating systems (e.g. priority schedulers). The recent trend towards multiprocessor computing requires new implementations of basic data structures which can be used concurrently and scale well to large numbers of threads. Lock-free structures promise such scalability by avoiding the use of blocking synchronization primitives.

Concurrent priority queues have been researched extensively over the past decades. In recent years, most work in the field has focused on SkipList-based structures, mainly because they exhibit very high disjoint-access parallelism, i.e., modifications of different elements access disjoint parts of the data structure. Contention between threads and traffic through the cache-coherency protocol can therefore be reduced. So far, however, scalability improvements have been very limited. On the one hand, strict priority queues have an inherent sequential bottleneck, since each delete_min operation attempts to access a single minimal (maximal) element. On the other hand, SkipLists have less than optimal cache locality, since each node is usually allocated dynamically, which results in fairly low throughput for SkipList-based designs.

Relaxed data structures are a new and promising approach in which quality guarantees are weakened in return for improved scalability. The k-LSM is a concurrent, lock-free, and relaxed priority queue design which aims to improve scalability by 1) using arrays as backing data structures and the standard merge algorithm as its central operation (for high cache locality), and 2) relaxing semantic guarantees to allow the delete_min operation to return one of the kP minimal (maximal) elements, where P is the number of threads and k is a configuration parameter.

In the course of this thesis, we have implemented an improved standalone version of the k-LSM priority queue and explain its design and implementation in detail. We finally evaluate the k-LSM against other state-of-the-art concurrent priority queues, including the SprayList, the Linden queue, and Rihani's Multiqueues. In some benchmarking scenarios, including the popular generic throughput benchmark, the k-LSM priority queue shows exceptional scalability, outperforming all other queues by up to a factor of ten; in others, however, its throughput drops dramatically. Among the other designs, Multiqueues are the clear winner, performing consistently across all benchmarks.
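To make the relaxed guarantee concrete, the following is a minimal single-threaded C++ sketch of the delete_min contract only: delete_min may return any one of the k*P smallest keys instead of the exact minimum. The class name relaxed_pq_sketch, its members, and the sorted-vector buffer are illustrative assumptions; the actual k-LSM described in the thesis is a lock-free, merge-based log-structured design, which this sketch does not attempt to reproduce.

    // Sketch of the relaxed delete_min semantics (not the k-LSM implementation).
    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <random>
    #include <vector>

    class relaxed_pq_sketch {
    public:
        relaxed_pq_sketch(std::size_t k, std::size_t p) : relaxation_(k * p) {}

        void insert(int key) {
            // Keep the buffer sorted so the k*P smallest keys sit at the front.
            keys_.insert(std::upper_bound(keys_.begin(), keys_.end(), key), key);
        }

        bool delete_min(int &key) {
            if (keys_.empty()) return false;
            // Relaxed contract: return any one of the k*P smallest keys.
            std::size_t window = std::min(relaxation_, keys_.size());
            std::uniform_int_distribution<std::size_t> pick(0, window - 1);
            std::size_t idx = pick(rng_);
            key = keys_[idx];
            keys_.erase(keys_.begin() + idx);
            return true;
        }

    private:
        std::size_t relaxation_;   // k * P: size of the allowed result window
        std::vector<int> keys_;    // sorted ascending
        std::mt19937 rng_{42};
    };

    int main() {
        relaxed_pq_sketch pq(/*k=*/2, /*p=*/4);   // window of the 8 smallest keys
        for (int i = 100; i > 0; --i) pq.insert(i);
        int key;
        while (pq.delete_min(key))
            std::cout << key << ' ';   // roughly ascending, within the window
        std::cout << '\n';
    }

The point of the sketch is only the weakened contract: each returned key is guaranteed to lie among the k*P smallest currently stored, which is what allows a concurrent implementation to avoid the sequential bottleneck of a strict delete_min.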

Generated from the publication database of the Technische Universität Wien.