Technical Report PHD-2011-07

TR#: PHD-2011-07
Class: PHD
Title: Concurrent Data Structures: Methodologies and Inherent Limitations
Authors: Eshcar Hillel
Supervisors: Hagit Attiya
PDF: PHD-2011-07.pdf
Abstract: No matter how fast processors have become in recent decades, applications have found new ways to consume the extra speed. While software demands keep growing, the traditional approach of ever-faster processors has come to an end, forcing major processor manufacturers to turn to multi-threading and multi-core architectures, in what is called the \emph{concurrency revolution}. This means putting several cores on one chip, each running several threads in parallel. Although driven by hardware changes, this shift calls for a software revolution: a fundamental turn toward concurrency in applications is needed to fully exploit the throughput gains of multi-core processors.

At the heart of many concurrent applications lie \emph{concurrent data structures}, the subject of this thesis. Concurrent data structures coordinate access to shared resources; implementing them is hard, and making concurrent programming easier for the average programmer is a great challenge. The main goal of this thesis is to provide programmers with tools that enable them to write, understand, and maintain concurrent data structures, and concurrent applications in general, more easily.

We provide an algorithm for a multi-word synchronization operation, which facilitates the design of concurrent data structures by allowing several data items to be accessed in one atomic operation; the additional amount of information stored in each data item is constant. In addition, we present a new approach in which implementations are aware of the semantics of the data structure yet follow a general methodology. In this \emph{built-in coloring} scheme, colors are integrated into items, and the implementation maintains a legal coloring while applying operations to data items. We demonstrate this approach with lock-free algorithms for list-based data structures such as the double-ended queue, priority queue, and doubly-linked list.
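As a concrete illustration only, the following minimal Java sketch shows the programmer-facing shape of such a multi-word operation; it is not the thesis algorithm, and the names Item, MultiWordOp, and tryApply are hypothetical. The single extra reference per item stands for the constant amount of per-item bookkeeping mentioned above.

    import java.util.List;
    import java.util.concurrent.atomic.AtomicReference;

    // Hypothetical sketch: each item stores only one extra reference, which
    // holds either the item's value or a descriptor of an in-progress
    // operation (a standard lock-free pattern).
    final class Item<T> {
        final AtomicReference<Object> slot;
        Item(T initialValue) { this.slot = new AtomicReference<>(initialValue); }
    }

    interface MultiWordOp {
        // Atomically replace the value of every item whose current value
        // matches the corresponding expected value; operations on disjoint,
        // non-adjacent items should be able to proceed concurrently.
        <T> boolean tryApply(List<Item<T>> items, List<T> expected, List<T> updates);
    }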

These algorithms are the first to provably decrease interference among operations, thereby increasing the throughput of the execution. In particular, our list-based data structures guarantee that only operations on adjacent items in the list interfere with each other; in our implementations of multi-word synchronization, operations are effectively isolated from operations that are not nearby, hence distant operations may proceed concurrently. We suggest measures that make it easier to evaluate the interference diameter of the algorithms, and we prove their correctness.

\emph{Transactional memory} (TM), in which concurrent processes synchronize via in-memory transactions, has been proposed as an alternative synchronization mechanism. It promises to alleviate many of the challenges of concurrent programming. In its simplest form, the programmer defines a transaction by enclosing a set of statements in an atomic block; the underlying runtime system is responsible for executing the transaction, while handling any contention that arises due to concurrency.
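For illustration, here is a minimal Java sketch of the programmer's view of an atomic block. The class TM is a hypothetical stand-in for a TM runtime; for simplicity it serializes transactions with a single global lock, whereas a real TM would execute them optimistically and retry on conflict.

    // Hypothetical TM runtime: atomic(...) runs the given block as one
    // indivisible step. This trivial version uses a global lock, so it is
    // atomic but offers none of the concurrency a real TM provides.
    final class TM {
        private static final Object GLOBAL_LOCK = new Object();

        static void atomic(Runnable transaction) {
            synchronized (GLOBAL_LOCK) {
                transaction.run();
            }
        }
    }

    class TransferExample {
        static final int[] accounts = new int[10];   // shared data

        static void transfer(int from, int to, int amount) {
            TM.atomic(() -> {
                accounts[from] -= amount;   // the whole block takes effect
                accounts[to]   += amount;   // as a single atomic step
            });
        }
    }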

Transactional memory raises great hopes for mastering the complexity of concurrent programming. To guide algorithm designers in their attempt to find better and more efficient implementations, and to demonstrate which directions are futile, we explore the boundaries and tradeoffs of TM.

We show that there is an inherent tradeoff: no TM implementation can avoid interference on disjoint data and have read-only transactions that always complete successfully while never writing to the memory. In fact, we prove that read-only transactions in such implementations, reading $t$ items, must write to at least $t-1$ memory locations.

In practice, the atomic blocks that constitute transactions often contain statements with irreversible effects that cannot be executed within the context of a transaction. Therefore, TM must allow the same items to be accessed from inside and outside a transaction; this is crucial both for interoperability with legacy code and for improving the performance of the TM. Supporting \emph{privatization} allows the programmer to ``isolate'' some items, making them private to a process; the process can thereafter access them nontransactionally, without interference from other processes. We study the theoretical complexity of privatization and show an inherent cost, linear in the number of privatized items. The assumptions needed to prove the bounds indicate that limiting the parallelism of the TM or tracking the items of other transactions is the price to pay for efficient privatization.
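The following minimal sketch (reusing the hypothetical TM class from the previous example; the names below are likewise illustrative) shows the privatization pattern described above: a node is unlinked from a shared list inside a transaction, and once the transaction commits the calling thread may access it nontransactionally.

    class PrivatizationExample {
        static final class Node {
            int value;
            Node next;
        }

        static Node head;                   // shared list managed by the TM

        // Unlink the first node inside a transaction; after the commit the
        // node is private to the caller.
        static Node privatizeFirst() {
            final Node[] removed = new Node[1];
            TM.atomic(() -> {
                removed[0] = head;
                if (head != null) {
                    head = head.next;
                }
            });
            return removed[0];
        }

        static void useNontransactionally(Node n) {
            if (n != null) {
                n.value += 1;               // no transactional instrumentation needed
            }
        }
    }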

Copyright: The above paper is copyrighted by the Technion, the author(s), or others. Please contact the author(s) for more information.

Remark: Any link to this technical report should be to this page (http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-info.cgi/2011/PHD/PHD-2011-07), rather than to the URL of the PDF or PS files directly. The latter URLs may change without notice.


Computer Science Department, Technion