Title: BQ: A Lock-Free Queue with Batching
Currently accessible only within the Technion network.
Abstract: Concurrent data structures provide fundamental building blocks for concurrent programming. Standard concurrent data structures may be extended by allowing a sequence of operations to be submitted as a batch for later execution. Such a sequence can then be executed more efficiently than the standard execution of one operation at a time. We develop a novel algorithmic extension to the prevalent FIFO queue data structure that exploits such batching scenarios. Our extension allows a sequence of enqueue and dequeue operations to be applied to the queue all at once, as a batch. We present a novel combinatorial observation on the effect of a mixed sequence of enqueues and dequeues on the queue. Relying on it, a thread that initiates a batch operation performs more of the operation's processing locally, both before accessing the shared queue to apply its batch and afterwards. This results in less processing in the shared queue. Moreover, there are fewer accesses to the shared queue, since several operations are applied as a unified batch. The shortened processing in the shared queue, together with the reduced number of accesses to it, yields improved scalability. An implementation in C++ on a multicore machine demonstrates a significant performance improvement of up to 16x (depending on the batch lengths and the number of threads) compared to previous queue implementations.
Copyright: The above paper is copyrighted by the Technion, the author(s), or others. Please contact the author(s) for more information.
Remark: Any link to this technical report should be to this page (http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-info.cgi/2019/MSC/MSC-2019-28), rather than to the URL of the PDF files directly. The latter URLs may change without notice.