Processing Real-time Data Streams on GPU-Based Systems

Speaker: Uri Verner, Ph.D. Thesis Seminar
Date: Wednesday, 3.6.2015, 11:30
Place: Taub 337 (CeClub)
Advisors: Prof. Assaf Schuster and Prof. Avi Mendelson

Real-time stream processing of Big Data is in increasing demand in modern data centers. There, continuous torrents of data from streaming sources such as social networks, video streams, and financial markets are processed and analyzed to produce valuable insights, and in some cases those insights have an expiration date. For example, in a silicon-wafer production inspection system, dozens of high-resolution cameras scan the wafer and generate high-rate image streams, in which defects a pixel in size, or even smaller, are detected using image-processing algorithms. The inspection is a computationally intensive task that must adhere to strict processing-time limits in order to keep up with the production line. High-end computing platforms, packed with CPUs and compute accelerators, are used to meet these heavy processing demands, but optimizing such systems is very challenging because they are heterogeneous in both computation and communication.

In this talk, I will present my Ph.D. research on the problem of processing multiple data streams with hard processing-latency bounds on multi-GPU compute nodes. The talk will describe the challenges of work distribution and communication scheduling in such a system, and present new methods that achieve higher resource utilization while guaranteeing hard real-time compliance. Generalizing these problems leads to an important new class of scheduling problems in which processors affect each other's speed. To demonstrate its usefulness, a problem from this class is used to develop an efficient new scheduler that minimizes the makespan of compute jobs on Intel CPUs.
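To give a feel for the scheduling class mentioned above, here is a minimal toy sketch in Python. It is not the scheduler from the thesis: the interference model (each busy processor slows its peers, as if they shared memory bandwidth), the 10%-per-peer slowdown, and the greedy longest-processing-time heuristic are all illustrative assumptions chosen to show how processor speeds can depend on co-running work.

```python
def speed(busy_procs):
    # Assumed interference model (illustrative, not from the thesis):
    # processors share a resource such as memory bandwidth, so each
    # runs 10% slower per extra busy peer, floored at half speed.
    return max(0.5, 1.0 - 0.1 * (busy_procs - 1))

def makespan(loads):
    # Crude static estimate: every non-empty processor is treated as
    # busy for the whole schedule, so one shared speed applies.
    busy = sum(1 for load in loads if load > 0)
    return max(loads) / speed(busy) if busy else 0.0

def lpt_schedule(work, n_procs):
    # Longest-processing-time-first greedy: place each job on the
    # currently least-loaded processor.
    loads = [0.0] * n_procs
    assignment = [[] for _ in range(n_procs)]
    for job in sorted(range(len(work)), key=lambda j: -work[j]):
        p = min(range(n_procs), key=lambda i: loads[i])
        loads[p] += work[job]
        assignment[p].append(job)
    return assignment, loads
```

Note that because `speed` depends on how many processors are busy, spreading work across more processors can sometimes lengthen the makespan, which is exactly what makes this problem class harder than classical makespan minimization.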
