Events

Colloquia and Seminars

To join the email distribution list of the CS colloquia, please visit the list subscription page.

The Computer Science events calendar is available in ICS format for Google Calendar and for Outlook.
Academic Calendar at Technion site.

Upcoming Colloquia & Seminars

Perfectly Correct Additive Randomized Encodings
Alon Yerushalmi (M.Sc. Thesis Seminar)
Tuesday, 02.09.2025, 15:00

Taub 8 & Zoom

Advisor: Prof. Yuval Ishai & Prof. Eyal Kushilevitz

An Additive Randomized Encoding (ARE) for a distributed function f(x1,...,xn) reduces the task of securely computing f to computing the sum of locally encoded inputs.
Previous works construct AREs for simple functions such as OR with perfect security but imperfect correctness (namely, the decoder outputs an incorrect value with small probability), as well as AREs for general functions with both imperfect correctness and imperfect security.

This leaves open the existence of AREs for nontrivial functions with perfect correctness, i.e., where the sum of encodings always decodes to the function's output. We make progress on this question, obtaining negative results.
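To make the ARE notion concrete, here is a toy sketch (not from the talk) of the one case where perfect correctness and perfect security are easy: when f itself is a sum. Each party masks its input with a random share of zero, so the sum of encodings equals the sum of inputs while each individual encoding is uniformly random. The modulus P is an illustrative choice.

```python
import random

P = 2**61 - 1  # illustrative prime modulus


def encode(xs):
    """Trivial ARE for f(x1,...,xn) = x1 + ... + xn (mod P):
    each party adds a random share of zero to its input.
    Perfectly correct (masks cancel) and perfectly secure
    (each encoding alone is uniform)."""
    n = len(xs)
    masks = [random.randrange(P) for _ in range(n - 1)]
    masks.append((-sum(masks)) % P)  # masks sum to 0 mod P
    return [(x + r) % P for x, r in zip(xs, masks)]


def decode(total):
    """Decode the sum of encodings back to f's output."""
    return total % P


xs = [3, 14, 15]
assert decode(sum(encode(xs))) == sum(xs) % P
```

The open question the talk addresses is whether this kind of perfect correctness can extend beyond such linear (trivial) functions.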

Coding for Ordered Composite DNA Sequences
Besart Dollma (M.Sc. Thesis Seminar)
Sunday, 14.09.2025, 11:00

Taub 601 & Zoom

To increase the information capacity of DNA storage, composite DNA letters were introduced. We propose a novel channel model for composite DNA in which composite sequences are decomposed into ordered non-composite sequences. The model is designed to handle any alphabet size and composite resolution parameter. We study the problem of reconstructing composite sequences of arbitrary resolution over the binary alphabet under substitution errors. We define two families of error-correcting codes and provide lower and upper bounds on their cardinality. In addition, we analyze the case in which a single deletion error occurs in the channel and present a systematic code construction for this setting. Finally, we briefly discuss the channel's capacity, which remains an open problem.
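As a rough illustration of the decomposition idea (a hypothetical reading of the model, not the talk's exact definition): over the binary alphabet with resolution k, each composite symbol can be viewed as a count in {0,...,k}, and the sequence decomposes into k ordered non-composite binary sequences by a threshold rule.

```python
def decompose(composite, k):
    """Decompose a binary composite sequence (each symbol a count in
    0..k) into k ordered non-composite binary sequences: strand i
    carries a 1 at position j iff composite[j] >= i + 1.
    Hypothetical sketch of the channel model."""
    return [[1 if c >= i + 1 else 0 for c in composite] for i in range(k)]


def recompose(strands):
    """Recover the composite sequence by summing column-wise."""
    return [sum(col) for col in zip(*strands)]


comp = [0, 2, 1, 3]
strands = decompose(comp, k=3)
assert recompose(strands) == comp
```

Under this reading, a substitution error in one non-composite strand perturbs a single count by one, which is the kind of error the proposed codes would need to correct.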

Mini-Batch Robustness Verification of Deep Neural Networks
Saar Tzour-Shaday (M.Sc. Thesis Seminar)
Wednesday, 17.09.2025, 11:30

Mayer 1061 & Zoom

Advisor: Dr. Dana Drachsler-Cohen

Neural network image classifiers are ubiquitous in many safety-critical applications. However, they are susceptible to adversarial attacks. To understand their robustness to attacks, many local robustness verifiers have been proposed to analyze 𝜖-balls of inputs. Yet, existing verifiers incur long analysis times or lose too much precision, making them less effective for a large set of inputs. In this work, we propose a new approach to local robustness: group local robustness verification. The key idea is to leverage the similarity of the network computations of certain 𝜖-balls to reduce the overall analysis time. We propose BaVerLy, a sound and complete verifier that boosts the local robustness verification of a set of 𝜖-balls by dynamically constructing and verifying mini-batches. BaVerLy adaptively identifies successful mini-batch sizes, accordingly constructs mini-batches of 𝜖-balls that have similar network computations, and verifies them jointly. If a mini-batch is verified, all of its 𝜖-balls are proven robust. Otherwise, one 𝜖-ball is suspected of not being robust, guiding the refinement. In the latter case, BaVerLy leverages the analysis results to expedite the analysis of that 𝜖-ball as well as the other 𝜖-balls in the batch. We evaluate BaVerLy on fully connected and convolutional networks for MNIST and CIFAR-10. Results show that BaVerLy speeds up the common one-by-one verification by 2.3x on average and up to 4.1x, in which case it reduces the total analysis time from 24 hours to 6 hours.
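To give a flavor of the mini-batch idea (a simplified sketch, not the BaVerLy algorithm itself): a sound but incomplete interval-propagation verifier can check the bounding box of several 𝜖-balls in one pass, falling back to ball-by-ball analysis only when the joint check fails. The tiny two-layer ReLU network and all parameter names below are illustrative assumptions.

```python
import numpy as np


def interval_affine(lo, hi, W, b):
    """Propagate an interval box through an affine layer (sound bounds)."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b


def verify_box(lo, hi, W1, b1, W2, b2, label):
    """Soundly check that every input in [lo, hi] is classified as
    `label` by a small ReLU network, via interval propagation."""
    l, h = interval_affine(lo, hi, W1, b1)
    l, h = np.maximum(l, 0), np.maximum(h, 0)  # ReLU
    l, h = interval_affine(l, h, W2, b2)
    # Robust if the label's lower bound beats every other logit's upper bound.
    return all(l[label] > h[j] for j in range(len(l)) if j != label)


def verify_batch(centers, eps, params, label):
    """Mini-batch idea (simplified): verify the bounding box of several
    eps-balls at once; on failure, fall back to ball-by-ball checks."""
    lo = centers.min(axis=0) - eps
    hi = centers.max(axis=0) + eps
    if verify_box(lo, hi, *params, label):
        return [True] * len(centers)  # whole batch proven robust in one pass
    return [verify_box(c - eps, c + eps, *params, label) for c in centers]
```

Note this sketch is only sound, whereas BaVerLy is sound and complete; its refinement step, which reuses the batch analysis when one 𝜖-ball fails, is the part this toy version omits.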