To join the Computer Science Colloquium mailing list, please visit the list's subscription page.
Computer Science events calendar in ICS format over HTTP, for Google Calendar and for Outlook.
Academic Calendar on the Technion website.
Auditorium 012, Floor 0
Modern machine learning systems operate in regimes that challenge classical learning-theoretic assumptions. Models are highly overparameterized, trained with simple optimization algorithms, and rely critically on how data is collected and curated. Understanding the limits of learning in these settings requires revisiting both the computational and statistical foundations of learning theory.
A central question in learning theory asks which functions are tractably learnable. Classical complexity results suggest strong computational barriers, motivating a focus on “learnable subclasses” defined by properties of the target function. In this talk, I argue for a different perspective by emphasizing the role of the training distribution. Fixing the learning algorithm (e.g. stochastic gradient descent applied to neural networks), I show that allowing a “positive distribution shift”, where training data is drawn from a carefully chosen auxiliary distribution while evaluation remains on the target distribution, can render several classically hard learning problems tractable.
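The training-versus-evaluation setup described above can be illustrated with a toy sketch (my own illustration, not code from the talk): a model is trained with plain SGD on samples drawn from an auxiliary distribution, while accuracy is measured on a different target distribution. Here the target function, the distributions, and all parameters are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_true = rng.standard_normal(d)
w_true /= np.linalg.norm(w_true)

def labels(X):
    # Ground-truth labels come from a fixed linear target function.
    return np.sign(X @ w_true)

# Training data is drawn from an auxiliary distribution (standard Gaussian)...
X_train = rng.standard_normal((2000, d))
y_train = labels(X_train)

# ...while evaluation uses the target distribution (a shifted Gaussian).
X_eval = rng.standard_normal((1000, d)) + 0.5
y_eval = labels(X_eval)

# Plain SGD on the logistic loss.
w = np.zeros(d)
lr = 0.1
for epoch in range(5):
    for i in rng.permutation(len(X_train)):
        margin = y_train[i] * (X_train[i] @ w)
        grad = -y_train[i] * X_train[i] / (1.0 + np.exp(margin))
        w -= lr * grad

accuracy = np.mean(np.sign(X_eval @ w) == y_eval)
print(f"target-distribution accuracy: {accuracy:.3f}")
```

The point of the sketch is only the asymmetry of the setup: the optimizer never sees the target distribution, yet performance is judged there.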
Beyond computational considerations, I then study statistical limits of learning in modern, overparameterized models using stochastic convex optimization as a theoretical framework. While classical theory often suggests that successful generalization requires avoiding memorization, I show that memorization is in fact unavoidable: achieving high accuracy requires retaining nontrivial information about the training data and can even enable the identification of individual training examples. These results reveal fundamental privacy–accuracy tradeoffs inherent to accurate learning.
Bio: Idan Attias is a postdoctoral researcher at the Institute for Data, Econometrics, Algorithms, and Learning (IDEAL), working with Lev Reyzin (University of Illinois Chicago), Nati Srebro, and Avrim Blum (Toyota Technological Institute at Chicago). He obtained his Ph.D. in Computer Science under the supervision of Aryeh Kontorovich (Ben-Gurion University) and Yishay Mansour (Tel Aviv University and Google Research).
His research focuses on the foundations of machine learning theory and data-driven sequential decision-making. His work has been recognized with a Best Paper Award at ICML ’24 and selection as a Rising Star in Data Science (University of California San Diego ’24). His postdoctoral research is supported by an NSF fellowship, and his Ph.D. studies were fully supported by the Israeli Council for Higher Education Scholarship for Outstanding PhD Students in Data Science.
Auditorium 012, Floor 0
Large Language Models (LLMs) have transformed what machines can do and how systems are designed to serve them. These models are demanding in both computation and memory, revealing the limits of traditional optimization methods that once sufficed for conventional systems. A central challenge in building LLM systems is improving system metrics while ensuring response quality.
This talk presents approaches for reducing latency in LLM systems to support interactive applications, from scheduling algorithm design to deployment. It introduces scheduling frameworks that use lightweight predictions of request behavior to make informed decisions about prioritization and memory management across two core settings: standalone LLM inference and API-augmented LLMs that interact with external tools. Across both settings, prediction-guided scheduling delivers substantial latency reductions while remaining practical for deployment.
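As a toy illustration of prediction-guided scheduling (my own sketch, not the frameworks from the talk), consider serving requests in order of a noisy predicted length instead of arrival order. Even imperfect predictions can cut average completion time, as in classic shortest-job-first scheduling; the lengths and predictions below are invented for the example.

```python
# Toy comparison of FCFS vs. shortest-predicted-job-first (SPJF).
# All requests arrive at time 0 and are served one at a time.
true_len = [10, 1, 5, 2]   # actual service times
pred_len = [9, 2, 6, 1]    # noisy predictions of those times

def avg_completion(order):
    # Average completion time when requests run in the given order.
    t, total = 0, 0
    for i in order:
        t += true_len[i]
        total += t
    return total / len(true_len)

fcfs = avg_completion(range(len(true_len)))
spjf = avg_completion(sorted(range(len(true_len)), key=lambda i: pred_len[i]))
print(f"FCFS avg completion: {fcfs}, SPJF avg completion: {spjf}")
# FCFS avg completion: 13.75, SPJF avg completion: 7.75
```

The predictions here even mis-rank two requests, yet the average completion time still drops sharply, which is the intuition behind using lightweight predictions for prioritization.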
Bio: Rana Shahout is a Postdoctoral Fellow at Harvard University, working with Michael Mitzenmacher and Minlan Yu. She received her Ph.D. in Computer Science from the Technion and previously worked as a Senior Software Engineer at Mellanox (now NVIDIA). Her research combines machine learning, systems, and algorithmic theory to design efficient and scalable AI systems. Rana is a recipient of the Eric and Wendy Schmidt Postdoctoral Award, the Zuckerman Postdoctoral Fellowship, the Weizmann Institute Women’s Postdoctoral Career Development Award, the VATAT Postdoctoral Fellowship, and first place in the ACC Feder Family Award for Best Student Work in Communications.
3D geometry powers design, perception, and simulation across various domains, including healthcare, transportation, manufacturing, and entertainment. Yet, traditional workflows for creating and editing 3D content are largely inaccessible, requiring expert knowledge and extensive manual effort. Recent advances in deep learning have enabled the generation of 3D assets through simple interfaces, such as natural language prompts. However, these AI models provide limited control, making it hard to refine the asset iteratively, edit it locally, or regulate the modification strength. As a result, the transformative potential of AI for 3D geometry remains underrealized.
My work aims to bridge this gap by pioneering learning-based techniques that combine intuitive user interaction with controllable manipulation and synthesis of 3D geometry. I will show how simple yet expressive interfaces, such as point clicks and text descriptions, can drive core geometry processing and generation tasks, including interactive segmentation, localized texturing, and adjustable surface deformation. These problems pose unique challenges due to the scarcity of high-quality and diverse 3D data. I will show how to address these challenges by designing controllable 3D representations and leveraging the vast prior knowledge encompassed in powerful pretrained AI models for images and language.
Looking ahead, I will outline my future research agenda, expanding the principles of intuitive and controllable AI to dynamic 3D content that interacts with its environment, and discuss how this direction can advance 3D technologies across different fields.
Bio: Itai Lang is a postdoctoral researcher at the University of Chicago, working with Professor Rana Hanocka. He received his PhD from Tel Aviv University, where he was advised by Professor Shai Avidan. His research focuses on artificial intelligence for geometry processing, spanning both generative and discriminative tasks. He is particularly interested in developing innovative solutions to make 3D content creation and manipulation intuitive and controllable.
Taub 401
The field of artificial intelligence (AI) is undergoing a paradigm shift, moving from neural networks trained for narrowly defined tasks (e.g., image classification and machine translation) to general-purpose models such as ChatGPT. These models are trained at unprecedented scales to perform a wide range of tasks, from providing travel recommendations to solving Olympiad-level math problems. As they are increasingly adopted in society, a central challenge is to ensure the alignment of general-purpose models with human preferences. In this talk, I will present a series of works that reveal fundamental pitfalls in existing alignment methods. In particular, I will show that they can: (1) suffer from a flat objective landscape that hinders optimization, and (2) fail to reliably increase the likelihood of generating preferred outputs, sometimes even causing the model to generate outputs with an opposite meaning. Beyond characterizing these pitfalls, our theory provides quantitative measures for identifying when they occur, suggests preventative guidelines, and has led to the development of new data selection and alignment algorithms, validated at large scale in real-world settings. Our contributions address both efficiency challenges and safety risks that may arise in the alignment process. I will conclude with an outlook on future directions, toward building a practical theory in the age of general-purpose AI.
Bio: Noam Razin is a Postdoctoral Fellow at Princeton Language and Intelligence, Princeton University. His research focuses on the fundamentals of artificial intelligence (AI). By combining mathematical analyses with systematic experimentation, he aims to develop theories that shed light on how modern AI works, identify potential failures, and yield principled methods for improving efficiency, reliability, and performance.
Noam earned his PhD in Computer Science at Tel Aviv University, where he was advised by Nadav Cohen. Prior to that, he obtained a BSc in Computer Science (summa cum laude) at The Hebrew University of Jerusalem under the Amirim honors program. For his research, Noam received several honors and awards, including the Zuckerman Postdoctoral Scholarship, the Israeli Council for Higher Education (VATAT) Postdoctoral Scholarship, the Apple Scholars in AI/ML PhD fellowship, the Tel Aviv University Center for AI and Data Science excellence fellowship, and the Deutsch Prize for PhD candidates.
Taub 401
We introduce a vision-guided robotic system for automated saffron flower harvesting. Using camera-based perception and robotic manipulation, the system detects and cuts whole flowers while preserving their stigmas and avoiding plant damage.
Taub 601
Understanding how large language models store, retain, and remove knowledge is critical for interpretability, reliability, and compliance with privacy regulations.
My work introduces a geometric perspective on memorization and unlearning by analyzing loss behavior over semantically similar inputs through the Input Loss Landscape.
I show that retained, forgotten, and unseen examples exhibit distinct patterns that reflect active learning, suppressed knowledge, and ignored information.
Building on this observation, I propose REMIND (Residual Memorization In Neighborhood Dynamics), a black-box framework for diagnosing residual memorization. I further introduce a new semantic neighbor generation method that enables controlled exploration of local loss geometry.
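The diagnostic idea can be caricatured in a few lines (a hypothetical sketch of mine, not the actual REMIND implementation): query a black-box loss at an input and at generated neighbors, and use the gap between the neighborhood loss and the point loss as a memorization signal. The stand-in loss surface below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a black-box model loss: roughly flat everywhere,
# with a sharp local dip at one "memorized" point x_mem.
x_mem = np.array([1.0, 2.0])

def loss(x):
    base = 1.0
    dip = 0.9 * np.exp(-np.sum((x - x_mem) ** 2) / 0.01)  # sharp local dip
    return base - dip

def memorization_gap(x, n_neighbors=32, radius=0.5):
    """Mean loss over nearby neighbors minus the loss at x itself."""
    neighbors = x + radius * rng.standard_normal((n_neighbors, x.size))
    return np.mean([loss(n) for n in neighbors]) - loss(x)

gap_mem = memorization_gap(x_mem)           # memorized point: large gap
gap_unseen = memorization_gap(np.zeros(2))  # unseen point: gap near zero
print(f"memorized gap: {gap_mem:.3f}, unseen gap: {gap_unseen:.3f}")
```

A memorized example sits in a sharp local dip relative to its neighborhood, so its gap is large, while an unseen example shows no such contrast; only loss queries are needed, which is what makes the approach black-box.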
These contributions provide interpretable insights into knowledge retention and forgetting, and offer practical tools for auditing, debugging, and enhancing transparency in large language models.