Events

Colloquia and Seminars

To join the Computer Science colloquium mailing list, please visit the list's subscription page.

Computer Science events calendar in ICS format for Google Calendar and for Outlook.
Academic Calendar at the Technion site.

Upcoming Colloquia and Seminars

Efficient LLM Systems: From Algorithm Design to Deployment
Rana Shahout
Tuesday, 13.01.2026, 10:30

Auditorium 012, Floor 0

Large Language Models (LLMs) have transformed what machines can do and how systems are designed to serve them. These models are both computationally and memory demanding, revealing the limits of traditional optimization methods that once sufficed for conventional systems. A central challenge in building LLM systems is improving system metrics while ensuring response quality.

This talk presents approaches for reducing latency in LLM systems to support interactive applications, from scheduling algorithm design to deployment. It introduces scheduling frameworks that use lightweight predictions of request behavior to make informed decisions about prioritization and memory management across two core settings: standalone LLM inference and API-augmented LLMs that interact with external tools. Across both settings, prediction-guided scheduling delivers substantial latency reductions while remaining practical for deployment.
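The abstract does not spell out a concrete scheduling rule, but the idea of prediction-guided prioritization can be sketched in a few lines. In the toy sketch below, `predict_len` is a hypothetical lightweight predictor of a request's output length (here stubbed by prompt length); serving requests shortest-predicted-first reduces average waiting time whenever the prediction correlates with the true output length.

```python
import heapq

def prediction_guided_order(requests, predict_len):
    """Order requests shortest-predicted-first.

    requests: list of (request_id, prompt) pairs.
    predict_len: a lightweight output-length predictor (hypothetical; any
    cheap proxy, e.g. a small classifier or the prompt length, could serve).
    """
    heap = [(predict_len(prompt), rid) for rid, prompt in requests]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# Toy usage: prompt length stands in for the predicted output length.
reqs = [("long", "x" * 40), ("short", "x" * 4), ("mid", "x" * 12)]
print(prediction_guided_order(reqs, len))  # ['short', 'mid', 'long']
```

This is only the prioritization half of the story; a deployed scheduler would also fold the same predictions into KV-cache memory management, as the talk's framing suggests.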

Bio: Rana Shahout is a Postdoctoral Fellow at Harvard University, working with Michael Mitzenmacher and Minlan Yu. She received her Ph.D. in Computer Science from the Technion and previously worked as a Senior Software Engineer at Mellanox (now NVIDIA). Her research combines machine learning, systems, and algorithmic theory to design efficient and scalable AI systems. Rana is a recipient of the Eric and Wendy Schmidt Postdoctoral Award, the Zuckerman Postdoctoral Fellowship, the Weizmann Institute Women’s Postdoctoral Career Development Award, the VATAT Postdoctoral Fellowship, and first place in the ACC Feder Family Award for Best Student Work in Communications.

Pixel Club: Intuitive and Controllable AI for 3D Geometry
Dr. Itai Lang (The University of Chicago)
Tuesday, 13.01.2026, 11:30

506, Zisapel Building & Zoom

3D geometry powers design, perception, and simulation across various domains, including healthcare, transportation, manufacturing, and entertainment. Yet, traditional workflows for creating and editing 3D content are largely inaccessible, requiring expert knowledge and extensive manual effort. Recent advances in deep learning have enabled the generation of 3D assets through simple interfaces, such as natural language prompts. However, these AI models provide limited control, making it hard to refine the asset iteratively, edit it locally, or regulate the modification strength. As a result, the transformative potential of AI for 3D geometry remains underrealized.

My work aims to bridge this gap by pioneering learning-based techniques that combine intuitive user interaction with controllable manipulation and synthesis of 3D geometry. I will show how simple yet expressive interfaces, such as point clicks and text descriptions, can drive core geometry processing and generation tasks, including interactive segmentation, localized texturing, and adjustable surface deformation. These problems pose unique challenges due to the scarcity of high-quality and diverse 3D data. I will show how to address these challenges by designing controllable 3D representations and leveraging the vast prior knowledge encompassed in powerful pretrained AI models for images and language.

Looking ahead, I will outline my future research agenda, expanding the principles of intuitive and controllable AI to dynamic 3D content that interacts with its environment, and discuss how this direction can advance 3D technologies across different fields.
Bio: Itai Lang is a postdoctoral researcher at the University of Chicago, working with Professor Rana Hanocka. He received his PhD from Tel Aviv University, where he was advised by Professor Shai Avidan. His research focuses on artificial intelligence for geometry processing, spanning both generative and discriminative tasks. He is particularly interested in developing innovative solutions to make 3D content creation and manipulation intuitive and controllable.

Fundamentals of Aligning General-Purpose AI
Noam Razin
Wednesday, 14.01.2026, 13:00

Taub 401

The field of artificial intelligence (AI) is undergoing a paradigm shift, moving from neural networks trained for narrowly defined tasks (e.g., image classification and machine translation) to general-purpose models such as ChatGPT. These models are trained at unprecedented scales to perform a wide range of tasks, from providing travel recommendations to solving Olympiad-level math problems. As they are increasingly adopted in society, a central challenge is to ensure the alignment of general-purpose models with human preferences. In this talk, I will present a series of works that reveal fundamental pitfalls in existing alignment methods. In particular, I will show that they can: (1) suffer from a flat objective landscape that hinders optimization, and (2) fail to reliably increase the likelihood of generating preferred outputs, sometimes even causing the model to generate outputs with an opposite meaning. Beyond characterizing these pitfalls, our theory provides quantitative measures for identifying when they occur, suggests preventative guidelines, and has led to the development of new data selection and alignment algorithms, validated at large scale in real-world settings. Our contributions address both efficiency challenges and safety risks that may arise in the alignment process. I will conclude with an outlook on future directions, toward building a practical theory in the age of general-purpose AI.
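The second pitfall above (preferred-output likelihood failing to increase) can be illustrated with a standard preference-alignment objective. The sketch below uses a DPO-style pairwise loss as an illustrative stand-in, not necessarily the method analyzed in the talk: because the loss depends only on the log-probability *margin*, a policy that sharply lowers the likelihood of both outputs is indistinguishable from one that raises the preferred output's likelihood.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO-style pairwise preference loss, -log(sigmoid(beta * margin)).

    The loss sees only the margin between the preferred (w) and
    dispreferred (l) log-probabilities, measured against a reference model.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return math.log(1.0 + math.exp(-margin))

# Two policies get an identical loss, although the second assigns the
# preferred output far LOWER likelihood than the first: the objective
# constrains only the margin, so preferred-output likelihood may drop.
ref = dict(ref_logp_w=-5.0, ref_logp_l=-5.0)
same1 = dpo_loss(logp_w=-4.0, logp_l=-6.0, **ref)
same2 = dpo_loss(logp_w=-9.0, logp_l=-11.0, **ref)
print(same1 == same2)  # True
```

The numbers here are toy values chosen for illustration; the talk's contribution is a quantitative theory of when such likelihood displacement actually occurs.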

Bio: Noam Razin is a Postdoctoral Fellow at Princeton Language and Intelligence, Princeton University. His research focuses on the fundamentals of artificial intelligence (AI). By combining mathematical analyses with systematic experimentation, he aims to develop theories that shed light on how modern AI works, identify potential failures, and yield principled methods for improving efficiency, reliability, and performance.

Noam earned his PhD in Computer Science at Tel Aviv University, where he was advised by Nadav Cohen. Prior to that, he obtained a BSc in Computer Science (summa cum laude) at The Hebrew University of Jerusalem under the Amirim honors program. For his research, Noam received several honors and awards, including the Zuckerman Postdoctoral Scholarship, the Israeli Council for Higher Education (VATAT) Postdoctoral Scholarship, the Apple Scholars in AI/ML PhD fellowship, the Tel Aviv University Center for AI and Data Science excellence fellowship, and the Deutsch Prize for PhD candidates.

A Linear-Complexity Graph Transformer with Adaptive Node Assignment
Tomer Borreda (M.Sc. seminar lecture)
Sunday, 18.01.2026, 11:00

Taub 401

Advisor: Dr. Or Litany

We present ReHub, a novel graph transformer architecture that achieves linear complexity through an efficient reassignment technique between nodes and virtual nodes. Graph transformers have become increasingly important in graph learning for their ability to utilize long-range node communication explicitly, addressing limitations such as oversmoothing and oversquashing found in message-passing graph networks. However, their dense attention mechanism scales quadratically with the number of nodes, limiting their applicability to large-scale graphs. ReHub draws inspiration from the airline industry's hub-and-spoke model, where flights are assigned to optimize operational efficiency.

In our approach, graph nodes (spokes) are dynamically reassigned to a fixed number of virtual nodes (hubs) at each model layer. Recent work, Neural Atoms, has demonstrated impressive and consistent improvements over GNN baselines by utilizing such virtual nodes; their findings suggest that the number of hubs strongly influences performance. However, increasing the number of hubs typically raises complexity, requiring a trade-off to maintain linear complexity.

Our key insight is that each node only needs to interact with a small subset of hubs to achieve linear complexity, even when the total number of hubs is large. To leverage all hubs without incurring additional computational costs, we propose a simple yet effective adaptive reassignment technique based on hub-hub similarity scores, eliminating the need for expensive node-hub computations.
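The abstract leaves the exact reassignment rule to the talk; the sketch below is one plausible reading, with names and the scoring rule as assumptions. Hub-hub cosine similarities are cheap to compute (the number of hubs H is small relative to the number of nodes N), and they shortlist a few candidate hubs per node, so full O(N*H) node-hub scoring is avoided.

```python
import numpy as np

def reassign_spokes(node_feats, hub_feats, assign, k=2):
    """Sketch of adaptive spoke-to-hub reassignment (illustrative, not the
    paper's exact rule).

    Rather than scoring every node against every hub (O(N*H)), we
    1. compute hub-hub cosine similarities once (O(H^2), cheap for H << N),
    2. score each node only against the k hubs most similar to its
       current hub, and move it to the best of that short list.
    """
    hn = hub_feats / np.linalg.norm(hub_feats, axis=1, keepdims=True)
    hub_sim = hn @ hn.T                                  # H x H, shared by all nodes
    new_assign = np.empty_like(assign)
    for i, h in enumerate(assign):
        candidates = np.argsort(-hub_sim[h])[:k]         # k hubs closest to hub h
        scores = hub_feats[candidates] @ node_feats[i]   # tiny candidate set only
        new_assign[i] = candidates[np.argmax(scores)]
    return new_assign

# Toy usage: two nodes, three hubs; each node stays on its best-matching hub.
nodes = np.array([[1.0, 0.0], [0.0, 1.0]])
hubs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
print(reassign_spokes(nodes, hubs, np.array([0, 1])))  # [0 1]
```

Each layer would run this reassignment before hub-spoke attention, keeping the per-layer cost linear in the number of nodes.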

Our experiments on long-range graph benchmarks indicate a consistent improvement in results over the base method, Neural Atoms, while maintaining linear complexity instead of O(n^{3/2}). Remarkably, our sparse model achieves performance on par with its non-sparse counterpart. Furthermore, ReHub outperforms competitive baselines and consistently ranks among the top performers across various benchmarks.

Automated Saffron Harvesting
Tom Agami (M.Sc. seminar lecture)
Tuesday, 20.01.2026, 11:30

Taub 401

Advisor: Prof. Alfred Bruckstein

We introduce a vision-guided robotic system for automated saffron flower harvesting. Using camera-based perception and robotic manipulation, the system detects and cuts whole flowers while preserving their stigmas and avoiding plant damage.

Memorization and Unlearning Phenomena in Language Models through the Lens of Input Loss Landscapes
Liran Cohen (M.Sc. seminar lecture)
Monday, 26.01.2026, 11:31

Taub 601

Advisor: Prof. Avi Mendelson

Understanding how large language models store, retain, and remove knowledge is critical for interpretability, reliability, and compliance with privacy regulations.
My work introduces a geometric perspective on memorization and unlearning by analyzing loss behavior over semantically similar inputs through the Input Loss Landscape.

I show that retained, forgotten, and unseen examples exhibit distinct patterns that reflect active learning, suppressed knowledge, and ignored information. 
Building on this observation, I propose REMIND (Residual Memorization In Neighborhood Dynamics), a black-box framework for diagnosing residual memorization. I further introduce a new semantic neighbor generation method that enables controlled exploration of local loss geometry.

These contributions provide interpretable insights into knowledge retention and forgetting, and offer practical tools for auditing, debugging, and enhancing transparency in large language models.
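As a concrete illustration of this kind of black-box probe (the abstract does not define REMIND's statistic, so the z-score and all names below are assumptions): compare an example's loss with the loss distribution over its semantic neighbors, and flag a sharp dip as residual memorization.

```python
import statistics

def residual_memorization_score(loss_fn, example, neighbors):
    """Illustrative input-loss-landscape probe (the z-score statistic and
    names here are assumptions, not REMIND's actual definition).

    Compares the model's loss on a target example with its losses on
    semantically similar neighbors. A target loss far below its
    neighborhood suggests the example is still (residually) memorized;
    one that blends in suggests forgotten or never-seen data.
    """
    target = loss_fn(example)
    neigh = [loss_fn(n) for n in neighbors]
    mu, sigma = statistics.mean(neigh), statistics.pstdev(neigh)
    return (mu - target) / (sigma + 1e-8)  # larger => sharper dip => memorized

# Toy usage with a stand-in loss table: the target sits in a sharp dip.
fake_loss = {"target": 0.1, "n1": 2.0, "n2": 2.2, "n3": 1.9}.__getitem__
print(residual_memorization_score(fake_loss, "target", ["n1", "n2", "n3"]) > 1.0)  # True
```

In practice, `loss_fn` would query the model's loss on each input string, and the neighbor set would come from a semantic neighbor generation method like the one the talk introduces.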