Events

Colloquia and Seminars

To join the Computer Science colloquium mailing list, please visit the list's subscription page.

Computer Science events calendar in HTTP ICS format, for Google Calendar and for Outlook.
Academic calendar on the Technion site.

Upcoming Colloquia and Seminars

A Code-Inspired Question Answering Model
Mousa Arraf (M.Sc. seminar lecture)
Sunday, 05.05.2024, 15:00
Zoom lecture: 91070061962
Advisor: Dr. Kira Radinsky
Methods in question-answering (QA) that transform texts detailing processes into an intermediate code representation, subsequently executed to generate a response to the presented question, have demonstrated promising results in analyzing scientific texts that describe intricate processes. The limitations of these existing text-to-code models are evident when attempting to solve QA problems that require knowledge beyond what is presented in the input text. We propose a novel domain-agnostic model to address the problem by leveraging domain-specific and open-source code libraries. We introduce an innovative QA text-to-code algorithm that learns to represent and utilize external APIs from code repositories, such as GitHub, within the intermediate code representation. The generated code is then executed to answer a question about a text. We present three QA datasets, focusing on scientific problems in the domains of chemistry, astronomy, and biology, for the benefit of the community. Our study demonstrates that our proposed method is a competitive alternative to current state-of-the-art (SOTA) QA text-to-code models and generic SOTA QA models.
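The pipeline the abstract describes, roughly: a model reads the process text and the question, emits intermediate code that may call external domain-specific APIs, and the answer is obtained by executing that code. Below is a minimal, self-contained Python sketch of this pattern; the names (generate_code, MOLAR_MASS) and the fixed example program are illustrative assumptions, not the actual interfaces from the talk.

# Minimal sketch of the text-to-code QA pattern: generate intermediate
# code that calls a domain API, then execute it to answer the question.

MOLAR_MASS = {"H2O": 18.015, "CO2": 44.009}  # stand-in domain API (g/mol)

def generate_code(text: str, question: str) -> str:
    """Stand-in for a trained text-to-code model: it would emit an
    executable snippet that may call external APIs; here we return a
    fixed example program."""
    return "answer = MOLAR_MASS['H2O'] * moles  # mass of water in grams"

def answer_question(text: str, question: str) -> float:
    """Execute the generated intermediate code to obtain the answer."""
    scope = {"MOLAR_MASS": MOLAR_MASS, "moles": 3.0}  # context parsed from text
    exec(generate_code(text, question), scope)
    return scope["answer"]

print(answer_question("Burning the fuel yields three moles of water.",
                      "What mass of water is produced?"))  # 54.045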

Jumping Automata over Infinite Words
Omer Yizhaq (M.Sc. seminar lecture)
Wednesday, 15.05.2024, 15:30
Zoom lecture: 95929737670, password: OmerMSC, and Room 301
Advisor: Dr. S. Almagor
We introduce and study jumping automata over infinite words, a fascinating twist on traditional finite automata. These machines read their input in a non-consecutive manner, defying conventional word order. We explore three distinct semantics: one ensuring every letter is accounted for, another permitting word permutation within fixed windows, and a third allowing permutation within windows of an existentially-quantified bound. Our work covers expressiveness, closure properties, algorithmic characteristics of these models, and more.
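To make the second semantics concrete, here is a small sketch of what "permutation within fixed windows" means: two words agree under this semantics when each consecutive length-k block has the same letter counts (the same Parikh vector). The function name and encoding are illustrative, not from the talk, and the check runs on finite words; for infinite words one would apply it to ever-longer prefixes.

from collections import Counter

def window_permutable(u: str, v: str, k: int) -> bool:
    """True iff v is obtained from u by permuting letters inside
    consecutive, non-overlapping windows of length k."""
    if len(u) != len(v):
        return False
    return all(Counter(u[i:i + k]) == Counter(v[i:i + k])
               for i in range(0, len(u), k))

print(window_permutable("abab", "baba", 2))  # True: each 2-window is permuted
print(window_permutable("aabb", "abba", 2))  # False: window contents differ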

Structural Language Models of Code
Shaked Brody (Ph.D. seminar lecture)
Thursday, 16.05.2024, 14:00
Zoom lecture: 92691399579
Advisor: Prof. E. Yahav
In the past few years, software has been at the heart of applications ranging from appliances to virtual services, and helping software developers write better code is a crucial task. In parallel, recent developments in machine learning, and deep learning in particular, have shown great promise in many fields, including code-related tasks. The main challenge is how to represent code in a way that deep learning models can use effectively. While code can be treated as a sequence of tokens, it can also be represented by its underlying Abstract Syntax Tree (AST), which contains rich structural information. In this thesis, we investigate the use of the structural nature of code for code-related tasks.
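As a small illustration of the two views of code contrasted above, the sketch below shows the same snippet as a flat token sequence and as the node types of its AST, using Python's standard ast module; it is only meant to make the distinction concrete, not to reproduce any model from the thesis.

import ast

src = "total = price * qty"
tree = ast.parse(src)

tokens = src.split()                                     # flat token view
node_types = [type(n).__name__ for n in ast.walk(tree)]  # structural view

print(tokens)      # ['total', '=', 'price', '*', 'qty']
print(node_types)  # ['Module', 'Assign', 'Name', 'BinOp', ...]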

We start by introducing edit completion, a new task that requires predicting the next edit operation in a code snippet given a sequence of contextual edits. This task can help software developers, as a substantial part of the time spent writing code is dedicated to editing existing code. We investigate several approaches and show that harnessing the structural nature of code is beneficial for this task. We then generalize these structural approaches and show that the combined approach benefits multiple code-related tasks, such as edit completion and code summarization.
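For concreteness, a minimal sketch of the edit-completion setting: a model consumes a sequence of contextual edits and must predict the next one. The Edit encoding below (operation, position, payload) is an illustrative assumption, not the thesis' actual representation.

from dataclasses import dataclass

@dataclass
class Edit:
    op: str         # "insert" | "delete" | "replace"
    pos: int        # token index the edit applies to
    text: str = ""  # payload for insert/replace

# A sequence of contextual edits: two occurrences of a variable renamed.
context = [
    Edit("replace", 2, "item"),
    Edit("replace", 7, "item"),
]
# An edit-completion model would consume `context` (plus the surrounding
# code) and predict the next operation, e.g. renaming a later occurrence:
predicted = Edit("replace", 11, "item")
print(predicted)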

The use of Graph Neural Networks (GNNs) is common for structural code representation. We reveal that the widely used Graph Attention Network (GAT) model is limited in capturing complex relationships in graphs because of its attention mechanism. We analyze the expressive power of GAT and introduce a simple fix, GATv2, which is provably more powerful. Our experiments show that GATv2 achieves better performance on a range of tasks and is more robust to noisy graphs.
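The difference between the two scoring functions, following the published GAT and GATv2 formulations (the dimensions and random weights below are illustrative): GAT applies the nonlinearity after the attention vector, so by monotonicity its ranking of neighbors is the same for every query node ("static" attention), while GATv2 applies the attention vector after the nonlinearity, making the score a genuinely joint function of both endpoints ("dynamic" attention).

import numpy as np

rng = np.random.default_rng(0)
d = 4

# GAT: score = LeakyReLU(a^T [W h_i || W h_j])
W = rng.normal(size=(d, d))
a = rng.normal(size=2 * d)

# GATv2: score = a2^T LeakyReLU(W2 [h_i || h_j])
W2 = rng.normal(size=(d, 2 * d))
a2 = rng.normal(size=d)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_score(h_i, h_j):
    # Nonlinearity applied AFTER the dot product with a: static attention.
    return leaky_relu(a @ np.concatenate([W @ h_i, W @ h_j]))

def gatv2_score(h_i, h_j):
    # Attention vector applied AFTER the nonlinearity: dynamic attention.
    return a2 @ leaky_relu(W2 @ np.concatenate([h_i, h_j]))

h = rng.normal(size=(2, d))  # two toy node feature vectors
print(gat_score(h[0], h[1]), gatv2_score(h[0], h[1]))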

The Transformer architecture is widely used in natural language processing (NLP) as a model for sequence processing; it can, however, be viewed as a special case of a GNN. Layer Normalization is a key component of the Transformer architecture. In the last part of this thesis, we investigate the expressivity role of Layer Normalization in the Transformer's attention. We provide a novel geometric interpretation of Layer Normalization and show its importance to the attention mechanism that follows it.
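A compact way to see the geometric reading mentioned above: subtracting the mean projects a vector onto the hyperplane orthogonal to the all-ones direction, and dividing by the standard deviation rescales it to norm sqrt(d) (ignoring LayerNorm's learned affine parameters). A self-contained sketch:

import numpy as np

def layer_norm(x, eps=1e-12):
    # Mean-subtraction = projection onto the hyperplane orthogonal to the
    # all-ones vector; dividing by the std rescales to norm sqrt(d).
    return (x - x.mean()) / (x.std() + eps)

x = np.array([3.0, -1.0, 2.0, 4.0])
y = layer_norm(x)

print(np.isclose(y @ np.ones_like(y), 0.0))            # orthogonal to 1-vector
print(np.isclose(np.linalg.norm(y), np.sqrt(len(x))))  # norm equals sqrt(d)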

Our work provides both practical and theoretical contributions to the application of deep neural models to code.