To join the Computer Science colloquium mailing list, please visit the list's subscription page.
Computer Science events calendar, available as an ICS feed over HTTP, for Google Calendar and for Outlook.
Academic Calendar on the Technion site.
Taub 601
Recent breakthroughs in artificial intelligence, particularly in deep learning, have revolutionized our ability to analyze and interpret biological data, including DNA, RNA, and protein sequences. These advances have led to significant progress in critical fields such as medicine, agriculture, and biotechnology.
In this talk, I will present my research on applying deep learning, especially natural language processing (NLP) techniques, to address central challenges in bioinformatics. I will briefly discuss parallels between natural language and biological sequences, along with strategies for adapting NLP methods to account for the key differences between them. I will begin by presenting how the choice of tokenizer can improve model performance while significantly reducing memory usage, a common challenge in deep learning.
Next, I will discuss our deep learning models for computing multiple sequence alignments, demonstrating that transformer architectures can produce accurate results. I will then present a hybrid model capable of generating ancestral sequences without requiring a specific sequence alignment or phylogenetic tree as input. Our results show that this approach performs competitively and, in some cases, surpasses traditional methods.
Finally, I will introduce a large language model trained to predict protein function, which outperforms traditional search algorithms, particularly for proteins with low similarity to known sequences. Overall, my research highlights how deep learning and NLP offer powerful new solutions to long-standing challenges in bioinformatics.
Consistency models, introduced recently, offer an efficient alternative to diffusion algorithms, enabling rapid, high-quality image synthesis. These methods overcome the slowness of diffusion models by directly mapping noise to data while maintaining a (relatively) simpler training procedure. Consistency models enable fast one- or few-step generation, but they typically fall somewhat short in sample quality compared to their diffusion origins.

In this work we propose a novel and highly effective technique for post-processing Consistency-based generated images, enhancing their perceptual quality. Our approach utilizes a joint classifier-discriminator model in which both portions are trained adversarially. While the classifier aims to grade an image based on its assignment to a designated class, the discriminator portion of the very same network leverages the softmax values to assess the proximity of the input image to the targeted data manifold, thereby serving as an energy-based model. By employing example-specific projected gradient iterations under the guidance of this joint machine, we refine synthesized images and achieve improved FID scores on the ImageNet 64x64 dataset for both Consistency-Training and Consistency-Distillation techniques.
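The energy-based reading of a classifier mentioned above can be sketched in a toy setting (this is an assumption-laden illustration, not the authors' model): treat the logits f(x) as defining an energy E(x) = -logsumexp(f(x)), so that lowering E moves a sample toward regions where the classifier is confident, and refine a sample with a few projected gradient steps. The linear classifier, step size, and clipping box below are all hypothetical choices.

```python
import numpy as np

# Toy sketch of projected-gradient refinement under a softmax-derived energy.
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 64))    # toy linear "classifier" over 64-dim inputs

def energy(x):
    """E(x) = -logsumexp(W @ x), computed in a numerically stable way."""
    logits = W @ x
    m = logits.max()
    return -(m + np.log(np.exp(logits - m).sum()))

def energy_grad(x):
    """Analytic gradient of E: -W^T softmax(W @ x)."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()                     # softmax probabilities
    return -W.T @ p

x = rng.standard_normal(64)          # stand-in for a generated sample
e0 = energy(x)
for _ in range(20):
    x -= 0.1 * energy_grad(x)        # gradient step lowering the energy
    x = np.clip(x, -3.0, 3.0)        # crude "projection" onto a valid box
```

In the paper's setting the refinement is example-specific and guided by the jointly trained classifier-discriminator; here a fixed random linear map merely stands in to make the gradient machinery concrete.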