Events

Colloquia and Seminars

To join the Computer Science colloquium mailing list, please visit the list's subscription page.
Computer Science events calendar in HTTP ICS format for Google calendars and for Outlook.
Academic Calendar at the Technion site.

Upcoming Colloquia and Seminars

  • Quantum Candies and Their Applications
    Roman Shapira, M.Sc. Thesis Seminar
    Thursday, 16.12.2021, 12:30
    Zoom Lecture: 95103666191
    Advisor: Prof. Tal Mor
    The field of quantum information is becoming better known to the general public. However, effectively demonstrating the concepts underlying quantum science and technology to a general audience can be a challenging task. In this work, we present "Quantum Candies" (Qandies), a model for intuitively describing basic concepts in quantum information without the need for complex algebra or the concept of superpositions. We discuss several properties of Qandies, including their relation to quantum theory (Qubits), quantum entanglement (Super-Dense Coding), and phenomena beyond quantum theory (Non-Local Boxes). We also present applications of Qandies in Quantum Cryptography, including the well-known quantum key distribution (QKD) protocol BB84, Quantum Bit Commitment, Quantum Digital Signature (QDS) protocols, and measurement-device-independent QKD. (A minimal illustrative simulation of the standard BB84 flow appears after this list.)
  • GoToNet: A Fast Monocular Method for Scene Exposure and Exploration
    Tom Avrech, M.Sc. Thesis Seminar
    Sunday, 19.12.2021, 11:30
    Zoom Lecture: 93278373918
    Advisors: Prof. E. Rivlin and Dr. Chaim Baskin
    Autonomous scene exposure and exploration in localization- and communication-denied areas (useful for finding targets in unknown scenes, mainly when direct maneuvering of the vehicle is impossible) remains a challenging problem in computer navigation. In this work we propose a novel deep learning-based navigation approach that is able to solve this problem, and we demonstrate its ability in an even more complicated setup, i.e., when computational power is limited. Our method works directly with the RGB camera input, without requiring any expensive sensors, and produces two coordinates, which we call the "Goto pixel" and the "Lookat pixel", delineating the movement and perception directions, respectively. These flying-instruction pixels are optimized to expose the largest amount of currently unexplored area. In addition, we propose a way to generate a navigation-oriented dataset, enabling efficient training of our method using RGB and depth images. Tests conducted in a simulator achieve promising results in terms of the quantity of areas unveiled and the distances to targets. (A toy sketch of the Goto/Lookat output interface appears after this list.)
  • Pixel Club: Computational Imaging for Sensing High-speed Phenomena
    Mark Sheinin (Carnegie Mellon University)
    Tuesday, 4.1.2022, 13:30
    Zoom Lecture: 9245008892
    Despite recent advances in sensor technology, capturing high-speed video at high spatial resolutions remains a challenge. This is because, in a conventional camera, the available bandwidth limits either the maximum sampling frequency or the captured spatial resolution. In this talk, I will cover our recent works that use computational imaging to allow high-speed, high-resolution imaging under certain conditions. First, I will describe Diffraction Line Imaging, a novel imaging principle that combines diffractive optics with 1D (line) sensors to allow high-speed positioning of light sources (e.g., motion-capture markers, car headlights) as well as structured-light 3D scanning with line illumination and line sensing. Second, I will present a recent work that generalizes Diffraction Line Imaging to handle a new class of scenes, resulting in new application domains such as high-speed imaging for Particle Image Velocimetry and imaging combustible particles. Lastly, I will present a novel method for sensing vibrations at high speeds (up to 63 kHz), for multiple scene sources at once, using sensors rated for only 130 Hz operation. I will present results from our method that include capturing vibrations caused by audio sources (e.g., speakers, human voice, and musical instruments) and analysing the vibration modes of a tuning fork.
    Bio: Mark Sheinin is a Post-doctoral Research Associate at Carnegie Mellon University's Robotics Institute, at the Illumination and Imaging Laboratory. He received his Ph.D. in Electrical Engineering from the Technion - Israel Institute of Technology in 2019. His work received a Best Student Paper Award at IEEE CVPR 2017. He is the recipient of the Porat Award for Outstanding Graduate Students, the Jacobs-Qualcomm Fellowship in 2017, and the Jacobs Distinguished Publication Award in 2018. His research interests include computational photography and computer vision.
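
The Quantum Candies abstract above mentions the BB84 quantum key distribution protocol. As a purely illustrative aid (not part of the talk itself), the following minimal Python simulation sketches the classical bookkeeping of one noiseless, eavesdropper-free BB84 round: Alice picks random bits and bases, Bob measures in random bases, and both keep only the positions where their bases agree. All function and variable names are invented for this sketch.

# Illustrative BB84 sketch: a classical simulation of the qubit bookkeeping
# in a noiseless round with no eavesdropper. Pedagogical only.
import secrets

def random_bits(n: int) -> list[int]:
    """Return n uniformly random bits."""
    return [secrets.randbits(1) for _ in range(n)]

def bb84_sift(n: int = 32) -> list[int]:
    """Simulate one noiseless BB84 round and return the sifted key."""
    alice_bits = random_bits(n)    # Alice's raw key bits
    alice_bases = random_bits(n)   # 0 = rectilinear (+), 1 = diagonal (x)
    bob_bases = random_bits(n)     # Bob measures each qubit in a random basis

    # Without noise or eavesdropping, Bob's outcome equals Alice's bit whenever
    # the bases match; otherwise his outcome is an independent random bit.
    bob_results = [
        a_bit if a_basis == b_basis else secrets.randbits(1)
        for a_bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
    ]

    # Public sifting step: keep only positions where the bases agreed.
    return [bob_results[i] for i in range(n) if alice_bases[i] == bob_bases[i]]

if __name__ == "__main__":
    key = bb84_sift(32)
    print(f"sifted key ({len(key)} bits): {''.join(map(str, key))}")

In this idealized setting roughly half of the positions survive sifting; a real protocol would additionally compare a public sample of the sifted bits to estimate the error rate before distilling a final key.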
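
The GoToNet abstract above describes a network that consumes an RGB frame and outputs two pixel coordinates, a "Goto pixel" and a "Lookat pixel". The toy PyTorch module below illustrates only that input/output interface; the class name GotoLookatNet, the layer sizes, and the normalized-coordinate convention are assumptions made for this sketch and are not the speakers' actual architecture.

# Hypothetical interface sketch inspired by the GoToNet abstract: a tiny CNN
# that maps an RGB frame to two pixel coordinates ("Goto" and "Lookat").
# NOT the speakers' method; all sizes and names are invented for illustration.
import torch
import torch.nn as nn

class GotoLookatNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional backbone over the RGB input.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Head regresses four numbers: (goto_x, goto_y, lookat_x, lookat_y),
        # each normalized to [0, 1] relative to image width/height.
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 4), nn.Sigmoid())

    def forward(self, rgb: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        coords = self.head(self.backbone(rgb))          # shape: (batch, 4)
        goto_pixel, lookat_pixel = coords[:, :2], coords[:, 2:]
        return goto_pixel, lookat_pixel

if __name__ == "__main__":
    model = GotoLookatNet()
    frame = torch.rand(1, 3, 240, 320)                  # one RGB frame
    goto, lookat = model(frame)
    print("goto:", goto.tolist(), "lookat:", lookat.tolist())

Regressing sigmoid-normalized coordinates is just one simple way to keep the predicted pixels inside the image; the actual method may represent its outputs quite differently.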