Events

Events and Lectures at the Henry and Marilyn Taub Faculty of Computer Science

Shai Feldman (M.Sc. Thesis Seminar)
Wednesday, 06.10.2021, 11:30
Zoom Lecture: 95325451293
Advisor: Dr. Yaniv Romano
Learning algorithms are increasingly prevalent within consequential real-world systems, where reliability is an essential consideration: confidently deploying learning algorithms requires more than high prediction accuracy in controlled testbeds. In this talk, we will introduce recent advancements in quantile regression, a general technique for assessing prediction uncertainty in regression problems.

In the first part, we will modify quantile regression and introduce a novel loss function that drives the predictive model (e.g., a deep net) to accurately assess the uncertainty across sub-populations. We will discuss the theoretical properties of our proposed loss function and empirically show that our proposal leads to improved coverage across all regions of feature space, as evaluated by several metrics.

In the second part, we will focus on the more challenging multiple-output setting, where the uncertainty is represented by a region in a multidimensional space rather than an interval. We will introduce a method to generate such predictive regions by combining neural representation learning with quantile regression. The new framework we offer can generate small regions of arbitrarily complex shapes, a property that existing methods lack; in fact, the regions we construct are significantly smaller (sometimes by a factor of 100) than those produced by existing techniques. Lastly, we will provide distribution-free finite-sample coverage guarantees for the constructed uncertainty regions and validate our theory on real benchmark datasets.
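For context, the sketch below illustrates the standard pinball (quantile) loss that quantile regression models are commonly trained with, which is the baseline technique the abstract refers to. It is a generic illustration only, not the speaker's proposed loss or multi-output method; the network architecture, toy data, and quantile levels (0.05/0.95) are arbitrary choices made for the example.

```python
# Minimal sketch: training a network to predict two conditional quantiles
# with the standard pinball (quantile) loss. Illustrative only.
import torch
import torch.nn as nn

def pinball_loss(pred, target, q):
    """Pinball loss for a single quantile level q in (0, 1)."""
    diff = target - pred
    return torch.mean(torch.maximum(q * diff, (q - 1.0) * diff))

# Small network predicting the 0.05 and 0.95 conditional quantiles;
# their gap serves as a per-example prediction interval.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(256, 10)                   # toy features
y = x[:, :1] + 0.5 * torch.randn(256, 1)   # toy noisy targets

for _ in range(200):
    opt.zero_grad()
    out = model(x)                         # out[:, :1] = lower, out[:, 1:] = upper
    loss = pinball_loss(out[:, :1], y, 0.05) + pinball_loss(out[:, 1:], y, 0.95)
    loss.backward()
    opt.step()
```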