Events

Events and Talks at the Henry and Marilyn Taub Faculty of Computer Science

Speaker: Or Sharir (The Hebrew University)
Date: Tuesday, 15.01.2019, 11:30
Location: Room 337, Taub Building for Computer Science
The driving force behind convolutional and recurrent networks, two of the most successful deep learning architectures to date, is their expressive power. Despite the wide acceptance of this belief and the vast empirical evidence for it, formal analyses supporting it are scarce. The primary notions for formally reasoning about expressiveness are efficiency and inductive bias. Efficiency refers to the ability of a network architecture to realize functions that would require an alternative architecture to be much larger. Inductive bias refers to the prioritization of some functions over others, given prior knowledge regarding the task at hand. Through an equivalence to hierarchical tensor decompositions, we study the expressive efficiency and inductive bias of various architectural features in convolutional networks (depth, width, pooling geometry, inter-connectivity, overlapping receptive fields, etc.), as well as the long-term memory capacity of deep recurrent networks. Our results shed light on the demonstrated effectiveness of modern networks and, in addition, provide new tools for network design.
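
A minimal sketch of the tensor-decomposition correspondence the abstract refers to, in the convolutional arithmetic circuit setting this line of work builds on (the notation below, including h_y, f_d, and the coefficient tensor, is illustrative and not taken from the announcement): the score a network assigns to class y over input patches x_1, ..., x_N can be written as

\[
h_y(\mathbf{x}_1, \ldots, \mathbf{x}_N)
  = \sum_{d_1, \ldots, d_N = 1}^{M}
    \mathcal{A}^y_{d_1, \ldots, d_N}
    \prod_{i=1}^{N} f_{d_i}(\mathbf{x}_i),
\]

where f_1, ..., f_M are representation functions and \(\mathcal{A}^y\) is the coefficient tensor. A shallow network with r hidden channels then corresponds to a rank-r CP decomposition,

\[
\mathcal{A}^y = \sum_{z=1}^{r} a^y_z \,
  \mathbf{a}^{z,1} \otimes \cdots \otimes \mathbf{a}^{z,N},
\]

whereas a deep network corresponds to a hierarchical decomposition of \(\mathcal{A}^y\). In this framework, expressive efficiency of depth amounts to the statement that almost all coefficient tensors realizable by the hierarchical decomposition have CP rank exponential in N, so a shallow network replicating the function of a generic deep network must be exponentially wider.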

*Research conducted under the supervision of Prof. Amnon Shashua.