Events

Events and Lectures at the Henry and Marilyn Taub Faculty of Computer Science

Karen Livescu (TTI-Chicago) - Cancelled!
Monday, 06.01.2014, 11:30
Room 337, Taub Building for Computer Science
Automatic sign language recognition has close connections with both computer vision and speech recognition. The linguistics of sign languages is less well understood than that of spoken languages, and sign language recognition is much less advanced than speech recognition. We consider American sign language (ASL), and focus on recognition of one constrained but important part of the language: fingerspelling, in which signers spell out a word as a sequence of handshapes or hand trajectories corresponding to individual letters. Fingerspelling accounts for up to 35% of ASL and includes both interesting research challenges and helpful constraints.

Unlike most previous work, we focus on the natural setting of unconstrained fingerspelling sequences, where the vocabulary is not known a priori. This talk presents an approach to this problem using linguistically motivated handshape features combined with statistical models of dynamics, including hidden Markov models and semi-Markov conditional random fields. This is joint work with Taehwan Kim and Greg Shakhnarovich.
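The abstract does not give implementation details, so as a rough illustration of the HMM-style inference it mentions, the sketch below runs Viterbi decoding over per-frame handshape scores to recover a letter sequence. Everything here is an assumption for illustration (the letter inventory, the shapes of the score arrays, and the idea that some classifier supplies per-frame letter log-probabilities); it is not the authors' model or code.

```python
import numpy as np

# Minimal Viterbi decoding sketch for fingerspelling-style recognition.
# Assumes per-frame log-probabilities over letters (from a hypothetical
# handshape classifier) and a letter-to-letter transition matrix.

LETTERS = list("abcdefghijklmnopqrstuvwxyz")

def viterbi(log_emissions: np.ndarray, log_transitions: np.ndarray) -> list:
    """Return the most likely letter label for each frame.

    log_emissions:   (T, 26) log P(letter | frame features), assumed given.
    log_transitions: (26, 26) log P(next letter | current letter).
    """
    T, S = log_emissions.shape
    score = np.full((T, S), -np.inf)
    backptr = np.zeros((T, S), dtype=int)
    score[0] = log_emissions[0]  # uniform start prior for simplicity
    for t in range(1, T):
        # candidate scores: rows = previous state, columns = current state
        cand = score[t - 1][:, None] + log_transitions
        backptr[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + log_emissions[t]
    # trace back the best state sequence from the final frame
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t][path[-1]]))
    return [LETTERS[s] for s in reversed(path)]
```

A semi-Markov CRF, as mentioned in the abstract, would instead score variable-length segments (one segment per letter) with discriminatively trained features rather than per-frame emissions; the dynamic program is analogous but sums over segment durations.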

Bio:
Karen Livescu is an Assistant Professor at TTI-Chicago. Previously she was a post-doc and graduate student at MIT in the EECS department. Karen's main research interests are in speech and language processing, with a slant toward combining machine learning with knowledge about language. Her recent work has included multi-view learning of speech representations, articulatory models, discriminative training for spoken term detection and pronunciation modeling, and automatic sign language recognition. She is a member of the IEEE Spoken Language Technical Committee and an associate editor for IEEE Transactions on Audio, Speech, and Language Processing.