Roger Levy
Fri 29 May 2015, 11:00 - 12:00
Informatics Forum (IF-4.31/4.33)

If you have a question about this talk, please contact: Diana Dalla Costa (ddallac)


Human language use is a central problem for the advancement of machine intelligence, and it poses some of the deepest scientific challenges in accounting for the capabilities of the human mind. In this talk I describe several major advances we have recently made in this domain that have led to a state-of-the-art theory of language comprehension as rational, goal-driven inference and action. These advances were made possible by combining leading ideas and techniques from computer science, psychology, and linguistics to define probabilistic models over detailed linguistic representations, and by testing their predictions against naturalistic data and controlled experiments. I first describe a detailed expectation-based theory of real-time language understanding that unifies three topics central to the field (ambiguity resolution, prediction, and syntactic complexity) and that finds broad empirical support. I then describe a “noisy-channel” theory that generalizes the expectation-based theory by removing the assumption of modularity between the processes of individual word recognition and sentence-level comprehension. This theory accounts for critical puzzles left outstanding by previous approaches, and when combined with reinforcement learning it yields state-of-the-art models of human eye-movement control in reading.
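A core quantity in expectation-based theories of comprehension is surprisal: a word's processing difficulty is predicted to scale with how unexpected the word is in its context, i.e. -log P(word | context). The sketch below illustrates this with a hypothetical toy bigram model (the probabilities are invented for illustration, not drawn from the talk); a real model would condition on richer linguistic representations.

```python
import math

# Toy bigram probabilities P(word | previous word) -- illustrative values only.
bigram_p = {
    ("<s>", "the"): 0.5,
    ("the", "dog"): 0.2,     # relatively predictable continuation
    ("the", "horse"): 0.01,  # relatively unexpected continuation
    ("horse", "raced"): 0.1,
}

def surprisal(prev: str, word: str, p=bigram_p) -> float:
    """Surprisal in bits: -log2 P(word | prev). Higher = less expected."""
    return -math.log2(p[(prev, word)])

# An unexpected word carries more surprisal, predicting greater reading difficulty.
print(surprisal("the", "dog"))    # ≈ 2.32 bits
print(surprisal("the", "horse"))  # ≈ 6.64 bits
```

Under the expectation-based account, word-by-word surprisal values like these are what get compared against behavioral measures such as reading times.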


Roger Levy is Associate Professor of Linguistics at the University of California, San Diego, where he directs the Computational Psycholinguistics Laboratory. He received his B.S. in Mathematics from the University of Arizona and his M.S. in Anthropological Sciences and Ph.D. in Linguistics from Stanford University. He was a UK ESRC Postdoctoral Fellow in the School of Informatics at the University of Edinburgh prior to his current appointment at UCSD. He is the recipient of NSF CAREER and Alfred P. Sloan Fellowship awards, and in 2013-2014 was a Fellow at the Center for Advanced Study in the Behavioral Sciences. Levy's research focuses on theoretical and applied questions in human language comprehension, production, and acquisition, and in the representation of grammatical knowledge. Linguistic communication inherently involves resolving uncertainty over a potentially unbounded set of possible signals and meanings. How can a fixed set of knowledge and resources be deployed to manage this uncertainty, and how can the requisite knowledge be acquired? To address these questions, Levy uses a combination of computational modeling, psycholinguistic experimentation, and corpus analysis. This work furthers our understanding of the cognitive underpinnings of human language and helps us design models and algorithms that will allow machines to process human language.