David Dunson
Mon 28 May 2018, 16:00 - 17:00
Informatics Forum (IF-G.07)

If you have a question about this talk, please contact: Bob Fisher (rbf)

There is understandably huge excitement about the remarkable success of machine learning (ML) algorithms, such as "deep learning", in a variety of domains ranging from pattern recognition in imaging to self-driving cars. This excitement has generated increasing hype and an expectation that we can solve challenging problems in scientific inference from modern complex data sources using recent ML/AI tools. However, I would argue that the success stories are special cases: they involve highly structured data (in space/time), tasks that humans are very good at, and the possibility of leveraging abundant training data. In sharp contrast, most scientific data are complex and high-dimensional, with no known structure to exploit, very limited training data, and a focus on understanding relationships and latent structure rather than on prediction. Using applications in neuroscience, ecology and genomics as motivation, I debunk the hype that current ML/AI tools are at all good at solving the key problems of interest, and suggest some promising strategies and areas of new research.
BIO:                                                                                                                          
David Dunson is Arts and Sciences Distinguished Professor of Statistical Science, Mathematics, and Electrical & Computer Engineering at Duke University. His research focuses on Bayesian statistical theory and methods motivated by high-dimensional and complex applications. A particular emphasis is on dimensionality reduction, scalable inference algorithms, latent factor models, and nonparametric approaches, particularly for high-dimensional, dynamic and multimodal data, including images, functions, shapes and other complex objects. His work involves interdisciplinary thinking at the intersection of statistics, mathematics and computer science. Motivation comes from applications in epidemiology, environmental health, neuroscience, genetics, fertility and other settings (music, fine arts, humanities). Professor Dunson is a fellow of the American Statistical Association and of the Institute of Mathematical Statistics. He is the winner of the 2007 Mortimer Spiegelman Award for the top public health statistician under 41, the 2010 Myrto Lefkopoulou Distinguished Lectureship at Harvard University, the 2010 COPSS Presidents' Award for the top statistician under 41, and the 2012 Youden Award for inter-laboratory testing methods.