Prof Kody Law (University of Manchester)
Fri 08 Nov 2019, 15:05 - 16:00
JCMB 5323

If you have a question about this talk, please contact: Serveh Sharifi Far (ssharifi)

Strategies for Multilevel Monte Carlo

This talk concerns the problem of inference when the posterior measure involves continuous models that must be approximated before inference can be performed. Typically one cannot sample from the posterior distribution directly, and can at best only evaluate it up to a normalizing constant. One must therefore resort to computationally intensive inference algorithms to construct estimators. These algorithms are typically of Monte Carlo type, and include, for example, Markov chain Monte Carlo, importance samplers, and sequential Monte Carlo samplers. The multilevel Monte Carlo method provides a way of balancing discretization and sampling error across a hierarchy of approximation levels, so that the total cost of achieving a given accuracy is minimized. Recently this method has been applied to computationally intensive inference. This non-trivial task can be achieved in a variety of ways. This talk will review three primary strategies that have been successfully employed to achieve optimal (or canonical) convergence rates – in other words, faster convergence than i.i.d. sampling at the finest discretization level. Some of the specific resulting algorithms, and applications, will also be presented.
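As background to the abstract, the multilevel idea can be illustrated on a toy forward problem (not the inference setting the talk addresses): the quantity of interest is estimated by a telescoping sum of level-difference expectations, with coupled coarse/fine discretizations so that the differences have small variance. The sketch below is a minimal, generic MLMC estimator for E[X_T] under a geometric Brownian motion with Euler–Maruyama discretization; all function names, parameters, and sample sizes are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def level_difference(level, n_paths, T=1.0, mu=0.05, sigma=0.2, x0=1.0, rng=None):
    """Sample P_l - P_{l-1} for payoff P = X_T of dX = mu*X dt + sigma*X dW,
    using 2**level Euler-Maruyama steps on the fine path and half as many on
    the coarse path, driven by the SAME Brownian increments (the coupling).
    At level 0 there is no coarse path, so P_{-1} := 0."""
    rng = np.random.default_rng() if rng is None else rng
    n_fine = 2 ** level
    h_fine = T / n_fine
    x_fine = np.full(n_paths, x0)
    x_coarse = np.full(n_paths, x0)
    dW_coarse = np.zeros(n_paths)
    for step in range(n_fine):
        dW = rng.normal(0.0, np.sqrt(h_fine), n_paths)
        x_fine += mu * x_fine * h_fine + sigma * x_fine * dW
        dW_coarse += dW
        if level > 0 and (step + 1) % 2 == 0:
            # coarse step uses the sum of two fine Brownian increments
            x_coarse += mu * x_coarse * (2 * h_fine) + sigma * x_coarse * dW_coarse
            dW_coarse[:] = 0.0
    return x_fine - (x_coarse if level > 0 else 0.0)

def mlmc_estimate(max_level, n_per_level, rng=None):
    """Telescoping-sum estimator: E[P_L] ~ sum_{l=0}^{L} E[P_l - P_{l-1}].
    Because the level differences shrink as l grows, fewer samples are needed
    on the expensive fine levels -- the cost/accuracy balance in the abstract."""
    rng = np.random.default_rng(0) if rng is None else rng
    return sum(level_difference(l, n_per_level[l], rng=rng).mean()
               for l in range(max_level + 1))

# Usage: exact answer is x0 * exp(mu * T) = exp(0.05) ~ 1.0513
estimate = mlmc_estimate(3, [20000, 10000, 5000, 2000])
```

Note how the per-level sample counts decrease with the level: most samples are drawn at the cheap coarse level, and only a few at the fine levels where each path is expensive, which is what yields the "canonical" cost–accuracy trade-off mentioned in the abstract.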