Dr Iain L. MacDonald (University of Cape Town)

Fri 15 Nov 2019, 15:05–16:00

JCMB 5328

If you have a question about this talk, please contact: Serveh Sharifi Far (ssharifi)

If one is to judge by counts of citations of the fundamental paper (Dempster et al., 1977), EM algorithms are a runaway success. But it is surprisingly easy to find published applications of EM that are apparently unnecessary, in the sense that simpler methods are available that will solve the relevant estimation problems. In particular, such problems can often be solved by the simple expedient of submitting the observed-data likelihood (or log-likelihood) to a general-purpose optimization routine. This dispenses with the need to derive and code (or modify) the E and M steps, a process which can be laborious or error-prone. Here I discuss five or six such applications of EM in some detail, and describe briefly some others that have already appeared in the literature. In all these cases, there seems to be no good reason to choose EM. Whether these cases are atypical of applications of EM is not obvious. But it is clear that there are problems traditionally solved by EM (e.g. the fitting of mixtures of normals or Poissons) that are easy to solve by other means. It is suggested that, before going to the effort of devising an EM algorithm for a new problem, the researcher should consider whether other methods (e.g. direct numerical maximization, or an MM algorithm of some other kind) may be either simpler to implement or more efficient.
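As a minimal sketch of the direct approach the abstract advocates (not taken from the talk itself), the following fits a two-component normal mixture by handing the observed-data log-likelihood straight to a general-purpose optimizer, with no E or M steps. The simulated data, the unconstrained reparametrization (logit for the mixing weight, log for the standard deviations), and the starting values are all my own illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated data: 300 draws from N(0, 1) and 200 from N(4, 1.5^2),
# so the true mixing weight on the first component is 0.6.
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(0.0, 1.0, 300),
                    rng.normal(4.0, 1.5, 200)])

def neg_loglik(theta):
    # Unconstrained parametrization: theta = (logit p, mu1, mu2, log s1, log s2).
    # This lets an unconstrained optimizer respect 0 < p < 1 and s > 0.
    p = 1.0 / (1.0 + np.exp(-theta[0]))
    mu1, mu2 = theta[1], theta[2]
    s1, s2 = np.exp(theta[3]), np.exp(theta[4])
    dens = p * norm.pdf(x, mu1, s1) + (1.0 - p) * norm.pdf(x, mu2, s2)
    return -np.sum(np.log(dens))

# Direct numerical maximization of the observed-data log-likelihood.
start = np.array([0.0, -1.0, 5.0, 0.0, 0.0])
res = minimize(neg_loglik, start, method="BFGS")

p_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
mu1_hat, mu2_hat = res.x[1], res.x[2]
s1_hat, s2_hat = np.exp(res.x[3]), np.exp(res.x[4])
print(p_hat, mu1_hat, mu2_hat, s1_hat, s2_hat)
```

The point of the sketch is that nothing EM-specific was derived: the same `neg_loglik` function, under any smooth reparametrization, can be passed to any off-the-shelf optimizer, and standard errors can be obtained from the numerical Hessian of the negative log-likelihood at the optimum.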

References: Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm (with discussion). Journal of the Royal Statistical Society, Series B, 39, 1–38.