BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.is.ed.ac.uk//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:All Hands Meetings on Big Data Optimization
SUMMARY:Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite-Sum Structure - Dominik Csiba (s1459570)
DTSTART;TZID=Europe/London:20161115T121500
DTEND;TZID=Europe/London:20161115T133000
UID:TALK843
URL:http://talks.is.ed.ac.uk/talk/843/show
DESCRIPTION:Abstract: Stochastic optimization algorithms with variance reduction have proven successful for minimizing large finite sums of functions. However\, in the context of empirical risk minimization\, it is often helpful to augment the training set by considering random perturbations of input examples. In this case\, the objective is no longer a finite sum\, and the main candidate for optimization is the stochastic gradient descent method (SGD). In this paper\, we introduce a variance reduction approach for this setting when the objective is strongly convex. After an initial linearly convergent phase\, the algorithm achieves an O(1/t) convergence rate in expectation like SGD\, but with a constant factor that is typically much smaller\, depending on the variance of gradient estimates due to perturbations on a single example.
LOCATION:JCMB 6207
CONTACT:Dominik Csiba (s1459570)
END:VEVENT
END:VCALENDAR