GT Stat 20180111
Rates of convergence of averaged stochastic gradient algorithms: locally strongly convex objective
Salle de séminaires M.0.1
LMI (INSA Rouen)
A common problem in statistics consists in estimating the minimizer of a convex function.
When we have to deal with large samples taking values in high-dimensional spaces, stochastic
gradient algorithms and their averaged versions are efficient candidates.
Indeed, (1) they require little computational effort, (2) they do not need to store
all the data, which is crucial when we deal with big data, and (3) they allow the
estimates to be updated simply, which is important when data arrive sequentially. The aim of this work is to give
asymptotic and non-asymptotic rates of convergence of stochastic gradient estimates, as well as
of their averaged versions, when the function we would like to minimize is only locally strongly convex.
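To illustrate the kind of procedure the abstract refers to, here is a minimal sketch of a stochastic gradient algorithm with Polyak-Ruppert averaging. The toy objective f(theta) = E[(theta - X)^2]/2 (whose minimizer is the mean of X), the function name `averaged_sgd`, and the step-size constants are illustrative assumptions, not taken from the talk; the point is only that each observation is used once, nothing is stored, and both estimates update online.

```python
import random

def averaged_sgd(samples, theta0=0.0, c=1.0, alpha=0.66):
    """Stochastic gradient descent with Polyak-Ruppert averaging for the
    toy objective f(theta) = E[(theta - X)^2] / 2, whose minimizer is the
    mean of X.  Each sample is processed exactly once, so the data need
    not be stored and the estimates can be updated sequentially."""
    theta = theta0       # current SGD iterate
    theta_bar = theta0   # running average of the iterates
    for n, x in enumerate(samples, start=1):
        gamma = c * n ** (-alpha)             # step size gamma_n = c * n^(-alpha)
        grad = theta - x                      # unbiased estimate of f'(theta)
        theta = theta - gamma * grad          # SGD update
        theta_bar += (theta - theta_bar) / n  # online (recursive) average
    return theta, theta_bar

random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(20000)]
theta_n, theta_bar_n = averaged_sgd(data)
```

With step sizes gamma_n = c * n^(-alpha), alpha in (1/2, 1), the raw iterate theta_n converges at a slower rate, while the averaged estimate theta_bar_n is asymptotically efficient; the talk addresses the corresponding rates when strong convexity holds only locally.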