An Asymptotically Efficient Stochastic Newton Method with First-Order Cost for Online Convex Optimization

Tuesday, October 14, 2025, 14:00

LMRS seminar room

Guillaume Salle

LMI 

Stochastic Newton methods capable of achieving asymptotic efficiency have historically required a per-iteration cost of O(d³) for problems of dimension d. This presentation will first review the concept of asymptotic efficiency, the statistical benchmark for an optimal estimator. We then introduce an online algorithm that achieves the same statistical optimality at a reduced per-iteration cost of O(ℓd²), where the mask size ℓ can be chosen such that ℓ ≪ d. Our method estimates the inverse Hessian via a stochastic gradient update on a quadratic functional, using a random masking strategy that selects a subset of ℓ Hessian columns at each iteration. Over a dataset of size N, this structure allows for a total cost of O(Nd), comparable to first-order methods. We prove that the algorithm achieves asymptotic efficiency without requiring iterate averaging; the convergence analysis rests on an extension of the Robbins-Monro theorem.
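
To make the structure of such an update concrete, below is a minimal NumPy sketch of one plausible instantiation: a Newton-type parameter step using the current inverse-Hessian estimate, followed by a masked stochastic gradient step on the quadratic functional F(A) = ½ tr(AᵀHA) − tr(A), whose gradient is HA − I and whose minimizer is H⁻¹. The functional, step sizes, and masking scheme shown here are illustrative assumptions, not the exact algorithm of the talk; the names stochastic_newton_masked, grad, and hess_cols are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def stochastic_newton_masked(theta0, samples, grad, hess_cols,
                             ell, gamma=1.0, beta=0.1):
    """One pass over `samples` (illustrative sketch).

    grad(theta, s)            -> stochastic gradient at theta, shape (d,)
    hess_cols(theta, s, idx)  -> the ell selected columns of a stochastic
                                 Hessian estimate, shape (d, ell)
    """
    d = theta0.size
    theta = theta0.astype(float).copy()
    A = np.eye(d)                                  # running inverse-Hessian estimate
    for t, sample in enumerate(samples, start=1):
        # Newton-type parameter step with the current inverse-Hessian estimate.
        theta -= (gamma / t) * A @ grad(theta, sample)

        # Random mask: ell Hessian columns drawn uniformly without replacement.
        idx = rng.choice(d, size=ell, replace=False)
        H_cols = hess_cols(theta, sample, idx)     # shape (d, ell)

        # Unbiased estimate of H @ A from the masked columns:
        #   (d/ell) * H[:, idx] @ A[idx, :]  has expectation  H @ A.
        HA_est = (d / ell) * H_cols @ A[idx, :]    # cost O(ell * d^2)

        # Stochastic gradient step on F(A) = 0.5 tr(A^T H A) - tr(A),
        # whose gradient H A - I is estimated by HA_est - I.
        A -= (beta / np.sqrt(t)) * (HA_est - np.eye(d))
    return theta, A

# Toy usage (hypothetical): streaming least squares, per-sample Hessian x xᵀ.
d, N, ell = 50, 2000, 5
X = rng.standard_normal((N, d))
theta_star = rng.standard_normal(d)
y = X @ theta_star + 0.1 * rng.standard_normal(N)

grad = lambda th, s: (s[0] @ th - s[1]) * s[0]            # sample s = (x_i, y_i)
hess_cols = lambda th, s, idx: np.outer(s[0], s[0][idx])  # ell columns of x xᵀ

theta_hat, A_hat = stochastic_newton_masked(np.zeros(d), zip(X, y),
                                            grad, hess_cols, ell)

In this sketch the per-iteration work is dominated by the (d × ℓ) by (ℓ × d) product, i.e. O(ℓd²), which matches the per-iteration cost stated in the abstract.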