Abstract:
Stochastic algorithms, especially stochastic gradient descent (SGD), have proven to be the go-to methods in data science and machine learning. In recent years, the stochastic proximal point algorithm (SPPA) has emerged and was shown to be more robust than SGD with respect to stepsize choices. However, SPPA still suffers from a degraded convergence rate due to the need for vanishing stepsizes, an issue that can be resolved with variance reduction methods. In the deterministic setting, many problems can be solved more efficiently when viewed in a non-Euclidean geometry induced by Bregman distances. This paper combines these two worlds and proposes variance reduction techniques for the Bregman stochastic proximal point algorithm (BSPPA). As special cases, we obtain SAGA- and SVRG-like variance reduction techniques for BSPPA. Our theoretical and numerical results demonstrate improved stability and convergence rates compared to the vanilla BSPPA with constant and vanishing stepsizes, respectively. Our analysis also allows us to recover the same variance reduction techniques for Bregman SGD in a unified way.
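For context, a minimal sketch of the update the abstract refers to, in our own notation and not necessarily the paper's exact formulation: given components f_i with F = (1/n) \sum_i f_i, a Legendre function \phi with Bregman distance D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y), x - y \rangle, and stepsize \gamma_k > 0, the vanilla BSPPA samples an index i_k and sets

\[
x_{k+1} = \operatorname*{argmin}_{x} \; f_{i_k}(x) + \tfrac{1}{\gamma_k}\, D_\phi(x, x_k).
\]

An SVRG-like variance-reduced variant (assumed here to follow the standard control-variate construction) keeps a reference point \tilde{x} with full gradient \nabla F(\tilde{x}), refreshed periodically, and shifts each subproblem by the correction term:

\[
x_{k+1} = \operatorname*{argmin}_{x} \; f_{i_k}(x) - \langle \nabla f_{i_k}(\tilde{x}) - \nabla F(\tilde{x}),\, x \rangle + \tfrac{1}{\gamma}\, D_\phi(x, x_k).
\]

A SAGA-like variant would replace \nabla f_{i_k}(\tilde{x}) and \nabla F(\tilde{x}) by a stored per-component gradient and its running average; see the paper for the precise updates and assumptions.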
Bibtex: @techreport{TO25,
  title = {Bregman Stochastic Proximal Point Algorithm with Variance Reduction},
  author = {C. Traoré and P. Ochs},
  year = {2025},
  institution = {ArXiv e-prints},
  number = {arXiv:2510.16655},
}