Welcome to the homepage of the Mathematical Optimization for Data Science Group

Department of Mathematics and Computer Science, Saarland University, Germany

Bregman Stochastic Proximal Point Algorithm with Variance Reduction

C. Traoré and P. Ochs

Abstract:
Stochastic algorithms, especially stochastic gradient descent (SGD), have proven to be the go-to methods in data science and machine learning. In recent years, the stochastic proximal point algorithm (SPPA) emerged and was shown to be more robust than SGD with respect to stepsize choices. However, SPPA still suffers from a reduced convergence rate due to the need for vanishing stepsizes, a drawback that variance reduction methods resolve. In the deterministic setting, many problems can be solved more efficiently by viewing them in a non-Euclidean geometry via Bregman distances. This paper combines these two worlds and proposes variance reduction techniques for the Bregman stochastic proximal point algorithm (BSPPA). As special cases, we obtain SAGA- and SVRG-like variance reduction techniques for BSPPA. Our theoretical and numerical results demonstrate improved stability and convergence rates compared to the vanilla BSPPA with constant and vanishing stepsizes, respectively. Our analysis also allows us to recover the same variance reduction techniques for Bregman SGD in a unified way.
pdf Bibtex arXiv
Latest update: 18.10.2025
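
Illustrative sketch:
The Python sketch below conveys the flavor of an SVRG-like variance-reduced Bregman stochastic proximal point step on a least-squares finite sum. It is a minimal illustration under assumptions, not the paper's exact algorithm: the diagonal geometry h(x) = 0.5 * x^T diag(D) x, the SVRG-style correction term e = grad f(x_snap) - grad f_i(x_snap), and all names and parameter values are illustrative choices. For this quadratic geometry the Bregman proximal subproblem has a closed-form solution via a rank-one update.

import numpy as np

# Illustrative only: SVRG-like variance-reduced Bregman stochastic proximal
# point on f(x) = (1/n) * sum_i f_i(x), with f_i(x) = 0.5 * (a_i^T x - b_i)^2,
# in the geometry h(x) = 0.5 * x^T diag(D) x (D > 0). All choices here are
# assumptions for the sketch, not the paper's exact update rule.

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

D = np.linspace(1.0, 5.0, d)   # diagonal mirror map: h(x) = 0.5 * x^T diag(D) x
D_inv = 1.0 / D

def grad_i(x, i):
    # Gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2.
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    return A.T @ (A @ x - b) / n

def bspp_step(y, i, e, gamma):
    # Solves x = argmin f_i(x) + <e, x> + (1/gamma) * D_h(x, y) in closed form.
    # Optimality: gamma * (a_i (a_i^T x - b_i) + e) + diag(D) (x - y) = 0,
    # resolved exactly via a rank-one (Sherman-Morrison style) update.
    y_shift = y - gamma * D_inv * e
    r = (A[i] @ y_shift - b[i]) / (1.0 + gamma * A[i] @ (D_inv * A[i]))
    return y_shift - gamma * r * (D_inv * A[i])

gamma, epochs = 0.1, 50
x = np.zeros(d)
for _ in range(epochs):
    x_snap, g_snap = x.copy(), full_grad(x)   # SVRG snapshot and full gradient
    for _ in range(n):
        i = rng.integers(n)
        e = g_snap - grad_i(x_snap, i)        # variance-reduction correction
        x = bspp_step(x, i, e, gamma)

print("objective:", 0.5 * np.mean((A @ x - b) ** 2))

With e = 0 this reduces to vanilla BSPPA; the correction term keeps the update unbiased while its variance vanishes as the iterates approach the snapshot, which is what permits a constant stepsize.
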
Citation:
C. Traoré, P. Ochs:
Bregman Stochastic Proximal Point Algorithm with Variance Reduction. [pdf]
Technical Report, ArXiv e-prints, arXiv:2510.16655, 2025.
Bibtex:
@techreport{TO25,
  title       = {Bregman Stochastic Proximal Point Algorithm with Variance Reduction},
  author      = {C. Traoré and P. Ochs},
  institution = {ArXiv e-prints},
  number      = {arXiv:2510.16655},
  year        = {2025},
}


MOP Group
©2017-2025
The author is not responsible for the content of external pages.