Welcome to the homepage of the

Mathematical Optimization for Data Science Group

Department of Mathematics and Computer Science, Saarland University, Germany

A Generalization Result for Convergence in Learning-to-Optimize

M. Sucker and P. Ochs

Abstract:
Convergence in learning-to-optimize is hardly studied, because conventional convergence guarantees in optimization are based on geometric arguments, which cannot be applied easily to learned algorithms. Thus, we develop a probabilistic framework that resembles deterministic optimization and allows for transferring geometric arguments into learning-to-optimize. Our main theorem is a generalization result for parametric classes of potentially non-smooth, non-convex loss functions and establishes the convergence of learned optimization algorithms to stationary points with high probability. This can be seen as a statistical counterpart to the use of geometric safeguards to ensure convergence. To the best of our knowledge, we are the first to prove convergence of optimization algorithms in such a probabilistic framework.
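The abstract contrasts this statistical guarantee with the conventional use of geometric safeguards. Purely as an illustration of such a safeguard (not the method of the paper), the following minimal Python sketch assumes a hypothetical trained update rule learned_step and accepts its proposal only when a sufficient-decrease condition holds, falling back to a plain gradient step otherwise; classical descent arguments then ensure convergence to a stationary point.

import numpy as np

def safeguarded_learned_descent(f, grad_f, learned_step, x0, lr=1e-2,
                                sigma=1e-4, max_iter=100):
    # learned_step is a hypothetical callable x -> x_new, e.g. a trained network.
    x = x0
    for _ in range(max_iter):
        g = grad_f(x)
        x_learned = learned_step(x)  # proposal from the learned algorithm
        # Geometric safeguard: accept the learned step only if it decreases f sufficiently.
        if f(x_learned) <= f(x) - sigma * np.linalg.norm(x_learned - x) ** 2:
            x = x_learned
        else:
            x = x - lr * g  # safe fallback: plain gradient step
    return x

# Toy usage on a quadratic, with a stand-in for a trained update rule.
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad_f = lambda x: A @ x
learned_step = lambda x: x - 0.05 * A @ x
x_final = safeguarded_learned_descent(f, grad_f, learned_step, np.array([1.0, 1.0]))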
Latest update: 10.10.2024
Citation:
M. Sucker, P. Ochs:
A Generalization Result for Convergence in Learning-to-Optimize. [pdf]
Technical Report, ArXiv e-prints, arXiv:2410.07704, 2024.
Bibtex:
@techreport{SO24b,
  title        = {A Generalization Result for Convergence in Learning-to-Optimize},
  author       = {M. Sucker and P. Ochs},
  institution  = {ArXiv e-prints, arXiv:2410.07704},
  year         = {2024},
}

