Welcome to the homepage of the

Mathematical Optimization for Data Science Group

Department of Mathematics and Computer Science, Saarland University, Germany

A Markovian Model for Learning-to-Optimize

M. Sucker and P. Ochs

Abstract:
We present a probabilistic model for stochastic iterative algorithms, with the use case of optimization algorithms in mind. Based on this model, we derive PAC-Bayesian generalization bounds for functions that are defined on the trajectory of the learned algorithm, for example, the expected (non-asymptotic) convergence rate and the expected time to reach the stopping criterion. Thus, this model not only allows for learning stochastic algorithms based on their empirical performance, but also yields results about their actual convergence rate and their actual convergence time. We stress that, since the model is valid in a more general setting than learning-to-optimize, it is also of interest for other fields of application. Finally, we conduct five practically relevant experiments demonstrating the validity of our claims.
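To make the trajectory-based quantities from the abstract concrete, the following minimal Python sketch simulates a stochastic iterative algorithm (a noisy gradient step on random quadratic problems, a stand-in example not taken from the paper) and empirically estimates two of the quantities the bounds address: the expected time to reach a stopping criterion and the final error along the trajectory. All names and parameter values here are illustrative assumptions, not the paper's method.

```python
import random

def noisy_gd_trajectory(x0, grad, step, noise_std, tol, max_iter, rng):
    """Run a noisy gradient iteration and record its trajectory.

    Returns the list of iterates and the first iteration index at which
    the stopping criterion |x| <= tol is met (max_iter if it never is).
    """
    x = x0
    traj = [x]
    hit = max_iter
    for t in range(1, max_iter + 1):
        x = x - step * (grad(x) + rng.gauss(0.0, noise_std))
        traj.append(x)
        if hit == max_iter and abs(x) <= tol:
            hit = t
    return traj, hit

def empirical_stats(n_runs=200, seed=0):
    """Average stopping time and final error over random problem instances,
    i.e. empirical counterparts of the expected time to reach the stopping
    criterion and the non-asymptotic convergence behavior."""
    rng = random.Random(seed)
    times, errors = [], []
    for _ in range(n_runs):
        a = rng.uniform(0.5, 2.0)        # random quadratic f(x) = (a/2) x^2
        x0 = rng.uniform(-5.0, 5.0)      # random initialization
        traj, hit = noisy_gd_trajectory(x0, lambda x: a * x, step=0.1,
                                        noise_std=0.01, tol=1e-2,
                                        max_iter=500, rng=rng)
        times.append(hit)
        errors.append(abs(traj[-1]))
    return sum(times) / n_runs, sum(errors) / n_runs

if __name__ == "__main__":
    mean_time, mean_err = empirical_stats()
    print(f"mean stopping time: {mean_time:.1f}, mean final |x|: {mean_err:.4f}")
```

The paper's PAC-Bayesian bounds then relate such empirical averages, measured on sampled problem instances, to the corresponding expectations under the underlying problem distribution.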
Latest update: 21.08.2024
Citation:
M. Sucker, P. Ochs:
A Markovian Model for Learning-to-Optimize. [pdf]
Technical Report, ArXiv e-prints, arXiv:2408.11629, 2024.
Bibtex:
@techreport{SO24,
  title       = {A Markovian Model for Learning-to-Optimize},
  author      = {M. Sucker and P. Ochs},
  institution = {ArXiv e-prints},
  number      = {arXiv:2408.11629},
  year        = {2024},
}


MOP Group
©2017-2024
The author is not responsible for the content of external pages.