Welcome to the homepage of the

Mathematical Optimization for Data Science Group

Department of Mathematics and Computer Science, Saarland University, Germany

Learning-to-Optimize with PAC-Bayesian Guarantees: Theoretical Considerations and Practical Implementation

M. Sucker, J. Fadili and P. Ochs

Abstract:
We apply PAC-Bayesian theory to the setting of learning-to-optimize. To the best of our knowledge, we present the first framework for learning optimization algorithms with provable generalization guarantees (PAC-Bayesian bounds) and an explicit trade-off between convergence guarantees and convergence speed, in contrast to the typical worst-case analysis. Our learned optimization algorithms provably outperform related algorithms derived from a (deterministic) worst-case analysis. The results rely on PAC-Bayesian bounds for general, possibly unbounded loss functions based on exponential families. We then reformulate the learning procedure as a one-dimensional minimization problem and study the possibility of finding a global minimum. Furthermore, we provide a concrete algorithmic realization of the framework together with new methodologies for learning-to-optimize, and we conduct four practically relevant experiments to support our theory. In doing so, we showcase that the proposed learning framework yields optimization algorithms that provably outperform the state of the art by orders of magnitude.
pdf Bibtex arXiv
Latest update: 04.04.2024
Citation:
M. Sucker, J. Fadili, P. Ochs:
Learning-to-Optimize with PAC-Bayesian Guarantees: Theoretical Considerations and Practical Implementation. [pdf]
Technical Report, ArXiv e-prints, arXiv:2404.03290, 2024.
Bibtex:
@techreport{SFO24,
  title       = {Learning-to-Optimize with PAC-Bayesian Guarantees: Theoretical Considerations and Practical Implementation},
  author      = {M. Sucker and J. Fadili and P. Ochs},
  institution = {ArXiv e-prints},
  number      = {arXiv:2404.03290},
  year        = {2024},
}
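
Background (editor's illustration): the guarantees in the abstract are PAC-Bayesian generalization bounds. As a point of reference, the classical McAllester-style bound conveys their flavor; it requires a loss bounded in [0, 1], whereas the paper derives bounds for general, possibly unbounded losses via exponential families, so the inequality below is only the standard template, not the paper's result. For a prior $\pi$ and any posterior $\rho$ over hypotheses $h$, an i.i.d. sample $S$ of size $n$, and confidence level $1 - \delta$, with probability at least $1 - \delta$:

  \mathbb{E}_{h \sim \rho}\big[L(h)\big] \;\le\; \mathbb{E}_{h \sim \rho}\big[\hat{L}_S(h)\big] + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\!\big(2\sqrt{n}/\delta\big)}{2n}},

where $L$ and $\hat{L}_S$ denote the population and the empirical risk, respectively.

The Python sketch below illustrates the generic learning-to-optimize recipe in this spirit only; it is not the authors' algorithm. All concrete choices (the quadratic task family, the Gaussian distribution over step sizes, the McAllester-style penalty, and the names sample_problem, suboptimality, objective) are hypothetical, made up for illustration: a distribution over gradient-descent step sizes is selected by minimizing an empirical risk plus a KL complexity term over a sample of training problems, via a one-dimensional search over the posterior mean.

  import numpy as np

  rng = np.random.default_rng(0)

  def sample_problem(d=5):
      # Hypothetical task family: random strongly convex quadratics
      # f(x) = 0.5 x^T A x - b^T x with positive definite A.
      M = rng.standard_normal((d, d))
      return M @ M.T + np.eye(d), rng.standard_normal(d)

  def suboptimality(step, A, b, iters=20):
      # Run gradient descent with a fixed step size and return the
      # (nonnegative) suboptimality gap f(x) - f(x*).
      f = lambda z: 0.5 * z @ A @ z - b @ z
      x = np.zeros(len(b))
      for _ in range(iters):
          x = x - step * (A @ x - b)
      return f(x) - f(np.linalg.solve(A, b))

  problems = [sample_problem() for _ in range(50)]

  def objective(mu, sigma=0.02, n_mc=30, prior=(0.05, 0.05)):
      # Monte-Carlo estimate of the risk of the randomized algorithm:
      # step sizes are drawn from the posterior N(mu, sigma^2).
      steps = np.clip(rng.normal(mu, sigma, n_mc), 1e-4, None)
      risk = np.mean([suboptimality(s, A, b) for s in steps for A, b in problems])
      # KL(N(mu, sigma^2) || N(prior)) complexity term, McAllester-style penalty.
      pm, ps = prior
      kl = np.log(ps / sigma) + (sigma**2 + (mu - pm)**2) / (2 * ps**2) - 0.5
      return risk + np.sqrt(kl / (2 * len(problems)))

  # One-dimensional search over the posterior mean; this echoes the paper's
  # one-dimensional reformulation in spirit only.
  grid = np.linspace(0.01, 0.2, 25)
  best_mu = min(grid, key=objective)
  print(f"selected mean step size: {best_mu:.3f}")

Unstable step sizes in the grid simply produce large Monte-Carlo risks and are discarded by the search, so no special handling of divergence is needed in this toy setting.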


MOP Group
©2017-2024
The author is not responsible for the content of external pages.