Mathematical Optimization for Data Science Group

Department of Mathematics and Computer Science, Saarland University, Germany

Bregman Proximal Framework for Deep Linear Neural Networks

M.C. Mukkamala, F. Westerkamp, E. Laude, D. Cremers and P. Ochs

Abstract:
A typical assumption in the analysis of first-order optimization methods is Lipschitz continuity of the gradient of the objective function. However, this assumption is violated in many practical applications, including loss functions in deep learning. To overcome this issue, extensions based on generalized proximity measures known as Bregman distances were introduced. This initiated the development of the Bregman proximal gradient (BPG) algorithm and its inertial (momentum-based) variant CoCaIn BPG, which, however, rely on problem-dependent Bregman distances. In this paper, we develop Bregman distances that allow BPG methods to be used for training deep linear neural networks. The main implications of our results are strong convergence guarantees for these algorithms. We also propose several strategies for their efficient implementation, for example, closed-form updates and a closed-form expression for the inertial parameter of CoCaIn BPG. Moreover, unlike its Euclidean counterpart, the BPG method requires neither diminishing step sizes nor line search. We numerically illustrate the competitiveness of the proposed methods compared to existing state-of-the-art schemes.
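To illustrate the idea of a BPG step, the sketch below applies it to a two-layer linear network f(W1, W2) = 0.5·||W2 W1 X − Y||²_F. It uses the quartic kernel h(x) = ||x||⁴/4 + ||x||²/2 purely as an illustrative choice (it is not the paper's problem-specific kernel, and the step size tau here is an arbitrary assumption); with this h the update ∇h(x⁺) = ∇h(x) − τ∇f(x) can be inverted in closed form up to a scalar cubic equation.

```python
import numpy as np

def loss_and_grads(W1, W2, X, Y):
    """Squared loss of the two-layer linear network and its gradients."""
    R = W2 @ W1 @ X - Y                      # residual
    f = 0.5 * np.sum(R ** 2)
    g1 = W2.T @ R @ X.T                      # df/dW1
    g2 = R @ (W1 @ X).T                      # df/dW2
    return f, g1, g2

def bpg_step(W1, W2, X, Y, tau=0.01):
    """One BPG step with the illustrative kernel h(x) = ||x||^4/4 + ||x||^2/2,
    whose gradient is grad h(x) = (||x||^2 + 1) x.  The update solves
    grad h(x_new) = grad h(x) - tau * grad f(x)."""
    _, g1, g2 = loss_and_grads(W1, W2, X, Y)
    x = np.concatenate([W1.ravel(), W2.ravel()])
    g = np.concatenate([g1.ravel(), g2.ravel()])
    p = (x @ x + 1.0) * x - tau * g          # right-hand side: grad h(x) - tau*grad f(x)
    # x_new is parallel to p; its norm r solves the monotone cubic r^3 + r = ||p||.
    c = np.linalg.norm(p)
    r = c                                    # Newton from r = c converges monotonically
    for _ in range(50):
        r = r - (r ** 3 + r - c) / (3 * r ** 2 + 1)
    x_new = p / (r ** 2 + 1.0)               # invert grad h along the direction of p
    return (x_new[: W1.size].reshape(W1.shape),
            x_new[W1.size:].reshape(W2.shape))

# Tiny usage example on random data (hypothetical sizes, illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 10))
Y = rng.standard_normal((2, 10))
W1 = 0.1 * rng.standard_normal((4, 3))
W2 = 0.1 * rng.standard_normal((2, 4))
f0, _, _ = loss_and_grads(W1, W2, X, Y)
for _ in range(200):
    W1, W2 = bpg_step(W1, W2, X, Y, tau=0.01)
f1, _, _ = loss_and_grads(W1, W2, X, Y)
```

Note that the step size stays fixed throughout: the kernel's quartic growth matches the fourth-degree growth of the loss, which is the relative-smoothness property that lets BPG avoid diminishing step sizes and line search.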
pdf Bibtex arXiv
Latest update: 08.10.2019
Citation:
M.C. Mukkamala, F. Westerkamp, E. Laude, D. Cremers, P. Ochs:
Bregman Proximal Framework for Deep Linear Neural Networks. [pdf]
International Conference on Scale Space and Variational Methods in Computer Vision (SSVM). Lecture Notes in Computer Science, Springer, 2021.
Bibtex:
@inproceedings{MWLCO21,
  title        = {Bregman Proximal Framework for Deep Linear Neural Networks},
  author       = {M.C. Mukkamala and F. Westerkamp and E. Laude and D. Cremers and P. Ochs},
  year         = {2021},
  booktitle    = {International Conference on Scale Space and Variational Methods in Computer Vision (SSVM)},
  series       = {Lecture Notes in Computer Science},
  publisher    = {Springer},
}


MOP Group
©2017-2024
The author is not responsible for the content of external pages.