Mathematical Optimization for Data Science Group

Department of Mathematics and Computer Science, Saarland University, Germany

Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms

P. Ochs, J. Fadili and T. Brox

Abstract:
We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search strategy, we obtain a flexible algorithm for which we prove (subsequential) convergence to a stationary point under weak assumptions on the growth of the model function error. Special instances of the algorithm with a Euclidean distance function are, for example, Gradient Descent, Forward–Backward Splitting, and ProxDescent, without the common requirement of a "Lipschitz continuous gradient". In addition, we consider a broad class of Bregman distance functions (generated by Legendre functions) replacing the Euclidean distance. The algorithm has a wide range of applications including many linear and non-linear inverse problems in image processing and machine learning.
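To make the abstract iteration concrete, the following minimal Python sketch illustrates the scheme in the Euclidean case; it is not the paper's code, and the names (model_bt, model_prox) and parameter values (rho, gamma) are illustrative assumptions. A callback returns an approximate minimizer of the convex model plus proximity term together with the resulting model decrease, and an Armijo-like backtracking step is taken along the direction it defines. Instantiating the model with the linearization of a smooth objective recovers Gradient Descent as a special case.

import numpy as np

def model_bt(f, model_prox, x0, max_iter=100, rho=0.25, gamma=0.5, tol=1e-9):
    # Sketch of the unifying iteration: minimize a convex model of f
    # around x, then backtrack along the resulting descent direction.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Approximate proximal point of the convex model at x;
        # model_decrease <= 0 measures the drop in model value.
        y, model_decrease = model_prox(x)
        d = y - x                                  # descent direction
        if np.linalg.norm(d) < tol:
            break
        tau = 1.0
        # Armijo-like line search on the true objective f.
        while f(x + tau * d) > f(x) + rho * tau * model_decrease:
            tau *= gamma
        x = x + tau * d
    return x

# Gradient Descent as a special instance: linear model + Euclidean proximity.
f = lambda x: 0.5 * np.dot(x, x)                   # toy smooth objective
grad = lambda x: x
t = 1.0                                            # proximal step size

def gd_model_prox(x):
    g = grad(x)
    y = x - t * g                                  # exact model minimizer
    decrease = -0.5 * t * np.dot(g, g)             # model value drop (<= 0)
    return y, decrease

x_star = model_bt(f, gd_model_prox, np.array([3.0, -4.0]))

Swapping gd_model_prox for a proximal mapping of a non-smooth term, or replacing the Euclidean proximity by a Bregman distance, yields the other special instances discussed in the paper.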
Latest update: 10.07.2017
Citation:
P. Ochs, J. Fadili, T. Brox:
Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms.
Journal of Optimization Theory and Applications, 181(1):244–278, 2018.
Bibtex:
@article{OFB18,
  title        = {Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms},
  author       = {P. Ochs and J. Fadili and T. Brox},
  year         = {2018},
  journal      = {Journal of Optimization Theory and Applications},
  number       = {1},
  volume       = {181},
  pages        = {244--278}
}

