We seek algorithms that extend the modelling flexibility available to practitioners, i.e., we want to solve new problem structures. To achieve this, we generalize existing algorithms to generic frameworks by reducing the convergence proofs to their essential mechanisms. Besides the gain in flexibility for modelling and solving real-world problems, this research yields deep insights into the understanding of algorithms and the connections between them.
Partially funded by the German Research Foundation (DFG Grant OC 150/1-1).
In the same line of research as [OFB19] above, we develop a model function framework that extends the classical Frank--Wolfe method (Conditional Gradient Descent). This generalization allows us to solve non-smooth non-convex optimization problems with a flexible design of the iteration oracle. Naturally, higher-order schemes can be derived, and we propose interesting variants that combine a Frank--Wolfe oracle with respect to one block of coordinates with a proximal minimization oracle with respect to another block of coordinates.
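For orientation, the sketch below shows the classical Frank--Wolfe iteration that the model function framework generalizes: the linear model built from the gradient is minimized over the feasible set by a linear minimization oracle, followed by a convex combination step with the standard open-loop step size. The names (frank_wolfe, simplex_lmo) and the simplex example are purely illustrative assumptions, not part of our implementation; the framework replaces the linear model by more general model functions.

```python
import numpy as np

def frank_wolfe(grad_f, lmo, x0, max_iter=200):
    """Classical Frank-Wolfe: at x_k, minimize the linear model
    m_k(s) = f(x_k) + <grad f(x_k), s - x_k> over the feasible set
    via the linear minimization oracle `lmo`, then take a convex step."""
    x = x0.copy()
    for k in range(max_iter):
        g = grad_f(x)
        s = lmo(g)                   # argmin over the feasible set of <g, s>
        gamma = 2.0 / (k + 2.0)      # standard open-loop step size
        x = x + gamma * (s - x)
    return x

# Toy instance: least squares over the probability simplex.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
grad_f = lambda x: A.T @ (A @ x - b)

def simplex_lmo(g):
    # A linear function attains its minimum over the simplex at a vertex.
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x_star = frank_wolfe(grad_f, simplex_lmo, x0=np.array([0.5, 0.5]))
print(x_star)
```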
We provide a unifying perspective on inertial algorithms, which also yields block-coordinate descent and variable metric versions. The contribution is in the line of
[ABS13]
H. Attouch, J. Bolte, and B. F. Svaiter: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward--backward splitting, and regularized Gauss--Seidel methods.
Mathematical Programming 137(1-2):91--129, 2013.
They provide an abstract descent framework that implies convergence of the full sequence of iterates generated by an abstract algorithm to a stationary point. Key to this contribution is the so-called Kurdyka--Lojasiewicz inequality. Essentially, the framework requires that the algorithm generates a sequence satisfying a sufficient decrease condition, a relative error condition, and a continuity condition, which leads to a common analysis of several recent algorithms in non-smooth non-convex optimization. Several follow-up papers have modified these three conditions to prove the same global convergence result, yielding new algorithms. Our contribution unifies most of these modifications and proves global convergence for an abstract algorithm. While, originally, the sufficient decrease condition was imposed on the objective, we impose it on a Lyapunov-like (and also parametric) function, which is crucial for incorporating inertial algorithms. In particular, we propose a block-coordinate variable metric version of iPiano. See also our research page on inertial or accelerated algorithms, which also presents algorithms for non-smooth non-convex optimization.
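For orientation, the three conditions of the abstract descent framework can be stated schematically as follows, for an objective f and fixed constants a, b > 0; this is a rough rendering rather than a quote from [ABS13], and in our framework the decrease condition is imposed on a parametric Lyapunov-like function instead of f.

```latex
% Schematic form of the abstract descent conditions; a, b > 0 are fixed constants.
\begin{align*}
  &\text{(sufficient decrease)} &
    f(x^{k+1}) + a\,\|x^{k+1}-x^{k}\|^{2} &\leq f(x^{k}), \\
  &\text{(relative error)} &
    \exists\, w^{k+1}\in\partial f(x^{k+1}):\quad \|w^{k+1}\| &\leq b\,\|x^{k+1}-x^{k}\|, \\
  &\text{(continuity)} &
    x^{k_j}\to \bar{x} \ \text{ and } \ f(x^{k_j}) &\to f(\bar{x}) \ \text{ along a subsequence.}
\end{align*}
```

Together with the Kurdyka--Lojasiewicz inequality at the limit point, these conditions yield convergence of the whole sequence of iterates to a stationary point.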
We introduced a robust and efficient algorithm to solve optimization problems in image processing and computer vision that require non-convex regularization. Previously, due to the complexity, convex surrogates of these non-convex regularizers were typically used, even though non-convex regularizers can be motivated naturally in various ways, including by natural image statistics. Applications are image denoising, deconvolution, optical flow, depth map fusion, etc. The algorithm unifies iteratively reweighted algorithms (e.g. IRLS, IRL1) based on the strategy of majorization--minimization (MM). Motivated by the non-convex regularizers in image processing, we formulate an MM framework, which can be implemented efficiently using primal--dual schemes.
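As a toy illustration of the iteratively reweighted / MM idea (not the algorithm of the paper, which targets image-level regularizers solved with primal--dual schemes), the following hypothetical 1D example majorizes a concave log penalty by a weighted l1 term, so each MM subproblem reduces to elementwise soft thresholding.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t*|.|: elementwise soft shrinkage."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def irl1_denoise(y, lam=0.5, eps=0.1, iters=30):
    """Iteratively reweighted l1 (a simple MM instance) for the toy model
        min_x  0.5*||x - y||^2 + lam * sum_i log(1 + |x_i|/eps).
    Each outer step majorizes the concave penalty by a weighted l1 term,
    whose subproblem has a closed-form soft-thresholding solution."""
    x = y.copy()
    for _ in range(iters):
        w = lam / (eps + np.abs(x))   # MM weights from the previous iterate
        x = soft_threshold(y, w)      # exact minimizer of the majorant
    return x

# Hypothetical usage: denoise a sparse spike signal.
rng = np.random.default_rng(0)
ground_truth = np.zeros(100)
ground_truth[[10, 40, 70]] = [2.0, -1.5, 3.0]
noisy = ground_truth + 0.1 * rng.standard_normal(100)
print(np.round(irl1_denoise(noisy)[[10, 40, 70]], 2))
```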