Abstract:
In this paper we propose an adaptively extrapolated proximal gradient method, which is based on the accelerated proximal gradient method (also known as FISTA); however, we locally optimize the extrapolation parameter by carrying out an exact (or inexact) line search. It turns out that in some situations, the proposed algorithm is equivalent to a class of SR1 (identity minus rank 1) proximal quasi-Newton methods. Convergence is proved in a general non-convex setting, and hence, as a byproduct, we also obtain new convergence guarantees for proximal quasi-Newton methods. The efficiency of the new method is demonstrated in numerical experiments on a sparsity-regularized non-linear inverse problem.
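To illustrate the core idea, here is a minimal sketch (not the paper's exact algorithm): a FISTA-type iteration for a lasso problem in which the extrapolation parameter is chosen by an inexact line search on the composite objective, here implemented as a coarse grid search. The function `adaptive_fista`, the grid of candidate parameters, and the lasso instance are illustrative assumptions; the paper's line search criterion and its connection to SR1 proximal quasi-Newton steps are more refined.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def adaptive_fista(A, b, lam, n_iter=200, betas=np.linspace(0.0, 1.0, 21)):
    # Sketch of an adaptively extrapolated proximal gradient method for
    #   min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    # where the extrapolation parameter beta is picked per iteration by an
    # inexact line search (coarse grid) instead of the usual Nesterov rule.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    t = 1.0 / L                            # proximal gradient step size
    x_prev = x = np.zeros(A.shape[1])

    def F(z):                              # composite objective f + g
        return 0.5 * np.sum((A @ z - b) ** 2) + lam * np.sum(np.abs(z))

    for _ in range(n_iter):
        best_val, best_x = np.inf, x
        for beta in betas:                 # line search over extrapolation
            y = x + beta * (x - x_prev)    # extrapolated point
            x_new = soft_threshold(y - t * (A.T @ (A @ y - b)), t * lam)
            val = F(x_new)
            if val < best_val:
                best_val, best_x = val, x_new
        x_prev, x = x, best_x
    return x

# Small synthetic usage example (illustrative data).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
x_hat = adaptive_fista(A, b, lam=0.1)
```

Picking the parameter that minimizes the composite objective at the resulting proximal gradient step is just one plausible line search criterion; an exact scalar minimization over the extrapolation parameter could replace the grid.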
Bibtex: @article{OP19,
  title   = {Adaptive FISTA for Nonconvex Optimization},
  author  = {P. Ochs and T. Pock},
  journal = {SIAM Journal on Optimization},
  year    = {2019},
  volume  = {29},
  number  = {4},
  pages   = {2482--2503}
}