TY - JOUR
T1 - Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
AU - Hanzely, Filip
AU - Richtárik, Peter
AU - Xiao, Lin
N1 - Publisher Copyright:
© 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2021/6
Y1 - 2021/6
AB - We consider the problem of minimizing the sum of two convex functions: one is differentiable and relatively smooth with respect to a reference convex function, and the other can be nondifferentiable but simple to optimize. We investigate a triangle scaling property of the Bregman distance generated by the reference convex function and present accelerated Bregman proximal gradient (ABPG) methods that attain an O(k^{-γ}) convergence rate, where γ ∈ (0, 2] is the triangle scaling exponent (TSE) of the Bregman distance. For the Euclidean distance, we have γ = 2 and recover the convergence rate of Nesterov's accelerated gradient methods. For non-Euclidean Bregman distances, the TSE can be much smaller (say γ ≤ 1), but we show that a relaxed definition of intrinsic TSE is always equal to 2. We exploit the intrinsic TSE to develop adaptive ABPG methods that converge much faster in practice. Although theoretical guarantees on a fast convergence rate seem to be out of reach in general, our methods obtain empirical O(k^{-2}) rates in numerical experiments on several applications and provide posterior numerical certificates for the fast rates.
KW - Accelerated gradient methods
KW - Bregman divergence
KW - Convex optimization
KW - Proximal gradient methods
KW - Relative smoothness
UR - http://www.scopus.com/inward/record.url?scp=85103928357&partnerID=8YFLogxK
U2 - 10.1007/s10589-021-00273-8
DO - 10.1007/s10589-021-00273-8
M3 - Article
AN - SCOPUS:85103928357
SN - 0926-6003
VL - 79
SP - 405
EP - 440
JO - Computational Optimization and Applications
JF - Computational Optimization and Applications
IS - 2
ER -