Accelerated Bregman proximal gradient methods for relatively smooth convex optimization

Filip Hanzely, Peter Richtárik, Lin Xiao

Research output: Contribution to journal › Article › peer-review



We consider the problem of minimizing the sum of two convex functions: one is differentiable and relatively smooth with respect to a reference convex function, and the other can be nondifferentiable but simple to optimize. We investigate a triangle scaling property of the Bregman distance generated by the reference convex function and present accelerated Bregman proximal gradient (ABPG) methods that attain an O(k^{-γ}) convergence rate, where γ ∈ (0, 2] is the triangle scaling exponent (TSE) of the Bregman distance. For the Euclidean distance, we have γ = 2 and recover the convergence rate of Nesterov's accelerated gradient methods. For non-Euclidean Bregman distances, the TSE can be much smaller (say γ ≤ 1), but we show that a relaxed definition of intrinsic TSE is always equal to 2. We exploit the intrinsic TSE to develop adaptive ABPG methods that converge much faster in practice. Although theoretical guarantees on a fast convergence rate seem to be out of reach in general, our methods obtain empirical O(k^{-2}) rates in numerical experiments on several applications and provide posterior numerical certificates for the fast rates.
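To make the Bregman proximal gradient idea concrete, here is a minimal sketch of a single (non-accelerated) Bregman gradient step, not the authors' ABPG method. With the negative entropy h(x) = Σᵢ xᵢ log xᵢ as the reference function on the probability simplex, the step x₊ = argminᵤ {⟨∇f(x), u⟩ + (1/α) D_h(u, x)} has the closed-form multiplicative (exponentiated-gradient) update below. The toy objective, step size `alpha`, and cost vector `c` are illustrative assumptions, not from the paper.

```python
import numpy as np

def bregman_grad_step(x, grad, alpha):
    """One Bregman proximal gradient step on the simplex with
    negative-entropy reference function (exponentiated gradient):
        x_+ = argmin_u <grad, u> + (1/alpha) * D_h(u, x).
    """
    logits = np.log(x) - alpha * grad
    logits -= logits.max()          # shift for numerical stability
    x_new = np.exp(logits)
    return x_new / x_new.sum()      # renormalize onto the simplex

# Toy problem (illustrative): minimize f(x) = <c, x> over the simplex.
c = np.array([3.0, 1.0, 2.0])
x = np.ones(3) / 3
for _ in range(200):
    x = bregman_grad_step(x, c, alpha=0.5)
# iterates stay on the simplex and mass concentrates on the
# smallest-cost coordinate (index 1)
```

Accelerating this step is exactly where the triangle scaling exponent enters: the Euclidean distance scales with γ = 2 under the extrapolation used in Nesterov-type schemes, while the KL divergence above need not, which is what the paper's adaptive ABPG variants address.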

Original language: English
Pages (from-to): 405-440
Number of pages: 36
Journal: Computational Optimization and Applications
Issue number: 2
State: Published - Jun 2021
Externally published: Yes


Keywords
  • Accelerated gradient methods
  • Bregman divergence
  • Convex optimization
  • Proximal gradient methods
  • Relative smoothness


