An accelerated proximal coordinate gradient method

Qihang Lin, Zhaosong Lu, Lin Xiao

Research output: Contribution to journal › Conference article › peer-review

75 Scopus citations


We develop an accelerated randomized proximal coordinate gradient (APCG) method for solving a broad class of composite convex optimization problems. In particular, our method achieves faster linear convergence rates for minimizing strongly convex functions than existing randomized proximal coordinate gradient methods. We show how to apply the APCG method to solve the dual of the regularized empirical risk minimization (ERM) problem, and devise efficient implementations that avoid full-dimensional vector operations. For ill-conditioned ERM problems, our method achieves better convergence rates than the state-of-the-art stochastic dual coordinate ascent (SDCA) method.
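To make the setting concrete, here is a minimal sketch of the non-accelerated randomized proximal coordinate gradient baseline that APCG improves upon, applied to a hypothetical lasso instance (f(x) = ½‖Ax − b‖², Ψ(x) = λ‖x‖₁). The function names `rpcg_lasso` and `soft_threshold`, and the demo problem itself, are illustrative assumptions, not code from the paper; the residual is maintained incrementally to illustrate the kind of implementation that avoids full-dimensional vector operations.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * |.| (coordinate-wise prox of the l1 term).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def rpcg_lasso(A, b, lam, n_iters=5000, seed=0):
    """Randomized proximal coordinate gradient (hypothetical demo) for
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                # residual A x - b, updated incrementally
    L = np.sum(A * A, axis=0)    # coordinate-wise Lipschitz constants ||A_i||^2
    for _ in range(n_iters):
        i = rng.integers(n)                       # sample a coordinate uniformly
        g = A[:, i] @ r                           # partial gradient grad_i f(x)
        xi_new = soft_threshold(x[i] - g / L[i], lam / L[i])
        r += A[:, i] * (xi_new - x[i])            # O(m) update, no full-dim ops
        x[i] = xi_new
    return x
```

For a μ-strongly convex objective (μ measured against the coordinate-wise norm), this baseline contracts the expected suboptimality by roughly (1 − μ/n) per iteration, whereas the accelerated APCG scheme of the paper improves the factor to roughly (1 − √μ/n), i.e. O(n/√μ · log(1/ε)) iterations instead of O(n/μ · log(1/ε)).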

Original language: English
Pages (from-to): 3059-3067
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Issue number: January
State: Published - 2014
Externally published: Yes
Event: 28th Annual Conference on Neural Information Processing Systems 2014, NIPS 2014 - Montreal, Canada
Duration: Dec 8 2014 - Dec 13 2014
