An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization

Qihang Lin, Lin Xiao

Research output: Chapter in Book/Report/Conference proceeding · Conference contribution · peer-review


Abstract

We first propose an adaptive accelerated proximal gradient (APG) method for minimizing strongly convex composite functions with unknown convexity parameters. This method incorporates a restarting scheme to automatically estimate the strong convexity parameter and achieves a nearly optimal iteration complexity. Then we consider the l1-regularized least-squares (l1-LS) problem in the high-dimensional setting. Although such an objective function is not strongly convex, it has restricted strong convexity over sparse vectors. We exploit this property by combining the adaptive APG method with a homotopy continuation scheme, which generates a sparse solution path towards optimality. This method obtains a global linear rate of convergence, and its overall iteration complexity has a weaker dependency on the restricted condition number than previous work.
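The abstract describes two components that compose naturally: an APG loop whose momentum is set from a running estimate of the strong convexity parameter (shrunk via restart whenever the objective fails to decrease), and a homotopy stage that solves l1-LS for a geometrically decreasing sequence of regularization parameters, warm-starting each stage from the previous solution. The NumPy sketch below illustrates that structure only; the parameter names and specific constants (the decrease factor eta, the halving of the mu estimate, the stopping tolerance) are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apg_restart(A, b, lam, x0, L, mu, max_iter=500, tol=1e-8):
    """APG for 0.5*||Ax - b||^2 + lam*||x||_1. Momentum is set from a
    current estimate mu of the strong convexity parameter; momentum is
    restarted (and mu halved) whenever the objective increases."""
    def obj(x):
        r = A @ x - b
        return 0.5 * (r @ r) + lam * np.abs(x).sum()

    x = x0.copy()
    y = x0.copy()
    f_prev = obj(x)
    for _ in range(max_iter):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        f_new = obj(x_new)
        if f_new > f_prev:       # no descent: mu estimate was too large
            mu *= 0.5            # shrink the estimate and restart momentum
            y = x.copy()
            continue
        q = np.sqrt(mu / L)
        y = x_new + (1 - q) / (1 + q) * (x_new - x)  # strongly convex momentum
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new, mu
        x, f_prev = x_new, f_new
    return x, mu

def homotopy_l1_ls(A, b, lam_target, eta=0.7):
    """Solve l1-LS at lam_target by continuation: start at the smallest
    lam for which 0 is optimal and decrease it geometrically, warm-starting
    each stage from the previous stage's (sparse) solution."""
    L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the smooth part
    lam = np.abs(A.T @ b).max()      # x = 0 is optimal for any lam >= this
    x = np.zeros(A.shape[1])
    mu = L                           # optimistic initial estimate of mu
    while lam > lam_target:
        lam = max(eta * lam, lam_target)
        x, mu = apg_restart(A, b, lam, x, L, mu)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 1000))
    x_true = np.zeros(1000)
    x_true[:10] = rng.standard_normal(10)
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = homotopy_l1_ls(A, b, lam_target=0.1)
    print("nonzeros in solution:", int(np.count_nonzero(np.abs(x_hat) > 1e-6)))
```

The warm starts are what make continuation pay off in the high-dimensional regime the abstract targets: each intermediate solution stays sparse, which is the setting where the restricted strong convexity the paper exploits applies.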

Original language: English
Title of host publication: 31st International Conference on Machine Learning, ICML 2014
Publisher: International Machine Learning Society (IMLS)
Pages: 120-150
Number of pages: 31
ISBN (Electronic): 9781634393973
State: Published - 2014
Externally published: Yes
Event: 31st International Conference on Machine Learning, ICML 2014 - Beijing, China
Duration: Jun 21 2014 – Jun 26 2014

Publication series

Name: 31st International Conference on Machine Learning, ICML 2014
Volume: 1

Conference

Conference: 31st International Conference on Machine Learning, ICML 2014
Country/Territory: China
City: Beijing
Period: 06/21/14 – 06/26/14

