From low probability to high confidence in stochastic convex optimization

Damek Davis, Dmitriy Drusvyatskiy, Lin Xiao, Junyu Zhang

Research output: Contribution to journal › Article › peer-review

Abstract

Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. More nuanced high probability guarantees are rare, and typically either rely on "light-tail" noise assumptions or exhibit worse sample complexity. In this work, we show that a wide class of stochastic optimization algorithms for strongly convex problems can be augmented with high confidence bounds at an overhead cost that is only logarithmic in the confidence level and polylogarithmic in the condition number.
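The confidence-boosting idea behind results of this kind can be illustrated with a classic aggregation trick: run several independent low-confidence trials of a stochastic method, then use robust distance estimation to select the trial whose median distance to the other trials is smallest. The sketch below, on a toy one-dimensional strongly convex problem with noisy gradients, is only an illustrative assumption of the general scheme (the step size, trial count, and selection rule here are hypothetical choices, not the paper's proxBoost procedure):

```python
import random
import statistics

def sgd_trial(grad_oracle, x0, steps):
    """One low-confidence run of SGD with the classic 1/t step size."""
    x = x0
    for t in range(1, steps + 1):
        x = x - (1.0 / t) * grad_oracle(x)
    return x

def robust_select(points):
    """Robust distance estimation: return the point whose median
    distance to the remaining points is smallest."""
    def med_dist(i):
        return statistics.median(
            abs(points[i] - points[j]) for j in range(len(points)) if j != i
        )
    return points[min(range(len(points)), key=med_dist)]

# Toy 1-strongly-convex objective f(x) = 0.5*(x - 3)^2,
# observed through a noisy gradient oracle.
random.seed(0)
def noisy_grad(x):
    return (x - 3.0) + random.gauss(0.0, 1.0)

# Each trial alone succeeds only with constant probability of being
# close to the minimizer; aggregating 11 trials boosts the confidence.
trials = [sgd_trial(noisy_grad, x0=0.0, steps=2000) for _ in range(11)]
best = robust_select(trials)
print(best)  # close to the minimizer x* = 3
```

Running more trials drives the failure probability down exponentially in the number of trials, which is why the overhead of such schemes is only logarithmic in the target confidence level.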

Original language: English
Journal: Journal of Machine Learning Research
Volume: 22
State: Published - 2021
Externally published: Yes

Keywords

  • Composite optimization
  • Empirical risk minimization
  • Proximal point method
  • Robust distance estimation
  • Stochastic approximation
