Proximal Gradient Methods with Adaptive Subspace Sampling

Abstract

Many applications in machine learning or signal processing involve nonsmooth optimization problems. This nonsmoothness brings a low-dimensional structure to the optimal solutions. In this paper, we propose a randomized proximal gradient method harnessing this underlying structure. We introduce two key components: i) a random subspace proximal gradient algorithm; ii) an identification-based sampling of the subspaces. Their interplay brings a significant performance improvement on typical learning problems in terms of dimensions explored.
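To give a concrete feel for the two ingredients named above, here is a minimal sketch, not the paper's algorithm: a proximal gradient method for the lasso that updates only a randomly sampled subset of coordinates at each iteration, with sampling probabilities re-weighted toward the currently identified (nonzero) support. The choice of problem, the soft-thresholding prox, the subspace size `k`, and the re-weighting rule are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def subspace_prox_grad_lasso(A, b, lam, n_iters=500, k=20, step=None, seed=0):
    """Illustrative sketch (not the paper's method): proximal gradient on
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
    updating only a random coordinate subspace per iteration, with sampling
    probabilities biased toward the currently nonzero support."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / L with L = ||A||_2^2
    x = np.zeros(n)
    probs = np.full(n, 1.0 / n)
    for _ in range(n_iters):
        # Sample a k-dimensional coordinate subspace (without replacement).
        S = rng.choice(n, size=min(k, n), replace=False, p=probs)
        grad_S = A[:, S].T @ (A @ x - b)              # gradient restricted to S
        x[S] = soft_threshold(x[S] - step * grad_S, step * lam)
        # Identification-inspired re-weighting: favor the current support,
        # while keeping every coordinate reachable.
        support = np.abs(x) > 1e-10
        probs = np.where(support, 1.0, 0.1)
        probs /= probs.sum()
    return x

if __name__ == "__main__":
    # Small synthetic demo: recover a sparse vector from noiseless measurements.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 400))
    x_true = np.zeros(400)
    x_true[:5] = 1.0
    b = A @ x_true
    x_hat = subspace_prox_grad_lasso(A, b, lam=0.1, n_iters=2000, k=40)
    print("nonzeros found:", int(np.sum(np.abs(x_hat) > 1e-3)))
```

Because the iterates of such sparsity-inducing methods eventually settle on a small support, restricting the proximal gradient step to sampled subspaces, and sampling them adaptively, can reduce the number of dimensions touched per iteration without stalling progress; that is the intuition behind the performance improvement claimed in the abstract.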

Event: MOTOR 2021
Location: Online
Dmitry Grishchenko, Senior Engineer