Sampling and Update Frequencies in Proximal Variance-Reduced Stochastic Gradient Methods
Variance-reduced stochastic gradient methods have gained popularity in recent years. Several variants exist with different strategies for storing and sampling gradients, and this work concerns the interplay between these two aspects. We present a general proximal variance-reduced gradient method and analyze it under strong convexity assumptions. Special cases of the algorithm include SAGA and L-SVRG.
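As an illustration of the kind of method the abstract describes, below is a minimal sketch of proximal SAGA, one of the named special cases. The problem instance (an l1-regularized least-squares objective), the step size rule, and all variable names are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

# Illustrative problem: min_x (1/n) sum_i 0.5*(a_i^T x - b_i)^2 + lam*||x||_1
rng = np.random.default_rng(0)
n, d = 50, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
lam = 0.1  # l1 regularization weight (assumed for this sketch)

def grad_i(x, i):
    # Gradient of the i-th component f_i(x) = 0.5*(a_i^T x - b_i)^2
    return A[i] * (A[i] @ x - b[i])

def prox_l1(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(x):
    return 0.5 * np.mean((A @ x - b) ** 2) + lam * np.abs(x).sum()

x = np.zeros(d)
# SAGA stores one gradient per component function
table = np.array([grad_i(x, i) for i in range(n)])
avg = table.mean(axis=0)
# Standard SAGA step size 1/(3L), L = max_i ||a_i||^2
gamma = 1.0 / (3.0 * np.max(np.sum(A * A, axis=1)))

for _ in range(20000):
    i = rng.integers(n)
    g = grad_i(x, i)
    v = g - table[i] + avg              # variance-reduced gradient estimate
    x = prox_l1(x - gamma * v, gamma * lam)
    avg += (g - table[i]) / n           # keep the running average in sync
    table[i] = g                        # update the stored gradient

f0 = objective(np.zeros(d))
f_final = objective(x)
# Fixed-point residual of the proximal-gradient map (near zero at a solution)
full_grad = A.T @ (A @ x - b) / n
residual = np.linalg.norm(x - prox_l1(x - gamma * full_grad, gamma * lam))
```

The update samples one stored gradient per iteration and refreshes its table entry immediately; variants that update the table less frequently, or sample non-uniformly, are exactly the design axes (storage and sampling) the abstract refers to.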
