Research Post
We analyze the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values, the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from O(1/√k) to O(1/k) in general, and when the sum is strongly convex the convergence rate is improved from the sub-linear O(1/k) to a linear convergence rate of the form O(ρ^k) for ρ < 1. Further, in many cases the convergence rate of the new method is also faster than black-box deterministic gradient methods, in terms of the number of gradient evaluations. This extends our earlier work, Le Roux et al. (Adv Neural Inf Process Syst, 2012), which only led to a faster rate for well-conditioned strongly convex problems. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies.
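To make the "memory of previous gradient values" idea concrete, here is a minimal sketch of the basic SAG update with uniform sampling. It is an illustration under simple assumptions, not the authors' reference implementation; the names `sag` and `grad_i`, the fixed step size, and the zero-initialized gradient table are all illustrative choices.

```python
import numpy as np

def sag(grad_i, n, x0, step_size, num_iters, rng=None):
    """Sketch of the stochastic average gradient (SAG) method.

    grad_i(x, i) should return the gradient of the i-th term f_i at x.
    A table stores the most recent gradient seen for each term; each
    iteration refreshes one entry and steps along the average of the table.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    grad_table = np.zeros((n, x.size))   # stored gradient for each term f_i
    grad_sum = np.zeros(x.size)          # running sum of the table entries

    for _ in range(num_iters):
        i = rng.integers(n)                 # pick one term uniformly at random
        g_new = grad_i(x, i)                # evaluate only that term's gradient
        grad_sum += g_new - grad_table[i]   # keep the sum current in O(dim) work
        grad_table[i] = g_new
        x -= step_size * grad_sum / n       # move along the averaged gradient
    return x
```

Because the sum of stored gradients is maintained incrementally, each iteration evaluates only a single term's gradient, so the per-iteration cost stays independent of the number of terms, matching the property highlighted in the abstract.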
Acknowledgments
We would like to thank the anonymous reviewers for their many useful comments. This work was partially supported by the European Research Council (SIERRA-ERC-239993) and a Google Research Award. Mark Schmidt is also supported by a postdoctoral fellowship from the Natural Sciences and Engineering Research Council of Canada.
Feb 15th 2022
Research Post
Read this research paper, co-authored by Amii Fellow and Canada CIFAR AI Chair Osmar Zaiane: UCTransNet: Rethinking the Skip Connections in U-Net from a Channel-Wise Perspective with Transformer
Sep 27th 2021
Research Post