'MADGRAD' Method for Stochastic Optimization

MADGRAD (Momentumized, Adaptive, Dual Averaged Gradient) is a method for stochastic optimization. It is a 'best-of-both-worlds' optimizer, combining the generalization performance of stochastic gradient descent with convergence at least as fast as that of Adam, and often faster. The package provides a drop-in optim_madgrad() implementation based on Defazio et al (2020).
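
Below is a minimal sketch of how optim_madgrad() could be used as a drop-in optimizer with the torch R package on a toy least-squares problem; the learning rate and the way the parameter list is passed are illustrative assumptions, so check the reference manual for the authoritative signature and defaults.

library(torch)
library(madgrad)

# Toy problem: recover w_true from observations y = x %*% w_true.
x <- torch_randn(100, 10)
w_true <- torch_randn(10, 1)
y <- x$mm(w_true)

# Parameter to optimize, tracked by autograd.
w <- torch_zeros(10, 1, requires_grad = TRUE)

# optim_madgrad() is used like any other torch optimizer;
# lr = 0.01 is an assumed value, not a recommended default.
opt <- optim_madgrad(list(w), lr = 0.01)

for (i in 1:200) {
  opt$zero_grad()
  loss <- nnf_mse_loss(x$mm(w), y)
  loss$backward()
  opt$step()
}

Because the optimizer follows the standard torch optimizer interface, it can replace optim_sgd() or optim_adam() in an existing training loop without other changes.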



install.packages("madgrad")

Version 0.1.0 by Daniel Falbel


Browse source code at https://github.com/cran/madgrad


Authors: Daniel Falbel [aut, cre, cph], RStudio [cph], MADGRAD original implementation authors [cph]


Documentation: PDF Manual


License: MIT + file LICENSE


Imports torch, rlang

Suggests testthat

