Project author: JRC1995

Project description:
Quasi-Hyperbolic Rectified DEMON Adam/AMSGrad with AdaMod, Gradient Centralization, Lookahead, iterate averaging, and decorrelated weight decay
Language: Python
Repository: git://github.com/JRC1995/DemonRangerOptimizer.git
Created: 2020-04-18T23:24:41Z
Project page: https://github.com/JRC1995/DemonRangerOptimizer


DemonRangerOptimizer

Quasi-Hyperbolic Rectified DEMON (Decaying Momentum) Adam/AMSGrad with AdaMod, Lookahead, iterate averaging, and decorrelated weight decay.

The repository also includes variants with Nostalgia (NosAdam), the partially adaptive parameter p (from PAdam), LaProp, and hypergradient descent (see HyperRanger, HyperRangerMod, and others in optimizers.py).

Notes:

  • The Hyperxxx series of optimizers implements hypergradient descent for dynamic learning-rate updates. Some optimizers, like HDQHSGDW, apply hypergradient descent to all hyperparameters (beta, nu, lr). Unlike the original implementation (https://arxiv.org/abs/1703.04782, https://github.com/gbaydin/hypergradient-descent), they account for the gradients contributed by weight decay and other terms. Learning rates are also kept in per-parameter state, so the lr of each parameter is hypertuned separately through hypergradient descent instead of at the group level as in the original implementation.

  • LRangerMod uses linear warmup within Adam/AMSGrad, following the rule of thumb in (https://arxiv.org/abs/1910.04209v1). Note that Rectified Adam essentially reduces to a fixed (rather than dynamic) learning-rate schedule similar to a linear warmup.

  • The file (optimizers.py) explains the parameters of each of the combined optimizers.
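
For intuition on the hypergradient trick: since the SGD update is θ_t = θ_{t−1} − α·g_{t−1}, the derivative of the loss with respect to α is −g_t·g_{t−1}, so the learning rate is nudged up whenever consecutive gradients agree. A minimal standalone sketch on a 1-D quadratic (the function and constants here are illustrative, not taken from optimizers.py):

```python
def hypergradient_sgd(grad_fn, theta, lr=0.01, hyper_lr=1e-5, steps=50):
    """SGD whose learning rate is itself adapted by gradient descent
    (Baydin et al., 2017). grad_fn returns the gradient at theta."""
    prev_grad = 0.0
    for _ in range(steps):
        g = grad_fn(theta)
        # d(loss)/d(lr) = -g_t . g_{t-1}; move lr against that derivative
        lr += hyper_lr * g * prev_grad
        theta -= lr * g
        prev_grad = g
    return theta, lr

# On f(x) = x^2 consecutive gradients agree in sign, so lr grows
# while theta shrinks toward the minimum.
theta, lr = hypergradient_sgd(lambda x: 2 * x, theta=5.0)
```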

How to use:

```python
from optimizers import DemonRanger
from dataloader import batcher  # some function to batch data

class Config:
    def __init__(self):
        self.batch_size = ...
        self.wd = ...
        self.lr = ...
        self.epochs = ...

config = Config()
train_data = ...
step_per_epoch = count_step_per_epoch(train_data, config.batch_size)
model = module(stuff)

optimizer = DemonRanger(params=model.parameters(),
                        lr=config.lr,
                        weight_decay=config.wd,
                        epochs=config.epochs,
                        step_per_epoch=step_per_epoch,
                        IA_cycle=step_per_epoch)

IA_activate = False
for epoch in range(config.epochs):
    batches = batcher(train_data, config.batch_size)
    for batch in batches:
        loss = ...  # compute the loss
        loss.backward()
        optimizer.step(IA_activate=IA_activate)
    # Automatically enable IA (Iterate Averaging) near the end of training,
    # e.g. when a metric of your choice has not improved for a while:
    if patience_running_low and not IA_activate:
        IA_activate = True
```
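
One simple way to flip `IA_activate` is an early-stopping-style patience counter. The helper below is illustrative and not part of this repository:

```python
class IAPatience:
    """Returns True once `patience` consecutive epochs pass without
    the validation metric improving."""
    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def update(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# e.g. at the end of each epoch: IA_activate = IA_activate or tracker.update(val_loss)
tracker = IAPatience(patience=3)
flags = [tracker.update(v) for v in [1.0, 0.9, 0.95, 0.93, 0.91]]
```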

Recover AdamW:

```python
optimizer = DemonRanger(params=model.parameters(),
                        lr=config.lr,
                        betas=(0.9, 0.999, 0.999),  # restore default AdamW betas
                        nus=(1.0, 1.0),  # disables QHMomentum
                        k=0,  # disables lookahead
                        alpha=1.0,
                        weight_decay=config.wd,
                        IA=False,  # disables Iterate Averaging
                        rectify=False,  # disables RAdam rectification
                        AdaMod=False,  # disables AdaMod
                        use_demon=False,  # disables Decaying Momentum (DEMON)
                        use_gc=False,  # disables gradient centralization
                        amsgrad=False)  # disables AMSGrad

# just call optimizer.step() when needed
```

Recover AMSGrad:

```python
optimizer = DemonRanger(params=model.parameters(),
                        lr=config.lr,
                        betas=(0.9, 0.999, 0.999),  # restore default AdamW betas
                        nus=(1.0, 1.0),  # disables QHMomentum
                        k=0,  # disables lookahead
                        alpha=1.0,
                        weight_decay=config.wd,
                        IA=False,  # disables Iterate Averaging
                        rectify=False,  # disables RAdam rectification
                        AdaMod=False,  # disables AdaMod
                        use_demon=False,  # disables Decaying Momentum (DEMON)
                        use_gc=False,  # disables gradient centralization
                        amsgrad=True)  # enables AMSGrad

# just call optimizer.step() when needed
```

Recover QHAdam

```python
optimizer = DemonRanger(params=model.parameters(),
                        lr=config.lr,
                        k=0,  # disables lookahead
                        alpha=1.0,
                        weight_decay=config.wd,
                        IA=False,  # disables Iterate Averaging
                        rectify=False,  # disables RAdam rectification
                        AdaMod=False,  # disables AdaMod
                        use_demon=False,  # disables Decaying Momentum (DEMON)
                        use_gc=False,  # disables gradient centralization
                        amsgrad=False)  # disables AMSGrad

# just call optimizer.step() when needed
```
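
For reference, `nus` controls the quasi-hyperbolic interpolation: the update direction is the weighted mix (1 − ν)·g + ν·m̂ of the raw gradient and the momentum buffer, so `nus=(1.0, 1.0)` recovers plain Adam behavior while ν = 0 ignores momentum entirely. A toy scalar illustration (standalone, not the repository's internals):

```python
def qh_direction(grad, momentum, nu):
    # Quasi-hyperbolic mix of raw gradient and momentum (Ma & Yarats, 2018)
    return (1 - nu) * grad + nu * momentum

g, m = 2.0, 0.5
pure_momentum = qh_direction(g, m, nu=1.0)  # Adam-like: momentum only
pure_gradient = qh_direction(g, m, nu=0.0)  # SGD-like: raw gradient only
blended = qh_direction(g, m, nu=0.7)        # QH default-style blend
```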

Recover RAdam

```python
optimizer = DemonRanger(params=model.parameters(),
                        lr=config.lr,
                        betas=(0.9, 0.999, 0.999),  # restore default AdamW betas
                        nus=(1.0, 1.0),  # disables QHMomentum
                        k=0,  # disables lookahead
                        alpha=1.0,
                        weight_decay=config.wd,
                        IA=False,  # disables Iterate Averaging
                        AdaMod=False,  # disables AdaMod
                        use_demon=False,  # disables Decaying Momentum (DEMON)
                        use_gc=False,  # disables gradient centralization
                        amsgrad=False)  # disables AMSGrad

# just call optimizer.step() when needed
```

Recover Ranger (RAdam + LookAhead)

```python
optimizer = DemonRanger(params=model.parameters(),
                        lr=config.lr,
                        betas=(0.9, 0.999, 0.999),  # restore default AdamW betas
                        nus=(1.0, 1.0),  # disables QHMomentum
                        weight_decay=config.wd,
                        IA=False,  # disables Iterate Averaging
                        AdaMod=False,  # disables AdaMod
                        use_demon=False,  # disables Decaying Momentum (DEMON)
                        use_gc=False,  # disables gradient centralization
                        amsgrad=False)  # disables AMSGrad

# just call optimizer.step() when needed
```
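
`k` and `alpha` are the Lookahead hyperparameters: every k inner-optimizer steps, the slow weights move a fraction alpha toward the fast weights, and the fast weights restart from the result. A scalar sketch of that synchronization step (illustrative only):

```python
def lookahead_sync(slow, fast, alpha):
    """Lookahead slow-weight update (Zhang et al., 2019):
    slow <- slow + alpha * (fast - slow); fast restarts from slow."""
    slow = slow + alpha * (fast - slow)
    return slow, slow

# After k fast steps moved fast from 0.0 to 1.0, alpha=0.5 pulls both to 0.5.
slow, fast = lookahead_sync(slow=0.0, fast=1.0, alpha=0.5)
```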

Recover QHRanger (QHRAdam + LookAhead)

```python
optimizer = DemonRanger(params=model.parameters(),
                        lr=config.lr,
                        weight_decay=config.wd,
                        IA=False,  # disables Iterate Averaging
                        AdaMod=False,  # disables AdaMod
                        use_demon=False,  # disables Decaying Momentum (DEMON)
                        use_gc=False,  # disables gradient centralization
                        amsgrad=False)  # disables AMSGrad

# just call optimizer.step() when needed
```

Recover AdaMod

```python
optimizer = DemonRanger(params=model.parameters(),
                        lr=config.lr,
                        weight_decay=config.wd,
                        betas=(0.9, 0.999, 0.999),  # restore default AdamW betas
                        nus=(1.0, 1.0),  # disables QHMomentum
                        k=0,  # disables lookahead
                        alpha=1.0,
                        IA=False,  # disables Iterate Averaging
                        rectify=False,  # disables RAdam rectification
                        AdaMod_bias_correct=False,  # disables AdaMod bias correction (not used originally)
                        use_demon=False,  # disables Decaying Momentum (DEMON)
                        use_gc=False,  # disables gradient centralization
                        amsgrad=False)  # disables AMSGrad

# just call optimizer.step() when needed
```

Recover GAdam

```python
optimizer = DemonRanger(params=model.parameters(),
                        lr=config.lr,
                        weight_decay=config.wd,
                        betas=(0.9, 0.999, 0.999),  # restore default AdamW betas
                        nus=(1.0, 1.0),  # disables QHMomentum
                        k=0,  # disables lookahead
                        alpha=1.0,
                        IA=True,  # enables Iterate Averaging
                        rectify=False,  # disables RAdam rectification
                        AdaMod=False,  # disables AdaMod
                        use_demon=False,  # disables Decaying Momentum (DEMON)
                        use_gc=False,  # disables gradient centralization
                        amsgrad=False)  # disables AMSGrad

# call optimizer.step(IA_activate=IA_activate) when needed; set IA_activate to True
# near the end of training based on a schedule or a tuned hyperparameter
# (an alternative to learning-rate scheduling)
```

Recover GAdam + LookAhead

```python
optimizer = DemonRanger(params=model.parameters(),
                        lr=config.lr,
                        weight_decay=config.wd,
                        betas=(0.9, 0.999, 0.999),  # restore default AdamW betas
                        nus=(1.0, 1.0),  # disables QHMomentum
                        k=5,  # enables lookahead
                        alpha=0.8,
                        IA=True,  # enables Iterate Averaging
                        rectify=False,  # disables RAdam rectification
                        AdaMod=False,  # disables AdaMod
                        use_demon=False,  # disables Decaying Momentum (DEMON)
                        use_gc=False,  # disables gradient centralization
                        amsgrad=False)  # disables AMSGrad

# call optimizer.step(IA_activate=IA_activate) when needed; set IA_activate to True
# near the end of training based on a schedule or a tuned hyperparameter
# (an alternative to learning-rate scheduling)
```

Recover DEMON Adam

```python
optimizer = DemonRanger(params=model.parameters(),
                        lr=config.lr,
                        weight_decay=config.wd,
                        epochs=config.epochs,
                        step_per_epoch=step_per_epoch,
                        betas=(0.9, 0.999, 0.999),  # restore default AdamW betas
                        nus=(1.0, 1.0),  # disables QHMomentum
                        k=0,  # disables lookahead
                        alpha=1.0,
                        IA=False,  # disables Iterate Averaging
                        rectify=False,  # disables RAdam rectification
                        AdaMod=False,  # disables AdaMod
                        AdaMod_bias_correct=False,  # disables AdaMod bias correction (not used originally)
                        use_demon=True,  # enables Decaying Momentum (DEMON)
                        use_gc=False,  # disables gradient centralization
                        amsgrad=False)  # disables AMSGrad

# just call optimizer.step() when needed
```
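
DEMON needs `epochs` and `step_per_epoch` because it decays the momentum parameter from its initial value toward 0 over the total training budget T via β_t = β_init·(1 − t/T) / ((1 − β_init) + β_init·(1 − t/T)). A standalone sketch of that schedule (the function name is ours, not the repository's):

```python
def demon_beta(t, T, beta_init=0.9):
    """Decaying-momentum schedule from the DEMON paper (Chen et al., 2019):
    beta_t = beta_init * (1 - t/T) / ((1 - beta_init) + beta_init * (1 - t/T))."""
    frac = 1.0 - t / T
    return beta_init * frac / ((1.0 - beta_init) + beta_init * frac)

# Momentum starts at beta_init, decays smoothly, and reaches 0 at the final step.
```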

Use Variance Rectified DEMON QHAMSGradW with AdaMod, LookAhead, Iterate Averaging, and Gradient Centralization

```python
optimizer = DemonRanger(params=model.parameters(),
                        lr=config.lr,
                        weight_decay=config.wd,
                        epochs=config.epochs,
                        step_per_epoch=step_per_epoch,
                        IA_cycle=step_per_epoch)

# call optimizer.step(IA_activate=IA_activate) when needed; set IA_activate to True
# near the end of training based on a schedule or a tuned hyperparameter
# (an alternative to learning-rate scheduling)
```
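
`use_gc` applies gradient centralization: for multi-dimensional weights, each output unit's gradient slice has its mean subtracted, so the centralized gradient sums to zero along the input dimensions. A pure-Python sketch for a 2-D gradient, where each row holds one output unit's gradient (illustrative; the actual optimizer operates on PyTorch tensors):

```python
def centralize_2d(grad):
    """Gradient centralization (Yong et al., 2020) for a 2-D gradient:
    subtract each row's mean so every row sums to zero."""
    return [[g - sum(row) / len(row) for g in row] for row in grad]

grad = [[1.0, 2.0, 3.0], [4.0, 4.0, 4.0]]
cg = centralize_2d(grad)
# Each row of cg now has zero mean.
```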

Things to try or add: