Improve callback handling
This issue emerged during #30 (closed). The enhanced model checkpoint takes all callbacks to save as an argument, but these callbacks also need to be passed to the `fit_generator` function in addition to the checkpoint itself. See the following example:
```python
# define a learning rate scheduler
lr = LearningRateDecay(base_lr=1e-2, drop=.94, epochs_drop=10)

# add this scheduler to the enhanced checkpoint
ckpt = ModelCheckpointEnhanced(...., callbacks_to_save=lr, ...)

# but also make sure that it is added to the model
history = model.fit_generator(..., callbacks=[ckpt, lr], ...)
```
This is not intuitive, because the same object has to be registered more than once; any change to the callbacks requires two changes in the code.
-> Think about a solution to add all callbacks in one place and create a class (or whatever) that can handle this.
SOLUTION:
- New class `CallbackHandler`
- add all callbacks to this class via `.add_callback(<obj>, <path>, <name(opt)>)`
- create the advanced model checkpoint inside this class via `.create_model_checkpoint(**kwargs)`
- pass the callbacks e.g. to `fit_generator()` using the kwarg `callbacks=CallbackHandler.get_callbacks(as_dict=False)`
- the call above takes care of adding all callbacks in a consistent order, with the checkpoint as the last element (this is required because the checkpoint triggers the local save of all previously mentioned callbacks)
- to update the callbacks from previously saved data, just call `.load_callbacks()` and update the checkpoint afterwards with `.update_checkpoint(history_name=<default:"hist">)`
- specific callbacks can always be accessed with `.get_callback_by_name(<name>)`
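The proposed interface could be sketched roughly as follows. This is a minimal, framework-free sketch, not the actual implementation: `ModelCheckpointEnhanced` is stubbed out with a placeholder, and the save/restore internals (pickle, path handling, the `history_name` handling) are assumptions.

```python
import os
import pickle
from collections import OrderedDict


class ModelCheckpointEnhanced:
    """Stand-in for the project's enhanced checkpoint (interface assumed)."""

    def __init__(self, callbacks_to_save=None, **kwargs):
        self.callbacks_to_save = list(callbacks_to_save or [])
        self.kwargs = kwargs


class CallbackHandler:
    """Collects all callbacks in one place and derives the checkpoint from them."""

    def __init__(self):
        # insertion order is preserved, so get_callbacks() is deterministic
        self._callbacks = OrderedDict()
        self._checkpoint = None

    def add_callback(self, obj, path, name=None):
        # fall back to the class name when no explicit name is given
        self._callbacks[name or type(obj).__name__] = (obj, path)

    def create_model_checkpoint(self, **kwargs):
        # the checkpoint receives every registered callback so it can
        # save their state alongside the model weights
        self._checkpoint = ModelCheckpointEnhanced(
            callbacks_to_save=[cb for cb, _ in self._callbacks.values()],
            **kwargs,
        )
        return self._checkpoint

    def get_callbacks(self, as_dict=False):
        # consistent order with the checkpoint last: it must run after the
        # other callbacks so it saves their already-updated state
        if as_dict:
            result = OrderedDict((n, cb) for n, (cb, _) in self._callbacks.items())
            if self._checkpoint is not None:
                result["checkpoint"] = self._checkpoint
            return result
        callbacks = [cb for cb, _ in self._callbacks.values()]
        if self._checkpoint is not None:
            callbacks.append(self._checkpoint)
        return callbacks

    def load_callbacks(self):
        # restore each callback's state from its path (pickle is an assumption)
        for _, (cb, path) in self._callbacks.items():
            if os.path.exists(path):
                with open(path, "rb") as f:
                    cb.__dict__.update(pickle.load(f))

    def update_checkpoint(self, history_name="hist"):
        # re-attach the (possibly restored) callbacks to the checkpoint;
        # history_name mirrors the proposed signature, its handling is assumed
        if self._checkpoint is not None:
            self._checkpoint.callbacks_to_save = [
                cb for cb, _ in self._callbacks.values()
            ]

    def get_callback_by_name(self, name):
        return self._callbacks[name][0]
```

With this in place, each callback is registered once, and `fit_generator(..., callbacks=handler.get_callbacks(as_dict=False), ...)` always receives the full set in a consistent order with the checkpoint last.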