- Many wording improvements in the getting started guides (#81, @jonthegeek).
- Added a MixUp callback, together with a helper loss function and the underlying functional logic. (#82, @skeydan) A usage sketch follows this list.
- `lr_finder()` now by default divides the range between `start_lr` and `end_lr` into log-spaced intervals, following the fast.ai implementation. Cf. Sylvain Gugger’s post: https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html. The previous behavior can be achieved by passing `log_spaced_intervals = FALSE` to the function. (#82, @skeydan) See the example after this list.
- `plot.lr_records()` now additionally plots an exponentially weighted moving average of the loss (again, see Sylvain Gugger’s post), with a weighting coefficient of `0.9` (which seems a reasonable value for the default setting of 100 learning-rate-incrementing intervals). (#82, @skeydan) A short illustration of the smoothing appears below.
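As an illustration of the new MixUp support, here is a minimal sketch. It assumes a classification module `net` and a training dataloader `train_dl`; the names `luz_callback_mixup()` and `nn_mixup_loss()` and the `alpha` argument follow the current luz documentation, so treat the exact signatures as assumptions.

```r
library(torch)
library(luz)

# `net` is assumed to be an nn_module for classification,
# `train_dl` a dataloader yielding (input, target) batches.
fitted <- net %>%
  setup(
    # wrap the usual loss so it can handle the mixed-up targets
    loss = nn_mixup_loss(nn_cross_entropy_loss()),
    optimizer = optim_adam
  ) %>%
  fit(
    train_dl,
    epochs = 10,
    # mixes inputs and targets within each batch during training
    callbacks = list(luz_callback_mixup(alpha = 0.4))
  )
```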
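A sketch of the `lr_finder()` change, assuming a module already prepared with `setup()` (here called `model`) and a training dataloader `train_dl`; the particular `start_lr`/`end_lr` values shown are illustrative only.

```r
library(luz)

# log-spaced learning-rate intervals are now the default
records <- lr_finder(model, train_dl, start_lr = 1e-7, end_lr = 1e-1)

# recover the previous, linearly spaced behavior
records_linear <- lr_finder(model, train_dl, log_spaced_intervals = FALSE)

# the plot now also shows the smoothed (exponentially weighted) loss
plot(records)
```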
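For reference, the kind of exponentially weighted moving average described above (coefficient 0.9, with the bias correction used in Sylvain Gugger’s post) can be written in a few lines of base R; whether `plot.lr_records()` applies the bias correction is an assumption here.

```r
# smooth a vector of per-step losses with coefficient beta = 0.9
smooth_losses <- function(losses, beta = 0.9) {
  avg <- 0
  smoothed <- numeric(length(losses))
  for (i in seq_along(losses)) {
    avg <- beta * avg + (1 - beta) * losses[i]
    smoothed[i] <- avg / (1 - beta^i)  # correct the bias of the early steps
  }
  smoothed
}

smooth_losses(c(2.3, 2.1, 2.2, 1.8, 1.5))
```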
CRAN release: 2021-10-07
- Allow users to provide the minimum and maximum number of epochs when calling `fit`. The `ctx$epochs` field was removed from the context object and replaced with `ctx$min_epochs` and `ctx$max_epochs` (#53, @mattwarkentin). See the example at the end of this section.
- Early stopping will now only occur if the minimum number of training epochs has been met (#53, @mattwarkentin).
- Added an argument to `accelerator()` that allows selecting a specific GPU when multiple are present (#58, @cmcmaster1). See the sketch at the end of this section.
- We now handle different kinds of data arguments passed to `fit`.
- `valid_data` can now be a scalar value indicating the proportion of `data` that will be used for fitting. This only works if `data` is a torch dataset or a list. (#69) See the examples at the end of this section.
- You can now supply `dataloader_options` to `fit` to pass additional information to the dataloaders created internally. (Example at the end of this section.)
- Implemented the `evaluate` function, allowing users to get metrics from a model on a new dataset. (#73) An example appears at the end of this section.
- Fixed a bug in the CSV logger callback that was saving the logs as a space-delimited file (#52, @mattwarkentin).
- Fixed a bug in the length of the progress bar for the validation dataset (#52, @mattwarkentin).
- Fixed bugs in the early stopping callback that prevented it from working properly when `patience = 1` and when it was specified before other logging callbacks. (#76)
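To illustrate the minimum/maximum epochs feature together with the early-stopping change, here is a minimal sketch. It assumes `epochs` accepts a length-two vector `c(min_epochs, max_epochs)` and that `model`, `train_dl`, and `valid_dl` already exist; check the `fit` documentation for the exact interface.

```r
library(luz)

fitted <- model %>%
  fit(
    train_dl,
    # first value: minimum number of epochs before early stopping may trigger,
    # second value: maximum number of epochs
    epochs = c(5, 50),
    valid_data = valid_dl,
    callbacks = list(
      luz_callback_early_stopping(monitor = "valid_loss", patience = 3)
    )
  )
```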
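A sketch of selecting a specific GPU via the `accelerator` argument; the argument name `cuda_index` and its indexing convention are assumptions based on the current luz documentation.

```r
library(luz)

fitted <- model %>%
  fit(
    train_dl,
    epochs = 10,
    # place the model and data on a specific GPU when several are available
    accelerator = accelerator(cuda_index = 1)
  )
```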
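A sketch of passing a torch dataset directly to `fit` and splitting it with a scalar `valid_data`; the dataset `train_ds` is hypothetical.

```r
library(luz)

fitted <- model %>%
  fit(
    train_ds,          # a torch dataset (a list also works)
    epochs = 10,
    valid_data = 0.2   # split `data` into fitting and validation sets by proportion
  )
```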
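A sketch of forwarding options to the dataloaders that `fit` creates internally; the assumption here is that the accepted options mirror those of `torch::dataloader()`.

```r
library(luz)

fitted <- model %>%
  fit(
    train_ds,
    epochs = 10,
    # used when `fit` builds dataloaders from `data` / `valid_data`
    dataloader_options = list(batch_size = 128, shuffle = TRUE)
  )
```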
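Finally, a sketch of the new `evaluate` function; `test_dl` is a hypothetical dataloader, and extracting the results with `get_metrics()` follows the current luz API.

```r
library(luz)

# compute metrics for the fitted model on new data
evaluation <- fitted %>% evaluate(test_dl)

# print a summary and extract the metrics as a data frame
evaluation
get_metrics(evaluation)
```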