Changelog
Source: `NEWS.md`
tabnet 0.8.0
New features
- Messaging is now improved with {cli}.
- Add optimal threshold and support size into the new 1.5-alpha `entmax15()` and `sparsemax15()` `mask_type`s. Add an optional `mask_topk` config parameter. (#180)
- `optimizer` now defaults to `torch_ignite_adamw` when available, making pretraining and fitting tasks 30% faster. (#178)
- Add the `nn_aum_loss()` function for Area Under the Minimum (AUM) loss optimization in cases of unbalanced binary classification. (#178)
- Add a vignette on imbalanced binary classification with `nn_aum_loss()`. (#178)
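The new mask type and loss function above can be combined in one call. The sketch below is illustrative, not from the changelog: it assumes the binary, imbalanced `attrition` dataset from {modeldata} and relies on `loss` accepting a function (a feature noted under 0.2.0 below).

```r
library(tabnet)

# A minimal sketch of the 0.8.0 additions (assumed dataset: modeldata::attrition).
data("attrition", package = "modeldata")

fit <- tabnet_fit(
  Attrition ~ ., data = attrition,
  config = tabnet_config(
    mask_type = "entmax15",  # new 1.5-alpha mask type
    loss = nn_aum_loss()     # AUM loss for unbalanced binary classification
  ),
  epochs = 1
)
```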
tabnet 0.7.0
CRAN release: 2025-04-16
Bugfixes
- Remove a long-running example raising a NOTE.
- Fix `tabnet_pretrain` failing with `value_error("Can't convert data of class: 'NULL'")` in R 4.5.
- Fix `tabnet_pretrain` wrongly used instead of `tabnet_fit` in the missing-data predictor vignette.
- Improve the message related to case_weights not being used as predictors.
- Improve function documentation consistency before translation.
- Fix the "… is not an exported object from 'namespace:dials'" error when using `tune()` on tabnet parameters. (#160, @cphaarmeyer)
tabnet 0.6.0
CRAN release: 2024-06-15
New features
- parsnip models now allow transparently passing case weights through `workflows::add_case_weights()`. (#151)
- parsnip models now support `tabnet_model` and `from_epoch` parameters. (#143)
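A sketch of the case-weights pass-through, assuming the tidymodels case-weight helpers (`hardhat::importance_weights()`) and the `attrition` dataset from {modeldata}; the column and weighting scheme are illustrative only.

```r
library(tabnet)
library(workflows)
library(parsnip)

# Assumed example data: up-weight the minority "Yes" class.
data("attrition", package = "modeldata")
attrition$wts <- hardhat::importance_weights(
  ifelse(attrition$Attrition == "Yes", 5, 1)
)

spec <- tabnet(epochs = 1) |>
  set_engine("torch") |>
  set_mode("classification")

# add_case_weights() marks the column so it is passed to the engine
# as weights rather than used as a predictor.
wf <- workflow() |>
  add_case_weights(wts) |>
  add_formula(Attrition ~ .) |>
  add_model(spec)

fitted_wf <- fit(wf, data = attrition)
```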
Bugfixes
- Adapt the `tune::finalize_workflow()` test to the {parsnip} v1.2 breaking change. (#155)
- `autoplot()` now positions the "has_checkpoint" points correctly when a `tabnet_fit()` continues a previous training using `tabnet_model =`. (#150)
- Explicitly warn that the `tabnet_model` option will not be used in `tabnet_pretrain()` tasks. (#150)
tabnet 0.5.0
CRAN release: 2023-12-05
New features
- {tabnet} now allows hierarchical multi-label classification through a {data.tree} hierarchical `Node` dataset. (#126)
- `tabnet_pretrain()` now allows different GLU blocks in the GLU layers of the encoder and the decoder, through the `config()` parameters `num_idependant_decoder` and `num_shared_decoder`. (#129)
- Add `reduce_on_plateau` as an option for `lr_scheduler` in `tabnet_config()`. (@SvenVw, #120)
- Use {zeallot}'s `%<-%` internally for code readability. (#133)
- Add FR translation. (#131)
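The new scheduler option slots into the existing config. A minimal sketch, assuming `tabnet_config()`'s `learn_rate` parameter and the built-in `iris` dataset as stand-ins:

```r
library(tabnet)

# Sketch: enable the reduce_on_plateau learning-rate scheduler added in 0.5.0.
config <- tabnet_config(
  lr_scheduler = "reduce_on_plateau",
  learn_rate = 0.02
)

fit <- tabnet_fit(Species ~ ., data = iris, config = config, epochs = 5)
```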
tabnet 0.4.0
CRAN release: 2023-05-11
New features
- Add an explicit legend in `autoplot.tabnet_fit()`. (#67)
- Improve the unsupervised vignette content. (#67)
- `tabnet_pretrain()` now allows missing values in predictors. (#68)
- `tabnet_explain()` now works for `tabnet_pretrain` models. (#68)
- Allow missing values in predictors for unsupervised training. (#68)
- Improve performance of the `random_obfuscator()` torch_nn module. (#68)
- Add support for early stopping. (#69)
- `tabnet_fit()` and `predict()` now allow missing values in predictors. (#76)
- `tabnet_config()` now supports a `num_workers=` parameter to control parallel dataloading. (#83)
- Add a vignette on missing data. (#83)
- `tabnet_config()` now has a flag `skip_importance` to skip calculating feature importance. (@egillax, #91)
- Export and document `tabnet_nn`.
- Added a `min_grid.tabnet` method for `tune`. (@cphaarmeyer, #107)
- Added a `tabnet_explain()` method for parsnip models. (@cphaarmeyer, #108)
- `tabnet_fit()` and `predict()` now allow multiple outcomes, either all numeric or all factors, but not mixed. (#118)
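Several of the 0.4.0 features combine naturally in one fit. A sketch under assumed data (a copy of `iris` with NAs injected); the injected positions are arbitrary:

```r
library(tabnet)

# Sketch: predictors may now contain missing values, dataloading can be
# parallelized, and feature-importance computation can be skipped.
df <- iris
df$Sepal.Length[c(3, 7)] <- NA  # missing values now allowed in predictors

fit <- tabnet_fit(
  Species ~ ., data = df, epochs = 1,
  config = tabnet_config(
    num_workers = 2,        # parallel dataloading (#83)
    skip_importance = TRUE  # skip feature-importance computation (#91)
  )
)
```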
Bugfixes
- `tabnet_explain()` now correctly handles missing values in predictors. (#77)
- `dataloader` can now use `num_workers > 0`. (#83)
- New default values for `batch_size` and `virtual_batch_size` improve performance on mid-range devices.
- Add default `engine = "torch"` to the tabnet parsnip model. (#114)
- Fix `autoplot()` warnings turned into errors with {ggplot2} v3.4. (#113)
tabnet 0.3.0
CRAN release: 2021-10-11
- Added an `update` method for tabnet models to allow the correct usage of `finalize_workflow`. (#60)
tabnet 0.2.0
CRAN release: 2021-06-22
New features
- Allow model fine-tuning through passing a pre-trained model to `tabnet_fit()`. (@cregouby, #26)
- Explicit error in case of missing values. (@cregouby, #24)
- Better handling of larger datasets when running `tabnet_explain()`.
- Add `tabnet_pretrain()` for unsupervised pretraining. (@cregouby, #29)
- Add `autoplot()` of model loss across epochs. (@cregouby, #36)
- Added a `config` argument to `fit()` / `pretrain()` so one can pass a pre-made config list. (#42)
- In `tabnet_config()`, new `mask_type` option `entmax`, in addition to the default `sparsemax`. (@cmcmaster1, #48)
- In `tabnet_config()`, `loss` now also accepts a function. (@cregouby, #55)
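The pretrain-then-fine-tune flow introduced here can be sketched as follows; the use of the built-in `iris` dataset and the `tabnet_model =` argument name (mentioned in the 0.6.0 notes above) are assumptions for illustration:

```r
library(tabnet)
library(ggplot2)  # provides the autoplot() generic

# Sketch: unsupervised pretraining, then supervised fine-tuning.
pre <- tabnet_pretrain(Species ~ ., data = iris, epochs = 5)

# Pass the pretrained model to tabnet_fit() for fine-tuning.
fit <- tabnet_fit(Species ~ ., data = iris, tabnet_model = pre, epochs = 5)

autoplot(fit)  # model loss across epochs
```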
Bugfixes
- Fixed bug in GPU training. (#22)
- Fixed memory leaks when using custom autograd function.
- Batch predictions to avoid OOM error.
Internal improvements
- Added GPU CI. (#22)