
Releases: skorch-dev/skorch

Version 1.2.0

08 Aug 11:11
db77adb

This is a smaller release; most changes concern examples and development and thus don't affect users of skorch.

Changed

  • Loading of skorch nets using pickle: When unpickling a skorch net, you may come across a PyTorch warning saying: "FutureWarning: You are using torch.load with weights_only=False [...]"; to avoid this warning, pickle the net again and use the new pickle file, as sketched below (#1092)
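
A minimal sketch of the re-pickling step, with hypothetical file names:

```python
import pickle

# loading the old pickle may still emit the FutureWarning once
with open('old-net.pkl', 'rb') as f:
    net = pickle.load(f)

# pickling the net again under a current PyTorch version and loading
# from the new file avoids the warning from then on
with open('new-net.pkl', 'wb') as f:
    pickle.dump(net, f)
```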

Added

  • Add contributing guidelines for skorch (#1097)
  • Add an example of hyperparameter optimization using Optuna (#1098)
  • Add an example of using a streaming dataset (#1105)
  • Add pyproject.toml to improve CI/CD and tooling (#1108)

Thanks @raphaelrubrice, @omahs, and @ParagEkbote for their contributions.

Full Changelog: v1.1.0...v1.2.0

Version 1.1.0

10 Jan 13:04
6008085

Please welcome skorch 1.1.0, a smaller release with a few fixes, a new notebook showcasing learning rate schedulers, and, most importantly, support for scikit-learn 1.6.0.

Full list of changes:

Added

  • Added a notebook that shows how to use learning rate schedulers in skorch (#1074)

Changed

  • All neural net classes now inherit from sklearn's BaseEstimator. This is to support compatibility with sklearn 1.6.0 and above. Classification models additionally inherit from ClassifierMixin and regressors from RegressorMixin. (#1078)
  • When using the ReduceLROnPlateau learning rate scheduler, we now record the learning rate in the net history (net.history[:, 'event_lr'] by default). It is now also possible to step per batch, not only per epoch; see the sketch after this list (#1075)
  • The learning rate scheduler's .simulate() method now supports passing additional step arguments, which is useful when simulating policies such as ReduceLROnPlateau that expect metrics to base their schedule on (#1077)
  • Removed deprecated skorch.callbacks.scoring.cache_net_infer (#1088)
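
For illustration, a minimal sketch of the ReduceLROnPlateau integration; MyModule, X, and y are hypothetical placeholders:

```python
from torch.optim.lr_scheduler import ReduceLROnPlateau
from skorch import NeuralNetClassifier
from skorch.callbacks import LRScheduler

# step on the validation loss; pass step_every='batch' to step per batch instead
scheduler = LRScheduler(policy=ReduceLROnPlateau, monitor='valid_loss')
net = NeuralNetClassifier(MyModule, callbacks=[scheduler])
net.fit(X, y)

# the learning rate is recorded in the history under 'event_lr'
print(net.history[:, 'event_lr'])
```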

Fixed

  • Fixed an issue when using NeuralNetBinaryClassifier together with torch.compile (#1058)

Thanks @Ball-Man and @ParagEkbote for their contributions.

Version 1.0.0

27 May 15:25
dd341d3

The 1.0.0 release of skorch is here. We think that skorch is at a very stable point, which is why a 1.0.0 release is appropriate. There are no plans to add any breaking changes or major revisions in the future. Instead, our focus now is to keep skorch up-to-date with the latest versions of PyTorch and scikit-learn, and to fix any bugs that may arise.

Find the list of full changes here: v0.15.0...v1.0.0

Version 0.15.0

04 Sep 10:10
17c7675

This is a smaller release, but it still contains changes which will be interesting to some of you.

We added the possibility to store weights using safetensors, which can have several advantages over pickle (see the safetensors documentation for details). When calling net.save_params and net.load_params, just pass use_safetensors=True to use safetensors instead of pickle, as sketched below.
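
For illustration, a minimal sketch, assuming net is a fitted skorch net and new_net an identically configured one; the file name is arbitrary:

```python
# save the module weights using safetensors instead of pickle
net.save_params(f_params='weights.safetensors', use_safetensors=True)

# restore them into a freshly initialized net
new_net.initialize()
new_net.load_params(f_params='weights.safetensors', use_safetensors=True)
```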

Moreover, there is a new argument on NeuralNet: You can now pass use_caching=False or True to disable or enable caching for all callbacks at once. This is useful if you have a lot of scoring callbacks and don't want to toggle caching on each individually.
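
A short sketch with a hypothetical module:

```python
from torch import nn
from skorch import NeuralNet

net = NeuralNet(
    MyModule,           # hypothetical PyTorch module
    criterion=nn.MSELoss,
    use_caching=False,  # disable caching for all callbacks at once
)
```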

Finally, we fixed a few issues related to using skorch with accelerate.

Thanks Zach Mueller (@muellerzr) for his first contribution to skorch.

Find the full list of changes here: v0.14.0...v0.15.0

Version 0.14.0

26 Jun 15:29
4c5cfda

This release offers a new interface for scikit-learn to do zero-shot and few-shot classification using open source large language models (Jump right into the example notebook).

skorch.llm.ZeroShotClassifier and skorch.llm.FewShotClassifier allow you to do classification using open-source language models that are compatible with the Hugging Face generation interface. This lets you do all sorts of interesting things in your pipelines, from simply plugging an LLM into your classification pipeline to get preliminary results quickly, to using these classifiers to generate training data candidates for downstream models. Since this is a first draft of the interface, it is not unlikely that it will change a bit in the future, so please let us know about any issues you encounter.
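
A minimal zero-shot sketch; the model name is just one example of a compatible generative model:

```python
from skorch.llm import ZeroShotClassifier

X = ["A masterpiece, instant classic", "Terrible, couldn't watch it to the end"]

clf = ZeroShotClassifier('bigscience/bloomz-1b1')
# y provides the candidate labels; zero-shot needs no training data
clf.fit(X=None, y=['positive', 'negative'])
print(clf.predict(X))
```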

Other items of this release are:

  • the drop of Python 3.7 support; this version of Python has reached its end of life and will not be supported anymore
  • the NeptuneLogger now logs the skorch version, thanks to @AleksanderWWW
  • NeuralNetRegressor can now be fitted with a 1-dimensional y, which is necessary in some specific circumstances (e.g. in conjunction with sklearn's BaggingRegressor, see #972); for this to work correctly, the output of the PyTorch module should also be 1-dimensional, as shown in the sketch below; the existing default, i.e. having y and y_pred be 2-dimensional, remains the recommended way of using NeuralNetRegressor
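
A minimal sketch of the BaggingRegressor use case, with a hypothetical module whose output is 1-dimensional:

```python
import numpy as np
from torch import nn
from sklearn.ensemble import BaggingRegressor
from skorch import NeuralNetRegressor

class FlatRegressor(nn.Module):
    # hypothetical module returning a 1-dimensional output to match the 1-d y
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(10, 1)

    def forward(self, X):
        return self.lin(X).squeeze(-1)  # shape (batch,) instead of (batch, 1)

X = np.random.randn(100, 10).astype(np.float32)
y = np.random.randn(100).astype(np.float32)  # 1-dimensional target

net = NeuralNetRegressor(FlatRegressor, max_epochs=3, train_split=None)
BaggingRegressor(net, n_estimators=3).fit(X, y)
```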

Full Changelog: v0.13.0...v0.14.0

Version 0.13.0

17 May 10:20
cc210fe

The new skorch release is here and it has some changes that will be exciting for some users.

  • First of all, you may have heard of the PyTorch 2.0 release, which includes the option to compile the PyTorch module for better runtime performance. This skorch release allows you to pass compile=True when initializing the net to enable compilation.
  • Support for training on multiple GPUs with the help of the accelerate package has been improved by fixing some bugs and providing a dedicated history class. Our documentation contains more information on what to consider when training on multiple GPUs.
  • If you have ever been frustrated with your neural net not training properly, you know how hard it can be to discover the underlying issue. Using the new SkorchDoctor class will simplify the diagnosis of underlying issues. Take a look at the accompanying notebook.

Apart from that, a few bugs have been fixed and the included notebooks have been updated to properly install requirements on Google Colab.

We are grateful to our external contributors for their work on this release.

Find below the list of all changes since v0.12.1:

Added

  • Add support for compiled PyTorch modules using the torch.compile function, introduced in the PyTorch 2.0 release, which can greatly improve performance on new GPU architectures; to use it, initialize your net with the compile=True argument; further compilation arguments can be specified using the dunder notation, e.g. compile__dynamic=True (see the sketch after this list)
  • Add a class DistributedHistory which should be used when training in a multi-GPU setting (#955)
  • SkorchDoctor: A helper class that assists in understanding and debugging the neural net training, see this notebook (#912)
  • When using AccelerateMixin, it is now possible to prevent unwrapping of the modules by setting unwrap_after_train=True (#963)
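
For illustration, a minimal sketch of the compile arguments with a hypothetical module (requires PyTorch >= 2.0):

```python
from torch import nn
from skorch import NeuralNet

net = NeuralNet(
    MyModule,               # hypothetical PyTorch module
    criterion=nn.MSELoss,
    compile=True,           # compile the modules during initialization
    compile__dynamic=True,  # dunder arguments are forwarded to torch.compile
)
```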

Fixed

  • Fixed install command to work with recent changes in Google Colab (#928)
  • Fixed a couple of bugs related to using non-default modules and criteria (#927)
  • Fixed a bug when using AccelerateMixin in a multi-GPU setup (#947)
  • _get_param_names returns a list instead of a generator so that subsequent error messages return useful information instead of a generator repr string (#925)
  • Fixed a bug that caused modules to not be sufficiently unwrapped at the end of training when using AccelerateMixin, which could prevent them from being pickleable (#963)

Version 0.12.1

18 Nov 12:42

This is a small release which consists mostly of a couple of bug fixes. The standout feature here is the update of the NeptuneLogger, which makes it work with the latest Neptune client versions and adds many useful features, check it out. Big thanks to @twolodzko and colleagues for this update.

Here is the list of all changes:

  • Add Hugging Face integration tests (#904)
  • Add the missing entry for the HF badge (#905)
  • Fix a false warning if iterator_valid__shuffle=False (#908)
  • Update the Neptune integration by @twolodzko (#906)
  • DOC: Update the documentation in several places (#909)
  • Don't fail when the gpytorch import fails (#913)

Version 0.12.0

07 Oct 09:48
1596c51

We're pleased to announce a new skorch release, bringing new features that might interest you.

The main changes relate to better integration with the Hugging Face ecosystem, as detailed in the list of changes below.

But this is not all. We have added the possibility to load the best model parameters at the end of training when using the EarlyStopping callback. We also added the trim_for_prediction method, which removes unneeded attributes from the net after training when it is only intended to be used for prediction. Moreover, we now show how to use skorch with PyTorch Geometric in a dedicated notebook.

As always, this release was made possible by outside contributors. Many thanks to everyone involved.

Find below the list of all changes:

Added

  • Added a load_best attribute to the EarlyStopping callback to automatically load the module weights of the best result at the end of training (see the sketch after this list)
  • Added a method, trim_for_prediction, on the net classes, which removes everything from the net that is not required for prediction; call this after fitting to reduce the size of the net
  • Added experimental support for Hugging Face accelerate; use the provided mixin class to add the advanced training capabilities of the accelerate library to skorch
  • Add integration for Hugging Face tokenizers; use skorch.hf.HuggingfaceTokenizer to train a tokenizer on your custom data, or skorch.hf.HuggingfacePretrainedTokenizer to load a pre-trained tokenizer
  • Added support for creating model checkpoints on the Hugging Face Hub using HfHubStorage
  • Added a notebook that shows how to use skorch with PyTorch Geometric (#863)
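
For illustration, a minimal sketch combining both features; MyModule, X, and y are hypothetical placeholders:

```python
from skorch import NeuralNetClassifier
from skorch.callbacks import EarlyStopping

net = NeuralNetClassifier(
    MyModule,  # hypothetical PyTorch module
    callbacks=[EarlyStopping(patience=5, load_best=True)],
)
net.fit(X, y)

# strip everything not needed for inference before persisting the net
net.trim_for_prediction()
```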

Changed

  • The minimum required scikit-learn version has been bumped to 0.22.0
  • Initialize data loaders for training and validation dataset once per fit call instead of once per epoch (migration guide)
  • It is now possible to call np.asarray with SliceDatasets (#858)

Fixed

  • Fix a bug in SliceDataset that prevented it to be used with to_numpy (#858)
  • Fix a bug that occurred when loading a net that has device set to None (#876)
  • Fix a bug that in some cases could prevent loading a net that was trained with CUDA without CUDA
  • Enable skorch to work on M1/M2 Apple MacBooks (#884)

Version 0.11.0

31 Oct 15:54
baf0580

We are happy to announce the new skorch 0.11 release:

Two basic but very useful features have been added to our collection of callbacks. First, by setting load_best=True on the Checkpoint callback, the snapshot of the network with the best score will be loaded automatically when training ends. Second, we added a callback InputShapeSetter that automatically adjusts your input layer to have the size of your input data (useful e.g. when that size is not known beforehand).
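
For illustration, a minimal sketch using both callbacks; MyModule, X, and y are hypothetical placeholders:

```python
from skorch import NeuralNetClassifier
from skorch.callbacks import Checkpoint, InputShapeSetter

net = NeuralNetClassifier(
    MyModule,  # hypothetical module with a configurable input dimension
    callbacks=[
        Checkpoint(load_best=True),  # restore the best snapshot after training
        InputShapeSetter(),          # infer the input layer size from the data
    ],
)
net.fit(X, y)
```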

When it comes to integrations, the new MlflowLogger callback allows you to log to MLflow automatically. Thanks to a contributor, some regressions in net.history have been fixed, and it even runs faster now.

On top of that, skorch now offers a new module, skorch.probabilistic. It contains new classes to work with Gaussian Processes using the familiar skorch API, made possible by the fantastic GPyTorch library. So if you want to get started with Gaussian Processes in skorch, check out the documentation and the accompanying notebook. Since we're still learning, it's possible that we will change the API in the future, so please be aware of that.
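
A minimal sketch along the lines of the GP documentation; X_train, y_train, and X_test are hypothetical float32 arrays:

```python
import gpytorch
from skorch.probabilistic import ExactGPRegressor

class RbfModule(gpytorch.models.ExactGP):
    # a simple GP module with an RBF kernel; skorch supplies the likelihood
    def __init__(self, likelihood):
        super().__init__(train_inputs=None, train_targets=None, likelihood=likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.RBFKernel()

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

gpr = ExactGPRegressor(RbfModule)
gpr.fit(X_train, y_train)
y_pred = gpr.predict(X_test)
```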

Moreover, we introduced some changes to make skorch more customizable. First of all, we changed the signature of some methods so that they no longer assume the dataset to always return exactly 2 values. This makes it easier to work with custom datasets that return e.g. 3 values. Normal users should not notice any difference, but if you often create custom nets, take a look at the migration guide.
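
As a hedged sketch of what such a customization could look like, here is a hypothetical subclass for a dataset that yields three values per batch:

```python
from skorch import NeuralNet

class ThreeValueNet(NeuralNet):
    # hypothetical subclass; the batch is unpacked by us, not by skorch
    def train_step_single(self, batch, **fit_params):
        X, y, extra = batch  # dataset returns (X, y, extra) triples
        self.module_.train()
        y_pred = self.infer(X, **fit_params)
        loss = self.get_loss(y_pred, y, X=X, training=True)
        loss.backward()
        return {'loss': loss, 'y_pred': y_pred}
```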

And finally, we made a change to how custom modules, criteria, and optimizers are handled. They are now "first class citizens" in skorch land, which means: If you add a second module to your custom net, it is treated exactly the same as the normal module. E.g., skorch takes care of moving it to CUDA if needed and of switching it to train or eval mode. This way, customizing your network architectures with skorch is easier than ever. Check the docs for more details.

Since these are some big changes, it's possible that you encounter issues. If that's the case, please check our issue page or create a new one.

As always, this release was made possible by outside contributors. Many thanks to:

  • Autumnii
  • Cebtenzzre
  • Charles Cabergs
  • Immanuel Bayer
  • Jake Gardner
  • Matthias Pfenninger
  • Prabhat Kumar Sahu

Find below the list of all changes:

Added

  • Added load_best attribute to Checkpoint callback to automatically load state of the best result at the end of training
  • Added a get_all_learnable_params method to retrieve the named parameters of all PyTorch modules defined on the net, including of criteria if applicable
  • Added MlflowLogger callback for logging to Mlflow (#769)
  • Added InputShapeSetter callback for automatically setting the input dimension of the PyTorch module
  • Added a new module to support Gaussian Processes through GPyTorch. To learn more about it, read the GP documentation or take a look at the GP notebook. This feature is experimental, i.e. the API could be changed in the future in a backwards incompatible way (#782)

Changed

  • Changed the signature of validation_step, train_step_single, train_step, evaluation_step, on_batch_begin, and on_batch_end such that instead of receiving X and y, they receive the whole batch; this makes it easier to deal with datasets that don't strictly return an (X, y) tuple, which is true for quite a few PyTorch datasets; please refer to the migration guide if you encounter problems (#699)
  • Arguments to NeuralNet are now checked during .initialize() rather than during __init__, to avoid raising false positives for as yet unknown module or optimizer attributes
  • Modules, criteria, and optimizers that are added to a net by the user are now first class: skorch takes care of setting train/eval mode, moving to the indicated device, and updating all learnable parameters during training (check the docs for more details, #751)
  • CVSplit is renamed to ValidSplit to avoid confusion (#752)

Fixed

  • Fixed a few bugs in the net.history implementation (#776)
  • Fixed a bug in TrainEndCheckpoint that prevented it from being unpickled (#773)

Version 0.10.0

23 Mar 15:34

This one is a smaller release, but we have some bigger additions waiting for the next one.

First, we added support for Sacred to help you better organize your experiments. The CLI helper now also works with non-skorch estimators, as long as they are sklearn-compatible. Some issues related to learning rate scheduling have also been solved.

A big topic this time was also working on performance. First of all, we added a performance section to the docs. Furthermore, we made it easy to switch off callbacks completely for cases where performance is absolutely critical, as sketched below. Finally, we improved the speed of some internals (history logging). In sum, this means that skorch should be much faster for small network architectures.
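
If we recall correctly, all callbacks, including the default ones, can be switched off by passing callbacks='disable'; a short sketch with a hypothetical module:

```python
from skorch import NeuralNetClassifier

# skip all callbacks, including the default ones, when every millisecond counts
net = NeuralNetClassifier(MyModule, callbacks='disable')
```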

We are grateful to the contributors, new and recurring:

  • Fariz Rahman
  • Han Bao
  • Scott Sievert
  • supetronix
  • Timo Kaufmann