Releases: pytorch/functorch

functorch -> torch.func (as of PyTorch 2.0)

17 Mar 17:10
feed934

As of PyTorch 2.0, we have deprecated the functorch module in favor of the new torch.func module in PyTorch.

functorch started as an out-of-tree library here at the pytorch/functorch repository. Our goal has always been to upstream functorch directly into PyTorch and provide it as a core PyTorch library.

As the final step of upstreaming, we’ve decided to migrate from being a top-level package (functorch) to being a part of PyTorch, reflecting how the function transforms are integrated directly into PyTorch core. As of PyTorch 2.0, we are deprecating import functorch and ask that users migrate to the newest APIs, which we will maintain going forward. import functorch will be kept around to maintain backwards compatibility for at least one year.

Please see https://pytorch.org/docs/2.0/func.migrating.html for a full guide to the migration. The TL;DR is that:

  • our function transform API has not changed (a small before-and-after sketch follows this list)
  • we've changed how the functional NN module API works to better fit into PyTorch
  • AOTAutograd is still alive and well, but should not be used as a frontend for compilation. PyTorch has consolidated on torch.compile as the frontend compilation API.
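
To make the migration concrete, here is a minimal before-and-after sketch, assuming PyTorch 2.0 or later; the function and tensor shapes below are illustrative only, not part of the migration guide.

```python
import torch

# Before (deprecated, but kept for backwards compatibility):
#   from functorch import grad, vmap

# After (PyTorch 2.0+):
from torch.func import grad, vmap

def f(x):
    return (x ** 2).sum()

x = torch.randn(5, 3)
# The transform API itself is unchanged; only the import location moved.
per_example_grads = vmap(grad(f))(x)  # shape (5, 3)
```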

We'll continue to publish functorch binaries on PyPI as well to maintain backwards compatibility, but note that as of PyTorch 1.13, installing PyTorch automatically lets you import functorch.

functorch is now included with PyTorch 1.13

28 Oct 17:57
ca9752a

We’re excited to announce that, as a first step towards closer integration with PyTorch, functorch has moved into the PyTorch library and no longer requires the installation of a separate functorch package. After installing PyTorch via conda or pip, you’ll be able to import functorch in your program.

functorch will no longer have a separate version number (and instead the version number will match PyTorch’s; 1.13 for the current release).

If you're upgrading from an older version of functorch (functorch 0.1.x or 0.2.x), then you may need to uninstall functorch first via pip uninstall functorch.

We've maintained backwards compatibility for pip install functorch: this command works for PyTorch 1.13 and will continue to work for the foreseeable future until we do a proper deprecation. This is helpful if you're maintaining a library that supports multiple versions of PyTorch and/or functorch. The actual mechanics are that the functorch pip wheel is just a dummy package that lists torch==1.13 as a dependency.

Please refer to the PyTorch release notes for a detailed changelog.

functorch 0.2.1

05 Aug 20:26

We’re excited to present the functorch 0.2.1 minor bug-fix release, compatible with PyTorch 1.12.1. Please see here for installation instructions.

Changelog

  • Previously the functorch package was incompatible with the PyTorch 1.12.0 cu102 package. This is now fixed (functorch 0.2.1 is compatible with all PyTorch 1.12.1 packages).
  • Fixed a 25% regression from v0.1.1 to v0.2.0 in computing hessians of fully-connected layers (#989)
  • Added batching rules for masked_fill (#946), searchsorted (#966)
  • Batch norm now works with all forms of vmap when training is False (or .eval() is set on the model) (#958)
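
As a hedged illustration of the batch-norm item above (the model, shapes, and variable names are ours, not from the release notes):

```python
import torch
import torch.nn as nn
from functorch import make_functional_with_buffers, vmap

# Model in eval mode, i.e. training=False for the BatchNorm layer.
model = nn.Sequential(nn.Linear(3, 3), nn.BatchNorm1d(3)).eval()
fmodel, params, buffers = make_functional_with_buffers(model)

# Five vmapped examples, each itself a (2, 3) minibatch; in eval mode
# BatchNorm uses its running statistics, so vmap can batch over it.
inputs = torch.randn(5, 2, 3)
out = vmap(fmodel, in_dims=(None, None, 0))(params, buffers, inputs)
```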

functorch 0.2.0

05 Jul 13:54

functorch 0.2.0 release notes

Inspired by Google JAX, functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch, such as computing per-sample gradients, running ensembles of models, and efficiently computing Jacobians and Hessians.

We’re excited to announce functorch 0.2.0 with a number of improvements and new experimental features.

Caveats

functorch's Linux binaries are compatible with all PyTorch 1.12.0 binaries aside from the PyTorch 1.12.0 cu102 binary; functorch will raise an error if it is used with an incompatible PyTorch binary. This is due to a bug in PyTorch (pytorch/pytorch#80489); in previous versions of PyTorch, it was possible to build a single Linux binary for functorch that worked with all PyTorch Linux binaries. This will be fixed in the next PyTorch (and functorch) minor release.

Highlights

Significantly improved coverage

We significantly improved coverage for functorch.jvp (our forward-mode autodiff API) and other APIs that rely on it (functorch.{jacfwd, hessian}).
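
For reference, here is a small sketch of those APIs as they exist in functorch 0.2.0; the function below is just an example of ours.

```python
import torch
from functorch import jvp, jacfwd, hessian

def f(x):
    return x.sin().sum()

x = torch.randn(3)
tangent = torch.randn(3)

# Forward-mode AD: f(x) and the directional derivative of f at x along `tangent`.
value, directional_derivative = jvp(f, (x,), (tangent,))

jacobian = jacfwd(f)(x)  # Jacobian via forward-mode AD, shape (3,)
hess = hessian(f)(x)     # Hessian, shape (3, 3)
```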

(Prototype) functorch.experimental.functionalize

Given a function f, functionalize(f) returns a new function without mutations (with caveats). This is useful for constructing traces of PyTorch functions without in-place operations. For example, you can use make_fx(functionalize(f)) to construct a mutation-free trace of a PyTorch function. To learn more, please see the documentation.
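
A minimal sketch of that workflow, assuming functorch 0.2.0 (the function f below is ours):

```python
import torch
from functorch import make_fx
from functorch.experimental import functionalize

def f(x):
    y = x.clone()
    y.add_(1)      # in-place mutation
    return y * 2

# The traced graph contains only out-of-place ops (e.g. add instead of add_).
traced = make_fx(functionalize(f))(torch.randn(3))
print(traced.code)
```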

Windows support

There are now official functorch pip wheels for Windows.

Changelog

Note that this is not an exhaustive list of changes, e.g. changes to pytorch/pytorch can fix bugs in functorch or improve our transform coverage. Here we include user-facing changes that were committed to pytorch/functorch.

  • Added functorch.experimental.functionalize (#236, #720, and more)
  • Added support for Windows (#696)
  • Fixed vmap support for torch.norm (#708)
  • Added disable_autograd_tracking to make_functional variants. This is useful if you’re not using torch.autograd (#701); see the sketch after this list
  • Fixed a bug in the neural tangent kernels tutorial (#788)
  • Improved vmap over indexing with Tensors (#777, #862)
  • Fixed vmap over torch.nn.functional.mse_loss (#860)
  • functorch now raises an error on unsupported combinations of torch.autograd.functional and functorch transforms (#849)
  • Improved docs on the limitations of functorch transforms (#879)
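
As a hedged sketch of the disable_autograd_tracking item above (the model and loss are illustrative only):

```python
import torch
import torch.nn as nn
from functorch import make_functional, grad

model = nn.Linear(3, 1)
# The extracted parameters do not track autograd history, which avoids
# unnecessary bookkeeping when gradients come from functorch.grad rather
# than torch.autograd.
fmodel, params = make_functional(model, disable_autograd_tracking=True)

def loss(params, x):
    return fmodel(params, x).sum()

grads = grad(loss)(params, torch.randn(4, 3))
```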

functorch 0.1.1

12 Apr 23:32

We’re excited to present the functorch 0.1.1 minor bug-fix release, compatible with PyTorch 1.11. Please see here for installation instructions.

Changelog

  • Fixed a bug when composing jvp with vmap (#603); see the sketch after this list
  • jvp now works when called inside autograd.Function (#607)
  • make_functional (and variants) now work with models that do parameter sharing (also known as weight tying) (#620)
  • Added batching rules for nn.functional.silu, nn.functional.prelu, nn.functional.glu (#677, #609, #665)
  • Fixed vmap support for nn.functional.group_norm, binomial, torch.multinomial, Tensor.to (#685, #670, #672, #649)
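
A small sketch of the jvp-with-vmap composition from the first bullet, assuming functorch 0.1.1; the function and shapes are illustrative.

```python
import torch
from functorch import vmap, jvp

def f(x):
    return x.sin()

primals = torch.randn(5, 3)
tangents = torch.randn(5, 3)

# Per-example forward-mode derivatives: vmap over a jvp call.
values, jvp_outs = vmap(lambda p, t: jvp(f, (p,), (t,)))(primals, tangents)
```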

functorch 0.1.0

10 Mar 20:10

We’re excited to announce the first beta release of functorch. Heavily inspired by Google JAX, functorch is a library that adds composable function transforms to PyTorch. It aims to provide composable vmap (vectorization) and autodiff transforms that work with PyTorch modules and PyTorch autograd with good eager-mode performance.

Composable function transforms can help with a number of use cases that are tricky to do in PyTorch today:

  • computing per-sample-gradients (or other per-sample quantities)
  • running ensembles of models on a single machine
  • efficiently batching together tasks in the inner-loop of MAML
  • efficiently computing Jacobians and Hessians, as well as batched ones

Composing vmap (vectorization), vjp (reverse-mode AD), and jvp (forward-mode AD) transforms allows us to effortlessly express the above without designing a separate library for each; a per-sample-gradient sketch follows below.
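
A minimal per-sample-gradient sketch using these transforms (the model, loss, and shapes are ours, not part of the release):

```python
import torch
import torch.nn as nn
from functorch import make_functional, grad, vmap

model = nn.Linear(4, 1)
fmodel, params = make_functional(model)

def loss(params, x, y):
    return ((fmodel(params, x) - y) ** 2).mean()

xs = torch.randn(8, 4)  # 8 samples
ys = torch.randn(8, 1)

# Gradient w.r.t. params, vectorized over the per-sample dimension of xs and ys.
per_sample_grads = vmap(grad(loss), in_dims=(None, 0, 0))(params, xs, ys)
```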

For more details, please see our documentation, tutorials, and installation instructions.