This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
### Added
### Changed
### Fixed
## v0.1.2
### Added
* New example: [Meta-World](https://github.com/rlworkgroup/metaworld) with MAML-TRPO and its own env wrapper. (@[Kostis-S-Z](https://github.com/Kostis-S-Z))
* `l2l.vision.benchmarks` interface.
* Differentiable optimization utilities in `l2l.optim`, including `l2l.optim.LearnableOptimizer` for meta-descent.
* General gradient-based meta-learning wrapper in `l2l.algorithms.GBML`.
* Various `nn.Modules` in `l2l.nn`.
* `l2l.update_module` as a more general alternative to `l2l.algorithms.maml_update`.
* **fast prototyping**, essential in letting researchers quickly try new ideas, and
* **correct reproducibility**, ensuring that these ideas are evaluated fairly.
**Features**
learn2learn provides low-level utilities and a unified interface to create new algorithms and domains, together with high-quality implementations of existing algorithms and standardized benchmarks.
It retains compatibility with [torchvision](https://pytorch.org/vision/), [torchaudio](https://pytorch.org/audio/), [torchtext](https://pytorch.org/text/), [cherry](http://cherry-rl.net/), and any other PyTorch-based library you might be using.
**Overview**
* [`learn2learn.data`](http://learn2learn.net/docs/learn2learn.data/): `TaskDataset` and transforms to create few-shot tasks from any PyTorch dataset (see the sketch after this list).
* [`learn2learn.vision`](http://learn2learn.net/docs/learn2learn.vision/): Models, datasets, and benchmarks for computer vision and few-shot learning.
* [`learn2learn.gym`](http://learn2learn.net/docs/learn2learn.gym/): Environments and utilities for meta-reinforcement learning.
* [`learn2learn.algorithms`](http://learn2learn.net/docs/learn2learn.algorithms/): High-level wrappers for existing meta-learning algorithms.
* [`learn2learn.optim`](http://learn2learn.net/docs/learn2learn.optim/): Utilities and algorithms for differentiable optimization and meta-descent.
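
As a quick sketch of the `learn2learn.data` interface, the snippet below builds few-shot tasks from MNIST; the base dataset and the 3-way, 1-shot settings are arbitrary choices for illustration, not library defaults.

~~~python
import torchvision
import learn2learn as l2l

# Any PyTorch dataset can serve as the base; MNIST is an arbitrary choice here.
mnist = torchvision.datasets.MNIST(
    root="/tmp/mnist",
    train=True,
    download=True,
    transform=torchvision.transforms.ToTensor(),
)

dataset = l2l.data.MetaDataset(mnist)           # indexes the dataset by class label
transforms = [
    l2l.data.transforms.NWays(dataset, n=3),    # sample 3 classes per task
    l2l.data.transforms.KShots(dataset, k=1),   # sample 1 example per class
    l2l.data.transforms.LoadData(dataset),      # load the actual samples
]
taskset = l2l.data.TaskDataset(dataset, task_transforms=transforms)
X, y = taskset.sample()                         # one few-shot task
~~~

Each call to `taskset.sample()` draws a new few-shot task from the same underlying dataset.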
The following snippets provide a sneak peek at the functionalities of learn2learn.
### High-level Wrappers
**Few-Shot Learning with MAML**
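
A minimal sketch of few-shot adaptation with the `MAML` wrapper, where `compute_loss` and the loop bounds stand in for your own task sampling and loss computation:

~~~python
maml = l2l.algorithms.MAML(model, lr=0.1)       # wrap any nn.Module
opt = torch.optim.SGD(maml.parameters(), lr=0.001)

for iteration in range(1000):                   # outer loop over tasks
    opt.zero_grad()
    task_model = maml.clone()                   # torch.clone() for nn.Modules
    adaptation_loss = compute_loss(task_model)  # placeholder: loss on the support set
    task_model.adapt(adaptation_loss)           # one differentiable gradient step, in-place
    evaluation_loss = compute_loss(task_model)  # placeholder: loss on the query set
    evaluation_loss.backward()                  # gradients w.r.t. maml.parameters()
    opt.step()
~~~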
For more algorithms (ProtoNets, ANIL, Meta-SGD, Reptile, Meta-Curvature, KFO), refer to the [examples](https://github.com/learnables/learn2learn/tree/master/examples/vision) folder.
Most of them can be implemented with the `GBML` wrapper ([documentation](http://learn2learn.net/docs/learn2learn.algorithms/#gbml)).
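
Since `GBML` exposes the same `clone()`/`adapt()` interface, a rough sketch with a Meta-Curvature transform looks like the following; the transform choice and learning rate are illustrative assumptions:

~~~python
model = l2l.algorithms.GBML(
    module=model,
    transform=l2l.optim.transforms.MetaCurvatureTransform,  # learned gradient preconditioning
    lr=0.01,
)
task_model = model.clone()           # same clone/adapt pattern as MAML
task_model.adapt(adaptation_loss)    # preconditioned inner-loop step
~~~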
Learn any kind of optimization algorithm with the `LearnableOptimizer`. ([example](https://github.com/learnables/learn2learn/tree/master/examples/optimization) and [documentation](http://learn2learn.net/docs/learn2learn.optim/#learnableoptimizer))
~~~python
learned_update = l2l.optim.ParameterUpdate(  # learnable update function
    model.parameters(), transform)
clone = l2l.clone_module(model)              # torch.clone() for nn.Modules
error = loss(clone(X), y)
updates = learned_update(                    # similar API as torch.autograd.grad
    error,
    clone.parameters(),
    create_graph=True,
)
l2l.update_module(clone, updates=updates)
loss(clone(X), y).backward()                 # gradients w.r.t. model.parameters() and learned_update.parameters()
~~~
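
For `LearnableOptimizer` itself, a minimal meta-descent loop might look like the sketch below; the model, loss, and hyper-parameters are placeholder assumptions:

~~~python
model = MyModel()  # placeholder: any nn.Module
metaopt = l2l.optim.LearnableOptimizer(
    model,
    transform=l2l.optim.ModuleTransform(l2l.nn.Scale),  # a learnable scale per parameter
    lr=0.01,
)
opt = torch.optim.SGD(metaopt.parameters(), lr=0.001)   # optimizes the optimizer

metaopt.zero_grad()
opt.zero_grad()
error = loss(model(X), y)
error.backward()
opt.step()      # update the learnable optimizer
metaopt.step()  # update the model with the learned update rule
~~~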
## Changelog
A human-readable changelog is available in the [CHANGELOG.md](CHANGELOG.md) file.
## Citation
### Acknowledgements & Friends
1. The RL environments are adapted from Tristan Deleu's [implementations](https://github.com/tristandeleu/pytorch-maml-rl) and from the ProMP [repository](https://github.com/jonasrothfuss/ProMP/). Both shared with permission, under the MIT License.
2. [TorchMeta](https://github.com/tristandeleu/pytorch-meta) is a similar library, with a focus on datasets for supervised meta-learning.
3. [higher](https://github.com/facebookresearch/higher) is a PyTorch library that enables differentiating through optimization inner-loops. While they monkey-patch `nn.Module` to be stateless, learn2learn retains the stateful PyTorch look-and-feel. For more information, refer to [their ArXiv paper](https://arxiv.org/abs/1910.01727).
This directory contains examples of using learn2learn for meta-optimization or meta-descent.
# Hypergradient
The script `hypergrad_mnist.py` demonstrates how to implement a slightly modified version of "[Online Learning Rate Adaptation with Hypergradient Descent](https://arxiv.org/abs/1703.04782)".
The implementation departs from the algorithm presented in the paper in two ways.
1. We forgo the analytical formulation of the learning rate's gradient to demonstrate the capability of the `LearnableOptimizer` class.
2. We adapt per-parameter learning rates instead of updating a single learning rate shared by all parameters.
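
A sketch of the per-parameter idea, assuming a transform along the lines of the script; the class name, model, and hyper-parameters here are illustrative:

~~~python
import torch
import learn2learn as l2l

class HypergradTransform(torch.nn.Module):
    """One learnable learning rate per parameter entry."""

    def __init__(self, param, lr=0.01):
        super().__init__()
        self.lr = torch.nn.Parameter(lr * torch.ones_like(param))

    def forward(self, grad):
        return self.lr * grad  # elementwise scaled update

model = MyModel()  # placeholder: any nn.Module
metaopt = l2l.optim.LearnableOptimizer(model, HypergradTransform)
opt = torch.optim.Adam(metaopt.parameters(), lr=3e-4)  # adapts the learning rates
~~~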
**Usage**
!!! warning
    The parameters for this script were not carefully tuned.