Releases: MushroomRL/mushroom-rl
MushroomRL-v.1.10.1
- Fixed loading of alpha parameter in the SAC algorithm
MushroomRL-v.1.10.0
- Implemented record interface to record videos of environments
- Updated MuJoCo interface with support for multiple environments XMLs
- Updated MuJoCo viewer with headless rendering, support for different backends, advanced functionalities and options, and multiple views
- Improved SAC algorithm
- Bugfixes and code cleanup
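The record interface can be pictured with a minimal frame-capture loop. This is an illustrative sketch under assumptions: `DummyEnv`, `record_rollout`, and the tuple "frames" are made up for the example, and MushroomRL's actual record interface differs; the point is only that recording means collecting a rendered frame per step and handing the sequence to any video writer.

```python
# Illustrative sketch only: MushroomRL's actual record interface differs.
# Collect one rendered frame per step of a rollout, then the resulting
# list can be handed to any video writer.

class DummyEnv:
    """Stand-in environment whose render() returns a 'frame'."""
    def __init__(self, n_steps=5):
        self._n_steps = n_steps
        self._t = 0

    def reset(self):
        self._t = 0
        return 0

    def step(self, action):
        self._t += 1
        done = self._t >= self._n_steps
        return self._t, 0.0, done

    def render(self):
        # A real env would return an (H, W, 3) RGB array; a tuple stands in.
        return ('frame', self._t)

def record_rollout(env, policy):
    """Run one episode, collecting a frame after reset and every step."""
    frames = []
    state = env.reset()
    frames.append(env.render())
    done = False
    while not done:
        state, reward, done = env.step(policy(state))
        frames.append(env.render())
    return frames

frames = record_rollout(DummyEnv(3), policy=lambda s: 0)
print(len(frames))  # 4: initial frame plus one per step
```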
MushroomRL-v.1.9.2
Minor release with bugfixes and improvements:
- Fixed MuJoCo viewer window scaling on macOS
- Improved polynomial features and Gaussian radial basis functions
- Added new ProMP policy
- Fixed bug in BoltzmannTorchPolicy; the policy can now be used properly with PPO and TRPO
- Minor bugfixes in serialization
MushroomRL-v.1.9.1
Minor changes to the MuJoCo interface:
- Updated to support MuJoCo 2.3.2
- Added support for resetting MuJoCo environment states from an observation
MushroomRL-v.1.9.0
- Removed every Cython dependency; the package is now easier to install!
- Removed the humanoid environment, which depended on Cython
- Improved PyBullet environments
- New MuJoCo interface using the native DeepMind MuJoCo bindings
- New air hockey environments implemented with MuJoCo
- The core now collects environment info and passes it to the agent's fit method. This breaks the previous MushroomRL interface, but enables support for new kinds of algorithms (e.g., safe RL approaches)
- Improvements in the documentation
- Minor updates and bug fixes
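The info-passing change above can be illustrated with a toy interaction loop. This is a hedged sketch only: `ToyEnv`, `ToyAgent`, and `run_episode` are hypothetical stand-ins, not MushroomRL's real `Core`, `Environment`, or `Agent` classes. It shows the idea of collecting per-step info dicts and handing them to `fit` alongside the transition dataset, which is what lets, e.g., safe RL algorithms read extra signals such as constraint costs.

```python
# Hedged sketch of the idea, not MushroomRL's real Core/Agent classes:
# the interaction loop collects per-step environment info dicts and passes
# them to the agent's fit method together with the transition dataset.

class ToyEnv:
    def __init__(self):
        self._t = 0

    def reset(self):
        self._t = 0
        return 0

    def step(self, action):
        self._t += 1
        info = {'constraint_violation': self._t > 2}  # e.g. a safety signal
        return self._t, 1.0, self._t >= 4, info

class ToyAgent:
    def __init__(self):
        self.last_info = None

    def draw_action(self, state):
        return 0

    def fit(self, dataset, info):
        # Algorithms that need extra signals (e.g. safety costs) read `info`.
        self.last_info = info

def run_episode(env, agent):
    dataset, infos = [], []
    state = env.reset()
    done = False
    while not done:
        action = agent.draw_action(state)
        next_state, reward, done, info = env.step(action)
        dataset.append((state, action, reward, next_state, done))
        infos.append(info)
        state = next_state
    agent.fit(dataset, infos)
    return dataset

agent = ToyAgent()
dataset = run_episode(ToyEnv(), agent)
print(len(dataset))         # 4 transitions
print(agent.last_info[-1])  # {'constraint_violation': True}
```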
MushroomRL-v.1.7.2
- Added plotting functionality, previously from MushroomRL Benchmark
- Fixed MuJoCo interface
- Added missing discount factor to eNAC update
- Added real-time rendering for Gym environments
- PyBullet interface now enforces joint torque limits
MushroomRL-v.1.7.1
- Improved documentation;
- Added MORE algorithm;
- Added Quantile Regression DQN algorithm;
- Added wrappers for Minigrid, Habitat, iGibson (thanks to @sparisi);
- Added AirHockey environments (still experimental, these environments will probably change in the future);
- Upgraded to new OpenAI gym version;
- Bug fixes in NoisyDQN and LSPI;
- Fixed ClippedGaussianPolicy; it now works as expected;
- Improved DMControl environment, adding pixel support and arm environments (e.g., 'manipulator'); thanks to @jdsalmonson.
MushroomRL-v.1.7.0
- Agent and Environment interfaces are now in the core.py module;
- Added an easy interface for environment registration: environments can now be created by name;
- Updated documentation;
- New tutorials added;
- Improved CONTRIBUTING.md file;
- Added ConstrainedREPS;
- Bug fixed in GPOMDP;
- Improved logging of loss in regressor fit function;
- General cleanup of environment constructors;
- Improved PyBullet environment;
- Improved Voronoi tiles;
- Predict params added in DQN and Actor-Critic algorithms;
- Added support to Logger in DQN.
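The name-based registration added in this release can be sketched with a minimal registry pattern. This toy version (`register`, `make`, `_REGISTRY`, and the `GridWorld` class are all invented for illustration) is not MushroomRL's actual `Environment` interface, which is richer; it only shows the mechanism of mapping a name to a constructor.

```python
# Minimal registry sketch of name-based environment creation; MushroomRL's
# own registration interface is richer than this toy version.

_REGISTRY = {}

def register(cls):
    """Class decorator that makes an environment constructible by name."""
    _REGISTRY[cls.__name__] = cls
    return cls

def make(name, **kwargs):
    """Instantiate a registered environment from its name."""
    try:
        return _REGISTRY[name](**kwargs)
    except KeyError:
        raise ValueError(f'Unknown environment: {name!r}')

@register
class GridWorld:
    def __init__(self, size=3):
        self.size = size

env = make('GridWorld', size=5)
print(type(env).__name__, env.size)  # GridWorld 5
```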
MushroomRL-v.1.6.1
- Replay memory can return truncated n-step return;
- Rainbow and NoisyDQN algorithms added;
- Improved PyBullet environment;
- Added clipped Gaussian policy;
- Prediction parameters added in policy and approximator.
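The truncated n-step return mentioned above can be written out as a short computation. This is a plain-Python sketch, not MushroomRL's `ReplayMemory` code: it sums at most n discounted rewards and stops early when an absorbing (terminal) step is reached, which is what "truncated" means here.

```python
# Sketch (plain Python, not MushroomRL's ReplayMemory) of the truncated
# n-step return: sum up to n discounted rewards, stopping at episode end.

def n_step_return(rewards, absorbing, start, n, gamma):
    """Discounted sum of at most n rewards from `start`, truncated when an
    absorbing (terminal) step is reached."""
    ret, discount = 0.0, 1.0
    for i in range(start, min(start + n, len(rewards))):
        ret += discount * rewards[i]
        if absorbing[i]:
            break
        discount *= gamma
    return ret

rewards = [1.0, 1.0, 1.0, 1.0]
absorbing = [False, False, True, False]
print(n_step_return(rewards, absorbing, start=0, n=4, gamma=0.5))
# 1 + 0.5 + 0.25 = 1.75 (truncated at the absorbing step)
```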
MushroomRL-v.1.6.0
- Added MushroomRL logger;
- Support for wrapper args in gym environment;
- Fixes in tiles;
- Dueling DQN added;
- MDPInfo and spaces are now serializable;
- Optimizers are now serializable;
- DoubleFQI and BoostedFQI split into separate modules;
- Minor bug fixes.
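The dueling architecture added above rests on a simple decomposition: the network outputs a state value V(s) and per-action advantages A(s, a), combined as Q(s, a) = V(s) + A(s, a) - mean over actions of A(s, a'). A minimal numeric sketch (the `dueling_q` helper is invented for illustration, not the library's code):

```python
# Sketch of the dueling decomposition used by Dueling DQN:
# Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
# Subtracting the mean advantage makes the V/A split identifiable.

def dueling_q(value, advantages):
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

q = dueling_q(value=1.0, advantages=[0.0, 2.0, 4.0])
print(q)  # [-1.0, 1.0, 3.0]
```

Note that after the mean subtraction, the advantages of the combined Q-values always average to V(s), regardless of any constant shift in the raw advantage outputs.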