🌟 This is a curated list of safe RL papers from 2017 to 2022, hosted at https://safe-rl-team.github.io/topics-in-RL/. If you would like to contribute additional papers or update the list, please feel free to do so.
Reimplementing state-of-the-art RL algorithms gives us a deeper understanding of their inner workings and a foundation for exploring novel approaches. Safe Reinforcement Learning is a cutting-edge field with immense potential for real-world applications.
During the course "Advanced Topics in Reinforcement Learning", we took on the challenge of reimplementing ideas from several recent safe RL papers.
Our findings and discussions are published as scientific blog posts, and the corresponding code re-implementations are available on GitHub (https://github.com/Safe-RL-Team).
Join us on an exciting journey of advancing the field of Safe RL!
- Safe Reinforcement Learning via Curriculum Induction, Matteo Turchetta, Andrey Kolobov, Shital Shah, Andreas Krause, and Alekh Agarwal, NeurIPS 2020
  📚 Blog: Marvin Sextro, Jonas Loos
- Safe Reinforcement Learning with Natural Language Constraints, Tsung-Yen Yang, Michael Hu, Yinlam Chow, Peter J. Ramadge, and Karthik Narasimhan, NeurIPS 2021
  📚 Blog: Hongyou Zhou
- Adversarial Policies: Attacking Deep Reinforcement Learning, Adam Gleave, Michael Dennis, Cody Wild, Neel Kant, Sergey Levine, and Stuart Russell, ICLR 2020
  📚 Blog: Lorenz Hufe, Jarek Liesen
- Reward Constrained Policy Optimization, Chen Tessler, Daniel J. Mankowitz, and Shie Mannor, ICLR 2019
  📚 Blog: Boris Meinardus, Tuan Anh Le
- Constrained Policy Optimization via Bayesian World Models, Yarden As, Ilnura Usmanova, Sebastian Curi, and Andreas Krause, ICLR 2022
  📚 Blog: Vincent Meilinger
- Constrained Policy Optimization, Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel, ICML 2017
  📚 Blog: Thanh Cuong Le, Paul Hasenbusch
- Responsive Safety in Reinforcement Learning by PID Lagrangian Methods, Adam Stooke, Joshua Achiam, and Pieter Abbeel, ICML 2020
  📚 Blog: Wenxi Huang
- There Is No Turning Back: A Self-Supervised Approach for Reversibility-Aware Reinforcement Learning, Nathan Grinsztajn, Johan Ferret, Olivier Pietquin, Philippe Preux, and Matthieu Geist, NeurIPS 2021
  📚 Blog: Malik-Manel Hashim
- Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble, Gaon An, Seungyong Moon, Jang-Hyun Kim, and Hyun Oh Song, NeurIPS 2021
  📚 Blog: Jonas Loos, Julian Dralle
- Learning Barrier Certificates: Towards Safe Reinforcement Learning with Zero Training-time Violations, Yuping Luo and Tengyu Ma, NeurIPS 2021
  📚 Blog: Lars Chen, Jeremiah Flannery
- Teachable Reinforcement Learning via Advice Distillation, Olivia Watkins, Trevor Darrell, Pieter Abbeel, Jacob Andreas, and Abhishek Gupta, NeurIPS 2021
  📚 Blog: Mihai Dumitrescu, Claire Sturgill
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings, Jesse Zhang, Brian Cheung, Chelsea Finn, Sergey Levine, and Dinesh Jayaraman, ICML 2020
  📚 Blog: Maren Eberle
- Verifiable Reinforcement Learning via Policy Extraction, Osbert Bastani, Yewen Pu, and Armando Solar-Lezama, NeurIPS 2018
  📚 Blog: Christoph Pröschel
- Time Discretization-Invariant Safe Action Repetition for Policy Gradient Methods, Seohong Park, Jaekyeom Kim, and Gunhee Kim, NeurIPS 2021
  📚 Blog: Hristo Boyadzhiev
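A pattern that recurs across several of the entries above (Reward Constrained Policy Optimization, Constrained Policy Optimization, and the PID Lagrangian methods) is to treat safety as a cost constraint and relax it with a Lagrange multiplier. The snippet below is only a minimal sketch of that shared idea with illustrative names and values; it is not taken from any of the linked re-implementations.

```python
# Minimal, hedged sketch of the Lagrangian-relaxation pattern used by several
# constrained-RL methods listed above. All names and values are illustrative.

def lagrangian_reward(reward: float, cost: float, lam: float) -> float:
    """Shaped reward fed to the policy update: r - lambda * c."""
    return reward - lam * cost

def update_multiplier(lam: float, episode_cost: float,
                      cost_limit: float, lr_lambda: float) -> float:
    """Dual ascent on lambda: grow it while the cost constraint is violated,
    shrink it (down to 0) once episode costs stay below the limit."""
    return max(0.0, lam + lr_lambda * (episode_cost - cost_limit))

# Illustrative usage: lambda rises while measured episode cost exceeds the limit.
lam = 0.0
for episode_cost in [40.0, 35.0, 20.0, 15.0]:  # hypothetical measured costs
    lam = update_multiplier(lam, episode_cost, cost_limit=25.0, lr_lambda=0.01)
    print(f"lambda = {lam:.3f}")
```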
By implementing and exploring ideas from state-of-the-art papers, we can push the boundaries of what is possible and pave the way for even more effective and robust safe RL algorithms.
So, let's dive in and make the world a safer place, one policy at a time!
- García, J. and Fernández, F., A Comprehensive Survey on Safe Reinforcement Learning, Journal of Machine Learning Research, 2015
- Ray, A., Achiam, J. and Amodei, D., Benchmarking Safe Exploration in Deep Reinforcement Learning, OpenAI, 2019
- Kumar, A. and Levine, S., Offline Reinforcement Learning: From Algorithms to Practical Challenges, NeurIPS Tutorial, 2020