diff --git a/tutorials/4. Batch Reinforcement Learning.ipynb b/tutorials/4. Batch Reinforcement Learning.ipynb
index 9859a9c7..82b9ad98 100644
--- a/tutorials/4. Batch Reinforcement Learning.ipynb
+++ b/tutorials/4. Batch Reinforcement Learning.ipynb
@@ -18,7 +18,7 @@
     "\n",
     "Alternatively, what do we do if we don't have a simulator, but we can actually deploy our policy on the real-world environment, and would just like to separate the new data collection from the learning (i.e. if we have a system that can quite easily run inference, but is very hard to integrate with a reinforcement learning framework, such as Coach, for learning a new policy)?\n",
     "\n",
-    "We will try to address these questions and more in this tutorial, demonstrating how to use [Batch Reinforcement Learning](http://tgabel.de/cms/fileadmin/user_upload/documents/Lange_Gabel_EtAl_RL-Book-12.pdf). \n",
+    "We will try to address these questions and more in this tutorial, demonstrating how to use [Batch Reinforcement Learning](https://link.springer.com/chapter/10.1007/978-3-642-27645-3_2). \n",
     "\n",
     "First, let's use a simple environment to collect the data to be used for learning a policy using Batch RL. In reality, we would probably already have a dataset of transitions of the form `` to be used for learning a new policy. Ideally, we would also have, for each transition, $p(a|o)$, the probability of an action given that transition's `current_observation`. "
   ]
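The transition dataset the cell above describes can be sketched as a plain Python structure. This is a minimal illustration, not Coach's actual data format; the field names (including `action_probability` for $p(a|o)$) are assumptions for the sketch:

```python
from typing import NamedTuple, Tuple

# Hypothetical record for one logged transition; field names are illustrative.
class Transition(NamedTuple):
    current_observation: Tuple[float, ...]
    action: int
    reward: float
    next_observation: Tuple[float, ...]
    action_probability: float  # p(a|o) under the behavior policy, if available

# A toy dataset of logged transitions, as might be collected from a deployed policy.
dataset = [
    Transition(current_observation=(0.1, 0.0), action=1, reward=1.0,
               next_observation=(0.2, 0.1), action_probability=0.8),
    Transition(current_observation=(0.2, 0.1), action=0, reward=0.0,
               next_observation=(0.2, 0.0), action_probability=0.6),
]

print(len(dataset))                    # 2
print(dataset[0].action_probability)   # 0.8
```

In practice such a dataset would be loaded from logs of a deployed system rather than built by hand; the per-transition action probability is what makes off-policy evaluation methods applicable later.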