
Commit ef03d33

Fix typos and grammar mistakes (#2140)
1 parent f9c4ca5 commit ef03d33

File tree

11 files changed: +16 additions, -16 deletions


README.md

Lines changed: 1 addition & 1 deletion
@@ -116,7 +116,7 @@ Install the Stable Baselines3 package:
 pip install 'stable-baselines3[extra]'
 ```
 
-This includes an optional dependencies like Tensorboard, OpenCV or `ale-py` to train on atari games. If you do not need those, you can use:
+This includes optional dependencies like Tensorboard, OpenCV or `ale-py` to train on atari games. If you do not need those, you can use:
 ```sh
 pip install stable-baselines3
 ```

docs/guide/algos.rst

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ Credit: part of the *Reproducibility* section comes from `PyTorch Documentation
 Training exceeds ``total_timesteps``
 ------------------------------------
 
-When you train an agent using SB3, you pass a ``total_timesteps`` parameter to the ``learn()`` method which defines the training budget for the agent (how many interaction with the environment are allowed).
+When you train an agent using SB3, you pass a ``total_timesteps`` parameter to the ``learn()`` method which defines the training budget for the agent (how many interactions with the environment are allowed).
 For example:
 
 .. code-block:: python

docs/guide/callbacks.rst

Lines changed: 1 addition & 1 deletion
@@ -412,7 +412,7 @@ It must be used with the :ref:`EvalCallback` and use the event triggered after e
 
 model = SAC("MlpPolicy", "Pendulum-v1", learning_rate=1e-3, verbose=1)
 # Almost infinite number of timesteps, but the training will stop early
-# as soon as the the number of consecutive evaluations without model
+# as soon as the number of consecutive evaluations without model
 # improvement is greater than 3
 model.learn(int(1e10), callback=eval_callback)
 

docs/guide/checking_nan.rst

Lines changed: 2 additions & 2 deletions
@@ -145,12 +145,12 @@ It will monitor the actions, observations, and rewards, indicating what action o
 RL Model hyperparameters
 ------------------------
 
-Depending on your hyperparameters, NaN can occurs much more often.
+Depending on your hyperparameters, NaN can occur much more often.
 A great example of this: https://github.com/hill-a/stable-baselines/issues/340
 
 Be aware, the hyperparameters given by default seem to work in most cases,
 however your environment might not play nice with them.
-If this is the case, try to read up on the effect each hyperparameters has on the model,
+If this is the case, try to read up on the effect each hyperparameter has on the model,
 so that you can try and tune them to get a stable model. Alternatively, you can try automatic hyperparameter tuning (included in the rl zoo).
 
 Missing values from datasets

docs/guide/custom_env.rst

Lines changed: 1 addition & 1 deletion
@@ -102,7 +102,7 @@ To check that your environment follows the Gym interface that SB3 supports, plea
 
 Gymnasium also have its own `env checker <https://gymnasium.farama.org/api/utils/#gymnasium.utils.env_checker.check_env>`_ but it checks a superset of what SB3 supports (SB3 does not support all Gym features).
 
-We have created a `colab notebook <https://colab.research.google.com/github/araffin/rl-tutorial-jnrr19/blob/sb3/5_custom_gym_env.ipynb>`_ for a concrete example on creating a custom environment along with an example of using it with Stable-Baselines3 interface.
+We have created a `colab notebook <https://colab.research.google.com/github/araffin/rl-tutorial-jnrr19/blob/sb3/5_custom_gym_env.ipynb>`_ for a concrete example of creating a custom environment along with an example of using it with Stable-Baselines3 interface.
 
 Alternatively, you may look at Gymnasium `built-in environments <https://gymnasium.farama.org>`_.
 
docs/guide/custom_policy.rst

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ other type of input features (MlpPolicies) and multiple different inputs (MultiI
 SB3 Policy
 ^^^^^^^^^^
 
-SB3 networks are separated into two mains parts (see figure below):
+SB3 networks are separated into two main parts (see figure below):
 
 - A features extractor (usually shared between actor and critic when applicable, to save computation)
   whose role is to extract features (i.e. convert to a feature vector) from high-dimensional observations, for instance, a CNN that extracts features from images.

docs/guide/examples.rst

Lines changed: 2 additions & 2 deletions
@@ -247,7 +247,7 @@ If your callback returns False, training is aborted early.
 
 :param check_freq:
 :param log_dir: Path to the folder where the model will be saved.
-  It must contains the file created by the ``Monitor`` wrapper.
+  It must contain the file created by the ``Monitor`` wrapper.
 :param verbose: Verbosity level: 0 for no output, 1 for info messages, 2 for debug messages
 """
 def __init__(self, check_freq: int, log_dir: str, verbose: int = 1):

@@ -702,7 +702,7 @@ A2C policy gradient updates on the model.
 if ("policy" in key or "shared_net" in key or "action" in key)
 )
 
-# population size of 50 invdiduals
+# population size of 50 individuals
 pop_size = 50
 # Keep top 10%
 n_elite = pop_size // 10

docs/guide/export.rst

Lines changed: 1 addition & 1 deletion
@@ -167,7 +167,7 @@ For more discussion around the topic, please refer to `GH#383 <https://github.co
 Trace/Export to C++
 -------------------
 
-You can use PyTorch JIT to trace and save a trained model that can be re-used in other applications
+You can use PyTorch JIT to trace and save a trained model that can be reused in other applications
 (for instance inference code written in C++).
 
 There is a draft PR in the RL Zoo about C++ export: https://github.com/DLR-RM/rl-baselines3-zoo/pull/228

docs/guide/install.rst

Lines changed: 3 additions & 3 deletions
@@ -18,7 +18,7 @@ For a quick start you can move straight to installing Stable-Baselines3 in the n
 
 .. note::
 
-    Trying to create Atari environments may result to vague errors related to missing DLL files and modules. This is an
+    Trying to create Atari environments may result in vague errors related to missing DLL files and modules. This is an
     issue with atari-py package. `See this discussion for more information <https://github.com/openai/atari-py/issues/65>`_.
 
 

@@ -34,7 +34,7 @@ To install Stable Baselines3 with pip, execute:
 Some shells such as Zsh require quotation marks around brackets, i.e. ``pip install 'stable-baselines3[extra]'`` `More information <https://stackoverflow.com/a/30539963>`_.
 
 
-This includes an optional dependencies like Tensorboard, OpenCV or ``ale-py`` to train on Atari games. If you do not need those, you can use:
+This includes optional dependencies like Tensorboard, OpenCV or ``ale-py`` to train on Atari games. If you do not need those, you can use:
 
 .. code-block:: bash
 

@@ -151,7 +151,7 @@ Explanation of the docker command:
   run it interactively (so ctrl+c will work)
 - ``--rm`` option means to remove the container once it exits/stops
   (otherwise, you will have to use ``docker rm``)
-- ``--network host`` don't use network isolation, this allow to use
+- ``--network host`` don't use network isolation, this allows to use
   tensorboard/visdom on host machine
 - ``--ipc=host`` Use the host system’s IPC namespace. IPC (POSIX/SysV IPC) namespace provides
   separation of named shared memory segments, semaphores and message

docs/guide/integrations.rst

Lines changed: 2 additions & 2 deletions
@@ -175,7 +175,7 @@ With ``package_to_hub()``
 # Train the agent
 model.learn(total_timesteps=int(5000))
 
-# This method save, evaluate, generate a model card and record a replay video of your agent before pushing the repo to the hub
+# This method saves, evaluates, generates a model card and records a replay video of your agent before pushing the repo to the hub
 package_to_hub(model=model,
                model_name="ppo-CartPole-v1",
                model_architecture="PPO",

@@ -219,7 +219,7 @@ With ``push_to_hub()``
 model.save("ppo-CartPole-v1")
 
 # Push this saved model .zip file to the hf repo
-# If this repo does not exists it will be created
+# If this repo does not exist it will be created
 ## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
 ## filename: the name of the file == "name" inside model.save("ppo-CartPole-v1")
 push_to_hub(
