Update examples #2079

Merged: 60 commits, Aug 8, 2023

Commits
c5465a6
Add hydra configured example.
Gamenot Jul 6, 2023
4c58227
Show temporary click+hydra configuration example.
Gamenot Jul 6, 2023
faaee81
Fix examples in setup.cfg.
Gamenot Jul 6, 2023
9ed380b
Update temporary example to print yaml.
Gamenot Jul 6, 2023
92b8ecb
Update examples test install.
Gamenot Jul 11, 2023
3ee91b6
Update for mac.
Gamenot Jul 11, 2023
259914d
Remove test configuration files.
Gamenot Jul 11, 2023
1cc9af0
Update changelog.
Gamenot Jul 11, 2023
910a390
Improve envision logging.
Gamenot Jul 11, 2023
87cb4b2
Update configuration name
Gamenot Jul 11, 2023
7a82a1c
Set default observation and action options to multi-agent.
Gamenot Jul 11, 2023
414ac5b
Finalize configurable experiment changes.
Gamenot Jul 12, 2023
d52c34d
Fix type test.
Gamenot Jul 13, 2023
7b89bd3
Clarify changes in change log.
Gamenot Jul 13, 2023
fb24d6a
Fix docs test.
Gamenot Jul 13, 2023
213c84d
Fix syntax error in docs.
Gamenot Jul 14, 2023
ff3914e
Fix fake socket missing url in envision test.
Gamenot Jul 14, 2023
9c27ef1
Rename agents_configs to agent_configs.
Gamenot Jul 14, 2023
b18f1f7
Fix rendering.
Gamenot Jul 20, 2023
7f518ef
Fix setup
Gamenot Jul 20, 2023
bea338b
Update examples.
Gamenot Jul 20, 2023
dfb1f1e
Move experiment
Gamenot Jul 21, 2023
9105868
Fix example configuration
Gamenot Jul 21, 2023
277c0ba
Update intro examples.
Gamenot Jul 21, 2023
7ac05b8
Move experiment.
Gamenot Jul 21, 2023
165c25b
Tighten up example 4
Gamenot Jul 21, 2023
7e961ca
Fix registry interfaces.
Gamenot Jul 21, 2023
f96483b
Tighten example 4 more.
Gamenot Jul 21, 2023
8f4421d
Remove some locator references in docs.
Gamenot Jul 21, 2023
9a83af9
Finish cleaning up examples
Gamenot Jul 21, 2023
a5dd715
Finish example 5.
Gamenot Jul 21, 2023
2210e65
Clean up example.
Gamenot Jul 21, 2023
5a33cc0
Fix example 6.
Gamenot Jul 21, 2023
1cb8402
Make format.
Gamenot Jul 21, 2023
c6f8256
Fix changlog.
Gamenot Jul 21, 2023
98e0c61
Restore example tests.
Gamenot Jul 21, 2023
1dcb12b
Constrain example 6
Gamenot Jul 21, 2023
6a2958f
Simplify test.
Gamenot Jul 21, 2023
601f26b
Fix glb generation.
Gamenot Jul 21, 2023
4f79473
Update some examples.
Gamenot Jul 25, 2023
be17272
Fix examples 7 and 8
Gamenot Jul 25, 2023
6140201
Update all examples to make room for agent interface example.
Gamenot Jul 25, 2023
94da081
Add action space example.
Gamenot Jul 26, 2023
02a40cf
Update examples.
Gamenot Jul 27, 2023
8d23f64
Update envision js format.
Gamenot Jul 27, 2023
185c538
Rename examples to start with alpha character.
Gamenot Jul 31, 2023
8a585ba
Fix test references.
Gamenot Aug 1, 2023
1feda79
Fix tests.
Gamenot Aug 1, 2023
ca35345
Clarify examples
Gamenot Aug 1, 2023
cae67c5
Fix drive and platoon references.
Gamenot Aug 1, 2023
7c21847
Fix platoon and drive benchmark tests.
Gamenot Aug 1, 2023
27c28f3
Fix agent interface generated as tuple.
Gamenot Aug 1, 2023
edf5efb
Fix parallel env example.
Gamenot Aug 1, 2023
a7f2c40
Fix typecheck errors.
Gamenot Aug 1, 2023
f00b855
Ignore enum.Enum typecheck error.
Gamenot Aug 1, 2023
05fa9ca
Fix potential source of non-determinism.
Gamenot Aug 4, 2023
40713d8
Fix rendering non-determinsim...
Gamenot Aug 4, 2023
0847ddd
Fix lane sorting.
Gamenot Aug 4, 2023
c3612a8
Make rendering backend configurable.
Gamenot Aug 8, 2023
35bafda
Format example.
Gamenot Aug 8, 2023
19 changes: 10 additions & 9 deletions .github/workflows/ci-base-tests-linux.yml
@@ -31,7 +31,8 @@ jobs:
pip install --upgrade pip
pip install wheel==0.38.4
pip install -e .[camera_obs,opendrive,test,test_notebook,torch,train,gif_recorder,gymnasium,argoverse,envision,sumo]
if echo ${{matrix.tests}} | grep -q -e "test_rllib_hiway_env.py" -e "test_examples.py"; then pip install -e .[rllib]; fi
if echo ${{matrix.tests}} | grep -q -e "test_rllib_hiway_env.py"; then pip install -e .[rllib]; fi
if echo ${{matrix.tests}} | grep -q -e "test_examples.py"; then pip install -e .[examples,rllib]; fi
if echo ${{matrix.tests}} | grep -q -e "/smarts/ray"; then pip install -e .[ray]; fi
- name: Build scenarios
run: |
@@ -71,23 +72,23 @@ jobs:
strategy:
matrix:
tests:
- drive
- platoon
- e10_drive
- e11_platoon
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Install dependencies
run: |
cd ${GITHUB_WORKSPACE}/examples/rl/${{matrix.tests}}
cd ${GITHUB_WORKSPACE}/examples/${{matrix.tests}}
python3.8 -m venv ${{env.venv_dir}}
. ${{env.venv_dir}}/bin/activate
pip install --upgrade pip
pip install wheel==0.38.4
pip install -e ./../../../.[camera_obs,argoverse,sumo,test]
pip install -e ./../../.[camera_obs,argoverse,sumo,test]
pip install -e ./inference/
- name: Run smoke tests
run: |
cd ${GITHUB_WORKSPACE}/examples/rl/${{matrix.tests}}
cd ${GITHUB_WORKSPACE}/examples/${{matrix.tests}}
. ${{env.venv_dir}}/bin/activate
PYTHONPATH=$PWD PYTHONHASHSEED=42 pytest -v \
--doctest-modules \
@@ -103,8 +104,8 @@ jobs:
strategy:
matrix:
tests:
- drive
- platoon
- e10_drive
- e11_platoon
steps:
- name: Checkout
uses: actions/checkout@v2
@@ -116,7 +117,7 @@ jobs:
pip install --upgrade pip
pip install wheel==0.38.4
pip install -e .[camera_obs,argoverse,test,ray,sumo]
scl zoo install examples/rl/${{matrix.tests}}/inference
scl zoo install examples/${{matrix.tests}}/inference
- name: Run smoke tests
run: |
cd ${GITHUB_WORKSPACE}
3 changes: 2 additions & 1 deletion .github/workflows/ci-base-tests-mac.yml
@@ -49,7 +49,8 @@ jobs:
pip install wheel==0.38.4
pip install -r utils/setup/mac_requirements.txt
pip install -e .[camera_obs,opendrive,rllib,test,test_notebook,torch,train,argoverse,envision,sumo]
if echo ${{matrix.tests}} | grep -q -e "/env" -e "/examples"; then pip install -e .[rllib]; fi
if echo ${{matrix.tests}} | grep -q -e "/env"; then pip install -e .[rllib]; fi
if echo ${{matrix.tests}} | grep -q -e "/examples"; then pip install -e .[examples,rllib]; fi
if echo ${{matrix.tests}} | grep -q "/ray"; then pip install -e .[ray]; fi
- name: Run smoke tests
run: |
7 changes: 5 additions & 2 deletions .gitignore
@@ -73,7 +73,7 @@ target/
celerybeat-schedule

# Environments
.venv
.venv*
venv
.bugtest

@@ -150,4 +150,7 @@ collected_observations/
.pytype

# Benchmark
**/diagnostic/reports/*
**/diagnostic/reports/*

# Experiments
outputs/
11 changes: 11 additions & 0 deletions CHANGELOG.md
@@ -10,12 +10,23 @@ Copy and pasting the git commit messages is __NOT__ enough.

## [Unreleased] - XXXX-XX-XX
### Added
- SMARTS option `smarts[examples]` added for running SMARTS examples.
### Changed
- The following dependencies have been loosened: `numpy`, `opencv`, `torch`.
- Clarified engine configuration location under `logging.info` instead of `print`.
- `ScenarioOrder` enumeration values are now lower-case (e.g. `scrambled` instead of `Scrambled`).
- `EnvReturnMode`, `ScenarioOrder`, and `SumoOptions` have been moved to `smarts.env.configs.hiway_env_configs`.
- `trimesh` version has been loosened to `trimesh>=3.9.29`.
### Deprecated
### Fixed
- The `smarts` package now works with `python3.10` and `python3.11`.
- Fixed an issue where default `AgentInterface.events` shared a reference.
- Episode log now lists current value out of maximum rather than index.
- Episode log now correctly shows all agent scores.
- Added `scipy` back to dependencies to fix scenario building.
- Fixed `gymnasium` floating type cast warnings in action conversion.
### Removed
- Removed previously deprecated `SMARTS.timestep_sec` attribute.
### Security

## [1.3.0] - 2023-07-11
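
The relocated configuration imports and the lower-cased enumeration value called out in the changelog above would look roughly as follows in user code. This is a hedged sketch: the module path and the `scrambled` value come from the changelog entry itself, while the surrounding usage is illustrative only.

    # Sketch only — module path and `scrambled` per the changelog entry above.
    from smarts.env.configs.hiway_env_configs import EnvReturnMode, ScenarioOrder, SumoOptions

    scenario_order = ScenarioOrder.scrambled  # previously ScenarioOrder.Scrambled
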
51 changes: 26 additions & 25 deletions README.md
@@ -17,37 +17,38 @@ Check out the paper at [SMARTS: Scalable Multi-Agent Reinforcement Learning Trai
:rotating_light: :bell: Read the docs :notebook_with_decorative_cover: at [smarts.readthedocs.io](https://smarts.readthedocs.io/) . :bell: :rotating_light:

# Examples
### Egoless
Simulate a SMARTS environment without any ego agents, but with only background traffic.
1. [Egoless](examples/egoless.py) example.
### Primitive
1. [Egoless](examples/e1_egoless.py) example.
+ Run a SMARTS simulation without any ego agents, but with only background traffic.
1. [Single-Agent](examples/e2_single_agent.py) example.
+ Run a SMARTS simulation with a single ego agent.
1. [Multi-Agent](examples/e3_multi_agent.py) example.
+ Run a SMARTS simulation with multiple ego agents.
1. [Environment Config](examples/e4_environment_config.py) example.
+ Demonstrate the main observation/action configuration of the environment.
1. [Agent Zoo](examples/e5_agent_zoo.py) example.
+ Demonstrate how the agent zoo works.
1. [Agent interface example](examples/6_agent_interface.py)
+ TODO demonstrate how the agent interface works.

### Control Theory
Several agent control policies and agent [action types](smarts/core/controllers/__init__.py) are demonstrated.
### Integration examples
A few more complex integrations are demonstrated.

1. Chase Via Points
+ script: [control/chase_via_points.py](examples/control/chase_via_points.py)
+ Multi agent
+ ActionSpaceType: LaneWithContinuousSpeed
1. Trajectory Tracking
+ script: [control/trajectory_tracking.py](examples/control/trajectory_tracking.py)
+ ActionSpaceType: Trajectory
1. OpEn Adaptive Control
+ script: [control/ego_open_agent.py](examples/control/ego_open_agent.py)
+ ActionSpaceType: MPC
1. Laner
+ script: [control/laner.py](examples/control/laner.py)
+ Multi agent
+ ActionSpaceType: Lane
1. Configurable example
+ script: [examples/e7_experiment_base.py](examples/e7_experiment_base.py)
+ Configurable agent number.
+ Configurable agent type.
+ Configurable environment.
1. Parallel environments
+ script: [control/parallel_environment.py](examples/control/parallel_environment.py)
+ script: [examples/e8_parallel_environment.py](examples/e8_parallel_environment.py)
+ Multiple SMARTS environments in parallel
+ ActionSpaceType: LaneWithContinuousSpeed

### RL Model
1. [Drive](examples/rl/drive). See [Driving SMARTS 2023.1 & 2023.2](https://smarts.readthedocs.io/en/latest/benchmarks/driving_smarts_2023_1.html) for more info.
1. [VehicleFollowing](examples/rl/platoon). See [Driving SMARTS 2023.3](https://smarts.readthedocs.io/en/latest/benchmarks/driving_smarts_2023_3.html) for more info.
1. [PG](examples/rl/rllib/pg_example.py). See [RLlib](https://smarts.readthedocs.io/en/latest/ecosystem/rllib.html) for more info.
1. [PG Population Based Training](examples/rl/rllib/pg_pbt_example.py). See [RLlib](https://smarts.readthedocs.io/en/latest/ecosystem/rllib.html) for more info.
### RL Examples
1. [Drive](examples/e10_drive). See [Driving SMARTS 2023.1 & 2023.2](https://smarts.readthedocs.io/en/latest/benchmarks/driving_smarts_2023_1.html) for more info.
1. [VehicleFollowing](examples/e11_platoon). See [Driving SMARTS 2023.3](https://smarts.readthedocs.io/en/latest/benchmarks/driving_smarts_2023_3.html) for more info.
1. [PG](examples/e12_rllib/pg_example.py). See [RLlib](https://smarts.readthedocs.io/en/latest/ecosystem/rllib.html) for more info.
1. [PG Population Based Training](examples/e12_rllib/pg_pbt_example.py). See [RLlib](https://smarts.readthedocs.io/en/latest/ecosystem/rllib.html) for more info.

### RL Environment
1. [ULTRA](https://github.com/smarts-project/smarts-project.rl/blob/master/ultra) provides a gym-based environment built upon SMARTS to tackle intersection navigation, specifically the unprotected left turn.
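
As a rough sketch of what the Single-Agent example listed above boils down to: build one agent interface, create the environment, and step it until the episode ends. The `smarts.env:hiway-v1` environment id, the keyword names, and the `__all__` termination key are assumptions; `examples/e2_single_agent.py` is the authoritative version.

    # Hedged sketch of a single-ego-agent loop; env id and keyword arguments are assumptions.
    import gymnasium as gym

    from smarts.core.agent_interface import AgentInterface, AgentType

    agent_id = "Agent-0"
    env = gym.make(
        "smarts.env:hiway-v1",  # assumed gymnasium id for the SMARTS environment
        scenarios=["scenarios/sumo/loop"],
        agent_interfaces={agent_id: AgentInterface.from_type(AgentType.Laner, max_episode_steps=100)},
    )

    observations, infos = env.reset()
    terminateds = {"__all__": False}
    while not terminateds["__all__"]:
        actions = env.action_space.sample()  # random actions, just enough to drive the loop
        observations, rewards, terminateds, truncateds, infos = env.step(actions)
    env.close()
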
24 changes: 12 additions & 12 deletions docs/benchmarks/driving_smarts_2023_1.rst
@@ -178,7 +178,7 @@ the user.
agent_params=agent_params,
)

register(locator="contrib-agent-v0", entry_point=entry_point)
register("contrib-agent-v0", entry_point=entry_point)

+ User may fill in the ``<...>`` spaces in the template.
+ User may specify the ego's interface by configuring any field of :class:`~smarts.core.agent_interface.AgentInterface`, except
@@ -239,7 +239,7 @@ Example
-------

An example training and inference code is provided for this benchmark.
See the :examples:`rl/drive` example. The example uses PPO algorithm from
See the :examples:`e10_drive` example. The example uses PPO algorithm from
`Stable Baselines3 <https://github.com/DLR-RM/stable-baselines3>`_ reinforcement learning library.
It uses :attr:`~smarts.core.controllers.action_space_type.ActionSpaceType.RelativeTargetPose` action space.
Instructions for training and evaluating the example is as follows.
@@ -251,12 +251,12 @@ Train
.. code-block:: bash

# In terminal-A
$ cd <path>/SMARTS/examples/rl/drive
$ cd <path>/SMARTS/examples/e10_drive
$ python3.8 -m venv ./.venv
$ source ./.venv/bin/activate
$ pip install --upgrade pip
$ pip install wheel==0.38.4
$ pip install -e ./../../../.[camera_obs,argoverse,envision,sumo]
$ pip install -e ./../../.[camera_obs,argoverse,envision,sumo]
$ pip install -e ./inference/

+ Train locally without visualization
@@ -271,7 +271,7 @@ Train
.. code-block:: bash

# In a different terminal-B
$ cd <path>/SMARTS/examples/rl/drive
$ cd <path>/SMARTS/examples/e10_drive
$ source ./.venv/bin/activate
$ scl envision start
# Open http://localhost:8081/
@@ -281,7 +281,7 @@ Train
# In terminal-A
$ python3.8 train/run.py --head

+ Trained models are saved by default inside the ``<path>/SMARTS/examples/rl/drive/train/logs/`` folder.
+ Trained models are saved by default inside the ``<path>/SMARTS/examples/e10_drive/train/logs/`` folder.

Docker
^^^^^^
@@ -290,14 +290,14 @@ Docker
.. code-block:: bash

$ cd <path>/SMARTS
$ docker build --file=./examples/rl/drive/train/Dockerfile --network=host --tag=drive .
$ docker build --file=./examples/e10_drive/train/Dockerfile --network=host --tag=drive .
$ docker run --rm -it --network=host --gpus=all drive
(container) $ cd /SMARTS/examples/rl/drive
(container) $ cd /SMARTS/examples/e10_drive
(container) $ python3.8 train/run.py

Evaluate
^^^^^^^^
+ Choose a desired saved model from the previous training step, rename it as ``saved_model.zip``, and move it to ``<path>/SMARTS/examples/rl/drive/inference/contrib_policy/saved_model.zip``.
+ Choose a desired saved model from the previous training step, rename it as ``saved_model.zip``, and move it to ``<path>/SMARTS/examples/e10_drive/inference/contrib_policy/saved_model.zip``.
+ Evaluate locally

.. code-block:: bash
@@ -308,11 +308,11 @@
$ pip install --upgrade pip
$ pip install wheel==0.38.4
$ pip install -e .[camera_obs,argoverse,envision,sumo]
$ scl zoo install examples/rl/drive/inference
$ scl zoo install examples/e10_drive/inference
# For Driving SMARTS 2023.1
$ scl benchmark run driving_smarts_2023_1 examples.rl.drive.inference:contrib-agent-v0 --auto-install
$ scl benchmark run driving_smarts_2023_1 examples.e10_drive.inference:contrib-agent-v0 --auto-install
# For Driving SMARTS 2023.2
$ scl benchmark run driving_smarts_2023_2 examples.rl.drive.inference:contrib-agent-v0 --auto-install
$ scl benchmark run driving_smarts_2023_2 examples.e10_drive.inference:contrib-agent-v0 --auto-install

Zoo agents
----------
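
For orientation, a hedged sketch of the positional `register(...)` call used in the updated template above. The module paths and the stand-in policy are assumptions added only to make the snippet self-contained; a real submission would plug in its own interface and trained policy.

    # Sketch of zoo-agent registration with a positional locator (no `locator=` keyword).
    from smarts.core.agent import Agent
    from smarts.core.agent_interface import AgentInterface, AgentType
    from smarts.zoo.agent_spec import AgentSpec
    from smarts.zoo.registry import register

    class KeepLanePolicy(Agent):
        """Stand-in policy so the sketch is runnable; replace with the trained policy."""

        def act(self, obs):
            return "keep_lane"

    def entry_point(**kwargs):
        # Bundle the agent interface and the policy builder into an AgentSpec.
        return AgentSpec(
            interface=AgentInterface.from_type(AgentType.Laner),
            agent_builder=KeepLanePolicy,
        )

    register("contrib-agent-v0", entry_point=entry_point)
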
22 changes: 11 additions & 11 deletions docs/benchmarks/driving_smarts_2023_3.rst
@@ -156,7 +156,7 @@ the user.
agent_params=agent_params,
)

register(locator="contrib-agent-v0", entry_point=entry_point)
register("contrib-agent-v0", entry_point=entry_point)

+ User may fill in the ``<...>`` spaces in the template.
+ User may specify the ego's interface by configuring any field of :class:`~smarts.core.agent_interface.AgentInterface`, except
@@ -217,7 +217,7 @@ Example
-------

An example training and inference code is provided for this benchmark.
See the :examples:`rl/platoon` example. The example uses PPO algorithm from
See the :examples:`e11_platoon` example. The example uses PPO algorithm from
`Stable Baselines3 <https://github.com/DLR-RM/stable-baselines3>`_ reinforcement learning library.
It uses :attr:`~smarts.core.controllers.action_space_type.ActionSpaceType.Continuous` action space.
Instructions for training and evaluating the example is as follows.
@@ -229,12 +229,12 @@ Train
.. code-block:: bash

# In terminal-A
$ cd <path>/SMARTS/examples/rl/platoon
$ cd <path>/SMARTS/examples/e11_platoon
$ python3.8 -m venv ./.venv
$ source ./.venv/bin/activate
$ pip install --upgrade pip
$ pip install wheel==0.38.4
$ pip install -e ./../../../.[camera_obs,argoverse,envision,sumo]
$ pip install -e ./../../.[camera_obs,argoverse,envision,sumo]
$ pip install -e ./inference/

+ Train locally without visualization
@@ -249,7 +249,7 @@ Train
.. code-block:: bash

# In a different terminal-B
$ cd <path>/SMARTS/examples/rl/platoon
$ cd <path>/SMARTS/examples/e11_platoon
$ source ./.venv/bin/activate
$ scl envision start
# Open http://localhost:8081/
@@ -259,7 +259,7 @@ Train
# In terminal-A
$ python3.8 train/run.py --head

+ Trained models are saved by default inside the ``<path>/SMARTS/examples/rl/platoon/train/logs/`` folder.
+ Trained models are saved by default inside the ``<path>/SMARTS/examples/e11_platoon/train/logs/`` folder.

Docker
^^^^^^
@@ -268,14 +268,14 @@ Docker
.. code-block:: bash

$ cd <path>/SMARTS
$ docker build --file=./examples/rl/platoon/train/Dockerfile --network=host --tag=platoon .
$ docker build --file=./examples/e11_platoon/train/Dockerfile --network=host --tag=platoon .
$ docker run --rm -it --network=host --gpus=all platoon
(container) $ cd /SMARTS/examples/rl/platoon
(container) $ cd /SMARTS/examples/e11_platoon
(container) $ python3.8 train/run.py

Evaluate
^^^^^^^^
+ Choose a desired saved model from the previous training step, rename it as ``saved_model.zip``, and move it to ``<path>/SMARTS/examples/rl/platoon/inference/contrib_policy/saved_model.zip``.
+ Choose a desired saved model from the previous training step, rename it as ``saved_model.zip``, and move it to ``<path>/SMARTS/examples/e11_platoon/inference/contrib_policy/saved_model.zip``.
+ Evaluate locally

.. code-block:: bash
@@ -286,8 +286,8 @@ Evaluate
$ pip install --upgrade pip
$ pip install wheel==0.38.4
$ pip install -e .[camera_obs,argoverse,envision,sumo]
$ scl zoo install examples/rl/platoon/inference
$ scl benchmark run driving_smarts_2023_3 examples.rl.platoon.inference:contrib-agent-v0 --auto-install
$ scl zoo install examples/e11_platoon/inference
$ scl benchmark run driving_smarts_2023_3 examples.e11_platoon.inference:contrib-agent-v0 --auto-install

Zoo agents
----------
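
As a loose illustration of how the `saved_model.zip` mentioned above could be wrapped for inference, a hedged sketch using Stable Baselines3's `PPO.load`. The observation handling and file location are assumptions, not the benchmark's actual inference code.

    # Sketch: load a trained SB3 PPO model and expose it through a SMARTS Agent.
    from pathlib import Path

    from stable_baselines3 import PPO

    from smarts.core.agent import Agent

    class TrainedPolicy(Agent):
        def __init__(self, model_path: Path):
            self._model = PPO.load(model_path)

        def act(self, obs):
            # The real inference package first filters/encodes `obs` into the model's input format.
            action, _ = self._model.predict(obs, deterministic=True)
            return action

    policy = TrainedPolicy(Path("inference/contrib_policy/saved_model.zip"))
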
1 change: 1 addition & 0 deletions docs/conf.py
@@ -121,6 +121,7 @@
("py:class", "ActType"),
("py:class", "ObsType"),
("py:class", "smarts.env.gymnasium.wrappers.metric.utils.T"),
("py:class", "enum.Enum"),
}
nitpick_ignore_regex = {
(r"py:.*", r"av2\..*"),
4 changes: 2 additions & 2 deletions docs/ecosystem/rllib.rst
@@ -10,9 +10,9 @@ deep learning frameworks.

SMARTS contains two examples using `Policy Gradients (PG) <https://docs.ray.io/en/latest/rllib-algorithms.html#policy-gradients-pg>`_.

1. ``rllib/pg_example.py``
1. ``e12_rllib/pg_example.py``
This example shows the basics of using RLlib with SMARTS through :class:`~smarts.env.rllib_hiway_env.RLlibHiWayEnv`.
1. ``rllib/pg_pbt_example.py``
1. ``e12_rllib/pg_pbt_example.py``
This example combines Policy Gradients with `Population Based Training (PBT) <https://docs.ray.io/en/latest/tune/api/doc/ray.tune.schedulers.PopulationBasedTraining.html>`_ scheduling.

Recommended reads
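
For context, a hedged sketch of how `RLlibHiWayEnv` is typically constructed from an `env_config` dictionary; the config keys shown are assumptions, and `e12_rllib/pg_example.py` contains the actual setup.

    # Sketch only — the env_config keys are assumptions.
    from smarts.core.agent_interface import AgentInterface, AgentType
    from smarts.env.rllib_hiway_env import RLlibHiWayEnv
    from smarts.zoo.agent_spec import AgentSpec

    env_config = {
        "scenarios": ["scenarios/sumo/loop"],  # assumed key
        "agent_specs": {                       # assumed key
            "Agent-0": AgentSpec(interface=AgentInterface.from_type(AgentType.Laner)),
        },
        "headless": True,                      # assumed key
    }
    env = RLlibHiWayEnv(config=env_config)
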
12 changes: 5 additions & 7 deletions docs/quickstart.rst
@@ -16,25 +16,23 @@ A typical workflow would look like this.
Example
-------

In this quickstart guide, we will run the `Chase Via Points` example. Here,
In this quickstart guide, we will run the `multi-agent` example. Here,

1. a pre-designed scenario :scenarios:`scenarios/sumo/loop <sumo/loop>` is used.
2. a simple agent with `interface` == :attr:`~smarts.core.agent_interface.AgentType.LanerWithSpeed` and `policy` == `Chase Via Points` is demonstrated. The agent chases via points or follows nearby waypoints if a via point is unavailable.
2. a simple agent with `interface` == :attr:`~smarts.core.agent_interface.AgentType.Laner` and `policy` == `Random Laner` is demonstrated. The agent chases via points or follows nearby waypoints if a via point is unavailable.

File: :examples:`examples/control/chase_via_points.py <control/chase_via_points.py>`
File: :examples:`examples/e3_multi_agent.py <e3_multi_agent.py>`

.. literalinclude:: ../examples/control/chase_via_points.py
.. literalinclude:: ../examples/e3_multi_agent.py
:language: python

Use the `scl` command to run SMARTS together with it's supporting processes.

.. code-block:: bash

$ cd <path>/SMARTS
# Build the scenario `scenarios/sumo/loop`.
$ scl scenario build scenarios/sumo/loop
# Run SMARTS simulation with Envision display and `loop` scenario.
$ scl run --envision examples/control/chase_via_points.py scenarios/sumo/loop
$ scl run --envision examples/e3_multi_agent.py scenarios/sumo/loop

Visit `http://localhost:8081/ <http://localhost:8081/>`_ to view the experiment.

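
To make the "Random Laner" policy mentioned above concrete, a minimal hedged sketch; the lane action strings are assumptions, and the real agent lives in `examples/e3_multi_agent.py`.

    # Sketch of a random lane-level policy; action strings are assumptions.
    import random

    from smarts.core.agent import Agent

    class RandomLaner(Agent):
        """Chooses a random lane-level action at every step."""

        def act(self, obs):
            return random.choice(["keep_lane", "slow_down", "change_lane_left", "change_lane_right"])
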