5 changes: 0 additions & 5 deletions docs/FAQ.rst
@@ -13,11 +13,6 @@ We recommend using the following options to help debug workflows::
    logger.set_level("DEBUG")
    libE_specs["safe_mode"] = True

To make it easier to debug a generator, try setting the **libE_specs** option ``gen_on_manager``.
To do so, add the following to your calling script::

    libE_specs["gen_on_manager"] = True

With this, ``pdb`` breakpoints can be set as usual in the generator.

For more debugging options see "How can I debug specific libEnsemble processes?" below.
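For example, these options can be enabled near the top of a calling script.
A minimal sketch, assuming the dictionary-style ``libE_specs`` interface (the
remaining specs and the ``libE()`` call are elided)::

    from libensemble import logger

    logger.set_level("DEBUG")  # Verbose output in ensemble.log

    libE_specs = {"comms": "local", "nworkers": 4}
    libE_specs["safe_mode"] = True  # Extra run-time checks around user functions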
12 changes: 3 additions & 9 deletions docs/data_structures/libE_specs.rst
@@ -9,12 +9,7 @@ libEnsemble is primarily customized by setting options within a ``LibeSpecs`` class

    from libensemble.specs import LibeSpecs

    specs = LibeSpecs(
        gen_on_manager=True,
        save_every_k_gens=100,
        sim_dirs_make=True,
        nworkers=4
    )
    specs = LibeSpecs(save_every_k_gens=100, sim_dirs_make=True, nworkers=4)

.. dropdown:: Settings by Category
:open:
@@ -31,9 +26,8 @@ libEnsemble is primarily customized by setting options within a ``LibeSpecs`` class
**nworkers** [int]:
Number of worker processes in ``"local"``, ``"threads"``, or ``"tcp"``.

**gen_on_manager** [bool] = False
Instructs Manager process to run generator functions.
This generator function can access/modify user objects by reference.
**gen_on_worker** [bool] = False:
Instructs a worker process to run the generator instead of the manager.

**mpi_comm** [MPI communicator] = ``MPI.COMM_WORLD``:
libEnsemble MPI communicator.
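For illustration, a hypothetical sketch of selecting MPI communications with an
explicit communicator (any duplicable sub-communicator could be passed in place
of ``MPI.COMM_WORLD``)::

    from mpi4py import MPI

    from libensemble.specs import LibeSpecs

    # Rank 0 becomes the manager; all other ranks become workers
    specs = LibeSpecs(comms="mpi", mpi_comm=MPI.COMM_WORLD)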
3 changes: 3 additions & 0 deletions docs/overview_usecases.rst
@@ -20,6 +20,9 @@ which perform computations via **user functions**:

|

As of **v2.0**, the **Manager** runs **a single generator** by default. This
behavior is configurable.

The default allocator (``alloc_f``) instructs workers to run the simulator on the
highest priority work from the generator. If a worker is idle and there is
no work, that worker is instructed to call the generator.
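A hypothetical minimal calling script wiring these pieces together (this sketch
uses libEnsemble's bundled example user functions; any ``sim_f``/``gen_f`` could
be substituted):

.. code-block:: python

    import numpy as np

    from libensemble import Ensemble
    from libensemble.gen_funcs.sampling import uniform_random_sample
    from libensemble.sim_funcs.six_hump_camel import six_hump_camel
    from libensemble.specs import ExitCriteria, GenSpecs, LibeSpecs, SimSpecs

    ensemble = Ensemble(
        libE_specs=LibeSpecs(nworkers=4, comms="local"),
        sim_specs=SimSpecs(sim_f=six_hump_camel, inputs=["x"], outputs=[("f", float)]),
        gen_specs=GenSpecs(
            gen_f=uniform_random_sample,
            outputs=[("x", float, (2,))],
            user={"gen_batch_size": 50, "lb": np.array([-3.0, -2.0]), "ub": np.array([3.0, 2.0])},
        ),
        exit_criteria=ExitCriteria(sim_max=100),
    )

    ensemble.add_random_streams()  # Seed a random stream used by the generator
    H, persis_info, flag = ensemble.run()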
22 changes: 1 addition & 21 deletions docs/platforms/aurora.rst
@@ -57,7 +57,7 @@ simulations for each worker:
.. code-block:: python

    # Instruct libEnsemble to exit after this many simulations
    ensemble.exit_criteria = ExitCriteria(sim_max=nsim_workers*2)
    ensemble.exit_criteria = ExitCriteria(sim_max=nsim_workers * 2)

Now grab an interactive session on two nodes (or use the batch script at
``../submission_scripts/submit_pbs_aurora.sh``)::
@@ -115,26 +115,6 @@ will use one GPU tile)::

    python run_libe_forces.py -n 25
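For reference, a PBS batch script for such a run might take roughly the
following form; a sketch only, with the project name, queue, and environment
setup as placeholders:

.. code-block:: bash

    #!/bin/bash -l
    #PBS -l select=2
    #PBS -l walltime=00:30:00
    #PBS -q debug
    #PBS -A <myproject>

    cd $PBS_O_WORKDIR
    python run_libe_forces.py -n 25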

Running generator on the manager
--------------------------------

An alternative is to run the generator on a thread on the manager. The
number of workers can then be set to the number of simulation workers.

Change the ``libE_specs`` in **run_libe_forces.py** as follows:

.. code-block:: python

    nsim_workers = ensemble.nworkers

    # Persistent gen does not need resources
    ensemble.libE_specs = LibeSpecs(
        gen_on_manager=True,
    )

Then we can run with 12 (instead of 13) workers::

    python run_libe_forces.py -n 12

Dynamic resource assignment
---------------------------

20 changes: 0 additions & 20 deletions docs/platforms/perlmutter.rst
@@ -105,26 +105,6 @@ To see GPU usage, ssh into the node you are on in another window and run::

    watch -n 0.1 nvidia-smi

Running generator on the manager
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An alternative is to run the generator on a thread on the manager. The
number of workers can then be set to the number of simulation workers.

Change the ``libE_specs`` in **run_libe_forces.py** as follows.

.. code-block:: python

    nsim_workers = ensemble.nworkers

    # Persistent gen does not need resources
    ensemble.libE_specs = LibeSpecs(
        gen_on_manager=True,
    )

and run with::

    python run_libe_forces.py -n 4

To watch video
^^^^^^^^^^^^^^

70 changes: 26 additions & 44 deletions docs/platforms/platforms_index.rst
@@ -24,35 +24,21 @@ simulation worker, and libEnsemble will distribute user applications across the
node allocation. This is the **most common approach** where each simulation
runs an MPI application.
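Each simulation worker typically launches its application through libEnsemble's
MPI executor. A brief sketch of registering an application in the calling
script (the executable path is a placeholder):

.. code-block:: python

    from libensemble.executors.mpi_executor import MPIExecutor

    exctr = MPIExecutor()
    exctr.register_app(full_path="/path/to/forces.x", app_name="forces")

The ``sim_f`` can then submit the registered app (e.g.,
``exctr.submit(app_name="forces", num_procs=4)``), with the MPI runner detected
for the system.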

The generator will run on a worker by default, but if running a single generator,
the :ref:`libE_specs<datastruct-libe-specs>` option **gen_on_manager** is recommended,
which runs the generator on the manager (using a thread) as below.
.. image:: ../images/centralized_gen_on_manager.png
    :alt: centralized
    :scale: 55

.. list-table::
    :widths: 60 40
A SLURM batch script may include:

    * - .. image:: ../images/centralized_gen_on_manager.png
           :alt: centralized
           :scale: 55
.. code-block:: bash

- In calling script:
    #SBATCH --nodes 3

.. code-block:: python
    :linenos:
    python run_libe_forces.py --nworkers 3

    ensemble.libE_specs = LibeSpecs(
        gen_on_manager=True,
    )

A SLURM batch script may include:

.. code-block:: bash

    #SBATCH --nodes 3

    python run_libe_forces.py --nworkers 3

When using **gen_on_manager**, set ``nworkers`` to the number of workers desired for running simulations.
If instead running multiple generator processes, set the
:ref:`libE_specs<datastruct-libe-specs>` option **gen_on_worker** so that multiple
worker processes can each run a generator instance.

Dedicated Mode
^^^^^^^^^^^^^^
@@ -62,32 +48,29 @@ True, the MPI executor will not launch applications on nodes where libEnsemble Python
processes (manager and workers) are running. Workers launch applications onto the
remaining nodes in the allocation.

.. list-table::
    :widths: 60 40

    * - .. image:: ../images/centralized_dedicated.png
           :alt: centralized dedicated mode
           :scale: 30

- In calling script:
.. image:: ../images/centralized_dedicated.png
    :alt: centralized dedicated mode
    :scale: 30

.. code-block:: python
    :linenos:
In calling script:

    ensemble.libE_specs = LibeSpecs(
        num_resource_sets=2,
        dedicated_mode=True,
    )
.. code-block:: python
    :linenos:

A SLURM batch script may include:
    ensemble.libE_specs = LibeSpecs(
        gen_on_worker=True,
        num_resource_sets=2,
        dedicated_mode=True,
    )

.. code-block:: bash
A SLURM batch script may include:

    #SBATCH --nodes 3
.. code-block:: bash

    python run_libe_forces.py --nworkers 3
    #SBATCH --nodes 3

Note that **gen_on_manager** is not set in the above example.
    python run_libe_forces.py --nworkers 3

Distributed Running
-------------------
@@ -137,8 +120,7 @@ Zero-resource workers
---------------------

Users with persistent ``gen_f`` functions may notice that the persistent workers
are still automatically assigned system resources. This can be resolved by using
the ``gen_on_manager`` option or by
are still automatically assigned system resources. This can be resolved by
:ref:`fixing the number of resource sets<zero_resource_workers>`.
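For example, a sketch of the latter approach, with illustrative numbers (one of
five workers runs a persistent generator and receives no resources):

.. code-block:: python

    # Five workers: one runs a persistent generator, four run simulations
    ensemble.libE_specs = LibeSpecs(
        nworkers=5,
        gen_on_worker=True,
        num_resource_sets=4,  # Node resources divided among the four simulation workers
    )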

Assigning GPUs
30 changes: 0 additions & 30 deletions docs/running_libE.rst
@@ -12,13 +12,6 @@ determine the parameters/inputs for simulations. Simulator functions run and
manage simulations, which often involve running a user application (see
:doc:`Executor<executor/ex_index>`).

.. note::
    As of version 1.3.0, the generator can be run as a thread on the manager,
    using the :ref:`libE_specs<datastruct-libe-specs>` option **gen_on_manager**.
    When using this option, set the number of workers desired for running
    simulations. See :ref:`Running generator on the manager<gen-on-manager>`
    for more details.

To use libEnsemble, you will need a calling script, which in turn will specify
generator and simulator functions. Many :doc:`examples<examples/examples_index>`
are available.
@@ -161,29 +154,6 @@ If this example was run as::

No simulations will be able to run.

.. _gen-on-manager:

Running generator on the manager
--------------------------------

The majority of libEnsemble use cases run a single generator. The
:ref:`libE_specs<datastruct-libe-specs>` option **gen_on_manager** will cause
the generator function to run on a thread on the manager. This can run
persistent user functions, sharing data structures with the manager, and avoids
additional communication to a generator running on a worker. When using this
option, the number of workers specified should be the (maximum) number of
concurrent simulations.

If modifying a workflow to use ``gen_on_manager``, consider the following.

* Set ``nworkers`` to the number of workers desired for running simulations.
* If using :meth:`add_unique_random_streams()<tools.add_unique_random_streams>`
  to seed random streams, the default generator seed will be zero.
* If you have a line like ``libE_specs["nresource_sets"] = nworkers - 1``, this
  line should be removed.
* If the generator does use resources, ``nresource_sets`` can be increased as needed
  so that the generator and all simulations are resourced.

Environment Variables
---------------------

38 changes: 0 additions & 38 deletions docs/tutorials/executor_forces_tutorial.rst
@@ -336,44 +336,6 @@ These may require additional browsing of the documentation to complete.

...

Running the generator on the manager
------------------------------------

As of version 1.3.0, the generator can be run on a thread on the manager,
using the :ref:`libE_specs<datastruct-libe-specs>` option **gen_on_manager**.

Change the libE_specs as follows.

.. code-block:: python
    :linenos:
    :lineno-start: 28

    nsim_workers = ensemble.nworkers

    # Persistent gen does not need resources
    ensemble.libE_specs = LibeSpecs(
        gen_on_manager=True,
        sim_dirs_make=True,
        ensemble_dir_path="./test_executor_forces_tutorial",
    )

When running, set ``nworkers`` to the number of workers desired for running simulations.
E.g., instead of:

.. code-block:: bash

    python run_libe_forces.py --nworkers 5

use:

.. code-block:: bash

    python run_libe_forces.py --nworkers 4

Note that as the generator random number seed will be zero instead of one, the checksum will change.

For more information see :ref:`Running generator on the manager<gen-on-manager>`.

Running forces application with input file
------------------------------------------
