
[Feature Request] same random seed for every env in AsyncEval #253

Open

Description

@1-Bart-1

🚀 Feature

When training with ARS in combination with AsyncEval, multiple environments run at the same time. When these environments are seeded, each one gets a different seed. There should be an option to seed all environments with the same seed each time ARS.evaluate_candidates() is run.

The relevant functions are ARS.evaluate_candidates and AsyncEval.seed:

def evaluate_candidates(

def seed(self, seed: Optional[int] = None) -> List[Union[None, int]]:
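
For reference, this is the kind of setup where the issue shows up. The snippet below is adapted from the ARS multiprocessing example in the sb3-contrib docs (env id and hyperparameters are just placeholders); the proposed seeding would happen inside model.learn(), every time evaluate_candidates() runs:

    from sb3_contrib import ARS
    from sb3_contrib.common.vec_env import AsyncEval
    from stable_baselines3.common.env_util import make_vec_env

    env_id = "CartPole-v1"
    n_envs = 2

    model = ARS("MlpPolicy", env_id, n_delta=2, n_top=1, verbose=1)
    # The evaluation envs run in separate processes
    async_eval = AsyncEval([lambda: make_vec_env(env_id) for _ in range(n_envs)], model.policy)

    model.learn(total_timesteps=200_000, log_interval=4, async_eval=async_eval)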

Motivation

Some environments generate random values in their reset function, for instance random external factors. When evaluate_candidates is run, these random values affect the returned rewards, so good parameter sets can receive bad rewards and bad ones good rewards, which slows down training. To mitigate this, while still drawing different external factors from one call to the next, all environments in AsyncEval should be seeded with the same random seed at the start of evaluate_candidates, or this should at least be an option.
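
As a toy illustration of why this matters (everything below is hypothetical; episode_return stands in for a single rollout whose reward mixes the candidate's true quality with a random external factor drawn at reset):

    import numpy as np

    def episode_return(params: float, rng: np.random.Generator) -> float:
        # Hypothetical rollout: true quality of the params plus a random
        # external factor that would be drawn in the env's reset()
        external_factor = rng.normal(0.0, 2.0)
        return -(params - 1.0) ** 2 + external_factor

    good, bad = 1.0, 0.0  # 'good' truly outperforms 'bad' by 1.0 on average

    # Different seed per evaluation: the external factor often flips the ranking
    flipped = sum(
        episode_return(good, np.random.default_rng(2 * i))
        < episode_return(bad, np.random.default_rng(2 * i + 1))
        for i in range(1000)
    )

    # Same seed for both evaluations (as proposed for one evaluate_candidates()
    # call): the external factor is identical for both candidates and cancels out
    flipped_shared = sum(
        episode_return(good, np.random.default_rng(i))
        < episode_return(bad, np.random.default_rng(i))
        for i in range(1000)
    )

    print(flipped, flipped_shared)  # roughly a third of rankings flip vs. none

With per-candidate seeds the optimizer regularly ranks the worse parameters above the better ones, which is exactly the noise that slows ARS down.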

Pitch

Add a call to async_eval.seed() to ARS.evaluate_candidates. The existing multiprocess branch is shown for context; only the async_eval.seed(self.seed) line is new:


        if async_eval is not None:
            # Multiprocess asynchronous version
            async_eval.send_jobs(candidate_weights, self.pop_size)
            results = async_eval.get_results()
            async_eval.seed(self.seed)
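
Note that, per the worker comment further below ("the seed will only be effective at the next reset"), sending the seed after get_results() means the shared seed kicks in for the next generation's evaluations; calling async_eval.seed() before send_jobs() would presumably make it apply to the current batch instead. Either placement keeps every worker env synchronized on the same seed.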

And change AsyncEval.seed so that, when no fixed seed is given, a single random seed is drawn and sent to every env:

        if seed is None or seed == 0:
            # Draw a single random seed and send it to every env,
            # so all envs share the same seed for this call
            same_seed = np.random.randint(2**32 - 1, dtype="int64").item()
            for remote in self.remotes:
                remote.send(("seed", same_seed))
        else:
            for idx, remote in enumerate(self.remotes):
                remote.send(("seed", seed + idx))
        return None
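
To keep the behaviour opt-in, as the request suggests, the same logic could hang off a constructor flag. Below is a minimal sketch of AsyncEval.seed assuming a hypothetical same_seed_for_all_envs attribute set in __init__ (it does not exist in sb3-contrib today); the return type change to None matches the pitch above:

    def seed(self, seed: Optional[int] = None) -> None:
        if seed is None or seed == 0:
            # Fresh base seed so that each evaluate_candidates() call still differs
            seed = np.random.randint(2**32 - 1, dtype="int64").item()
        if self.same_seed_for_all_envs:  # hypothetical opt-in flag
            # Identical seed for every worker env
            for remote in self.remotes:
                remote.send(("seed", seed))
        else:
            # Per-env seeds (seed + idx), as in the else branch of the pitch above
            for idx, remote in enumerate(self.remotes):
                remote.send(("seed", seed + idx))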

And change the worker so that it doesn't send a return value back after seeding:

            elif cmd == "seed":
                # Note: the seed will only be effective at the next reset
                vec_env.seed(seed=data)
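
This is needed because the proposed AsyncEval.seed() above returns None without calling remote.recv(): if the worker still sent a reply after seeding (presumably something like remote.send(vec_env.seed(seed=data)) today), that unread reply would sit in the pipe and be picked up by the next recv(), e.g. inside get_results().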

Alternatives

None.

Additional context

At least in my specific environment, this method leads to a large improvement in training.

Checklist

  • I have checked that there is no similar issue in the repo
  • If I'm requesting a new feature, I have proposed alternatives


    Labels

    check the checklist, enhancement
