
How to get reproducible deterministic evaluation results? #28


Description

@Yingdong-Hu

I evaluate the example pre-trained models on 100 trajectories. I set the seed to 0. I run the following command twice:

python -m tools.run_rl configs/bc/mani_skill_point_cloud_transformer.py --gpu-ids=0 --evaluation \
--work-dir=./test/OpenCabinetDrawer_1045_link_0-v0_pcd \
--resume-from=./example_mani_skill_data/OpenCabinetDrawer_1045_link_0-v0_PN_Transformer.ckpt \
--cfg-options "env_cfg.env_name=OpenCabinetDrawer_1045_link_0-v0" \
"eval_cfg.save_video=False" \
"eval_cfg.num=100" \
"eval_cfg.num_procs=10" \
"eval_cfg.use_log=True" \
--seed=0

For the first run, the Success or Early Stop Rate is 0.81; for the second run it is 0.84.
It seems that the generated seed (produced by the following code) differs between runs even though I set the seed to 0 explicitly.

if hasattr(self.env, 'seed'):
    # Make sure that envs in different processes have different behaviors
    self.env.seed(np.random.randint(0, 10000) + os.getpid())
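
As a minimal sketch (outside the ManiSkill-Learn codebase) of why this is non-deterministic: even with a fixed global NumPy seed, os.getpid() is assigned by the OS and changes on every launch, so the final env seed differs between otherwise identical runs.

import os
import numpy as np

# Fixing the global seed makes the randint term reproducible across runs...
np.random.seed(0)
offset = np.random.randint(0, 10000)

# ...but the PID offset is different on every launch, so the resulting
# env seed (and hence the sampled evaluation episodes) changes as well.
env_seed = offset + os.getpid()
print(offset, os.getpid(), env_seed)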

So how can I control determinism through the seed?
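
One possible workaround (a hypothetical sketch, not the project's actual API) would be to derive each worker's env seed from the --seed value plus a fixed per-worker index instead of the process ID, so that seeds stay distinct across the num_procs workers but identical across runs:

def seed_worker_env(env, base_seed, worker_id):
    # Hypothetical helper: worker_id is a fixed index in [0, num_procs),
    # so base_seed + worker_id is reproducible, unlike os.getpid().
    if hasattr(env, 'seed'):
        env.seed(base_seed + worker_id)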

In addition, I have a question about the ManiSkill environment. I notice that there are shadows of the objects and the robot in the rendered images in the first version of your arXiv paper, like this:
[image: rendered frame from the paper, with shadows]

But the world-frame image I get looks like this (I changed the resolution to 256x256). How can I make the image more realistic, like the one shown above?
[image: word_frame39_5, rendered world-frame image without shadows]
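
For context, my guess (a hedged sketch, assuming a SAPIEN 2.x-style renderer API that may not match the SAPIEN version this ManiSkill release uses; the light direction and color values are purely illustrative) is that shadows have to be enabled on the lights when the scene is set up, roughly like this:

import sapien.core as sapien

engine = sapien.Engine()
renderer = sapien.VulkanRenderer()
engine.set_renderer(renderer)

scene = engine.create_scene()
scene.set_ambient_light([0.3, 0.3, 0.3])
# shadow=True is what produces cast shadows like those in the paper's figures.
scene.add_directional_light([0, -1, -1], [1.0, 1.0, 1.0], shadow=True)

Is something along these lines what the paper's renders used, or is there a config option I am missing?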
