
[Bug] _init_callback in base_class does not handle verbose parameter when creating EvalCallback #1006

Closed
BurakDmb opened this issue Aug 12, 2022 · 3 comments · Fixed by #1011
Labels
bug Something isn't working

Comments

@BurakDmb
Contributor

🐛 Bug

Hi,

I just noticed that, in the _init_callback method, the verbose parameter is not passed when creating the EvalCallback. Since the default verbose value in EvalCallback is 1, evaluation logging cannot be disabled when relying on the default.

Code reference:

eval_callback = EvalCallback(
    eval_env,
    best_model_save_path=log_path,
    log_path=log_path,
    eval_freq=eval_freq,
    n_eval_episodes=n_eval_episodes,
)
callback = CallbackList([callback, eval_callback])
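
For reference, a quick way to confirm the default in isolation (a rough sketch, assuming SB3 1.6.x and a standard Gym environment):

import gym
from stable_baselines3.common.callbacks import EvalCallback

eval_env = gym.make("CartPole-v1")

# verbose is not passed, mirroring the snippet above, so EvalCallback keeps
# its default verbosity (1) and evaluation results are printed regardless of
# the verbose value the model itself was created with.
eval_callback = EvalCallback(eval_env)
print(eval_callback.verbose)  # 1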

To Reproduce

In my experiments I want to set verbose=0, but when I enable evaluation through learn(), I have no way to choose whether the evaluation results are printed.

Below, you can see my example code:

model = PPO(..., verbose=0, ...)

model.learn(..., eval_env=eval_env,
            eval_freq=100,
            n_eval_episodes=5,
            eval_log_path="eval")

Expected behavior

I think this problem could be solved easily by passing self.verbose to the EvalCallback constructor.

I can volunteer for this PR.

if eval_env is not None:
    eval_callback = EvalCallback(
        eval_env,
        best_model_save_path=log_path,
        log_path=log_path,
        eval_freq=eval_freq,
        n_eval_episodes=n_eval_episodes,
        verbose=self.verbose,
    )
    callback = CallbackList([callback, eval_callback])

System Info

OS: Linux-5.13.0-40-generic-x86_64-with-glibc2.17 #45~20.04.1-Ubuntu SMP Mon Apr 4 09:38:31 UTC 2022
Python: 3.8.13
Stable-Baselines3: 1.6.1a0
PyTorch: 1.11.0
GPU Enabled: True
Numpy: 1.22.3
Gym: 0.25.0

Checklist

  • I have checked that there is no similar issue in the repo (required)
  • I have read the documentation (required)
  • I have provided a minimal working example to reproduce the bug (required)
BurakDmb added the bug (Something isn't working) label on Aug 12, 2022
@qgallouedec
Collaborator

I see your point. If we want to be rigorous, you are right: verbose=0 should not allow anything to be written to stdout.

However, I think many people like getting the evaluation log without the training log, which is what verbose=0 currently gives.

I need more feedback on this.

@araffin
Member

araffin commented Aug 13, 2022

Hello,
good point, verbose value should be passed.
Also related: I was thinking about removing the eval_env parameter completely (#925), as its usage is limited and it adds complexity to the codebase.
A better alternative is to create the eval callback outside the model and pass it to the learn() method.
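
Something along these lines (just a sketch; the algorithm, environment, and hyperparameters are placeholders):

import gym
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import EvalCallback

eval_env = gym.make("CartPole-v1")

# The callback is created by the user, so its verbosity is entirely under
# user control and independent of the model's verbose setting.
eval_callback = EvalCallback(
    eval_env,
    eval_freq=100,
    n_eval_episodes=5,
    log_path="eval",
    best_model_save_path="eval",
    verbose=0,
)

model = PPO("MlpPolicy", "CartPole-v1", verbose=0)
model.learn(total_timesteps=10_000, callback=eval_callback)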

@qgallouedec
Collaborator

> I can volunteer for this PR.

So yes, please go ahead.
