
Problems with hyperparameter configuration #1

@jxqhhh

Description


Hi! I think your LDMNet-pytorch project is really excellent, but I found that if I run 'python main.py with mnist', the training loss increases sharply toward the end of training. I believe this happens because two different concepts, "epoch" and "iteration", have been mixed up; the paper uses the term "epoch", not "iteration".
When I change the hyperparameters with 'python main.py with mnist epochs_update=200 max_epochs=50000' (i.e., treating 100 iterations as one epoch), the training loss decreases steadily instead of blowing up.
Looking forward to your reply, thanks a lot!
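
For clarity, here is a rough sketch of the rescaling I have in mind. It is not the actual LDMNet-pytorch code; the batch size and the paper-side values are assumptions on my part, just to show how epoch-based hyperparameters would map to a loop that counts iterations:

```python
# Sketch only (not the repository code): if the training loop counts
# iterations where the paper counts epochs, the epoch-based hyperparameters
# must be multiplied by the number of iterations per epoch.

TRAIN_SIZE = 60_000        # MNIST training set size
BATCH_SIZE = 600           # assumed batch size -> 100 iterations per epoch
ITERS_PER_EPOCH = TRAIN_SIZE // BATCH_SIZE   # = 100

# Hypothetical paper-style (epoch-based) schedule:
epochs_update_paper = 2    # e.g. update the manifold term every 2 epochs
max_epochs_paper = 500     # e.g. train for 500 epochs

# Equivalent iteration-based values for a loop that counts iterations:
epochs_update = epochs_update_paper * ITERS_PER_EPOCH   # 200
max_epochs = max_epochs_paper * ITERS_PER_EPOCH         # 50000

print(epochs_update, max_epochs)  # 200 50000
```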
