At the end of CoOp training with main.sh, an additional progress bar reports an accuracy: is the test run automatically as part of training? #79

@KevinLi-167

Description

When I run /scripts/coop/main.sh to train the prompts, the entry point is CoOp/train.py:

from dassl.engine import build_trainer
...
    trainer = build_trainer(cfg)

    if args.eval_only:
        trainer.load_model(args.model_dir, epoch=args.load_epoch)
        trainer.test() # I can't find the relevant code
        return

    if not args.no_train:
        trainer.train() # I can't find the relevant code
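
One thing that might help locate these methods: build_trainer returns an instance of the CoOp trainer class, so its method resolution order and Python's inspect module should point to the files where train() and test() are actually defined. A quick check I could add inside main() (assuming trainer is the object returned by build_trainer):

import inspect

trainer = build_trainer(cfg)
# The method resolution order shows which Dassl base classes CoOp inherits from.
print(type(trainer).__mro__)
# These lines print the files where train() and test() are actually defined.
print(inspect.getsourcefile(trainer.train))
print(inspect.getsourcefile(trainer.test))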

build_trainer is imported from Dassl.pytorch/dassl/engine/build.py:

from dassl.utils import Registry, check_availability

TRAINER_REGISTRY = Registry("TRAINER")

def build_trainer(cfg):
    avai_trainers = TRAINER_REGISTRY.registered_names()
    check_availability(cfg.TRAINER.NAME, avai_trainers)
    if cfg.VERBOSE:
        print("Loading trainer: {}".format(cfg.TRAINER.NAME))
    return TRAINER_REGISTRY.get(cfg.TRAINER.NAME)(cfg)
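
As far as I understand, build_trainer only looks up a class that was registered earlier with the @TRAINER_REGISTRY.register() decorator. A minimal sketch of how I think the CoOp trainer registers itself in trainers/coop.py (heavily simplified, the real file contains much more):

from dassl.engine import TRAINER_REGISTRY, TrainerX

@TRAINER_REGISTRY.register()
class CoOp(TrainerX):
    """Context Optimization (CoOp) trainer (sketch).

    Only the model building and forward_backward() live here;
    train() and test() are inherited from the Dassl base trainers.
    """

    def forward_backward(self, batch):
        # computes the loss and the accuracy printed during training
        ...

So build_trainer can return a CoOp instance even though train() and test() are not defined in the CoOp repository itself.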

An accuracy is printed continuously during training and again at the very end, and I understand that the final accuracy comes from a different part of the code.

I think the continuous output during training is the training accuracy computed in coop.py's forward_backward function.
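
For reference, this is roughly what I believe that function does (simplified from memory, fp16/amp handling omitted, so names and details may differ):

import torch.nn.functional as F
from dassl.metrics import compute_accuracy

def forward_backward(self, batch):
    image, label = self.parse_batch_train(batch)
    output = self.model(image)
    loss = F.cross_entropy(output, label)
    self.model_backward_and_update(loss)
    # "acc" here should be the accuracy that shows up in the training log.
    return {
        "loss": loss.item(),
        "acc": compute_accuracy(output, label)[0].item(),
    }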

The final pass with a progress bar (similar to the one from Zero-Shot.sh) may be an evaluation on the test set that runs automatically, but I can't find the code that triggers it, even after searching with PyCharm and on GitHub, so I can't tell.
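
My current guess is that the base trainer evaluates the model once after the last training epoch, which would explain the extra progress bar. Here is a self-contained toy that shows the control flow I have in mind (this is not the actual Dassl code; the no_test flag is my stand-in for whatever config option controls the final test):

class ToyTrainer:
    def __init__(self, max_epoch, no_test=False):
        self.max_epoch = max_epoch
        self.no_test = no_test  # stand-in for a "skip final test" option

    def run_epoch(self, epoch):
        print(f"epoch {epoch}: forward_backward() reports the running accuracy here")

    def test(self):
        print("final evaluation with a progress bar (like zero-shot) happens here")

    def after_train(self):
        # If testing is not disabled, evaluate automatically after the last epoch.
        if not self.no_test:
            self.test()

    def train(self):
        for epoch in range(self.max_epoch):
            self.run_epoch(epoch)
        self.after_train()

ToyTrainer(max_epoch=2).train()

If that is what actually happens, then the accuracy printed after the progress bar would be a test-set result produced automatically at the end of training. Could someone confirm where this is implemented?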
