
Reproducing fine-tuned tokens (textual inversion) #24

@bansh123

Description

I have attempted to reproduce the results of the few-shot classification task on the PASCAL VOC dataset.
When I use the fine-tuned tokens you previously shared via the Google Drive link, I get comparable results.
However, I have been unable to reproduce the fine-tuned tokens themselves.
Running fine_tune.py and aggregate_embeddings.py with the provided scripts yields weaker tokens and significantly lower accuracy (roughly a 10% gap in the 1-shot setting).

Am I overlooking something?
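For reference, this is roughly how I am sanity-checking my reproduced tokens against the shared ones (a minimal sketch; the file names and the assumption that each checkpoint is a torch-saved dict of per-class embedding tensors are mine, not necessarily the repository's actual format):

```python
# Compare reproduced token embeddings against the shared reference tokens.
# Assumptions: both files are torch.save()-d dicts mapping class names to
# 1-D embedding tensors; adjust loading if the actual checkpoint layout differs.
import torch
import torch.nn.functional as F

reference = torch.load("shared_tokens.pt", map_location="cpu")   # hypothetical path
reproduced = torch.load("my_tokens.pt", map_location="cpu")      # hypothetical path

for name in sorted(set(reference) & set(reproduced)):
    ref = reference[name].flatten().float()
    rep = reproduced[name].flatten().float()
    cos = F.cosine_similarity(ref, rep, dim=0).item()
    print(f"{name}: cosine similarity = {cos:.4f}")
```

The reproduced tokens show noticeably lower similarity to the shared ones than I would expect from run-to-run variance alone, which is why I suspect I am missing a step or a setting.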
