
Add DoRA support for ViT classification model. #4466


Open

gyuilLim wants to merge 7 commits into develop

Conversation

gyuilLim
Contributor

Summary

Added DoRA support and replaced the `lora` flag with a `peft` argument typed as `Literal["lora", "dora"]`.
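
A minimal sketch of the interface change, with a placeholder class name (illustrative only, not the exact model definition in this PR):

```python
from typing import Literal


class VisionTransformerForClassification:  # placeholder name for the ViT model class
    def __init__(
        self,
        num_classes: int,
        # Previously: lora: bool = False
        peft: Literal["lora", "dora"] | None = None,  # None keeps plain fine-tuning
    ) -> None:
        self.num_classes = num_classes
        if peft == "lora":
            self._apply_lora()
        elif peft == "dora":
            self._apply_dora()

    def _apply_lora(self) -> None:
        ...  # wrap the attention q/v projections with LoRA adapters

    def _apply_dora(self) -> None:
        ...  # wrap the attention q/v projections with DoRA adapters
```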

How to test

Checklist

  • I have added unit tests to cover my changes.
  • I have added integration tests to cover my changes.
  • I have run e2e tests and there are no issues.
  • I have added the description of my changes into CHANGELOG in my target branch (e.g., CHANGELOG in develop).
  • I have updated the documentation in my target branch accordingly (e.g., documentation in develop).
  • I have linked related issues.

License

  • I submit my code changes under the same Apache License that covers the project.
    Feel free to contact the maintainers if that's a concern.
  • I have updated the license header for each file (see an example below).
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

Member

@sovrasov sovrasov left a comment


@gyuilLim thanks for your efforts! Could you add a brief version of the table with the results of the experiments? That'd be useful in the future, so everyone can see the impact of the PR simply by looking at the description.
Also, the PR doesn't contain any tests. Newly added modules should be covered with unit tests. It'd be good to check that the peft parameter actually works, as Kirill said. It might be enough to check that a multiclass classification ViT model can be created and forwarded with peft not None. Actual training seems a bit redundant, and we don't have a ready-to-use location to store that kind of test.
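
A minimal sketch of such a test, assuming a hypothetical import path and constructor signature (the real class name and arguments come from this PR's code):

```python
import pytest
import torch

# Hypothetical import; replace with the actual ViT classification model class from the PR.
from otx_like_package.vit import VisionTransformerForMulticlassCls


@pytest.mark.parametrize("peft", ["lora", "dora"])
def test_vit_multiclass_forward_with_peft(peft):
    # Create the model with the peft argument enabled.
    model = VisionTransformerForMulticlassCls(num_classes=10, peft=peft)
    model.eval()

    # A single dummy RGB image at the usual ViT input resolution.
    dummy = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        out = model(dummy)

    # Only check that creation and forward succeed.
    assert out is not None
```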

@gyuilLim
Contributor Author

gyuilLim commented Jul 18, 2025

@sovrasov Thank you! I'll add unit tests.
And here are the tables with experimental results using DINOv2 on six datasets (a brief sketch of the DoRA decomposition follows the tables):

  • GPU memory usage (GB)

| Method | FGVC_aircraft | FOOD_101 | Stanford_Cars | CUB-200 | HAM10000 | RESISC45 | Avg Memory |
|---|---|---|---|---|---|---|---|
| Full Fine-Tuning | 4.050 | 4.070 | 4.010 | 4.020 | 3.990 | 4.050 | 4.032 |
| Freeze backbone | 0.550 | 0.560 | 0.550 | 0.560 | 0.530 | 0.540 | 0.548 |
| LoRA (q, v) | 2.580 | 2.640 | 2.580 | 2.590 | 2.560 | 2.570 | 2.587 |
| DoRA (q, v) | 2.680 | 2.700 | 2.680 | 2.730 | 2.670 | 2.670 | 2.688 |
  • Test accuracy (%)

| Method | FGVC_aircraft | FOOD_101 | Stanford_Cars | CUB-200 | HAM10000 | RESISC45 | Avg Top-1 Acc |
|---|---|---|---|---|---|---|---|
| Full Fine-Tuning | 47.11 | 83.87 | 75.00 | 75.41 | 78.16 | 92.14 | 75.28 |
| Freeze backbone | 38.52 | 87.12 | 66.24 | 81.98 | 59.76 | 80.87 | 69.08 |
| LoRA (q, v) | 49.12 | 88.53 | 76.51 | 80.36 | 78.62 | 93.05 | 77.70 |
| DoRA (q, v) | 50.04 | 88.87 | 75.70 | 81.05 | 79.02 | 92.76 | 77.90 |
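
For context on the gap between the LoRA and DoRA rows: DoRA keeps the LoRA low-rank update and additionally learns a magnitude vector over the merged weight's column norms, which adds a small number of trainable parameters and an extra normalization in the forward pass. A self-contained sketch of that decomposition, following the DoRA paper's formulation (illustrative only, not the code in this PR):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DoRALinear(nn.Module):
    """Illustrative DoRA layer: W' = m * (W0 + B A) / ||W0 + B A||_col."""

    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        out_features, in_features = base.weight.shape
        # Frozen pretrained weight W0 and (optional) bias.
        self.weight = nn.Parameter(base.weight.detach().clone(), requires_grad=False)
        self.bias = base.bias
        # LoRA-style low-rank update B @ A.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        # Learnable magnitude, initialized to the column-wise norms of W0.
        self.magnitude = nn.Parameter(self.weight.norm(p=2, dim=0, keepdim=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        merged = self.weight + self.lora_B @ self.lora_A            # W0 + BA
        direction = merged / merged.norm(p=2, dim=0, keepdim=True)  # unit column norms
        return F.linear(x, self.magnitude * direction, self.bias)
```

The extra learnable magnitude and the normalization of the merged weight are the main additions over a plain LoRA layer, which is consistent with DoRA's slightly higher memory use in the table above.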

github-actions bot added the TEST (Any changes in tests) label on Jul 21, 2025
Labels: GSoC, TEST (Any changes in tests)