YOLOv7-dfc

This improved method for vehicle detection is based on YOLOv7-tiny and can be used in the same way as YOLOv7.

Implementation of the paper "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors".


Performance

Model               FPS   AP50    GFLOPs   Parameters
YOLOv7-tiny         131   60.2%   13.2     6.01M
+Sim_DFC            91    66.2%   15.3     7.56M
++Inner-shape IoU   95    67.9%   15.3     7.56M
+++BiFPN            104   70.0%   14.8     7.46M

The results above were obtained on the UA-DETRAC dataset.

Our trained weights can be found in yolov7-dfc/improvedModel.

The datasets we used are available at https://zenodo.org/records/14030107.

Installation

The model has been tested and confirmed to run successfully in the following environment:

PyTorch 1.11.0, Python 3.8, CUDA 11.3.

To install the other required libraries, run:

pip install -r requirements.txt

Other virtual environments or hardware configurations with similar library versions should generally work without compatibility issues.
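
If you want a quick sanity check of the environment before training, a minimal sketch like the following should be enough (the exact versions only need to be close to those listed above):

import torch

print(torch.__version__)              # expect something close to 1.11.0
print(torch.version.cuda)             # expect 11.3
print(torch.cuda.is_available())      # should be True on a CUDA machine
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))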

Test

You can view or apply our improved model via the dfc-nn.yaml file under yolov7-dfc/cfg/deploy. This .yaml file is passed to train.py during training, where you can also select the dataset used to evaluate the model's performance. The UA-DETRAC dataset is used by default:

# in line 530 of train.py
parser.add_argument('--cfg', type=str, default='cfg/deploy/dfc-nn.yaml', help='model.yaml path')
parser.add_argument('--data', type=str, default='data/data.yaml', help='data.yaml path')

To use the UA-DETRAC dataset (download link above) or other datasets, edit the paths in data.yaml under yolov7-dfc/data:

# path
train: UA-DETRAC/images/train
val: UA-DETRAC/images/val
test: UA-DETRAC/images/test

# number of classes
nc: 4

# class names
names: ['car', 'bus', 'van', 'others']
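
The paths above follow the usual YOLO layout, where every images/<split> directory has a parallel labels/<split> directory of per-image .txt annotation files. A short script like this (a hypothetical helper, not part of the repository) can confirm the layout before training:

# check_dataset.py -- hypothetical helper, not part of the repository
from pathlib import Path

root = Path('UA-DETRAC')
for split in ('train', 'val', 'test'):
    images = list((root / 'images' / split).glob('*.jpg'))
    labels = list((root / 'labels' / split).glob('*.txt'))
    print(f'{split}: {len(images)} images, {len(labels)} label files')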

The modules and network structures used in the improved model are defined in common.py under yolov7-dfc/models, as well as in loss.py and general.py under yolov7-dfc/utils. By modifying these files, you can replace the corresponding modules and adjust the structure to implement other customized operations.

Our improvements have been defined in these locations:

# in common.py
class Sim_DFC(nn.Module): ...
class BiFPN_Add2(nn.Module): ...
class BiFPN_Add3(nn.Module): ...

# in loss.py and general.py
class ComputeLoss: ...
def bbox_iou(): ...
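
For orientation, here is a minimal sketch of what a BiFPN-style weighted fusion module and an inner-IoU term typically look like. It follows the common EfficientDet-style fusion pattern and the published Inner-IoU idea; the actual definitions in common.py and general.py may differ in detail, and all names and arguments below are illustrative:

import torch
import torch.nn as nn

class BiFPN_Add2(nn.Module):
    # Fast normalized fusion of two same-shape feature maps (EfficientDet-style sketch).
    def __init__(self, channels, eps=1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(2))   # learnable fusion weights
        self.eps = eps
        self.conv = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x):                      # x = [feat_a, feat_b]
        w = torch.relu(self.w)
        w = w / (w.sum() + self.eps)           # normalized, non-negative weights
        return self.conv(w[0] * x[0] + w[1] * x[1])

def inner_iou(box1, box2, ratio=0.7, eps=1e-7):
    # Inner-IoU sketch: shrink both (x1, y1, x2, y2) boxes around their
    # centers by `ratio` and compute IoU on the shrunken boxes.
    x1a, y1a, x2a, y2a = box1.unbind(-1)
    x1b, y1b, x2b, y2b = box2.unbind(-1)
    cxa, cya = (x1a + x2a) / 2, (y1a + y2a) / 2
    cxb, cyb = (x1b + x2b) / 2, (y1b + y2b) / 2
    hwa, hha = (x2a - x1a) * ratio / 2, (y2a - y1a) * ratio / 2   # half-sizes
    hwb, hhb = (x2b - x1b) * ratio / 2, (y2b - y1b) * ratio / 2
    inter_w = (torch.min(cxa + hwa, cxb + hwb) - torch.max(cxa - hwa, cxb - hwb)).clamp(0)
    inter_h = (torch.min(cya + hha, cyb + hhb) - torch.max(cya - hha, cyb - hhb)).clamp(0)
    inter = inter_w * inter_h
    union = 4 * hwa * hha + 4 * hwb * hhb - inter + eps
    return inter / union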

To train, run train.py directly, or use the command line:

python train.py --workers 8 --batch-size 16 --data data/data.yaml --img 640 640 --weights '' 
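
For example, combining the command above with the config file and epoch setting described in this README (--device is a standard YOLOv7 option; 0 assumes a single GPU):

python train.py --workers 8 --device 0 --batch-size 16 --data data/data.yaml --img 640 640 --cfg cfg/deploy/dfc-nn.yaml --weights '' --epochs 100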

The trained weights will be saved in runs/train/, where best.pt is the final weight you need. Don't forget to set the number of training epochs, since different settings can lead to different training results:

# in line 533 of train.py
parser.add_argument('--epochs', type=int, default=100)

To test, make sure you choose the right weight file:

python test.py --data data/data.yaml --img 640 --batch 32 --weights improvedModel/weight.pt 

Or you can set the weight path directly in test.py:

# in line 293 of test.py
parser.add_argument('--weights', nargs='+', type=str, default='improvedModel/weight.pt', help='model.pt path(s)')

and run test.py.

To run inference, pay attention to the source file:

python detect.py --weights improvedModel/weight.pt  --source yourvideo.mp4

Or run detect.py after changing the defaults in the code:

# in line 168, 169
parser.add_argument('--weights', nargs='+', type=str, default='improvedModel/weight.pt', help='model.pt path(s)')
parser.add_argument('--source', type=str, default='inference/image', help='source')
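
As in standard YOLOv7, --source can also be an image directory or a webcam index, and annotated results are saved under runs/detect/ by default. For example, to run on a directory of images with an explicit confidence threshold:

python detect.py --weights improvedModel/weight.pt --source inference/image --conf-thres 0.25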
