This project involves training a computer vision model to detect and identify individual cows in a tie-stall barn, under the McGill-UQAM Research and Innovation Chair in Animal Welfare and Artificial Intelligence, in the bioinformatics lab at the University of Quebec at Montreal (UQAM). The project consists of two main parts: training a cow detection model, then using that model to train an individual cow identification model.
We generate a dataset from video files taken at the barn with different views: center, front, and rear, each with three orientations: center, left, and right. Frames are extracted from these videos and labeled to create train, validation, and test datasets. Data augmentation techniques are used to enhance the dataset. This task is performed using Roboflow.
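Roboflow handles the train/validation/test split automatically; as a rough sketch of the equivalent logic (paths and split ratios here are illustrative, not the project's actual settings), the extracted frames could be partitioned like this:

```python
import random
import shutil
from pathlib import Path

def split_dataset(frames_dir, out_dir, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle extracted frames and copy them into train/valid/test folders."""
    frames = sorted(Path(frames_dir).glob("*.jpg"))
    random.Random(seed).shuffle(frames)  # fixed seed keeps the split reproducible
    n_train = int(len(frames) * ratios[0])
    n_valid = int(len(frames) * ratios[1])
    splits = {
        "train": frames[:n_train],
        "valid": frames[n_train:n_train + n_valid],
        "test": frames[n_train + n_valid:],
    }
    for name, files in splits.items():
        dest = Path(out_dir) / name / "images"
        dest.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, dest / f.name)
    return {name: len(files) for name, files in splits.items()}
```

Keeping the test split untouched by augmentation (which Roboflow applies only to training images) is what makes the later evaluation numbers meaningful.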
The dataset is used to train the cow detection model with Ultralytics YOLOv8 for 50 epochs at an image size of 640.
# Start training from a pretrained *.pt model on an Apple Silicon GPU (MPS backend)
yolo detect train data=<path/to/data.yaml> model=yolov8n.pt epochs=50 imgsz=640 device=mps
The trained model (best.pt) is validated on the validation set, reporting metrics such as mAP, precision, recall, the confusion matrix, and confidence curves.
yolo detect val model=<path/to/best.pt> data=<path/to/data.yaml>
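Under the hood, these metrics compare predicted and ground-truth boxes via intersection-over-union (IoU): a prediction counts as a true positive when its IoU with a ground-truth box exceeds a threshold. A minimal illustration of the core quantities (not the Ultralytics internals):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

mAP is then the precision averaged over recall levels (and, for mAP50-95, over IoU thresholds from 0.5 to 0.95).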
We perform inference to detect the cow of interest in the barn, achieving 85% to 95% accuracy on the initial videos.
yolo predict model=path/to/best.pt source="path/to/*.MP4"
The trained model is exported to ONNX format for deployment across various platforms and devices.
yolo export model=path/to/best.pt format=onnx # export custom trained model
The custom trained model is used to track the cow of interest in the videos, maintaining 85% to 95% performance accuracy.
yolo track model=path/to/best.pt source="path/to/*.MP4" tracker="bytetrack.yaml"
During tracking, crops of the cow's positions in the videos are saved, with outputs organized by camera orientation.
yolo task=detect mode=predict model=path/to/best.pt source="path/to/*.mp4" save=True save_txt=True save_crop=True hide_labels=True hide_conf=True conf=0.8
LabelImg is used to generate XML files for each image in the train, validation, and test datasets for future use in training the cow identification model.
pip3 install pyqt5 lxml
pyrcc5 -o libs/resources.py resources.qrc
python3 labelImg.py
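LabelImg writes annotations in Pascal VOC XML, while YOLO training expects one text file per image with normalized `class cx cy w h` lines. A minimal conversion sketch using only the standard library (the class mapping is illustrative):

```python
import xml.etree.ElementTree as ET

def voc_to_yolo(xml_text, class_map):
    """Convert one Pascal VOC annotation into YOLO-format label lines."""
    root = ET.fromstring(xml_text)
    w = float(root.findtext("size/width"))
    h = float(root.findtext("size/height"))
    lines = []
    for obj in root.iter("object"):
        cls = class_map[obj.findtext("name")]
        x1 = float(obj.findtext("bndbox/xmin"))
        y1 = float(obj.findtext("bndbox/ymin"))
        x2 = float(obj.findtext("bndbox/xmax"))
        y2 = float(obj.findtext("bndbox/ymax"))
        # YOLO format: center x/y and width/height, normalized to [0, 1]
        cx = (x1 + x2) / 2 / w
        cy = (y1 + y2) / 2 / h
        bw = (x2 - x1) / w
        bh = (y2 - y1) / h
        lines.append(f"{cls} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
    return lines
```

Each returned line would be written to a `.txt` file with the same stem as the corresponding image.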
The custom cow detection model is evaluated on new, unseen data to assess its performance with unknown cows.
A larger dataset of individual cows is created by saving crops of each tracked cow as images, labeled with the cow's name. This is done by processing videos of each cow through the detection model.
yolo task=detect mode=predict model="path/to/best.pt" source="path/to/*.MP4" save=True vid_stride=10 conf=0.8 save_txt=True save_crop=True hide_labels=True hide_conf=True
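YOLO saves the crops from each run under a per-class folder inside the run directory, so building the identification dataset amounts to gathering the crops from each cow's videos into one folder per cow. A sketch of that step with stdlib tools (the `runs` mapping and directory layout are illustrative):

```python
import shutil
from pathlib import Path

def collect_crops(runs, out_dir):
    """Copy saved crops into one folder per cow for classifier training.

    `runs` maps a cow's name to the crops folder produced by running
    that cow's videos through the detection model.
    """
    counts = {}
    for cow, crop_dir in runs.items():
        dest = Path(out_dir) / cow
        dest.mkdir(parents=True, exist_ok=True)
        for i, img in enumerate(sorted(Path(crop_dir).glob("*.jpg"))):
            # prefix filenames with the cow's name so they stay unique
            shutil.copy(img, dest / f"{cow}_{i:05d}.jpg")
        counts[cow] = len(list(dest.glob("*.jpg")))
    return counts
```

The resulting one-folder-per-cow layout is the standard directory structure expected by image-classification training.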
The new dataset is used to train the cow identification model using YOLOv8.
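Once the identification model is trained, its predictions can be summarized per cow as well as overall, which highlights whether any individual cow is systematically misidentified. A minimal sketch with hypothetical label lists:

```python
from collections import defaultdict

def per_class_accuracy(true_labels, pred_labels):
    """Fraction of correct predictions for each cow, plus overall accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(true_labels, pred_labels):
        total[t] += 1
        correct[t] += int(t == p)
    per_cow = {cow: correct[cow] / total[cow] for cow in total}
    overall = sum(correct.values()) / sum(total.values())
    return per_cow, overall
```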
This project successfully develops a robust cow detection and identification system using state-of-the-art computer vision techniques. The models trained in this project can significantly enhance the monitoring and management of cows in a barn environment, contributing to improved animal welfare.
This work is part of the research activities at the McGill-UQAM Research and Innovation Chair in Animal Welfare and Artificial Intelligence, conducted in the bioinformatics lab at the University of Quebec at Montreal.
For more information, visit well-e.org.
Francois Gonothi Toure
Graduate Student Researcher,
McGill-UQAM Research and Innovation Chair in Animal Welfare and Artificial Intelligence
E-mail: [email protected]