Home
This page explains in more detail how to use this repo to detect the pupil in your videos.
Your videos should consist of close-up images of a mouse's eye like in the image below. I expect that detection will not work well for other views, e.g. videos including the snout.
The network is trained to detect 4 edges of the pupil (top, bottom, left, right) and approximately the centers of the top and bottom eyelids. The image below is one example showing these 6 points as marked by me for training (dots) and as predicted by the network (+).

Install DeepLabCut (DLC) as explained here if you haven't done so before. If you run into problems, the Forum is a good place to get answers and find solutions. A useful way to find out about specific uses of DLC functions is to type `deeplabcut._function name_?`. For this to work you first need to perform steps 1-5 from 2. Run DLC to label videos.
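For example, once IPython is running and deeplabcut is imported (steps 1-5 of the list further down), appending `?` prints a function's documentation; `analyze_videos` below is just one example:

```python
# In IPython (not a plain Python shell), '?' shows a function's
# docstring and call signature
import deeplabcut

deeplabcut.analyze_videos?
```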
Open \DLC data\Pupil-Sylvia-2020-04-27\config.yaml in a text editor. Change the following:
- `project_path`: the path to the folder containing `config.yaml`
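For illustration, the relevant entry in config.yaml might then look as follows; the folder location is only an example, use wherever you saved the project:

```yaml
# config.yaml (excerpt); example location only
project_path: C:\DLC data\Pupil-Sylvia-2020-04-27
```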
The following steps are substeps of the DLC pipeline described in detail here. There are different ways to run this code; here I describe my way (as a Windows user).
1. Open Command Prompt (run as administrator).
2. `activate DLC-GPU` (if you have installed the version of DLC that uses GPUs, otherwise change this statement accordingly)
3. (optional) `pip install --upgrade deeplabcut`
4. `ipython`
5. `import deeplabcut`
6. `config_path = r'_path to config.yaml_'` (the `r` is only necessary for Windows)
7. `videos = [_list of paths to the videos you want to analyse_]`
8. `deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)` (the last argument saves the results also as a .csv file, which can be easily read by many software packages, e.g. Matlab; otherwise only an .h5 file is saved)
9. (optional) `deeplabcut.filterpredictions(config_path, videos)` (this smooths the trajectories of the detected markers; look up the function for details). A consolidated sketch of steps 5-9 follows below.
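Putting it together, a full session (steps 5-9) might look like this sketch; the paths are placeholders you need to replace with your own:

```python
import deeplabcut

# the r'...' (raw string) prefix keeps Windows backslashes intact
config_path = r'C:\DLC data\Pupil-Sylvia-2020-04-27\config.yaml'  # placeholder
videos = [r'C:\data\mouse1_eye.mp4',                              # placeholders
          r'C:\data\mouse2_eye.mp4']

# run the trained network; save_as_csv=True writes a .csv next to the .h5
deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)

# optional: smooth the detected marker trajectories
deeplabcut.filterpredictions(config_path, videos)
```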
- You need to have CUDA, a GPU driver, and Python 3 installed. Instructions can be found here
- Open a Google Colab notebook
- Type the following to load deeplabcut and provide settings:
```python
!pip install deeplabcut
%tensorflow_version 1.x
import os
os.environ["DLClight"] = "True"
import deeplabcut
```
- Mount Google Drive to give the notebook access to it. You should put the videos you want to analyse and the trained network into your Google Drive.
```python
from google.colab import drive
drive.mount('/content/drive/')
%cd '/content/drive/My Drive/'
```
- Provide the paths to the trained network and the videos you wish to annotate. Then run video analysis.
```python
videofile_path = [r'/_path_to_your_video_']  # enter the paths of your videos OR a folder to grab frames from
path_config_file = r'/_path_to_your_config_'
deeplabcut.analyze_videos(path_config_file, videofile_path, save_as_csv=True)
```
- To create labelled videos:
```python
deeplabcut.create_labeled_video(path_config_file, videofile_path)
```
Start Anaconda, then run:
```
activate DLC-GPU
ipython
import deeplabcut
deeplabcut.launch_dlc()
```
The steps here continue from the previous paragraph.
10. `deeplabcut.plot_trajectories(config_path, videos, showfigures=True)` and/or `deeplabcut.plot_trajectories(config_path, videos, showfigures=True, filtered=True)`
11. `deeplabcut.create_labeled_video(config_path, videos, save_frames=True, filtered=True)` (delete the last input argument if the trajectories were not filtered). If `save_frames=True`, every video frame will be saved as an image together with the predicted markers. You can watch them, e.g., by loading the folder containing all images into ImageJ. To check the performance of the results, I found it much more useful to be able to scroll through each image rather than just watching a video (e.g. .mp4). (Note: DLC will still make a video from the saved images. The resulting video looked very bad in my case. However, the video resulting from calling this function with `save_frames=False` looked fine.)
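Besides these visual checks, the .csv saved by `analyze_videos` (with `save_as_csv=True`) can be inspected numerically with pandas; the path below is a placeholder, and note that DLC writes a three-row column header (scorer, bodyparts, coords):

```python
import pandas as pd

# DLC's .csv output has a 3-row header: scorer / bodyparts / coords (x, y, likelihood)
df = pd.read_csv(r'_path to the DLC output .csv_', header=[0, 1, 2], index_col=0)

# x-coordinates of all markers, one column per body part
x = df.xs('x', axis=1, level='coords')
print(x.head())
```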
If the error `Command 'ffprobe -i "xxxxxxxxxxxxx" -show_entries format=duration -v quiet -of csv="p=0"' returned non-zero exit status 1.` pops up, check whether FFmpeg is installed (https://ffmpeg.org/) and add its directory to the PATH environment variable: right-click This PC, choose Properties, open 'Advanced system settings', click 'Environment Variables', double-click 'Path', and add the directory, e.g. C:\ffmpeg\bin.
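A quick way to verify from Python that FFmpeg is reachable on the PATH is a minimal check like this sketch, which just runs `ffmpeg -version` and reports the outcome:

```python
import subprocess

# run 'ffmpeg -version'; a FileNotFoundError means ffmpeg is not on the PATH
try:
    result = subprocess.run(["ffmpeg", "-version"],
                            capture_output=True, text=True, check=True)
    print(result.stdout.splitlines()[0])  # first line reports the installed version
except FileNotFoundError:
    print("ffmpeg not found: add its bin folder (e.g. C:\\ffmpeg\\bin) to PATH")
```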
- Choose the video(s) to use for retraining (you will only use a few frames of each). These are typically those where the results were bad. You don't need to add all bad videos if they look similar and the errors were qualitatively similar.
- Perform steps 1-6 from 2. Run DLC to label videos.
- `videos = [_list of paths to the videos you want to use for retraining_]`
- `deeplabcut.add_new_videos(config_path, videos)`
- `deeplabcut.extract_outlier_frames(config_path, videos, outlieralgorithm='manual')` (if you want to choose each frame manually using a GUI) or `deeplabcut.extract_outlier_frames(config_path, videos, outlieralgorithm='uncertain')` (you may want to look up this function in detail for more options for finding good frames for retraining)
- `deeplabcut.refine_labels(config_path)`. This will open a GUI where you can load the previously selected frames and put the markers on each frame. Be careful to use locations for each marker that are similar to the locations I used (see the image above; more examples here). Learn how to use the GUI here.
- `deeplabcut.merge_datasets(config_path)`
- (optional) `deeplabcut.check_labels(config_path)` (this will save the labelled frames as images so you can check whether everything was correct)
- `deeplabcut.create_training_dataset(config_path)`
- `deeplabcut.train_network(config_path)`
- `deeplabcut.evaluate_network(config_path, plotting=True)` (a consolidated sketch of these commands is shown below)
- Perform steps 7-11 from Steps to run trained network on your videos
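For reference, here is the whole retraining pass as one sketch, assuming `config_path` is defined as before; the video paths are placeholders and you should pick one of the two outlier algorithms:

```python
import deeplabcut

config_path = r'_path to config.yaml_'
videos = [r'_path to a video with bad results_']

# register the new videos in the project
deeplabcut.add_new_videos(config_path, videos)

# pick frames for relabelling: manually via a GUI ...
deeplabcut.extract_outlier_frames(config_path, videos, outlieralgorithm='manual')
# ... or automatically, e.g. based on uncertain predictions:
# deeplabcut.extract_outlier_frames(config_path, videos, outlieralgorithm='uncertain')

# correct the marker positions on the extracted frames, then merge the data sets
deeplabcut.refine_labels(config_path)
deeplabcut.merge_datasets(config_path)

# optional sanity check: saves the labelled frames as images
deeplabcut.check_labels(config_path)

# rebuild the training set, retrain, and evaluate
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)
deeplabcut.evaluate_network(config_path, plotting=True)
```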