SPEED-light - Software Processing and Extraction of Eye tracking Data by LabSCoC (University of L'Aquila, Italy)
This project provides a graphical user interface (GUI) for processing and analyzing eye-tracking data from Pupil Labs devices. It allows users to load recording data, segment it based on events, calculate various metrics (related to fixations, blinks, pupil diameter, and saccades), generate plots, and create an overlay video with gaze points and event information.
The application uses Tkinter to create a user-friendly interface. Here's a breakdown of the main components and how they interact:
- Data Preparation:
  - The user selects a `Data` folder (from the Pupil Labs recording) and an optional `Enrichment` folder.
  - The application creates a unified `files` directory, intelligently merging files from both sources. For example, it prioritizes `gaze.csv` and `fixations.csv` from the enrichment folder if available.
- Event Editing (Optional):
  - A simple event editor allows the user to review the recording, add new event markers, or remove existing ones. This helps in refining the data segmentation.
- Analysis & Output Generation:
  - The script segments the data based on the events defined in `events.csv` (see the segmentation sketch after this list).
  - For each segment, it calculates 16 different eye-tracking metrics (e.g., number of fixations, mean pupil diameter).
  - It generates PDF plots for each event, including gaze/fixation heatmaps and pupillometry time series.
  - It produces a final summary video (`final_video_overlay.mp4`) that shows the original recording overlaid with gaze points, gaze path, active events, and a live pupil diameter chart.
  - All calculated metrics are saved to an Excel file (`Speed_Lite_Results.xlsx`).
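To make the segmentation step concrete, here is a minimal, illustrative pandas sketch (not the actual `gui.py` code) that splits fixations into per-event segments and computes two example metrics. The column names (`timestamp [ns]`, `name`, `start timestamp [ns]`, `duration [ms]`) are assumptions based on the usual Pupil Labs CSV layout and may need adjusting for your export.

```python
# Illustrative sketch only: event-based segmentation of an eye-tracking export.
import pandas as pd

events = pd.read_csv("files/events.csv")        # one row per event marker
fixations = pd.read_csv("files/fixations.csv")  # one row per detected fixation

events = events.sort_values("timestamp [ns]").reset_index(drop=True)

segments = []
for i in range(len(events) - 1):
    start = events.loc[i, "timestamp [ns]"]
    end = events.loc[i + 1, "timestamp [ns]"]
    # Keep the fixations that start inside this event window.
    seg = fixations[(fixations["start timestamp [ns]"] >= start) &
                    (fixations["start timestamp [ns]"] < end)]
    segments.append({
        "event": events.loc[i, "name"],
        "n_fixations": len(seg),                                    # example metric 1
        "mean_fixation_duration_ms": seg["duration [ms]"].mean(),   # example metric 2
    })

print(pd.DataFrame(segments))
```

The real pipeline computes 16 such metrics per segment and also covers blinks, saccades, and pupil diameter.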
The application is designed to work with the data structure produced by Pupil Labs recording software. It primarily requires the following folders and files:
- Data Folder: The main folder containing the raw recording data (e.g., `gaze.csv`, `fixations.csv`, `blinks.csv`, `world_timestamps.csv`, and the `external.mp4` video). A quick way to check for these files is sketched below.
- Enrichment Folder (Optional): A folder containing enriched or post-processed data, such as a corrected `gaze.csv` or `fixations.csv`.
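As a quick sanity check before loading, a hypothetical helper like the one below (not part of the project) can verify that a selected Data folder contains the files listed above:

```python
# Hypothetical helper: report which expected Pupil Labs recording files are missing.
from pathlib import Path

EXPECTED = ["gaze.csv", "fixations.csv", "blinks.csv",
            "world_timestamps.csv", "external.mp4"]

def missing_files(data_folder: str) -> list[str]:
    """Return the expected recording files that are not present in data_folder."""
    folder = Path(data_folder)
    return [name for name in EXPECTED if not (folder / name).exists()]

# Example:
# print(missing_files("/path/to/recording"))
```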
Before you begin, ensure you have Anaconda installed on your system.
- Anaconda: You can download and install the Anaconda Distribution (for Python 3.x) from anaconda.com/products/distribution.
- Clone the repository
  Open a terminal (or Anaconda Prompt on Windows) and clone this repository to your local machine with `git clone https://github.com/your-username/SPEED-light.git`, then `cd SPEED-light`. (Replace `your-username` with the correct repository path if needed.)
- Create and activate the Anaconda environment
  Create a new virtual environment for this project to manage dependencies in isolation: `conda create --name speedlight python=3.9`. Then activate it and install pip and git: `conda activate speedlight`, `conda install pip`, `conda install git`.
- Install dependencies
  With the environment activated, install the required Python libraries listed in the `requirements.txt` file: `pip install -r requirements.txt`.
- Running the Application:
  - Execute the `gui.py` script from your terminal: `python gui.py`
  - The GUI window will appear.
  - Use the "Browse" buttons to select the `Data`, `Enrichment` (optional), and `Output` folders.
  - Click "1. Load and Prepare Data" to initialize the process.
  - (Optional) Click "2. Edit Events" to open the event editor.
  - Click "3. Extract Features, Plots & Video" to run the full analysis pipeline.
The `gui.py` script contains the entire implementation. Key parts include:
- Tkinter Setup: Creates the main application window and all GUI elements.
- Data Preparation: The `prepare_working_directory` function handles the logic for merging data sources (a simplified sketch of this kind of merge follows this list).
- Metrics Calculation: `calculate_metrics` computes the 16 features for each data segment.
- Plotting: Functions like `generate_heatmap_pdf` and `generate_pupil_timeseries_pdf` create the visual outputs.
- Video Generation: `generate_full_video` uses OpenCV to render the final video with overlays.
- Event Editor: The `LiteEventEditor` class provides an interactive way to manage event markers.
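For orientation, here is a simplified sketch of the merge behavior described above: raw files are copied into a unified `files` directory, and enrichment versions of `gaze.csv` and `fixations.csv` take priority when present. This is an illustration, not the actual `prepare_working_directory` implementation.

```python
# Illustrative merge logic: build a unified 'files' directory, preferring
# enrichment versions of gaze.csv and fixations.csv when they exist.
import shutil
from pathlib import Path
from typing import Optional

def build_files_dir(data_dir: str, enrichment_dir: Optional[str], out_dir: str) -> Path:
    files_dir = Path(out_dir) / "files"
    files_dir.mkdir(parents=True, exist_ok=True)

    # Start with everything from the raw Data folder.
    for src in Path(data_dir).iterdir():
        if src.is_file():
            shutil.copy2(src, files_dir / src.name)

    # Overwrite with enrichment versions where available.
    if enrichment_dir:
        for name in ("gaze.csv", "fixations.csv"):
            src = Path(enrichment_dir) / name
            if src.exists():
                shutil.copy2(src, files_dir / name)

    return files_dir
```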
- The script is designed to be easily customizable. You can modify the analysis parameters or add new metrics as needed.
- The analysis runs in a separate thread to keep the GUI responsive.
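The standard Tkinter pattern for this looks roughly like the sketch below; the names (`run_analysis`, `status`) are placeholders, not the ones used in `gui.py`.

```python
# Run the long analysis in a worker thread so the Tkinter mainloop stays responsive.
import threading
import tkinter as tk

def run_analysis(root: tk.Tk, status: tk.StringVar) -> None:
    # ... long-running feature extraction, plotting, video rendering ...
    # Tkinter widgets must only be touched from the main thread, so hand the
    # UI update back to the mainloop with root.after() instead of setting it here.
    root.after(0, lambda: status.set("Analysis finished"))

root = tk.Tk()
status = tk.StringVar(value="Idle")
tk.Label(root, textvariable=status).pack(padx=20, pady=10)
tk.Button(
    root,
    text="Run analysis",
    command=lambda: threading.Thread(
        target=run_analysis, args=(root, status), daemon=True
    ).start(),
).pack(padx=20, pady=10)
root.mainloop()
```

The key points are `daemon=True`, so the worker does not block application exit, and `root.after()`, which schedules UI updates on the main thread.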
If you use this script in your research or work, please cite the following publications:
- Lozzi, D.; Di Pompeo, I.; Marcaccio, M.; Ademaj, M.; Migliore, S.; Curcio, G. SPEED: A Graphical User Interface Software for Processing Eye Tracking Data. NeuroSci 2025, 6, 35. 10.3390/neurosci6020035
- Lozzi, D.; Di Pompeo, I.; Marcaccio, M.; Alemanno, M.; Krüger, M.; Curcio, G.; Migliore, S. AI-Powered Analysis of Eye Tracker Data in Basketball Game. Sensors 2025, 25, 3572. 10.3390/s25113572
Please also cite the Pupil Labs publication, as requested on their website: https://docs.pupil-labs.com/neon/data-collection/publications-and-citation/
- Baumann, C., & Dierkes, K. (2023). Neon accuracy test report. Pupil Labs, 10. 10.5281/zenodo.10420388
This code was written using vibe coding with Google Gemini Pro.