The increasingly popular use of aesthetic filters on social media has introduced new challenges in the field of Image Quality Assessment (IQA), as traditional distortion-based metrics are not designed to capture the subjective and content-aware nature of filter-based enhancements.
This work presents a no-reference IQA method that combines deep feature extraction with gradient-boosted decision trees to assess the perceptual quality of filtered images. Specifically, the proposed model employs three pre-trained deep learning networks to extract an image feature vector, which is fed into an ensemble of eight LightGBM models. The final Mean Opinion Score (MOS) prediction is obtained by averaging the outputs of these individual models.
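To illustrate the inference pipeline, below is a minimal sketch of the ensemble-averaging step. The function name, model filenames, and loading scheme are hypothetical placeholders for illustration, not the repository's actual API:

```python
import numpy as np
import joblib

def predict_mos(feature_vector: np.ndarray, model_paths: list[str]) -> float:
    """Average the MOS predictions of an ensemble of LightGBM regressors.

    `feature_vector` is the concatenated deep-feature descriptor of one image;
    `model_paths` points to the eight serialized LightGBM models (assumed
    saved with joblib; adapt to the repository's actual storage format).
    """
    predictions = []
    for path in model_paths:
        model = joblib.load(path)  # one LightGBM regressor of the ensemble
        predictions.append(model.predict(feature_vector[None, :])[0])
    return float(np.mean(predictions))  # ensemble average = final MOS
```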
Ranked 3rd in the Image Manipulation Quality Assessment Challenge at the International Conference on Visual Communication and Image Processing (VCIP) 2025, the proposed method outperforms the baseline across all metrics, achieving a Pearson Linear Correlation Coefficient (PLCC) of 0.847 vs. 0.543, a Spearman Rank-Order Correlation Coefficient (SROCC) of 0.839 vs. 0.516, and a Root Mean Squared Error (RMSE) of 0.076 vs. 0.121.
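For reference, the three evaluation metrics can be computed as in the following sketch (using SciPy and NumPy; this is not the challenge's official evaluation script):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(predicted_mos: np.ndarray, ground_truth_mos: np.ndarray) -> dict:
    """Compute PLCC, SROCC, and RMSE between predicted and ground-truth MOS."""
    plcc, _ = pearsonr(predicted_mos, ground_truth_mos)    # linear correlation
    srocc, _ = spearmanr(predicted_mos, ground_truth_mos)  # rank correlation
    rmse = float(np.sqrt(np.mean((predicted_mos - ground_truth_mos) ** 2)))
    return {"PLCC": plcc, "SROCC": srocc, "RMSE": rmse}
```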
The paper describing the proposed method has been accepted for presentation at VCIP 2025.
You can access the camera-ready version here: LightGBM_Ensemble_IQA_CameraReady.pdf.
Clone this repository to your local machine:

```bash
git clone https://github.com/sergio-sanz-rodriguez/LightGBM-Ensemble-IQA.git
```

Move into the project directory:

```bash
cd LightGBM-Ensemble-IQA
```

(Optional) If you already cloned the repository before and want to get the latest updates:

```bash
git pull
```

The image dataset and the baseline model can be downloaded from the Image Manipulation Quality Assessment Challenge (IMQAC) website.
Once the ZIP file has been downloaded, unzip it and move its contents into the project directory.
After extraction, your repository should have the following structure:
```
LightGBM-Ensemble-IQA/
├── images/
├── models/
├── paper/
├── trained_3072to128/
├── trained_3072to128_weights/
├── utils/
└── VCIP_IMQA/
    └── VCIP/
        ├── EQ420_image/
        │   ├── img1.png
        │   ├── img2.png
        │   └── ...
        ├── IMQA/
        └── Labels/
```
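To confirm the dataset landed in the right place, you can run a quick check like the following (a minimal sketch assuming the folder names shown in the tree above; adjust if your extraction differs):

```python
from pathlib import Path

# Folders the project expects, per the directory tree above
required = [
    "images", "models", "utils",
    "VCIP_IMQA/VCIP/EQ420_image",
    "VCIP_IMQA/VCIP/IMQA",
    "VCIP_IMQA/VCIP/Labels",
]

root = Path(".")
for folder in required:
    status = "OK" if (root / folder).is_dir() else "MISSING"
    print(f"{status:8s} {folder}")
```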
Follow these steps to create and configure a clean Python virtual environment (.venv).
Install Python 3.11, either with winget:

```powershell
winget install -e --id Python.Python.3.11
```

or by downloading the Windows Installer (64-bit) from python.org/downloads/release/python-3110.

During setup, make sure to check:

- Add python.exe to PATH
- Install launcher for all users
Verify the installation:

```powershell
py --version
```

Go to the project folder and create the environment:
```powershell
py -3.11 -m venv .venv
```

Activate it (PowerShell):

```powershell
.\.venv\Scripts\Activate.ps1
```

If you see an error about script execution being disabled, run once:
```powershell
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
```

Alternatively, from Git Bash:

```bash
source .venv/Scripts/activate
```

Upgrade the core packaging tools:

```powershell
python -m pip install -U pip setuptools wheel
```

Install PyTorch and torchvision, either the CUDA 12.1 build (GPU):

```powershell
pip install --index-url https://download.pytorch.org/whl/cu121 torch==2.5.1 torchvision==0.20.1
```

or the CPU-only build:

```powershell
pip install --index-url https://download.pytorch.org/whl/cpu torch==2.5.1 torchvision==0.20.1
```

Ensure binary wheels for heavy packages:
```powershell
pip install --only-binary=:all: numpy==1.26.4 matplotlib==3.10.7
```

Then install all requirements:
```powershell
pip install -r requirements.txt
```

PyTorch + CUDA check:
python -c "import torch; print(torch.__version__, '| CUDA available:', torch.cuda.is_available(), '| Device:', torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'CPU')"Import all required files:
python -c "import numpy, matplotlib, einops, pandas, scipy, timm, torch, torchvision, tqdm, visdom, joblib, optuna, sklearn, lightgbm, memory_profiler, cv2, skimage, torchinfo; print('All good! All libraries imported successfully.')"python -m pip install -U ipykernel jupyter ipywidgetspython -m ipykernel install --user --name ".venv" --display-name "Python 3.11 (.venv)"