Title
Quantitative Currency Evaluation in Low-Resource Settings through Pattern Analysis to Assist Visually Impaired Users
Authors
Md Sultanul Islam Ovi, Mainul Hossain, Md Badsha Biswas
Abstract
Currency recognition systems often overlook usability and authenticity assessment, especially in low-resource environments where visually impaired users and offline validation are common. While existing methods focus on denomination classification, they typically ignore physical degradation and forgery, limiting their applicability in real-world conditions. This paper presents a unified framework for currency evaluation that integrates three modules: denomination classification using lightweight CNN models, damage quantification through a novel Unified Currency Damage Index (UCDI), and counterfeit detection using feature-based template matching. The dataset consists of over 82,000 annotated images spanning clean, damaged, and counterfeit notes. Our Custom_CNN model achieves high classification performance with low parameter count. The UCDI metric provides a continuous usability score based on binary mask loss, chromatic distortion, and structural feature loss. The counterfeit detection module demonstrates reliable identification of forged notes across varied imaging conditions. The framework supports real-time, on-device inference and addresses key deployment challenges in constrained environments. Results show that accurate, interpretable, and compact solutions can support inclusive currency evaluation in practical settings.
Conference
ICDM 2025 RDM Workshop
BibTeX
@article{ovi2025quantitative,
title={Quantitative Currency Evaluation in Low-Resource Settings through Pattern Analysis to Assist Visually Impaired Users},
author={Ovi, Md Sultanul Islam and Hossain, Mainul and Biswas, Md Badsha},
journal={arXiv preprint arXiv:2509.06331},
year={2025}
}

- Dataset details are provided in the paper.
- Bangla Currency Dataset
The project provides a unified framework for:
- Denomination classification using lightweight and pre-trained CNNs
- Damage quantification using binary, chromatic, and symbolic features (a score sketch follows below)
- Counterfeit detection through structural feature matching
All modules are implemented in Jupyter notebooks and organized for reproducibility.
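To make the damage-quantification idea concrete, the sketch below combines the three ingredients named in the paper (binary mask loss, chromatic distortion, and structural feature loss) into a single OpenCV-based score. The weights, thresholds, and exact component definitions here are assumptions for illustration only; the authoritative UCDI computation is the one implemented in 4_usability_analysis.ipynb.

```python
import cv2
import numpy as np

def damage_score(note_bgr, reference_bgr, weights=(0.4, 0.3, 0.3)):
    """Toy UCDI-style damage score in [0, 1]; higher means more damage.
    Component definitions and weights are illustrative assumptions."""
    h, w = reference_bgr.shape[:2]
    note_bgr = cv2.resize(note_bgr, (w, h))
    gray_ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    gray_note = cv2.cvtColor(note_bgr, cv2.COLOR_BGR2GRAY)

    # 1) Binary mask loss: fraction of the reference note area missing from the test note
    ref_mask = cv2.threshold(gray_ref, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    note_mask = cv2.threshold(gray_note, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    missing = cv2.bitwise_and(ref_mask, cv2.bitwise_not(note_mask))
    mask_loss = float(missing.sum()) / max(float(ref_mask.sum()), 1.0)

    # 2) Chromatic distortion: mean per-pixel difference in Lab colour space, roughly normalised
    lab_ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    lab_note = cv2.cvtColor(note_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    chroma = float(np.linalg.norm(lab_ref - lab_note, axis=2).mean()) / 255.0

    # 3) Structural feature loss: share of reference ORB keypoints with no match in the note
    orb = cv2.ORB_create(500)
    kp_ref, des_ref = orb.detectAndCompute(gray_ref, None)
    _, des_note = orb.detectAndCompute(gray_note, None)
    if des_ref is None or des_note is None:
        struct_loss = 1.0
    else:
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_ref, des_note)
        struct_loss = 1.0 - len(matches) / max(len(kp_ref), 1)

    score = weights[0] * mask_loss + weights[1] * chroma + weights[2] * struct_loss
    return float(np.clip(score, 0.0, 1.0))
```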
├── data/ # Torn Results
├── data_torn/ # Torn note samples for usability analysis
├── standard_notes/ # Reference templates for alignment
├── 0_dataset_preparation.ipynb # Initial dataset cleaning and deduplication
├── 1_data_split_and_augmentation.ipynb # Split and augment datasets
├── 2_classification_all_datasets.ipynb # Run classification across 4 CNN models
├── 3_customCNN_all_datasets.ipynb # Train and evaluate a custom lightweight CNN
├── 4_usability_analysis.ipynb # Damage assessment and UCDI score generation
├── 5_fake_currency_detection.ipynb # Counterfeit detection using template matching
├── LICENSE
├── README.md
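For orientation, here is a minimal PyTorch sketch of the kind of lightweight classifier trained in 3_customCNN_all_datasets.ipynb. The layer widths, depth, class count, and input resolution below are placeholders, not the published Custom_CNN architecture.

```python
import torch
import torch.nn as nn

class LightweightCNN(nn.Module):
    """Small denomination classifier; layer widths and depth are illustrative only."""
    def __init__(self, num_classes: int = 9):  # class count is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling keeps the parameter count low
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = LightweightCNN()
    logits = model(torch.randn(1, 3, 224, 224))  # 224x224 input size is an assumption
    print(logits.shape)  # torch.Size([1, 9])
```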
Requirements:
- Python 3.8+
- PyTorch
- OpenCV
- scikit-learn, seaborn, matplotlib
You can recreate the virtual environment using:
python -m venv dmp
source dmp/bin/activate

Each numbered notebook represents a pipeline stage:
- Start with 0_dataset_preparation.ipynb to clean and deduplicate the data.
- Continue through 1_data_split_and_augmentation.ipynb, then train using the 2_ and 3_ series notebooks.
- Run 4_ for damage quantification and 5_ for counterfeit detection (see the sketch below).
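The counterfeit-detection stage relies on feature-based matching against the reference templates in standard_notes/. The sketch below shows that general approach with ORB features in OpenCV; the feature count, ratio test, decision threshold, and file names are assumptions, and the actual logic in 5_fake_currency_detection.ipynb may differ.

```python
import cv2

def template_match_score(candidate_path, template_path, ratio=0.75):
    """Fraction of template ORB keypoints with a good match in the candidate note.
    The ratio-test value is illustrative, not the paper's setting."""
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    candidate = cv2.imread(candidate_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp_t, des_t = orb.detectAndCompute(template, None)
    _, des_c = orb.detectAndCompute(candidate, None)
    if des_t is None or des_c is None:
        return 0.0

    # Lowe-style ratio test over 2-nearest-neighbour Hamming matches
    pairs = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des_t, des_c, k=2)
    good = [p for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / max(len(kp_t), 1)

if __name__ == "__main__":
    # File names and the 0.3 decision threshold are hypothetical
    score = template_match_score("sample_note.jpg", "standard_notes/100_front.jpg")
    print("genuine-looking" if score > 0.3 else "possibly counterfeit", round(score, 3))
```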
- All image data used are either public or synthetic.
- This repository has been anonymized for double-blind review.
- All figures, metrics, and scores reported in the paper can be regenerated from the notebooks.
Released for academic, non-commercial research use only.
For questions, feel free to reach out.


