Welcome to the Violence Detection App: a lightweight, AI-powered system that monitors video streams and detects violent or harmful activities in real time using OpenAI's CLIP, OpenCV, and PyTorch.
Ideal for surveillance, content moderation, and safety automation.
- 🎥 Real-time video stream analysis using webcam or video file (see the capture-loop sketch after this list)
- 🔍 Violence/harm detection using zero-shot CLIP model
- 📸 Automatic frame capture upon detection
- 📂 Easily configurable detection sensitivity via `settings.yaml`
- ⚙️ Modular and extensible architecture
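These features map onto a simple capture-and-classify loop. Below is a minimal, illustrative sketch that assumes OpenCV's `cv2.VideoCapture`; the `classify_frame` helper (sketched in the detection section below) and the `detections` output folder are hypothetical names, not the app's actual API:

```python
# Illustrative capture loop: read frames from a webcam or video file,
# classify each one, and save frames that are flagged as violent.
# `classify_frame` and the "detections" folder are assumed names.
import os
import cv2

def run_stream(source=0, output_dir="detections"):
    os.makedirs(output_dir, exist_ok=True)
    cap = cv2.VideoCapture(source)  # 0 = default webcam, or pass a video file path
    frame_id = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break  # end of file or camera disconnected
        label, confidence = classify_frame(frame)
        if label == "violence":
            # Automatic frame capture upon detection
            cv2.imwrite(os.path.join(output_dir, f"frame_{frame_id}.jpg"), frame)
        frame_id += 1
    cap.release()
```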
The app uses OpenAI's CLIP model to match visual frames against text descriptions such as "a photo of violence" or "a photo of normal activity".
It calculates the similarity between the image and text features and selects the most likely label.
A confidence threshold is used to determine whether a frame should be flagged.
✅ If the confidence for the violence label is below the threshold, the frame is considered normal and the stream keeps running smoothly.
🚨 If the confidence meets or exceeds the threshold, the frame is flagged and automatically captured.
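A rough sketch of this zero-shot matching and thresholding, assuming the openai `clip` package (the `classify_frame` name and the exact prompt wording are illustrative, not necessarily what the app uses):

```python
# Illustrative zero-shot detection step using OpenAI's CLIP.
# The prompts and the 0.24 threshold mirror the configuration shown below.
import clip
import cv2
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

prompts = ["a photo of violence", "a photo of normal activity"]
text_tokens = clip.tokenize(prompts).to(device)

def classify_frame(frame, threshold=0.24):
    # OpenCV frames are BGR; CLIP's preprocess expects an RGB PIL image
    rgb = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    image = preprocess(rgb).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text_tokens)
        probs = logits_per_image.softmax(dim=-1)[0].cpu().numpy()
    # Flag the frame only when the violence prompt wins with enough confidence
    if probs.argmax() == 0 and probs[0] >= threshold:
        return "violence", float(probs[0])
    return "no violence detected", float(probs[0])
```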
The app's behavior is fully controlled through `settings.yaml`.
Example configuration:
```yaml
model-settings:
  model-name: ViT-B/32
  prediction-threshold: 0.24  # 📌 Adjust sensitivity here

label-settings:
  labels:
    - violence
    - no violence detected
  default-label: no violence detected
```

📌 Lower values make the model more sensitive (may increase false positives)
📌 Higher values reduce sensitivity (may miss subtle incidents)
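For reference, a minimal sketch of reading these settings at startup with PyYAML (key names follow the example above; variable names are illustrative):

```python
# Illustrative loading of settings.yaml; key names match the example above.
import yaml

with open("settings.yaml") as f:
    config = yaml.safe_load(f)

model_name = config["model-settings"]["model-name"]            # "ViT-B/32"
threshold = config["model-settings"]["prediction-threshold"]   # 0.24
labels = config["label-settings"]["labels"]
default_label = config["label-settings"]["default-label"]
```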
- Python 3.7 or higher
- OpenCV (`cv2`)
- PyYAML
- NumPy
- PyTorch (required by the CLIP model)
Install all dependencies using:

```bash
pip install -r requirements.txt
```
To run the application:

```bash
python app.py
```