Watching a movie should be a moment of relaxation and immersion. However, for people with specific phobias (such as arachnophobia), this experience is often interrupted by anxiety. Viewers must remain "on guard," trying to guess when a sensitive element will appear so they can look away or fast-forward, losing the story's flow.
This Proof of Concept (POC) demonstrates how technology can give autonomy back to the user. Instead of manually managing their fear, the viewer defines their preference, and the system orchestrates content delivery in a personalized, automated way.
The user chooses their preferred filter. The video flows normally, and during AI-detected moments, the transition to the protected version (Blur or Block) happens transparently, maintaining the work's continuity.
The core of this POC lies in the intelligent preparation of assets, divided into three technological fronts:
- **Generation (Google Veo):** The original video used in this demo was generated with Google Veo, Google's advanced generative video AI, allowing the creation of realistic scenarios tailored for detection testing.
- **Detection (YOLOv8/v11):** We use deep-learning models to scan the original video and precisely map every appearance of the sensitive element. The output is a metadata index (`spider-detections.json`) that guides the server.
- **Image Processing (OpenCV):** Using the OpenCV library, we generate the protective variations. Where the AI detects an object, OpenCV applies spatial filters:
  - **Blur:** High-density Gaussian smoothing to de-characterize the object without removing the scene's context.
  - **Block:** Complete obscuration using solid masks for extreme sensitivity cases.
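As a rough illustration of the two filters, the masking logic can be sketched in NumPy. This is not the project's code: the real pipeline uses OpenCV (`cv2.GaussianBlur`), and the function name, bounding-box format, and pixelation approximation below are assumptions for the sketch.

```python
import numpy as np

def apply_filter(frame: np.ndarray, box: tuple, mode: str) -> np.ndarray:
    """Apply a protective filter to a bounding box (x, y, w, h) in a frame.

    'blur' is approximated here by coarse pixelation (tile averaging); the
    actual pipeline uses cv2.GaussianBlur. 'block' paints a solid mask.
    """
    x, y, w, h = box
    out = frame.copy()
    region = out[y:y + h, x:x + w]  # a view into the copy
    if mode == "block":
        region[:] = 0  # solid black mask for extreme sensitivity
    elif mode == "blur":
        step = 8  # tile size; larger step = stronger de-characterization
        for ty in range(0, h, step):
            for tx in range(0, w, step):
                tile = region[ty:ty + step, tx:tx + step]
                tile[:] = tile.mean(axis=(0, 1), keepdims=True).astype(frame.dtype)
    return out
```

The original frame is left untouched, so both the protected and unprotected variants of a segment can be encoded from the same source.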
Unlike approaches that require re-processing entire videos, this POC utilizes Intelligent Segment Routing:
- **Dynamic Manifests:** The Go server rebuilds the streaming manifest file (`.m3u8`) in real time for each individual session.
- **Zero-Latency Switching:** The system switches between original and OpenCV-processed segments seamlessly, so the user can watch relaxed while the system handles visual triggers.
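To make the routing concrete, here is a small Python sketch of that logic (the production server is written in Go; the 4-second segment duration, the detection JSON field names, and the segment URI layout are assumptions for illustration):

```python
def flagged_segments(detections, seg_dur=4.0):
    """Map detection time ranges (seconds) to the HLS segment indices
    they overlap. Each detection is assumed to look like
    {"start": 5.0, "end": 9.5}."""
    flagged = set()
    for d in detections:
        first = int(d["start"] // seg_dur)
        last = int(d["end"] // seg_dur)
        flagged.update(range(first, last + 1))
    return flagged

def build_manifest(n_segments, flagged, mode, seg_dur=4.0):
    """Rebuild a session-specific .m3u8 playlist: flagged segments point
    to the pre-processed variant ('blur' or 'block'), the rest stay
    original. The player never sees the switch."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{int(seg_dur)}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for i in range(n_segments):
        variant = mode if i in flagged else "original"
        lines.append(f"#EXTINF:{seg_dur:.1f},")
        lines.append(f"/segments/{variant}/seg{i:04d}.ts")
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)
```

Because every segment variant is encoded ahead of time, "switching" is just URI selection at manifest-build time, which is why no re-processing happens during playback.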
This is a technical laboratory project. The application of this technique must respect copyright laws. This POC is intended for use with personal videos, AI-generated content, or public domain content where there is explicit authorization for modifications and personalized viewing.
The project is fully containerized to ensure immediate and consistent execution.
- Build:

  ```shell
  docker build -t video-filter-poc .
  ```

- Run:

  ```shell
  docker run -p 8080:8080 video-filter-poc
  ```

- Go to: http://localhost:8080/
This document was generated with the assistance of Gemini.


