🧪 Video Filter POC: AI-Driven Autonomy and Visual Comfort

Language: Portuguese

🚩 Situation & Challenge

Watching a movie should be a moment of relaxation and immersion. However, for people with specific phobias (such as arachnophobia), this experience is often interrupted by anxiety. Viewers must remain "on guard," trying to guess when a sensitive element will appear so they can look away or fast-forward, losing the flow of the story.

🎯 The Objective

This Proof of Concept (POC) demonstrates how technology can give autonomy back to the user. Instead of manually managing their fear, the viewer defines their preference, and the system orchestrates content delivery in a personalized, automated way.

🎥 Filter Experience

Filter Demonstration

The user chooses their preferred filter. The video flows normally, and during AI-detected moments, the transition to the protected version (Blur or Block) happens transparently, maintaining the work's continuity.

🎥 Blur

Blur Demonstration

🎥 Block

Block Demonstration


🧠 Content Resources: Veo + YOLO + OpenCV

The core of this POC lies in the intelligent preparation of assets, divided across three technological fronts:

  1. Generation (Google Veo): The original video used in this demo was generated via Google Veo, Google's advanced generative video AI, allowing for the creation of realistic scenarios tailored for detection testing.

  2. Detection (YOLOv8/v11): We use Deep Learning models to scan the original video and precisely map every appearance of the sensitive element. The output is a metadata index (spider-detections.json) that guides the server.

  3. Image Processing (OpenCV): Using the OpenCV library, we generate the protective variations. Where the AI detects an object, OpenCV applies spatial filters:

    • Blur: High-density Gaussian smoothing that renders the object unrecognizable without removing the scene's context.
    • Block: Complete obscuration using solid masks, for cases of extreme sensitivity.
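The detection index and the OpenCV pass can be sketched as follows. This is a minimal illustration, not the project's actual code: the JSON field names (`frame`, `box`), the 64×64 grayscale frame, and the pixelation stand-in for OpenCV's `cv2.GaussianBlur` are all assumptions made to keep the sketch self-contained (only numpy is required).

```python
import json
import numpy as np

# Hypothetical shape of spider-detections.json: one entry per detected
# appearance, with a frame index and a pixel bounding box [x1, y1, x2, y2].
DETECTIONS = json.loads('[{"frame": 42, "box": [10, 10, 30, 30]}]')

def apply_block(frame, box):
    """Block filter: cover the bounding box with a solid mask."""
    x1, y1, x2, y2 = box
    out = frame.copy()
    out[y1:y2, x1:x2] = 0  # solid black mask
    return out

def apply_blur(frame, box, factor=8):
    """Blur stand-in: pixelate the bounding box by down/up-sampling.
    The real pipeline would use cv2.GaussianBlur on the same region."""
    x1, y1, x2, y2 = box
    out = frame.copy()
    small = out[y1:y2:factor, x1:x2:factor]  # downsample the ROI
    grown = np.repeat(np.repeat(small, factor, 0), factor, 1)
    out[y1:y2, x1:x2] = grown[:y2 - y1, :x2 - x1]  # upsample back
    return out

# Apply the chosen filter wherever the index flags an appearance.
frame = np.full((64, 64), 200, dtype=np.uint8)
for det in DETECTIONS:
    frame = apply_block(frame, det["box"])
```

In the real pipeline each variant (blur, block) is rendered once, offline, per video segment; the server then only chooses between pre-built files at playback time.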

🏗️ Delivery Engineering (Go)

Unlike approaches that require re-processing entire videos, this POC utilizes Intelligent Segment Routing:

  • Dynamic Manifests: The Go server rebuilds the streaming manifest file (.m3u8) in real-time for each individual session.
  • Zero-Latency Switching: The system switches seamlessly between original and OpenCV-processed segments, so the user can watch relaxed while the system handles the sensitive moments.
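The routing idea can be sketched independently of the Go implementation. Below is a minimal Python illustration; the segment URI pattern (`original/segN.ts`), the fixed 4-second segment duration, and the `sensitive_windows` input are assumptions, not the project's actual code. Each session's manifest is rebuilt so that any segment whose time window overlaps a detected appearance points at the filtered rendition instead:

```python
SEGMENT_DURATION = 4.0  # seconds per HLS segment (assumed)

def rewrite_manifest(manifest: str, sensitive_windows, variant="blur") -> str:
    """Point segments overlapping a sensitive window at the filtered files."""
    out = []
    index = 0  # running segment index -> [start, end) time window
    for line in manifest.splitlines():
        if line.endswith(".ts"):
            start = index * SEGMENT_DURATION
            end = start + SEGMENT_DURATION
            # Swap the segment if its window overlaps any flagged window.
            if any(s < end and start < e for s, e in sensitive_windows):
                line = line.replace("original/", f"{variant}/")
            index += 1
        out.append(line)
    return "\n".join(out)

ORIGINAL = """#EXTM3U
#EXT-X-TARGETDURATION:4
#EXTINF:4.0,
original/seg0.ts
#EXTINF:4.0,
original/seg1.ts
#EXTINF:4.0,
original/seg2.ts
#EXT-X-ENDLIST"""

# Detector flagged 5.0s-7.5s: only seg1 (covering 4.0s-8.0s) is swapped.
print(rewrite_manifest(ORIGINAL, [(5.0, 7.5)]))
```

Because only the manifest text changes per session, the video segments themselves are served unmodified from disk, which is what keeps the switching cost near zero.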

⚖️ Ethical Use and Copyright

This is a technical laboratory project. The application of this technique must respect copyright laws. This POC is intended for use with personal videos, AI-generated content, or public domain content where there is explicit authorization for modifications and personalized viewing.


🛠️ How to Run (Docker)

The project is fully containerized to ensure immediate and consistent execution.

  1. Build:
    docker build -t video-filter-poc .
  2. Run:
    docker run -p 8080:8080 video-filter-poc
  3. Go to:
    http://localhost:8080/

This document was generated with the assistance of Gemini.
