MoFA Effect

AI-native video generation using DaVinci Resolve Fusion templates.

Enter a text prompt, and MoFA Effect selects matching motion graphics templates from a curated library, generates background images, writes text content, synthesizes a voiceover, patches everything together, and hands off to DaVinci Resolve for cinematic rendering.

How It Works

MoFA Effect is an 8-step AI pipeline that turns a text prompt into a rendered video with professional Fusion templates.

The Pipeline

| Step | What Happens | Technology |
|------|--------------|------------|
| 1. Prompt | User describes the video they want | Interactive CLI |
| 2. Template Selection | AI picks 2-6 matching templates based on mood, energy, and visual style | GPT-4o / LLM |
| 3. Content Generation | AI generates text content for each template's text elements | GPT-4o / LLM |
| 4. Background Images | AI generates topic-specific background images | DALL-E 3 / HuggingFace / Pollinations |
| 5. Voiceover | Text-to-speech narration synthesized from an AI-written script | Edge TTS |
| 6. Template Patching | Two-phase patching: the LLM edits text/fonts, then a programmatic pass injects backgrounds | GPT-4o + Regex |
| 7. Resolve Render | DaVinci Resolve renders each Fusion template as a video segment | DaVinci Resolve |
| 8. Final Assembly | FFmpeg concatenates segments and merges the voiceover audio | FFmpeg |
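
The table above can be sketched as a sequential orchestrator. This is a minimal illustration only: every step function below is a hypothetical stub, not MoFA Effect's actual API (the real orchestration lives in main.py).

```python
# Hypothetical, stubbed step functions standing in for steps 2-8;
# each one reads and extends a shared state dict.
def select_templates(state):     state["templates"] = ["CH16_3D_Text"]; return state
def generate_content(state):     state["texts"] = ["Hello, MoFA"]; return state
def generate_backgrounds(state): state["images"] = ["bg_01.png"]; return state
def synthesize_voiceover(state): state["audio"] = "voiceover.wav"; return state
def patch_templates(state):      state["comps"] = ["patched_01.comp"]; return state
def render_segments(state):      state["segments"] = ["seg_01.mp4"]; return state
def assemble(state):             state["final"] = "final.mp4"; return state

PIPELINE = [select_templates, generate_content, generate_backgrounds,
            synthesize_voiceover, patch_templates, render_segments, assemble]

def run_pipeline(prompt: str) -> dict:
    """Step 1 seeds the state with the user's prompt; steps 2-8 run in order."""
    state = {"prompt": prompt}
    for step in PIPELINE:
        state = step(state)
    return state
```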

Architecture

mofa-effect/
|-- main.py                    # Entry point - orchestrates the full pipeline
|-- config.py                  # Central configuration (loads .env)
|-- interactive_review.py      # Pre-render content review UI
|
|-- intelligence/              # AI-powered template intelligence
|   |-- _llm_client.py        #   Centralized LLM API client (OpenAI/Groq)
|   |-- selector.py           #   AI template selection + content generation
|   |-- llm_patcher.py        #   Two-phase template patching (LLM + programmatic)
|   |-- universal_patcher.py  #   Programmatic .comp file transforms
|   |-- comp_curator.py       #   One-time library curation (613 -> 25 templates)
|   |-- profiler.py           #   Structural analysis of .comp files
|
|-- providers/                 # Swappable service providers
|   |-- image/                #   OpenAI DALL-E 3, HuggingFace, Pollinations
|   |-- tts/                  #   Edge TTS
|   |-- llm/                  #   Groq (for provider registry)
|   |-- renderer/             #   DaVinci Resolve
|   |-- registry.py           #   Provider registration and factory
|
|-- generator/                 # Asset generation services
|   |-- image_generator.py    #   Image generation with retry/fallback
|   |-- placeholder.py        #   Placeholder image generator
|
|-- renderer/                  # DaVinci Resolve integration
|   |-- resolve_bridge.py     #   Handoff JSON + script installation
|   |-- mofa_render.py        #   Runs inside Resolve (Workspace > Scripts)
|
|-- comp_usable/               # Curated template library (25 verified .comp files)
|   |-- manifest.json         #   Template metadata index
|
|-- utils/                     # Shared utilities
    |-- helpers.py            #   File reading, encoding fallbacks
    |-- logger.py             #   Logging configuration

Template Patching System

The core innovation of MoFA Effect is its two-phase template patching system that modifies DaVinci Resolve Fusion .comp files.

Phase 1: LLM-Native (Creative)

  • Sends the .comp Tools section to GPT-4o
  • LLM edits text content, adjusts fonts, modifies styles
  • Validation: brace balance, tool count preservation, size sanity
  • Falls back to regex if validation fails

Phase 2: Programmatic (Structural)

  • Background image injection behind template output
  • Ken Burns zoom animation on background
  • Text overlay injection for visual-only templates
  • Render range clamping to target duration
  • Anti-aliasing enforcement on 3D renderers
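
The Phase 1 validation checks (brace balance, tool count preservation, size sanity) can be sketched roughly as follows. This is an illustration of the checks named above, not the project's actual code, and the tool-detection regex is a loose heuristic:

```python
import re

def validate_patched_comp(original: str, patched: str, max_growth: float = 2.0) -> bool:
    """Sanity checks before accepting an LLM-edited .comp section (sketch)."""
    # Brace balance: Fusion .comp files are nested Lua-like tables.
    if patched.count("{") != patched.count("}"):
        return False
    # Tool count preservation: the LLM must not add or delete tools.
    tool_pat = re.compile(r"=\s*\w+\s*\{")  # rough heuristic for tool definitions
    if len(tool_pat.findall(patched)) != len(tool_pat.findall(original)):
        return False
    # Size sanity: reject wildly grown or truncated output.
    if not original:
        return False
    if len(patched) > max_growth * len(original) or len(patched) < 0.5 * len(original):
        return False
    return True
```

If any check fails, the pipeline falls back to the regex-based patcher instead of accepting the LLM output.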

Template Library

MoFA Effect ships with 25 curated Fusion .comp templates across 12 visual families.

| Family | Templates | Type | Best For |
|--------|-----------|------|----------|
| CH16 3D Text | 6 variants | 3D extruded text | Titles, headings |
| CH17 3D Titling | 8 variants | Animated 3D text paths | Dynamic intros, reveals |
| afg poster | 1 | 3D poster layout | Announcements |
| neural networks | 1 | Node visualization | Tech, science topics |
| Scoreboard | 1 | Sports overlay | Sports, competitions |
| Particles (stx) | 2 | 3D particle effects | Energy, action scenes |
| ObjectsFromParticles | 1 | Particle generation | Abstract, creative |
| se LEDWall | 1 | LED wall simulation | Events, concerts |
| UT Water | 1 | Water simulation | Nature, calm scenes |
| flag | 1 | Flag animation | National, patriotic |
| BigNoodle | 1 | Stylized text | Clean text overlays |
| playerjoinedtext | 1 | Text animation | Gaming, esports |

Adding Custom Templates

  1. Place your .comp file in the Comp_Library/ directory
  2. Run the curator: python -m intelligence.comp_curator
  3. Verified templates are copied to comp_usable/ with metadata in manifest.json
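
Each curated template is indexed in manifest.json so the AI selector can match it to a prompt. The entry below is a hypothetical example of what such metadata could look like; the real schema may differ:

```python
import json

# Hypothetical manifest entry shape (illustrative field names, not the
# project's actual schema): metadata the selector could match on.
sample_manifest = {
    "templates": [
        {
            "file": "CH16_3D_Text_v1.comp",
            "family": "CH16 3D Text",
            "text_slots": 2,                    # editable text elements
            "mood": ["bold", "cinematic"],
            "best_for": ["titles", "headings"],
        }
    ]
}

# Round-trip through JSON the way a selector might load the manifest.
loaded = json.loads(json.dumps(sample_manifest))
```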

Output Organization

Each run produces an isolated timestamped directory:

output/
  mofa_20260402_183033/
    comps/      # Patched .comp files
    images/     # AI-generated background images  
    audio/      # Voiceover WAV
    final/      # Rendered MP4 video

Setup

Prerequisites

  • DaVinci Resolve (Free or Studio) — for template rendering
  • FFmpeg — for video concatenation and audio merge
  • Python 3.10+

Installation

git clone https://github.com/mofa-org/mofa-effect.git
cd mofa-effect
pip install -r requirements.txt
cp .env.example .env
# Edit .env with your API keys

Configuration

Edit .env to set your providers:

# LLM (for template selection + content generation)
LLM_PROVIDER=groq
LLM_API_KEY=your-groq-key
LLM_MODEL=llama-3.3-70b-versatile

# Image generation (pick one)
IMAGE_PROVIDER=openai          # Best quality (paid)
OPENAI_API_KEY=your-openai-key

# IMAGE_PROVIDER=pollinations   # Free, no key needed
# IMAGE_PROVIDER=huggingface    # Free tier
# HF_API_KEY=your-hf-key

# DaVinci Resolve path
RESOLVE_EXE=C:\Program Files\Blackmagic Design\DaVinci Resolve\Resolve.exe
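
Internally, config.py resolves these variables from the environment. A sketch of how that lookup could work, using the variable names from the example above (the fallback defaults here are assumptions based on the provider registry's defaults):

```python
import os

def load_provider_config() -> dict:
    """Illustrative .env resolution; the real config.py may differ."""
    return {
        "llm_provider": os.getenv("LLM_PROVIDER", "groq"),
        "llm_model": os.getenv("LLM_MODEL", "llama-3.3-70b-versatile"),
        "image_provider": os.getenv("IMAGE_PROVIDER", "pollinations"),
        "resolve_exe": os.getenv(
            "RESOLVE_EXE",
            r"C:\Program Files\Blackmagic Design\DaVinci Resolve\Resolve.exe",
        ),
    }
```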

Usage

python main.py

The interactive CLI will guide you through:

  1. Video topic — describe what you want
  2. Duration — 15, 30, 45, or 60 seconds
  3. Orientation — landscape (16:9) or portrait (9:16)
  4. Asset reuse — optionally reuse images/audio from a previous run
  5. Content review — edit AI-generated text before rendering

After the pipeline completes, open DaVinci Resolve and run:

Workspace > Scripts > MoFA_Render

The render script automatically:

  1. Creates a project with separate timelines per template
  2. Renders each template segment
  3. Concatenates all segments
  4. Merges the voiceover audio
  5. Saves the final video and cleans up intermediate files
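
Steps 3 and 4 rest on standard FFmpeg invocations. A sketch of how the commands could be assembled (file names are illustrative; the concat call uses FFmpeg's concat demuxer with stream copy, so segments are joined without re-encoding):

```python
def concat_command(list_file: str, out: str) -> list[str]:
    # Concatenate the segments listed in a concat-demuxer file, copying streams.
    return ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", out]

def merge_audio_command(video: str, audio: str, out: str) -> list[str]:
    # Mux the voiceover onto the concatenated video; -shortest trims to the
    # shorter of the two streams.
    return ["ffmpeg", "-y", "-i", video, "-i", audio,
            "-c:v", "copy", "-c:a", "aac", "-shortest", out]
```

The commands are shown as argument lists suitable for subprocess.run, not executed here.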


Provider System

MoFA Effect uses a pluggable provider architecture. Swap any service by changing one line in .env.

Provider Registry
|
|-- LLM Providers
|   |-- groq (default) -- Llama 3.3 70B, free tier
|
|-- Image Providers
|   |-- pollinations (default) -- Free, no API key
|   |-- huggingface -- FLUX.1-schnell, free tier
|   |-- openai -- DALL-E 3, highest quality
|
|-- TTS Providers
|   |-- edge_tts (default) -- Microsoft Edge TTS, free
|
|-- Renderer Providers
    |-- resolve (default) -- DaVinci Resolve Fusion

Adding a Custom Provider

  1. Create a class implementing the interface in providers/base.py
  2. Register it in providers/registry.py
  3. Set the provider name in .env
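
A registry of this kind is typically a mapping from (kind, name) to a class, plus a decorator for registration. The sketch below illustrates the pattern; the names are illustrative, and the real interface lives in providers/base.py and providers/registry.py:

```python
# Minimal provider-registry pattern: register classes under (kind, name),
# then instantiate by the names configured in .env.
_REGISTRY: dict[tuple[str, str], type] = {}

def register(kind: str, name: str):
    def wrap(cls):
        _REGISTRY[(kind, name)] = cls
        return cls
    return wrap

def create(kind: str, name: str, **kwargs):
    try:
        return _REGISTRY[(kind, name)](**kwargs)
    except KeyError:
        raise ValueError(f"No {kind} provider named {name!r}") from None

@register("tts", "edge_tts")
class EdgeTTSProvider:
    def __init__(self, voice: str = "en-US-AriaNeural"):  # illustrative default
        self.voice = voice
```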

License

MIT
