Analyze marketing videos and images using Google's Gemini AI to extract marketing dimensions, hooks, messaging patterns, and strategic insights.
The analyzer processes video ads and extracts:
- Hooks: Opening hooks (visual, spoken, text overlays)
- Social Proof: Testimonials, reviews, authority signals
- Pain Points: Problems being addressed
- Benefits: Value propositions and outcomes
- CTAs: Calls to action and their placement
- Urgency: Scarcity and time pressure tactics
- Offers: Pricing, deals, bundles
- Objection Handling: How concerns are addressed
- Emotional Triggers: Emotional appeals used
- Visual Style: Design and aesthetic choices
- Messaging Sequence: How the message unfolds
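The per-dimension results above map naturally onto keys of the consolidated output. A minimal sketch of splitting that output back into dimensions (the key names here are assumptions based on the list above, not taken from the codebase):

```python
# Hypothetical dimension keys; the real analyzer may use different
# internal names.
DIMENSIONS = [
    "hooks", "social_proof", "pain_points", "benefits", "cta",
    "urgency", "offers", "objection_handling", "emotional_triggers",
    "visual_style", "messaging_sequence",
]

def split_consolidated(consolidated: dict) -> dict:
    """Split a consolidated analysis dict into per-dimension dicts,
    keeping only dimensions that are actually present."""
    return {d: consolidated[d] for d in DIMENSIONS if d in consolidated}
```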
Requirements:
- Python 3.9+
- Google Cloud account with Vertex AI enabled
- Service account with Vertex AI permissions
```shell
git clone https://github.com/YOUR_USERNAME/video-ad-analyzer.git
cd video-ad-analyzer
pip install -r requirements.txt
```
1. Create a Google Cloud project at console.cloud.google.com
2. Enable the Vertex AI API: APIs & Services > Enable APIs > Search "Vertex AI" > Enable
3. Create a service account: IAM & Admin > Service Accounts > Create Service Account
   - Name: video-analyzer
   - Role: Vertex AI User
4. Download the JSON key: Service Account > Keys > Add Key > Create new key > JSON
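Before wiring the key into the analyzer, it can help to sanity-check that the downloaded file really is a service-account key. A sketch (the field names are standard for Google service-account JSON keys; `validate_service_account_key` is a hypothetical helper, not part of this project):

```python
import json

# Fields present in every Google service-account JSON key.
REQUIRED_KEY_FIELDS = {"type", "project_id", "private_key", "client_email"}

def validate_service_account_key(path: str) -> str:
    """Return the project_id if the JSON key looks valid, else raise."""
    with open(path) as f:
        data = json.load(f)
    missing = REQUIRED_KEY_FIELDS - data.keys()
    if missing or data.get("type") != "service_account":
        raise ValueError(f"Not a service-account key (missing: {sorted(missing)})")
    return data["project_id"]
```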
Copy the example config:

```shell
cp config.example.py config.py
```

Edit config.py:

```python
SERVICE_ACCOUNT_PATH = "/path/to/your-service-account.json"
PROJECT_ID = "your-project-id"
```

```shell
# Analyze a single video
python analyze.py --video path/to/ad.mp4

# Analyze all videos in a directory
python analyze.py --directory path/to/ads/

# Limit batch size
python analyze.py --directory path/to/ads/ --max 10
```

Results are saved to analysis_output/ (configurable) as JSON files:
```
analysis_output/
└── 2024-01-15_10-30-45/
    ├── consolidated.json   # All dimensions in one file
    ├── hooks.json
    ├── social_proof.json
    ├── pain_points.json
    ├── benefits.json
    ├── cta.json
    └── ...
```
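Since each run writes to a timestamped folder, the most recent results can be located by sorting folder names. A sketch (assumes the `YYYY-MM-DD_HH-MM-SS` naming shown above; these helpers are hypothetical, not part of the project):

```python
import json
from pathlib import Path

def latest_run_dir(output_root: str = "analysis_output") -> Path:
    """The YYYY-MM-DD_HH-MM-SS naming sorts lexicographically,
    so the max() name is the newest run."""
    runs = [p for p in Path(output_root).iterdir() if p.is_dir()]
    if not runs:
        raise FileNotFoundError(f"no runs under {output_root}")
    return max(runs, key=lambda p: p.name)

def load_consolidated(output_root: str = "analysis_output") -> dict:
    """Load consolidated.json from the most recent run directory."""
    return json.loads((latest_run_dir(output_root) / "consolidated.json").read_text())
```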
Example output (hooks dimension):

```json
{
  "hooks": {
    "visual_hook": {
      "description": "Close-up of frustrated person looking at phone",
      "timestamp": "0:00-0:03",
      "effectiveness": "High - immediately relatable emotion"
    },
    "spoken_hook": {
      "text": "Tired of apps that don't actually work?",
      "tone": "Empathetic, slightly frustrated"
    },
    "text_hook": {
      "text": "STOP SCROLLING",
      "style": "Bold white text, red background"
    }
  }
}
```

See config.example.py for all options:
| Option | Description | Default |
|---|---|---|
| DEFAULT_MODEL | Gemini model to use | gemini-2.0-flash |
| API_CALL_DELAY | Seconds between API calls | 30 |
| NATIVE_ANALYSIS_TIMEOUT | Max time for video analysis | 300 |
| RUN_STRATEGIC_ANALYSIS | Run pattern aggregation | True |
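Put together, a config.py using the defaults above might look like this (option names come from the table; treat it as a sketch, not a verbatim copy of config.example.py):

```python
# Hypothetical config.py mirroring the documented defaults.
SERVICE_ACCOUNT_PATH = "/path/to/your-service-account.json"
PROJECT_ID = "your-project-id"

DEFAULT_MODEL = "gemini-2.0-flash"   # Gemini model to use
API_CALL_DELAY = 30                  # seconds between API calls
NATIVE_ANALYSIS_TIMEOUT = 300        # max time for video analysis
RUN_STRATEGIC_ANALYSIS = True        # run pattern aggregation
```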
Supported formats:
- Video: .mp4, .mov, .avi, .mkv, .webm
- Image: .jpg, .jpeg, .png, .gif, .bmp, .webp
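A file-type check matching the lists above could look like this (`media_kind` is a hypothetical helper, not the project's actual API):

```python
from pathlib import Path

VIDEO_EXTS = {".mp4", ".mov", ".avi", ".mkv", ".webm"}
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".gif", ".bmp", ".webp"}

def media_kind(path: str) -> str:
    """Classify a path as 'video', 'image', or 'unsupported' by extension."""
    ext = Path(path).suffix.lower()
    if ext in VIDEO_EXTS:
        return "video"
    if ext in IMAGE_EXTS:
        return "image"
    return "unsupported"
```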
Using gemini-2.0-flash (recommended):
- ~$0.01-0.03 per video (depending on length)
- ~$0.005 per image
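As a rough budgeting aid, the per-item estimates above translate into a one-line estimator (the per-video midpoint of $0.02 is an assumption; actual cost varies with video length):

```python
def estimate_cost(num_videos: int, num_images: int = 0,
                  per_video: float = 0.02, per_image: float = 0.005) -> float:
    """Rough batch cost using the midpoint of the per-video range."""
    return num_videos * per_video + num_images * per_image

# e.g. 100 videos is roughly $2
```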
Troubleshooting:
- Authentication errors: make sure SERVICE_ACCOUNT_PATH in config.py points to your downloaded JSON key file.
- Permission errors: ensure your service account has the Vertex AI User role.
- Rate limiting: increase API_CALL_DELAY in config.py (default: 30 seconds).
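If rate-limit errors persist even with a larger delay, a simple exponential-backoff wrapper can help. A sketch (the exception type to catch depends on the client library, so a generic `Exception` is used here; `with_backoff` is hypothetical):

```python
import time

def with_backoff(fn, retries: int = 4, base_delay: float = 1.0):
    """Call fn(), retrying on failure with exponentially growing waits."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries; surface the original error
            time.sleep(base_delay * (2 ** attempt))
```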
Install ffmpeg for audio extraction:

```shell
# macOS
brew install ffmpeg

# Ubuntu
sudo apt install ffmpeg
```

MIT License - Use freely in your projects.