
🚀 From Image and Audio to Talking Video: A Lip-Sync AI Demo with fal.ai

I built a Next.js app that turns a photo and an audio clip into a realistic lip-synced video using the OmniHuman v1.5 model. Lip-syncing isn't just about the movement of the lips; it's a full-body performance in which physical energy and rhythmic motion bring the character's entire presence to life. You can either upload an audio file or record one, then upload a photo or take a snapshot with your camera. Once you have both, you can generate a lip-sync video with fal.ai.
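The generation step can be sketched server-side with fal.ai's JavaScript client. This is a minimal sketch, not the repo's actual code: the model id (`fal-ai/bytedance/omnihuman`) and the input field names (`image_url`, `audio_url`) are assumptions — check the model's page on fal.ai for the exact schema.

```typescript
// Minimal sketch of a lip-sync generation call (assumptions noted above).

// Pure helper: validate and assemble the request payload from the two inputs.
// The field names `image_url` and `audio_url` are assumed, not confirmed.
export function buildLipSyncInput(imageUrl: string, audioUrl: string) {
  if (!imageUrl || !audioUrl) {
    throw new Error("Both an image and an audio clip are required");
  }
  return { image_url: imageUrl, audio_url: audioUrl };
}

// The actual call would use the @fal-ai/client package with FAL_KEY set in
// the environment (left as comments so this sketch stays self-contained):
//
//   import { fal } from "@fal-ai/client";
//   const result = await fal.subscribe("fal-ai/bytedance/omnihuman", {
//     input: buildLipSyncInput(imageUrl, audioUrl),
//     logs: true,
//   });
//   console.log(result.data.video.url); // URL of the generated video
```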

👉 Links & Resources


🚀 Clone and Run

```shell
# Clone the repository
git clone https://github.com/Ashot72/lip-sync-ai

# Navigate into the project directory
cd lip-sync-ai

# Copy env.local.example to create a new .env.local file, then add your FAL_KEY
cp env.local.example .env.local

# Install dependencies
npm install

# Start the development server
npm run dev

# The app will be available at http://localhost:3000
```
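For reference, the resulting `.env.local` only needs the fal.ai API key mentioned above (the placeholder value is illustrative):

```
# .env.local — keep this file out of version control
FAL_KEY=your-fal-api-key
```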

🛠 Debugging in VS Code

  • Open the Run view (View → Run or Ctrl+Shift+D) to access the debug configuration
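The repo's own debug configuration isn't shown here; as a hedged example, a typical `.vscode/launch.json` for launching the Next.js dev server under the VS Code debugger looks like this (the `name` is illustrative):

```
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Next.js: debug dev server",
      "type": "node",
      "request": "launch",
      "runtimeExecutable": "npm",
      "runtimeArgs": ["run", "dev"],
      "cwd": "${workspaceFolder}",
      "console": "integratedTerminal"
    }
  ]
}
```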

📺 Video: Watch on YouTube
