A modular, real-time 3D digital human interaction engine for the browser.
English | 简体中文
MetaHuman Engine is a browser-native digital human interaction engine that provides 3D avatar rendering, voice conversation, visual perception, and behavior control as composable modules. Built for virtual customer service, live streaming avatars, educational assistants, and more.
| Module | Capabilities | Technology |
|---|---|---|
| Avatar | Real-time 3D rendering, facial expressions, skeletal animation | Three.js + React Three Fiber |
| Audio | TTS speech synthesis, ASR speech recognition | Web Speech API |
| Dialogue | Multi-turn conversation, local fallback, streaming (planned) | REST API with retry & degradation |
| Vision | Facial emotion analysis, head motion detection, gesture recognition | MediaPipe Face Mesh & Pose |
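The Dialogue row's "retry & degradation" transport can be sketched roughly as follows. This is a minimal illustration, not the engine's actual implementation; the names `sendWithRetry` and `localFallback` are hypothetical:

```typescript
// Illustrative sketch of a dialogue transport with retry and local fallback.
// Function names and shapes are hypothetical, not the engine's actual API.
type Reply = { text: string; degraded: boolean };

async function sendWithRetry(
  send: (msg: string) => Promise<string>, // injected transport, e.g. a fetch wrapper
  msg: string,
  retries = 2,
): Promise<Reply> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return { text: await send(msg), degraded: false };
    } catch {
      // Brief linear backoff before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, 100 * (attempt + 1)));
    }
  }
  // All attempts failed: degrade to a canned local reply.
  return { text: localFallback(msg), degraded: true };
}

function localFallback(_msg: string): string {
  return 'Sorry, the dialogue service is unreachable right now.';
}
```

A real transport would cap the backoff and surface the `degraded` flag to the UI so the avatar can signal reduced capability.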
- Node.js >= 18.0.0
- npm >= 9.0.0
```shell
# Clone the repository
git clone https://github.com/LessUp/meta-human.git
cd meta-human

# Install dependencies
npm install

# Start the development server
npm run dev
```

Copy `.env.example` to `.env.local` and configure as needed:

```shell
cp .env.example .env.local
```

| Variable | Description | Default |
|---|---|---|
| `VITE_API_BASE_URL` | Backend dialogue service URL | `http://localhost:8000` |
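A minimal `.env.local` for local development might look like this (the value below is just the default shown above):

```shell
# .env.local — local overrides, not committed to version control
VITE_API_BASE_URL=http://localhost:8000
```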
```
src/
├── core/                  # Core engine layer
│   ├── avatar/            # 3D avatar engine
│   ├── audio/             # Audio services (TTS / ASR)
│   ├── dialogue/          # Dialogue orchestration & transport
│   └── vision/            # Visual perception & emotion mapping
├── components/            # UI components
│   ├── ui/                # Shared UI primitives
│   ├── DigitalHumanViewer # 3D viewer
│   ├── ControlPanel       # Control panel
│   └── ...                # Expression / Behavior / Voice / Vision panels
├── hooks/                 # Custom React hooks
├── store/                 # Zustand global state
├── pages/                 # Page components
├── lib/                   # Utility functions
├── App.tsx                # Router entry
└── main.tsx               # Application entry
```
| Command | Description |
|---|---|
| `npm run dev` | Start the development server |
| `npm run build` | Production build |
| `npm run build:pages` | GitHub Pages build (`/meta-human/`) |
| `npm run preview` | Preview the production build |
| `npm run lint` | Run ESLint checks |
| `npm run lint:fix` | Auto-fix ESLint issues |
| `npm run format` | Format code with Prettier |
| `npm run test` | Run tests in watch mode |
| `npm run test:run` | Run tests once |
| `npm run test:coverage` | Generate a coverage report |
| `npm run typecheck` | TypeScript type checking |
- Framework — React 18 + TypeScript
- 3D Rendering — Three.js + React Three Fiber + Drei
- State Management — Zustand
- Styling — Tailwind CSS
- Build Tool — Vite 5
- Testing — Vitest + Testing Library
- CI/CD — GitHub Actions
- Deployment — GitHub Pages
This repository now uses GitHub Pages as the primary deployment target.
- Add a repository variable named `VITE_API_BASE_URL`
- Push to `master` or run the `Deploy Pages` workflow manually
- After the first successful deployment, the site will be available at: `https://lessup.github.io/meta-human/`

Client-side routes use hash URLs on Pages, for example: `https://lessup.github.io/meta-human/#/advanced`
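Because Pages only serves static files under the base path, client routes live in the URL fragment. A tiny helper (purely illustrative, not part of the codebase) shows how such deep links compose:

```typescript
// Compose a deep link for a hash-routed app hosted under a base path.
// Hypothetical helper for illustration only.
function pagesUrl(base: string, route: string): string {
  const trimmedBase = base.endsWith('/') ? base.slice(0, -1) : base;
  const path = route.startsWith('/') ? route : '/' + route;
  return `${trimmedBase}/#${path}`;
}
```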
Use the root-level `render.yaml` blueprint to deploy the FastAPI backend to Render.

- Create a new Blueprint service from this repository in Render
- Confirm the generated service uses:
  - Root Directory: `server`
  - Build Command: `pip install -r requirements.txt`
  - Start Command: `uvicorn app.main:app --host 0.0.0.0 --port $PORT`
  - Health Check Path: `/health`
- Set backend environment variables in Render:
  - Required for Pages access: `CORS_ALLOW_ORIGINS=https://lessup.github.io`
  - Required for real model replies: `OPENAI_API_KEY`
  - See `server/.env.example` for the full variable list
- After deployment, copy the Render service URL, for example: `https://your-render-service.onrender.com`
- Set the GitHub Actions repository variable: `VITE_API_BASE_URL=https://your-render-service.onrender.com`
- Re-run the `Deploy Pages` workflow so the frontend points to the Render backend
```shell
# Standard production build
npm run build

# GitHub Pages build (/meta-human/ base path)
npm run build:pages
```

The Pages workflow reads `VITE_API_BASE_URL` from GitHub Actions repository variables. If it is missing, the deployed app will fall back to `http://localhost:8000`, which is not suitable for production.
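The fallback behaviour described above amounts to a small resolver like the sketch below (hypothetical; it assumes Vite's `import.meta.env` convention for build-time variables):

```typescript
// Resolve the backend base URL, falling back to localhost when the
// build-time variable is missing or empty. Illustrative only.
function resolveApiBase(fromEnv: string | undefined): string {
  return fromEnv && fromEnv.length > 0 ? fromEnv : 'http://localhost:8000';
}

// In the app this would typically be called as:
//   const API_BASE = resolveApiBase(import.meta.env.VITE_API_BASE_URL);
```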
For backend deployment variables, use `server/.env.example` as the Render reference template.
```
┌──────────────────────────────────────────┐
│                 UI Layer                 │
│    Pages ← Components ← Hooks ← Store    │
├──────────────────────────────────────────┤
│               Core Engine                │
│ ┌────────┐ ┌───────┐ ┌────────┐ ┌──────┐ │
│ │ Avatar │ │ Audio │ │Dialogue│ │Vision│ │
│ └────────┘ └───────┘ └────────┘ └──────┘ │
├──────────────────────────────────────────┤
│            External Services             │
│ Three.js  Web Speech  REST API  MediaPipe│
└──────────────────────────────────────────┘
```
```typescript
import { digitalHumanEngine } from '@/core/avatar';

digitalHumanEngine.setExpression('smile');
digitalHumanEngine.setEmotion('happy');
digitalHumanEngine.playAnimation('wave');
digitalHumanEngine.performGreeting();
```

```typescript
import { ttsService, asrService } from '@/core/audio';

// Text-to-speech
await ttsService.speak('Hello, how can I help you?');

// Speech recognition
asrService.start({ mode: 'command' });
```

```typescript
import { useDigitalHumanStore } from '@/store/digitalHumanStore';

const { isPlaying, currentExpression, play, pause } = useDigitalHumanStore();
```

| Key | Action |
|---|---|
| `Space` | Play / Pause |
| `R` | Reset |
| `M` | Toggle mute |
| `V` | Toggle recording |
| `S` | Toggle settings panel |
| `1` – `4` | Quick actions |
| `Esc` | Close settings |
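A keydown dispatcher for the table above might look like this sketch (the action names are hypothetical; the real handlers live in the app's hooks):

```typescript
// Map a KeyboardEvent.key value to a named action from the shortcut table.
// Action names are illustrative, not the app's actual identifiers.
function shortcutAction(key: string): string | null {
  switch (key) {
    case ' ': return 'togglePlay';
    case 'r': case 'R': return 'reset';
    case 'm': case 'M': return 'toggleMute';
    case 'v': case 'V': return 'toggleRecording';
    case 's': case 'S': return 'toggleSettings';
    case 'Escape': return 'closeSettings';
    default:
      // Single digits 1–4 trigger quick actions; anything else is ignored.
      return key.length === 1 && key >= '1' && key <= '4'
        ? `quickAction${key}`
        : null;
  }
}
```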
| Browser | Version |
|---|---|
| Chrome | >= 90 |
| Edge | >= 90 |
| Firefox | >= 90 |
| Safari | >= 15 |
Speech Recognition (ASR) requires Chrome or Edge due to Web Speech API limitations.
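Before starting ASR, an app typically feature-detects the recognition constructor, which ships prefixed as `webkitSpeechRecognition` in Chrome and Edge. A hypothetical helper (pass `window` or `globalThis` in a browser):

```typescript
// Feature-detect the Web Speech API recognition constructor.
// Illustrative helper, not part of the engine's codebase.
function speechRecognitionSupported(scope: Record<string, unknown>): boolean {
  return 'SpeechRecognition' in scope || 'webkitSpeechRecognition' in scope;
}
```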
- Fork the repository
- Create a feature branch (`git checkout -b feat/my-feature`)
- Commit your changes (`git commit -m 'feat: add my feature'`)
- Push to the branch (`git push origin feat/my-feature`)
- Open a Pull Request

Please follow the Conventional Commits specification.