ABIvan-Tech/AIFace

🙂 AIFace — Add some emotion to your LLM (MCP + Render Display)

AI agent → MCP → your phone renders a live face avatar (emotion in real time).


Architecture · Emotional Interface · MCP Server · Render Display

What it is

AIFace is a two-part system:

  • Render display client (Kotlin Multiplatform) — a passive renderer that runs a WebSocket server at ws://<phone-ip>:8765/ and advertises itself on LAN via mDNS (_ai-face._tcp).
  • MCP server (Node.js/TypeScript) — exposes a small tool surface to an LLM (e.g. Claude Desktop), discovers displays via mDNS, then sends Scene DSL updates to the selected display.

The goal: your agent can drive an expressive, real-time face avatar without the mobile app making any “meaning” decisions.
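The "compiles emotion → scene updates" step can be pictured with a minimal TypeScript sketch. Everything here — the `Emotion` type, `SceneMutation` fields, and the specific values — is invented for illustration; the real Scene DSL lives in the MCP server:

```typescript
// Hypothetical sketch: mapping a high-level emotion to low-level scene
// mutations the renderer could animate. Field names are illustrative,
// not the project's actual DSL.
type Emotion = "happy" | "sad" | "surprised" | "neutral";

interface SceneMutation {
  target: string;   // which face element to change
  property: string; // which visual property to animate
  value: number;    // normalized magnitude
}

function compileEmotion(emotion: Emotion, intensity: number): SceneMutation[] {
  const i = Math.min(Math.max(intensity, 0), 1); // clamp to [0, 1]
  switch (emotion) {
    case "happy":
      return [
        { target: "mouth", property: "curve", value: i },
        { target: "eyes", property: "openness", value: 0.5 + 0.5 * i },
      ];
    case "sad":
      return [
        { target: "mouth", property: "curve", value: -i },
        { target: "brows", property: "tilt", value: -0.5 * i },
      ];
    case "surprised":
      return [
        { target: "mouth", property: "openness", value: i },
        { target: "eyes", property: "openness", value: 1 },
      ];
    default:
      return []; // neutral: no mutations needed
  }
}

console.log(compileEmotion("happy", 0.8));
```

The key design point this illustrates: the renderer only ever sees low-level mutations, so all "meaning" decisions stay on the MCP-server side.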

Quick start (Android + Claude Desktop)

1) Run the mobile renderer

Recommended: open mobile/ in Android Studio and run the androidApp configuration.

Once running on a device, the display endpoint is:

  • WebSocket: ws://<phone-ip>:8765/
  • mDNS service type: _ai-face._tcp

2) Build the MCP server

cd mcp
npm install
npm run build

3) Connect it to Claude Desktop (MCP)

Add the server to your Claude Desktop config file, claude_desktop_config.json (on macOS: ~/Library/Application Support/Claude/claude_desktop_config.json; on Windows: %APPDATA%\Claude\claude_desktop_config.json). Use your local paths:

{
  "mcpServers": {
    "ai-face": {
      "command": "node",
      "args": ["/ABS/PATH/TO/AIFace/mcp/dist/index.js"],
      "env": {}
    }
  }
}
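If you'd rather not hard-code a local build path, the same entry can point at the published npm package instead (package name taken from the CLI section below; assumes npx is on your PATH):

```json
{
  "mcpServers": {
    "ai-face": {
      "command": "npx",
      "args": ["-y", "ai-face-mcp-server"],
      "env": {}
    }
  }
}
```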

Screens (example flow): Claude Desktop setup screenshots, steps 1–5.

After that, the MCP server exposes tools such as set_emotion, push_emotion_intent, list_displays, and get_current_emotion.
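Under the hood, Claude invokes these tools with standard MCP tools/call requests over stdio. A call to set_emotion might look like the following (the argument names are illustrative — check the server's tool schema for the real ones):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "set_emotion",
    "arguments": { "emotion": "happy", "intensity": 0.8 }
  }
}
```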

Claude Code CLI (one-liner)

If you use Claude Code, you can register the server from the command line.

From npm (published):

claude mcp add --scope user --transport stdio ai-face -- npx -y ai-face-mcp-server

Local build (works without npm publishing):

claude mcp add --scope user --transport stdio ai-face -- node /ABS/PATH/TO/AIFace/mcp/dist/index.js

CI/CD & Releases

This project uses GitHub Actions for automated builds and releases. See docs/ci-cd-secrets.md for setup instructions.

Workflow overview:

  • MCP NPM — Builds and publishes the MCP server to npm on tags
  • Android — Builds debug APK on PRs, signed APK/AAB on tags
  • iOS — Builds signed IPA on main/tags
  • macOS Desktop — Builds signed and notarized DMG on tags
  • Windows Desktop — Builds signed MSI on tags

All release artifacts are attached to GitHub Releases when you push a tag like v1.0.0.
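Cutting a release is just pushing a tag. A minimal sketch of the tagging step, shown against a scratch repo so it is safe to run anywhere (in the real checkout you would tag the release commit and then `git push origin v1.0.0` to trigger the workflows):

```shell
# Create a scratch repo and apply a release-style tag.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "init"
git tag v1.0.0            # tags matching v* trigger the release workflows
git tag --list 'v*'
```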

How it works (short)

Claude Desktop (MCP client)
        |
        |  MCP (stdio)
        v
AI Face MCP Server
  - discovers displays via mDNS: _ai-face._tcp
  - compiles emotion -> scene updates
        |
        |  WebSocket: ws://<phone-ip>:8765/
        v
Render Display (KMP app)
  - receives: hello / set_scene / apply_mutations / reset
  - renders and animates the face
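On the wire, the display messages are JSON frames. A hypothetical set_scene frame might look like this — only the message type names above come from the project; the payload fields are invented for illustration:

```json
{
  "type": "set_scene",
  "scene": {
    "emotion": "happy",
    "intensity": 0.8,
    "elements": [
      { "target": "mouth", "property": "curve", "value": 0.8 }
    ]
  }
}
```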

Screenshots

Android Display

iOS Display

Documentation

License

MIT — see LICENSE
