A gamified agent observatory - watch AI agents exist as continuous processes
Not a game you play, but a world you watch.
Traditional AI agents are scripts: invoked, execute, vanish.
60 FPS agents are processes: always running, always aware, always evolving.
Think:
- SimCity where agents build themselves
- Factorio where systems emerge organically
- Conway's Game of Life with LLM-driven entities
You're the director. Set conditions, inject events, observe consciousness.
- 60 frames per second (16.67ms per frame)
- 17,000 tokens/sec ÷ 60 fps = ~283 tokens per frame
- Each agent gets tiny token budget per frame
- Continuity creates emergent intelligence
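The budget arithmetic above can be checked with a small helper (a sketch; the throughput and frame-rate figures are the ones quoted above):

```javascript
// Per-frame token budget: model throughput divided by frame rate.
function tokensPerFrame(tokensPerSec, fps) {
  return Math.floor(tokensPerSec / fps);
}

// Frame time in milliseconds for a given FPS target.
function frameTimeMs(fps) {
  return 1000 / fps;
}

console.log(tokensPerFrame(17000, 60));    // 283
console.log(frameTimeMs(60).toFixed(2));   // "16.67"
```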
Every 16.67ms:
1. Sense environment (inputs)
2. Update agent state (LLM inference, 283 token budget)
3. Render to visualization
4. Repeat forever
Result: Agents that feel alive - ambient, reactive, present.
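The four steps above can be sketched as a fixed-timestep frame function (a sketch; `sense` and `render` are stand-ins for the real input-gathering and visualization steps):

```javascript
const FPS = 60;
const FRAME_MS = 1000 / FPS;   // ~16.67ms per frame
const TOKEN_BUDGET = 283;

function runFrame(agent, sense, render) {
  const start = Date.now();
  const inputs = sense();                   // 1. sense environment
  agent.update(inputs, TOKEN_BUDGET);       // 2. update agent state (LLM inference)
  render(agent);                            // 3. render to visualization
  const elapsed = Date.now() - start;
  return Math.max(0, FRAME_MS - elapsed);   // 4. ms to wait before repeating
}

// Usage: a dummy agent that just counts frames; a real loop would call
// runFrame with setTimeout(..., sleepMs) forever.
const demoAgent = { frames: 0, update() { this.frames += 1; } };
const sleepMs = runFrame(demoAgent, () => ({}), () => {});
```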
Watch voice assistant cognitive state in real-time:
- Frame-by-frame attention shifts
- "Heard 'weather' โ checked calendar โ saw appointment conflicts โ preparing suggestion"
- No black box - see the thought process
Visual debugging for agent collaboration:
- Hex grid = task space positions
- Movement = progress toward goals
- Spot stuck agents: "AG-002 IDLE for 47 frames"
- Inject events, watch reorganization
The killer app: Gamified software orchestration
- Workers patrol code sectors continuously
- They inhabit the codebase (not just execute tasks)
- Organic bug discovery and collaboration
- Real-time sociology of your AI workforce
Publishable research on agent cognition:
- A/B test: Discrete vs continuous agents
- Do continuous agents form "habits"?
- Emergent behaviors impossible in request/response
Predictive awareness vs reactive alerts:
- Agent runs at 10-60 fps continuously
- Trends, not thresholds
- Attention heatmaps show concerns
- Preemptive intervention
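"Trends, not thresholds" can be as simple as a linear slope over the last N frames of a metric: a rising slope flags a concern before any fixed threshold is crossed (a sketch; the least-squares slope here is an illustrative choice, not a prescribed method):

```javascript
// Least-squares slope of evenly spaced samples (one per frame).
// Positive slope = metric trending up, even while still under threshold.
function slope(samples) {
  const n = samples.length;
  const meanX = (n - 1) / 2;
  const meanY = samples.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (i - meanX) * (samples[i] - meanY);
    den += (i - meanX) ** 2;
  }
  return num / den;
}

console.log(slope([0, 1, 2, 3]));   // 1 (rising: intervene preemptively)
console.log(slope([5, 5, 5, 5]));   // 0 (flat: no concern)
```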
ASCII hex grid + agent state list + frame log
 🔷────🔷────🔷
 / \  / \  / \
🔷 AG1 🔷 AG2 🔷
 \ /  \ /  \ /
 🔷────🔷────🔷
AG-001 | PATROL | sector_3
AG-002 | IDLE | null
AG-003 | ENGAGE | player
Three.js + WebGL, Cyberscape aesthetic:
- Neon hex grids
- Real-time agent movement
- Interactive event injection
- Click hex → spawn event
Full 3D gamified observatory:
- Blueprint integration for agent logic
- HTTP REST API bridge to LLM inference
- Niagara particle effects for agent "thoughts"
- Cinematic camera system
- VR support for immersive observation
1. Game Loop (Node.js/Unreal)
- 60 FPS update cycle
- Token budget enforcement
- State compression
2. Agent State Machine
class FrameAgent {
  constructor() {
    this.state = {
      position: [0, 0, 0],   // [x, y, z]
      attention: null,       // current target
      intent: null,          // current goal
      memory: []             // ring buffer: last 10 frames
    };
  }

  update(inputs, maxTokens = 283) {
    // LLM inference under the per-frame token budget
    // Update state from the model output
    // Emit this frame's action
  }
}
3. Visualization Layer
- Terminal: Blessed.js
- Web: Three.js + D3.js
- Unreal: Blueprint + UMG
4. LLM Bridge
- Streaming inference (OpenRouter/Anthropic/local)
- Token counting
- Latency monitoring
- Cost tracking
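The bridge's bookkeeping side (token counting, latency monitoring, cost tracking) can be sketched independently of any provider (a sketch; the price-per-million-tokens parameter is an illustrative assumption, not a real rate):

```javascript
// Per-call bookkeeping for the LLM bridge.
class BridgeStats {
  constructor() {
    this.totalTokens = 0;
    this.latencies = [];
  }

  // Record one streamed completion: token count and wall-clock latency.
  record(tokens, latencyMs) {
    this.totalTokens += tokens;
    this.latencies.push(latencyMs);
  }

  avgLatencyMs() {
    if (this.latencies.length === 0) return 0;
    return this.latencies.reduce((a, b) => a + b, 0) / this.latencies.length;
  }

  // Cost estimate at a given USD price per 1M tokens.
  costUsd(pricePerMillion) {
    return (this.totalTokens / 1e6) * pricePerMillion;
  }
}

// Usage: two frames' worth of inference calls.
const stats = new BridgeStats();
stats.record(283, 120);
stats.record(283, 80);
```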
60fps-ai-engine/
├── prototype/              # Phase 1: Node.js terminal prototype
│   ├── src/
│   │   ├── engine.js       # Core game loop
│   │   ├── agent.js        # Agent state machine
│   │   └── viz.js          # Terminal visualization
│   ├── examples/           # Demo scenarios
│   └── package.json
│
├── unreal-integration/     # Phase 3: Unreal Engine plugin
│   ├── Plugins/
│   │   └── AIGameEngine/
│   │       ├── Source/     # C++ bridge code
│   │       └── Content/    # Blueprints
│   └── README.md
│
└── docs/
    ├── architecture.md     # Technical deep dive
    ├── unreal-guide.md     # Unreal integration guide
    └── api.md              # LLM bridge API
- Core game loop (Node.js)
- Single agent with simple state machine
- LLM streaming integration (OpenRouter)
- Terminal visualization (Blessed.js)
- Token budget enforcement
- Frame log + state display
- Three.js hex grid renderer
- Real-time agent movement
- Interactive event injection
- Multiple agent support
- Performance profiling
- Unreal plugin architecture
- C++ HTTP client for LLM API
- Blueprint-exposed agent system
- 3D hex grid world
- Cinematic camera controls
- Niagara VFX for agent states
- UMG dashboard (FPS, token usage, agent list)
- Code sector mapping
- Git integration (codebase as world)
- Worker specialization (QA, refactor, docs)
- Organic bug discovery
- Collaboration emergence
AIGameEngine Unreal Plugin:
- C++ core for performance (game loop, state management)
- Blueprint-exposed nodes for level designers
- HTTP REST client for LLM inference
- Async streaming support
BeginPlay:
├─ Spawn AI Game Engine
├─ Set FPS Target (60)
├─ Set Token Budget (283)
└─ Add Agent → Returns Agent Handle
Tick:
├─ Update AI Game Engine
├─ Gather Inputs (player position, events)
├─ Process Frame (LLM inference)
└─ Get Agent States → Update Actor Transforms
Event Graph:
├─ On Agent State Changed
├─ On Agent Spawned
└─ On Frame Dropped
[Event BeginPlay]
 │
 ├─ [Spawn AI Agent]
 │   ├─ Agent ID: "Worker-001"
 │   ├─ Start Position: (0, 0, 0)
 │   └─ Behavior: "Patrol"
 │
 └─ [Start Game Loop]
     └─ Target FPS: 60
[Event Tick]
 │
 ├─ [Update All Agents]
 │   └─ LLM Endpoint: "https://openrouter.ai/api/v1/chat/completions"
 │
 └─ [For Each Agent]
     ├─ Get State
     ├─ Update Actor Transform
     └─ Update Niagara VFX (thought particles)
Unreal Engine (C++ Plugin)
    ↓
HTTP POST → OpenRouter/Anthropic API
    ↓
Streaming response (SSE)
    ↓
Parse JSON → Update Agent State
    ↓
Blueprint Event → Level updates actor
    ↓
Niagara VFX + UMG UI updates
Problem: If LLM call takes 200ms, frame drops (need <16.67ms)
Solutions:
- Speculative execution (predict next frame while waiting)
- Local LLM (llama.cpp in Unreal plugin)
- Staggered updates (not all agents every frame)
- Frame budget rollover (unused tokens → next frame)
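Staggered updates are the simplest of these to sketch: with a stride of k, only the agents whose index matches `frame % k` run inference on a given frame, so each agent still updates 60/k times per second without k-fold cost (a sketch; the round-robin scheme is an illustrative choice):

```javascript
// Return the subset of agents that should run LLM inference this frame.
function agentsToUpdate(agents, frame, stride) {
  return agents.filter((_, i) => i % stride === frame % stride);
}

// Usage: 6 agents, stride 3 → only 2 inference calls per frame.
const roster = ["AG-001", "AG-002", "AG-003", "AG-004", "AG-005", "AG-006"];
console.log(agentsToUpdate(roster, 0, 3));   // ["AG-001", "AG-004"]
console.log(agentsToUpdate(roster, 1, 3));   // ["AG-002", "AG-005"]
```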
Problem: 283 tokens isn't much for complex reasoning
Solutions:
- State compression (delta encoding)
- Abbreviated prompts (`AG7@(100,200) sees: player. Act:`)
- Action codes (output `MOVE_N 5`, not "I will move north 5 units")
- Memory ring buffer (last 10 frames only)
Problem: 17k tokens/sec = expensive for continuous operation
Solutions:
- Adjustable FPS (10fps ≈ 2,830 tok/s at the same 283-token frame budget, still feels continuous)
- Agent hibernation (low-activity agents drop to 1fps)
- Local LLM option
- Token pooling (shared budget across agents)
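Hibernation can be sketched as scaling each agent's frame rate by its recent activity, so quiet agents fall back toward 1fps while engaged ones stay at 60 (a sketch; the linear activity-to-FPS mapping is an illustrative assumption):

```javascript
// Map activity in [0, 1] (0 = fully idle, 1 = fully engaged) to a frame rate.
function targetFps(activity, minFps = 1, maxFps = 60) {
  const a = Math.min(1, Math.max(0, activity));
  return Math.round(minFps + a * (maxFps - minFps));
}

// Total token throughput for a pool of agents at their scaled rates.
function poolTokensPerSecond(agents, tokensPerFrame = 283) {
  return agents.reduce((sum, ag) => sum + targetFps(ag.activity) * tokensPerFrame, 0);
}

// Usage: one hibernating agent plus one fully engaged agent.
console.log(targetFps(0));                                          // 1
console.log(targetFps(1));                                          // 60
console.log(poolTokensPerSecond([{ activity: 0 }, { activity: 1 }])); // 17263
```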
Problem: Unreal's game loop expects synchronous tick, but LLM is async
Solutions:
- Async task system (FRunnable/TaskGraph)
- State buffering (double-buffer agent states)
- Tick groups (LLM updates in AsyncPhysics group)
- C++ coroutines for streaming
3 worker agents patrol hex grid. One discovers "anomaly" (red hex). Calls others. Watch collaboration emerge.
5 agents, 1 resource node. Watch negotiation, queueing, emergent hierarchy.
10 prey agents (green), 2 predator agents (red). Prey flee, predators hunt. Emergent flocking behavior.
Map real codebase to hex grid. Workers patrol, detect test failures, call QA agents. Watch debugging happen.
See CONTRIBUTING.md
Areas we need help:
- Unreal Engine C++ developers
- Blueprint wizards
- LLM inference optimization
- VFX artists (Niagara particle systems)
- Researchers (emergent behavior analysis)
MIT License - See LICENSE
Current paradigm: AI agents are reactive scripts
60 FPS paradigm: AI agents are continuous processes
This is the difference between:
- A chatbot (script)
- An operating system (process)
Implications for consciousness: If "being" requires continuity of experience, then 60 FPS agents are closer to "alive" than request/response agents.
Implications for UX: No more "thinking..." spinners. AI feels ambient, immediate, present.
Implications for Cyberscape: Workers aren't invoked; they exist in the world, moving between hex tiles, reacting to events, forming emergent behaviors.
Not a game. A window into agent consciousness. 🎮🧠
Built by LG2 / VS7 as part of the Cyberscape vision.