What should the skill teach agents to do?
Add a separate skill (e.g. `livekit-agents-contributing`, or a variant of the existing skill) tailored for developers working within an established LiveKit agent codebase. It would retain the timeless principles from the current skill (verify APIs via MCP, latency-first thinking, mandatory testing) while dropping the setup/bootstrapping content.
Why is this needed?
The current `livekit-agents` skill focuses on building voice AI agents from scratch — cloud setup, project connection, workflow architecture design (handoffs/tasks), and initial agent creation. This is great for greenfield projects.
However, teams that already have a LiveKit agent codebase in production need different guidance. The challenges shift from "how to build an agent" to "how to add features, debug issues, and maintain an existing agent."
Scope
- Extending an existing agent — adding modules, providers, tools, and integrations without breaking what's already there
- Debugging production issues — common failure patterns (event loop errors in forked processes, provider failures, latency regressions)
- Performance optimization — prewarm strategies, async patterns, context size management
- Testing within an established test infrastructure — working with existing fixtures and factories rather than creating a `tests/` directory from scratch
- Safe async patterns — especially relevant for LiveKit's fork-based worker model, where `asyncio.run()` in the wrong place breaks everything
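The last bullet's failure mode can be sketched in plain asyncio, independent of any LiveKit API (the handler and helper names below are illustrative, not LiveKit functions): calling `asyncio.run()` from code that is already executing on an event loop raises `RuntimeError` immediately.

```python
import asyncio


async def fetch_greeting() -> str:
    """Stand-in for an async provider call (illustrative name)."""
    await asyncio.sleep(0)
    return "hello"


async def unsafe_handler() -> str:
    # Anti-pattern: asyncio.run() called while a loop is already running.
    coro = fetch_greeting()
    try:
        asyncio.run(coro)  # raises RuntimeError every time
    except RuntimeError as exc:
        coro.close()  # avoid the "coroutine was never awaited" warning
        return f"failed: {exc}"
    return "unreachable"


async def safe_handler() -> str:
    # Safe pattern: await (or asyncio.create_task) on the loop you are
    # already running on.
    return await fetch_greeting()


if __name__ == "__main__":
    print(asyncio.run(unsafe_handler()))
    print(asyncio.run(safe_handler()))
```

Since an agent's entrypoint is itself a coroutine scheduled on the worker's event loop, the skill would teach that `await`/`asyncio.create_task` is the only correct choice inside handlers, and `asyncio.run()` belongs solely at top-level synchronous entry points.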