Stop scattering prompts across files, repos, and providers. Package them once, run them anywhere.
Production AI systems need more than a single prompt — they need specialized prompts, shared tools, reusable fragments, safety guardrails, and version control. PromptPack is a JSON-based spec that bundles all of that into one portable file that works across providers.
The problem: As AI applications grow, prompt management becomes a mess — dozens of prompts scattered across codebases, duplicated tool definitions, no versioning, no testing, and tight coupling to a single provider.
The solution: A single .promptpack.json file that contains:
- Multiple specialized prompts that outperform one-size-fits-all approaches
- Shared tools and fragments — define once, reuse everywhere
- Multimodal support — text, images, audio, and structured content in prompt templates
- Evals — automated quality checks with Prometheus metric export, shipped alongside your prompts
- Workflows — state-machine orchestration over prompts with event-driven transitions
- Agents — A2A-compatible agent definitions for multi-agent orchestration
- Skills — progressive-disclosure knowledge loading with file, package, and inline sources
- Built-in testing metadata to track model performance across providers
- Portable format that works with OpenAI, Anthropic, Google, and local models
```json
{
  "$schema": "https://promptpack.org/schema/latest/promptpack.schema.json",
  "id": "customer-support",
  "name": "Customer Support Pack",
  "version": "1.0.0",
  "template_engine": {
    "version": "v1",
    "syntax": "{{variable}}"
  },
  "tools": {
    "lookup_order": {
      "type": "function",
      "description": "Look up a customer order by ID",
      "parameters": {
        "order_id": { "type": "string", "required": true }
      }
    }
  },
  "fragments": {
    "brand_voice": "Always respond in a friendly, professional tone. Use the customer's first name."
  },
  "prompts": {
    "support": {
      "system_template": "You are a {{role}} for {{company}}. {{fragments.brand_voice}}",
      "tools": ["lookup_order"],
      "variables": [
        { "name": "role", "type": "string", "required": true },
        { "name": "company", "type": "string", "required": true }
      ]
    }
  },
  "evals": [
    {
      "id": "tone-check",
      "type": "llm_judge",
      "trigger": "sample_turns",
      "sample_percentage": 10,
      "params": {
        "judge_prompt": "Rate the response tone 1-5 for professionalism.",
        "passing_score": 4
      },
      "metric": {
        "name": "promptpack_tone_score",
        "type": "gauge",
        "range": { "min": 1, "max": 5 }
      }
    }
  ]
}
```

Learn more: Getting Started Guide
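The `template_engine` block declares `{{variable}}` syntax, and `system_template` mixes plain variables with `{{fragments.*}}` references. A minimal sketch of how a runtime might render that template (the `render_template` helper is hypothetical, not part of the spec):

```python
import re

def render_template(template: str, variables: dict, fragments: dict) -> str:
    """Substitute {{name}} and {{fragments.name}} placeholders in a template."""
    def replace(match: re.Match) -> str:
        key = match.group(1).strip()
        if key.startswith("fragments."):
            return fragments[key[len("fragments."):]]
        return str(variables[key])
    return re.sub(r"\{\{([^}]+)\}\}", replace, template)

pack_fragments = {"brand_voice": "Always respond in a friendly, professional tone."}
system = render_template(
    "You are a {{role}} for {{company}}. {{fragments.brand_voice}}",
    {"role": "support agent", "company": "Acme"},
    pack_fragments,
)
print(system)
# You are a support agent for Acme. Always respond in a friendly, professional tone.
```

Because fragments resolve at render time, editing `fragments.brand_voice` once updates every prompt that references it.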
- Multi-Prompt Architecture — Specialized prompts for different scenarios instead of one-size-fits-all
- Complete Packaging — Prompts, tools, fragments, evals, and config in a single JSON file
- Evals & Metrics — Declare automated quality checks (deterministic or LLM judge) with Prometheus metric export
- Workflows — State-machine orchestration with event-driven transitions between prompts
- Agents — A2A-compatible agent definitions for multi-agent discovery and orchestration
- Skills — Progressive-disclosure knowledge loading with workflow state scoping
- Multimodal Content — Text, images, audio, and structured content in prompt templates
- Portable & Provider-Agnostic — Works across OpenAI, Anthropic, Google, and local models
- Built-in Testing — Testing metadata and quality assurance built into the spec
- Tool Integration — Define external tools once, reference them across all prompts
- Template System — Variable templating with reusable fragments for consistency
PromptPack v1.2 lets you ship quality policy alongside your prompts. Evals are automated checks that run asynchronously and produce scores, unlike validators (guardrails), which block output inline.

```json
"evals": [
  {
    "id": "json_format",
    "type": "json_valid",
    "trigger": "every_turn",
    "metric": { "name": "promptpack_json_valid", "type": "boolean" }
  },
  {
    "id": "session-coverage",
    "type": "contains_any",
    "trigger": "on_session_complete",
    "params": { "patterns": ["Paris", "capital"] },
    "metric": { "name": "promptpack_session_coverage", "type": "boolean" }
  }
]
```

Key concepts:
| Feature | Description |
|---|---|
| Two scopes | Pack-level evals apply to all prompts; prompt-level evals override by id |
| Triggers | every_turn, on_session_complete, sample_turns, sample_sessions |
| Eval types | Runtime-defined — deterministic (contains, regex, json_valid, tools_called) or llm_judge |
| Prometheus metrics | Each eval can declare a metric (gauge, counter, histogram, boolean) for monitoring |
| Sampling | sample_turns/sample_sessions with sample_percentage for cost-effective evaluation |
See RFC-0006: Evals Extension for the full design.
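The trigger and sampling rules in the table above amount to a small dispatch decision per runtime event. A sketch of that logic, assuming a runtime that emits `"turn"` and `"session_complete"` events (the `Eval` dataclass and `should_run` function are hypothetical illustrations, not spec-defined APIs):

```python
import random
from dataclasses import dataclass

@dataclass
class Eval:
    id: str
    trigger: str                    # every_turn | on_session_complete | sample_turns | sample_sessions
    sample_percentage: float = 100.0

def should_run(ev: Eval, event: str, rng: random.Random) -> bool:
    """Decide whether an eval fires for a given runtime event."""
    if ev.trigger == "every_turn":
        return event == "turn"
    if ev.trigger == "on_session_complete":
        return event == "session_complete"
    if ev.trigger == "sample_turns":
        return event == "turn" and rng.random() * 100 < ev.sample_percentage
    if ev.trigger == "sample_sessions":
        return event == "session_complete" and rng.random() * 100 < ev.sample_percentage
    return False

rng = random.Random(0)
tone = Eval(id="tone-check", trigger="sample_turns", sample_percentage=10)
fired = sum(should_run(tone, "turn", rng) for _ in range(1000))
print(fired)  # roughly 100 of 1000 turns
```

Sampling keeps LLM-judge costs bounded: a 10% `sample_turns` eval judges about one turn in ten while the resulting Prometheus gauge still tracks the trend.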
PromptPack v1.3 adds two new top-level sections for orchestration:
Workflows define a state machine over your prompts — each state references a prompt key and declares event-driven transitions:
```json
"workflow": {
  "version": 1,
  "entry": "greeting",
  "states": {
    "greeting": {
      "prompt_task": "greet",
      "on_event": { "need_support": "triage", "bye": "closing" }
    },
    "triage": {
      "prompt_task": "support",
      "on_event": { "escalate": "human_handoff", "resolved": "closing" }
    }
  }
}
```

Agents map prompts to A2A-compatible agent definitions for multi-agent discovery and orchestration:
```json
"agents": {
  "entry": "router",
  "members": {
    "router": { "tags": ["triage"], "description": "Routes requests to specialists" },
    "billing": { "tags": ["billing"], "input_modes": ["text/plain"] },
    "tech": { "tags": ["technical"], "output_modes": ["text/plain", "application/json"] }
  }
}
```

See RFC-0005: Workflow Extension and RFC-0007: Agents Extension for the full designs.
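The workflow section above describes a plain event-driven state machine: each state names a prompt and maps events to successor states. A minimal runtime sketch (the `Workflow` class is a hypothetical interpreter, not part of the spec):

```python
class Workflow:
    """Interpret a PromptPack-style workflow spec as a state machine."""

    def __init__(self, spec: dict):
        self.states = spec["states"]
        self.current = spec["entry"]

    @property
    def prompt_task(self) -> str:
        """Prompt key to run for the current state."""
        return self.states[self.current]["prompt_task"]

    def dispatch(self, event: str) -> str:
        """Follow an on_event transition; unknown events leave the state unchanged."""
        self.current = self.states[self.current]["on_event"].get(event, self.current)
        return self.current

spec = {
    "entry": "greeting",
    "states": {
        "greeting": {"prompt_task": "greet",
                     "on_event": {"need_support": "triage", "bye": "closing"}},
        "triage": {"prompt_task": "support",
                   "on_event": {"escalate": "human_handoff", "resolved": "closing"}},
    },
}
wf = Workflow(spec)
wf.dispatch("need_support")
print(wf.current, wf.prompt_task)  # triage support
```

Keeping transitions declarative in the pack means the conversation flow can be reviewed and versioned alongside the prompts it orchestrates.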
PromptPack v1.3.1 adds progressive-disclosure knowledge loading. Skills are modular knowledge sources — file paths, package references, or inline definitions — that agents activate on demand:
```json
"skills": [
  "./skills/billing",
  { "path": "./skills/compliance", "preload": true },
  {
    "name": "escalation-protocol",
    "description": "Steps for escalating unresolved issues",
    "instructions": "When an issue cannot be resolved:\n1. Collect details\n2. Create ticket\n3. Set expectations"
  }
]
```

Workflow states can scope which skills are available per context:

```json
"states": {
  "billing_state": { "prompt_task": "billing", "on_event": {}, "skills": "./skills/billing" },
  "closing": { "prompt_task": "closing", "on_event": {}, "skills": "none" }
}
```

See RFC-0008: Skills Extension for the full design.
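The skills array mixes three shapes: a bare path string, a path object with options, and an inline definition. A loader would typically normalize them before activation; a sketch of that step (the `normalize_skill` helper and the normalized dict shape are assumptions, not spec-defined):

```python
def normalize_skill(entry) -> dict:
    """Normalize the three skill source forms into one common shape."""
    if isinstance(entry, str):            # bare file path
        return {"source": "file", "path": entry, "preload": False}
    if "path" in entry:                   # file path with options
        return {"source": "file", "path": entry["path"],
                "preload": entry.get("preload", False)}
    return {"source": "inline",           # inline definition
            "name": entry["name"],
            "preload": entry.get("preload", False)}

skills = [
    "./skills/billing",
    {"path": "./skills/compliance", "preload": True},
    {"name": "escalation-protocol", "instructions": "..."},
]
for skill in map(normalize_skill, skills):
    print(skill)
```

Progressive disclosure then follows naturally: preloaded skills are injected up front, while the rest are loaded only when an agent or workflow state requests them.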
- Specification — Complete PromptPack spec
- Examples — Real-world usage examples
- Schema Reference — Field-by-field documentation
- File Format — YAML structure guide
- Latest: https://promptpack.org/schema/latest/promptpack.schema.json
- Versioned: https://promptpack.org/schema/v1.3.1/promptpack.schema.json
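Any JSON Schema validator can check a pack against the URLs above. As a lighter-weight pre-flight sketch, a stdlib-only sanity check in Python (the required-field list and semver rule here are assumptions drawn from the example pack, not the authoritative schema):

```python
import json
import re

def quick_check(pack: dict) -> list:
    """Return a list of obvious problems; empty means the pack passes this rough check."""
    problems = []
    for key in ("id", "name", "version", "prompts"):
        if key not in pack:
            problems.append("missing top-level key: " + key)
    if "version" in pack and not re.fullmatch(r"\d+\.\d+\.\d+", pack["version"]):
        problems.append("version is not semver")
    return problems

pack = json.loads(
    '{"id": "customer-support", "name": "Customer Support Pack",'
    ' "version": "1.0.0", "prompts": {}}'
)
print(quick_check(pack))  # []
```

For real validation, prefer the published schema with a full JSON Schema validator so nested sections like `evals` and `workflow` are checked too.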
| Component | Status | Links |
|---|---|---|
| Core Specification | v1.3.1 Stable | Spec |
| PromptKit | Stable | CLI, validation, SDK |
| PromptArena | Stable | Multi-provider testing, CI/CD |
| LangChain.js | Available | @promptpack/langchain |
| LangChain Python | Available | promptpack-python |
| JSON Schema | Available | Auto-versioned schema |
| Documentation | Live | promptpack.org |
PromptPack follows an open governance model. See our Governance Model, RFC Process, and RFC Index.
We welcome contributions! Here's how to get involved:
- Read the Contributing Guide and Code of Conduct
- Join GitHub Discussions
- Report issues using our issue templates
- Propose specification changes via the RFC process
Look for issues labeled good first issue to get started.
- Website: promptpack.org
- Discussions: GitHub Discussions
- Issues: Issue Tracker
- Contact: community@altairalabs.com
This project is licensed under the MIT License.
Built by AltairaLabs for the conversational AI community.