Welcome to the official documentation for ProML (Prompt Markup Language). This wiki is a comprehensive guide to the principles and features of ProML, a language for creating powerful, reliable, and maintainable prompts.
If you are looking for the precise document format, start with the Minimal ProML Specification that defines the required blocks and their semantics.
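To give a feel for the format, here is a hypothetical minimal ProML document. The block names and fields below are illustrative assumptions only; consult the Minimal ProML Specification for the actual required blocks and their semantics.

```yaml
# Hypothetical ProML document -- block names are illustrative, not normative.
meta:
  name: summarize-article
  version: 1.0.0               # Versioning & Semver
task: |
  Summarize the given article in three sentences.
inputs:
  article: { type: string }    # Variables & Templating
output:
  schema:                      # Strict I/O: output must match a schema
    type: object
    required: [summary]
profile:
  engine: example-engine       # Profiles & Execution Parameters
  temperature: 0.2
```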
For developer tooling, explore the CLI reference, the local registry workflow, and the VS Code extension skeleton under tools/vscode/. ProML is built around the following principles and features:
- Declarative First: Describe the task, not the step-by-step reasoning.
- Strict I/O: All output must conform to a schema that can be validated automatically.
- Composition & Modules: Import and reuse prompt blocks.
- Profiles & Execution Parameters: Specify engine, temperature, and other parameters in the prompt.
- Policy Layer (Security & Ethics): Built-in rules for privacy, citations, and more.
- Testability & Verification: Enable unit tests and evaluation datasets.
- Versioning & Semver: Track prompt versions and changes.
- Clear Block Structure: A fixed order for prompt sections.
- Variables & Templating: Clear rules for variable declaration and typing.
- Import & Inheritance: Import pre-built policies, styles, and patterns.
- Safe Tool Usage: Define tools with JSON Schema contracts.
- Pipelines & Steps: Describe multi-step workflows.
- Assertions & Validation: Define invariants that outputs must satisfy.
- Observability & Telemetry: Standard fields for cost, latency, and other metrics.
- Security & Privacy Scopes: Define trust levels for inputs.
- Fallback & Degradation: Specify fallback engines or simpler responses.
- Self-Check & Auto-Critique: Prompts can validate their own output.
- Human-Readability: YAML/Markdown-like syntax for easy editing.
- Governance & Roles: Metadata for owners, reviewers, and risk class.
- CI/CD-Friendly: Easy to lint, build, and test in pipelines.
- Documentation Requirements: Each prompt ships with a README containing its metadata.
- Internationalization & Style: Definable language and date formats.
- Reproducibility & Determinism: Seed and execution profiles for reproducible outputs.
- Secure Distribution & License: Specify license, risk class, and a "kill switch".
- Caching: Cache responses for identical requests to improve performance and reduce cost.
- Batch Processing: Define how to efficiently run a prompt over a large dataset.
- Streaming: Enable real-time, token-by-token output generation.
- Human-in-the-Loop: Define points in a pipeline that require manual human approval.
- Interactive Debugging: Support for breakpoints and step-through debugging of prompts and pipelines.
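Several of the principles above (Strict I/O, Assertions & Validation) boil down to checking every response against a contract before accepting it. A minimal sketch of what such a runtime check might look like, using a hand-rolled validator; the schema and field names are illustrative assumptions, not part of the ProML spec:

```python
# Minimal sketch of schema-validated output, as implied by Strict I/O.
# The schema shape and field names below are illustrative only.
from typing import Any

# A tiny subset of a schema: required keys and their expected Python types.
OUTPUT_SCHEMA = {
    "summary": str,
    "confidence": float,
    "citations": list,
}

def validate_output(payload: dict[str, Any]) -> list[str]:
    """Return a list of violations; an empty list means the payload passes."""
    errors = []
    for field, expected in OUTPUT_SCHEMA.items():
        if field not in payload:
            errors.append(f"missing required field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"field {field!r} should be of type {expected.__name__}")
    return errors

# Usage: accept a conforming response, reject one that drifts from the contract.
good = {"summary": "OK", "confidence": 0.9, "citations": []}
bad = {"summary": "OK"}
print(validate_output(good))  # []
print(validate_output(bad))   # two missing-field violations
```

In a full implementation the schema would come from the prompt's output block (e.g. as JSON Schema) rather than being hard-coded, and a failed validation could trigger the Fallback & Degradation path instead of raising to the caller.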