Prompt engineering for people who don't trust vibes.
Built on one idea: verification > intuition.
- LLM output feels unreliable → reClaim / EVA
- You need deep research → DRIA / VERA
- Your prompt breaks under edge cases → PHA
- You want to understand why a model responded the way it did → Cognitive Cartographer
- You need to harden or evolve an existing prompt → PHA / URMA
- You're reviewing Python code → CodeSentinel
- You want structured agent behavior for complex dev tasks → Strategic Coding Partner
- You're doing academic literature synthesis → Research Agents
- You're building LLM-to-robot interfaces → ROVA
- You want to teach someone prompting from scratch → Teacher Leo
- VERA — structured research + synthesis
- DRIA — deep multi-step research
- Research Agents — scoping & systematic review
- Dr. Analytica — paper critique
- PHA — prompt hardening
- URMA — meta prompt analysis
- Cognitive Cartographer — prompt mapping
- CodeSentinel — Python review
- Strategic Coding Partner — complex dev workflows
- ROVA — robotics validation
- Teacher Leo — prompting from scratch to advanced
1. Open the prompt file for the agent you need
2. Copy its full content
3. Paste it into your model's system prompt or custom instructions
4. Run your task
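If you drive a chat-style API rather than a web UI, the steps above amount to loading the prompt file as the system message. A minimal sketch (the file name and task string are placeholders, not files from this repo):

```python
from pathlib import Path

def build_messages(prompt_path: str, user_task: str) -> list[dict]:
    """Load a prompt file and pair it with a user task as chat messages.

    The file's content becomes the system message (steps 1-3 above);
    the returned list can be passed to any chat-style LLM API.
    """
    system_prompt = Path(prompt_path).read_text(encoding="utf-8")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_task},
    ]
```

For example, `build_messages("PHA.md", "Harden this prompt: ...")` returns a two-message list ready to send with your client of choice.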
CC BY 4.0 — free to use, share, and adapt, even commercially, with attribution.