Welcome to Pineapple Lab OS Docs – a practitioner's guide to prompt engineering, compiling and organizing techniques from Anthropic, Google, OpenAI, and academic research.
This repository shares how I've learned, organized, tested, and applied prompt engineering best practices while building AI products at Pineapple Lab. From beginner-friendly introductions to advanced reasoning frameworks, this is a comprehensive learning resource built on the shoulders of giants.
🧪 I'm David Edwards. I'm 21, building AI products in South Africa, and I love to learn. This repo is where I organize what I learn from brilliant researchers and engineers, test it in real projects, and share it so others don't have to spend months reading scattered documentation like I did.
You're welcome to fork, clone, copy, remix, and apply this material to your work.
Contributions, feedback, and honest corrections are all welcome – I'm learning in public.
Never done prompt engineering before? Start with our Starting Guide for Newbies to learn the fundamentals in 30 minutes.
Know the basics? Jump to the Prompt Engineering Blueprint for the complete 7-component framework with Truth Optimization integrated throughout.
Building production AI systems? Explore our specialized guides on XML Tag Structuring, Advanced Logic Building, and TOE Context Engineering for Agents.
This repo is where I organize and share what I'm learning about prompt engineering as I build AI products.
At its core, this is about learning in public and helping others. The documents here are based on studying official documentation from AI labs (Anthropic, Google, OpenAI), reading academic papers (Wei et al., Kojima et al., and others), and testing everything while building real products like RITA and OS Brick.
- Organize tested, reusable prompting patterns for GPT, Claude, DeepSeek, and Gemini
- Share a systematic framework for prompt engineering that I've found helpful
- Compile techniques from research and industry in one accessible place
- Invite open-source contributions from fellow builders and learners
- Make prompt engineering accessible and practical for everyone
This is not a static spec – it's an evolving learning journal and an open invitation to learn together.
| Document | Description | For Who? |
|---|---|---|
| Starting Guide for Newbies | 30-minute introduction to prompt engineering fundamentals | Complete beginners |
| Prompt Engineering Blueprint | Complete 7-component framework with TOE integration | Intermediate+ practitioners |
| TOE Context Engineering for Agents | Truth Optimization Engine for production AI systems | Advanced engineers |
| Guide | Description | Referenced From |
|---|---|---|
| XML Tag Structuring Guide | Advanced prompt organization using XML tags | Blueprint Component 4 |
| Advanced Logic Building Guide | Chain-of-thought, conditional logic, prompt chaining | Blueprint Component 3 |
| Resource | Description |
|---|---|
| Contribution Guide | How to contribute improvements, research, and examples |
| LICENSE | Apache 2.0 - Use freely, attribute appropriately |
- A practitioner's synthesis of prompt engineering techniques from multiple authoritative sources
- Organized documentation of what works in real production applications
- Real-world examples from building AI products (RITA, OS Brick, and other Pineapple Lab projects)
- Learning resource structured from beginner to advanced
- My personal framework for organizing and applying prompt engineering knowledge
- Honest curation of brilliant work by researchers and engineers at major AI labs
- Novel academic research (I cite the researchers who did that work)
- Peer-reviewed or scientifically validated findings
- Claims of breakthrough innovations or inventions
- Replacement for official documentation from AI labs
- Formal benchmarking studies with controlled experiments
I'm a 21-year-old builder learning in public. I've spent months reading documentation from Anthropic, Google, and OpenAI. I've studied academic papers like Wei et al.'s Chain-of-Thought (19,810+ citations) and Kojima et al.'s Zero-Shot CoT. I've tested these techniques extensively while building real AI products.
What I'm sharing: My organization of this knowledge, practical examples from my work, and patterns I've found effective. I didn't invent these techniques β I learned them from the brilliant researchers and engineers who did the hard work. I'm just organizing and documenting practical application.
Standing on the shoulders of giants: All core techniques come from researchers at institutions and companies doing groundbreaking work. This repo is my way of climbing that mountain and documenting the path for others.
- Start with → Starting Guide for Newbies
- Build your first structured prompt
- Move to → Prompt Engineering Blueprint when ready
- Read → Prompt Engineering Blueprint
- Master the 7-component framework with TOE integration
- Explore specialized guides → XML Tags or Advanced Logic
- Review → TOE Context Engineering for Agents
- Apply specialized techniques from XML and Logic guides
- Test in production and contribute improvements
- Clarity First – Every prompt block is modular and intentional
- Structure > Style – Treat prompts like functions, not text
- Security-Aware – Special attention to injection risks and ambiguity
- Multi-Model – Tested across Claude, GPT, DeepSeek, and Gemini
- Learning in Public – Sharing what I learn as I build
- Research-Based – Built on peer-reviewed research (properly cited) and real-world testing
The Prompt Engineering Blueprint I've organized provides a systematic framework based on best practices from multiple sources:
1. System Declaration (Agent Role Definition) + XML `<role>` tags
2. Instruction Block with clear task separation + XML `<instructions>` tags
3. Logic Block enhanced with CoT and conditional logic + XML `<reasoning>` tags
4. Input/Output Separation using XML structural tags
5. Output Formatting & Constraints with XML `<format>` and `<constraints>` tags
6. Few-Shot Examples organized with XML `<examples>` tags
7. Evaluation & Escape Hatches with validation frameworks
Each component integrates Truth Optimization principles throughout.
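The seven components above can be sketched as a simple prompt-assembly function. This is an illustrative sketch only: the component order follows the blueprint, but the exact tag layout and the example values are my assumptions, not a fixed spec.

```python
def build_prompt(role, instructions, reasoning_steps, data, output_format,
                 constraints, examples):
    """Assemble the seven blueprint components into one XML-tagged prompt."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(reasoning_steps, 1))
    shots = "\n".join(examples)
    return (
        f"<role>{role}</role>\n\n"
        f"<instructions>{instructions}</instructions>\n\n"
        f"<reasoning>{steps}</reasoning>\n\n"
        f"<data>{data}</data>\n\n"
        f"<format>{output_format}</format>\n"
        f"<constraints>{constraints}</constraints>\n\n"
        f"<examples>{shots}</examples>"
    )

# Hypothetical usage: a code-review agent prompt
prompt = build_prompt(
    role="You are a senior code reviewer.",
    instructions="Review the diff and flag bugs.",
    reasoning_steps=["Read the diff", "List risks", "Suggest fixes"],
    data="<diff here>",
    output_format="Finding -> Evidence -> Recommendation",
    constraints="If uncertain, say so explicitly.",
    examples=["Input: ... / Output: ..."],
)
```

Treating the prompt as a function of its components is what makes the "prompts like functions, not text" principle concrete: each block can be swapped or tested independently.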
A systematic framework for encouraging AI honesty and reliability:
What it is: A collection of prompt engineering techniques that encourage models to express uncertainty, state assumptions, and acknowledge limitations. This builds on AI safety research, model calibration work (Guo et al., 2017), and standard software documentation practices.
Core Techniques:
- Explicit Uncertainty Acknowledgment: `[CONFIDENT]`, `[LIKELY]`, `[UNCERTAIN]` markers
- Assumption Transparency: Making AI assumptions explicit
- Limitation Disclosure: Honesty about what the AI can't do
- Honest Reporting Framework: Structured honesty in responses
- Validation Requirements: Built-in self-checking mechanisms
Where it comes from: I've synthesized these techniques from AI safety research, Anthropic's work on AI honesty, and standard software practices. My contribution is organizing them into a coherent framework and providing practical examples.
Real-world testing: In my experience building AI agents for Pineapple Lab products, these structured approaches have significantly reduced instances where AI generates non-functional code or makes overconfident claims. I haven't conducted formal benchmarking yet, so I can't provide quantitative metrics.
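The confidence markers listed above become most useful when downstream code can act on them. Here's a hypothetical helper that buckets a model's claims by marker so low-confidence statements can be routed to human review; the marker vocabulary comes from the list above, but the function itself is an illustration, not part of any official TOE tooling.

```python
import re

MARKERS = ("CONFIDENT", "LIKELY", "UNCERTAIN")

def split_by_confidence(response: str) -> dict[str, list[str]]:
    """Group the claims in a model response by their confidence marker."""
    buckets = {m: [] for m in MARKERS}
    # Each claim runs from its [MARKER] up to the next marker or newline.
    for marker, claim in re.findall(
            r"\[(CONFIDENT|LIKELY|UNCERTAIN)\]\s*([^\[\n]+)", response):
        buckets[marker].append(claim.strip())
    return buckets

reply = "[CONFIDENT] The API returns JSON. [UNCERTAIN] Rate limits may apply."
buckets = split_by_confidence(reply)
```

Anything landing in the `UNCERTAIN` bucket is a candidate for verification before it reaches a user.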
Learn more → TOE Context Engineering for Agents
XML Tag Structuring (Anthropic Best Practice) (Guide)
- Source: Documented by Anthropic in their official prompt engineering guide
- My contribution: Extensive testing and practical examples from building AI agents
- Semantic organization of prompt components
- Particularly powerful with Claude models
- Improves parseability and accuracy
- Enables complex multi-component prompts
Chain-of-Thought Reasoning (Wei et al., 2022; Kojima et al., 2022) (Guide)
- Source: Introduced by Wei et al. (2022) with 19,810+ citations, extended by Kojima et al. (2022)
- My contribution: Practical application patterns and production examples
- Explicit step-by-step problem solving
- Zero-shot and few-shot CoT patterns
- Validation and self-correction mechanisms
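The two CoT variants above can be sketched as tiny prompt builders. The trigger phrase "Let's think step by step" is from Kojima et al. (2022); the worked-example format in the few-shot version is my own illustration.

```python
ZERO_SHOT_TRIGGER = "Let's think step by step."

def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append the reasoning trigger, no examples needed."""
    return f"Q: {question}\nA: {ZERO_SHOT_TRIGGER}"

def few_shot_cot(question: str, worked_examples: list[str]) -> str:
    """Few-shot CoT: prepend worked examples that show the reasoning style."""
    shots = "\n\n".join(worked_examples)
    return f"{shots}\n\nQ: {question}\nA:"

p = zero_shot_cot("A train travels 120 km in 2 hours. What is its speed?")
```

Zero-shot CoT costs nothing but a sentence; few-shot CoT spends tokens on examples to steer the reasoning format more tightly.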
Advanced Conditional Logic (Guide)
- Complex decision trees and branching logic
- Multi-condition evaluation frameworks
- Error detection and recovery patterns
- Production-ready reliability
Prompt Chaining (Guide)
- Sequential task decomposition
- Parallel processing chains
- Complex workflow management
- Synthesis of multiple analyses
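A sequential chain like the one described above can be sketched as a loop where each step's output feeds the next step's template. `call_model` is a stub standing in for a real API client; the step templates are invented for illustration.

```python
def call_model(prompt: str) -> str:
    """Stub for an LLM call; replace with a real client in practice."""
    return f"<output of: {prompt[:40]}>"

def run_chain(task: str, step_templates: list[str]) -> str:
    """Sequential prompt chaining: thread each result into the next step."""
    result = task
    for template in step_templates:
        result = call_model(template.format(input=result))
    return result

final = run_chain(
    "Quarterly sales report",
    ["Extract key figures from: {input}",
     "Summarize trends in: {input}",
     "Draft an executive summary of: {input}"],
)
```

Decomposing one big prompt into a chain like this trades extra calls for smaller, more checkable steps.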
A comprehensive 30-minute introduction for complete beginners covering:
- What prompt engineering actually is
- The difference between chatting and engineering
- Your first structured prompt (hands-on)
- Introduction to Truth Optimization
- Clear learning path from beginner to advanced
Practical guide to Anthropic's XML tag technique:
- Core tag categories and best practices
- Chain-of-thought integration with XML
- Multi-component prompt patterns
- Conditional logic with XML structure
- Real-world implementation examples from my projects
Application of research-backed reasoning frameworks:
- Zero-shot and few-shot chain-of-thought (Wei et al., Kojima et al.)
- Multi-condition logic trees
- Recursive reasoning patterns
- Prompt chaining (sequential and parallel)
- Self-correction and validation mechanisms
- Error detection and recovery patterns
- `prompt_registry.json` – Structured schema for prompts-as-code
- `tests/` – Prompt regression + consistency tests
- `agent_templates/` – Blueprints for repeatable, role-specific agents
- Integration examples – Demonstrating XML + CoT + TOE together
- Formal benchmarks – Proper testing methodology with controlled experiments
- Case studies – Detailed examples from production systems with real data
- Video tutorials – Walking through techniques with practical demonstrations
- Community patterns – Collection of patterns from other practitioners
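To make the prompts-as-code idea concrete, here is a guess at what one `prompt_registry.json` entry might look like. The schema (field names and values) is entirely my assumption about a planned format, not a published spec.

```python
import json

# Hypothetical registry entry; every field name here is an assumption.
entry = {
    "id": "code-review-v1",
    "role": "senior code reviewer",
    "components": ["role", "instructions", "reasoning", "format"],
    "models_tested": ["claude", "gpt", "gemini"],
    "version": "1.0.0",
}

serialized = json.dumps(entry, indent=2)   # what would live in the registry file
restored = json.loads(serialized)          # round-trips cleanly
```

Versioned entries like this are what make prompt regression tests possible: a test suite can load each entry and assert its output shape hasn't drifted.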
Hey, I'm David Edwards. I'm 21, living in South Africa, and building AI products like RITA and OS Brick at Pineapple Lab.
Here's the honest truth: I'm not a PhD researcher or an AI scientist. I'm a builder who loves to learn. When I started with prompt engineering, I spent months reading scattered docs from Anthropic, Google, and OpenAI. I studied papers by Wei, Kojima, and others. It was overwhelming.
This repo is what I wish existed when I started: everything organized, practical examples from real projects, and a clear path from "complete beginner" to "building production systems."
What makes this valuable:
- Starting Guide – I distilled months of learning into 30 minutes
- XML Tag Structuring – Anthropic's technique with my real-world examples
- Advanced Logic Building – Wei & Kojima's research applied to actual products
- Truth Optimization Engine – My framework for AI honesty, synthesized from multiple sources
I didn't invent these techniques. But I've tested them for hundreds of hours, organized them coherently, and documented what actually works in production.
Why I'm sharing: I believe in learning in public. I'm standing on the shoulders of giants and documenting the climb so you don't have to figure it all out alone like I did.
Feedback is gold: If you spot errors, have suggestions, or discover better patterns – please contribute. I'm learning too, and honest corrections make this better for everyone.
Thanks for being curious and learning with me,
– David Edwards
21-year-old builder, learning in public
Pineapple Lab, South Africa
- OpenAI Prompt Engineering Docs
- Anthropic Claude Prompting
- Anthropic XML Tag Documentation
- Google Gemini Prompt Design
- Microsoft Azure OpenAI Prompt Engineering
- Wei, J., et al. (2022). Chain-of-thought prompting elicits reasoning in large language models. arXiv:2201.11903
- Kojima, T., et al. (2022). Large language models are zero-shot reasoners. arXiv:2205.11916
- Zhang, Z., et al. (2022). Automatic chain of thought prompting in large language models. arXiv:2210.03493
Contributions are strongly encouraged – especially from practitioners working in:
- Empirical validation of methodologies with benchmarks
- Integration examples combining XML + CoT + TOE
- Prompt security and safety research
- Multi-agent orchestration patterns
- Context engineering for complex domains
- Performance optimizations and efficiency improvements
- Industry-specific applications and case studies
- Fork the repository
- Create a feature branch for your contribution
- Follow the existing documentation style and structure
- Include practical examples and real-world use cases
- Provide citations for any research or methodologies referenced
- Test your prompts across multiple models when possible (GPT, Claude, Gemini)
- Submit a pull request with a clear description of your contribution
See our Contribution Guide for detailed guidelines.
We're building an interactive tool that will allow you to:
Benchmark Your Prompts
- Paste your current prompt into our benchmarking system
- Analyze against Pineapple Lab best practices and guidelines
- Get scored on: Structure, Clarity, TOE Integration, Logic Framework, Security
Get Enhanced Versions
- Receive AI-generated improvements based on our methodologies
- See side-by-side comparison: Your Prompt vs. Enhanced Prompt
- Understand exactly what was improved and why
Export for Your Tools
- Download enhanced prompts as formatted Markdown files
- Ready to use in Cursor, Claude Code, GitHub Copilot, or any IDE
- Includes embedded comments explaining the improvements
Iterative Improvement
- Re-benchmark improved prompts to track progress
- Learn through practical application
- Build your prompt engineering skills over time
Community Leaderboard
- See how your prompts compare (anonymously)
- Learn from top-performing prompt patterns
- Contribute your best prompts to help others
Educational Mode
- Step-by-step explanations of improvements
- Links to relevant documentation sections
- Suggested learning path based on your prompt analysis
Want to help build this? We're looking for contributors with expertise in:
- Frontend development (React, Next.js)
- LLM integration and API design
- Prompt evaluation and scoring algorithms
- UX design for developer tools
- Testing and quality assurance
Interested? Open an issue with the tag [Benchmarking System] to discuss implementation ideas or volunteer to contribute.
Star this repository to get notified when the benchmarking system launches!
Expected Launch: Q2 2025
- Week 1: Starting Guide for Newbies
- Week 2-3: Prompt Engineering Blueprint
- Week 4: Practice with real projects, explore XML Tags
- Review Prompt Engineering Blueprint
- Deep dive: XML Tag Structuring Guide
- Deep dive: Advanced Logic Building Guide
- Apply to production: TOE Context Engineering
- Master all core documents
- Implement in production systems
- Benchmark and validate
- Contribute improvements back to the community
Every technique is properly attributed. I cite the researchers (Wei et al., Kojima et al.) and companies (Anthropic, Google, OpenAI) who did the original work. My contribution is organization and practical application.
From complete newbie to advanced production systems – I've organized the full learning journey with appropriate depth at each level. This is what I wish existed when I started.
Real-world examples from my work, production-tested patterns, and actionable guidance. No theoretical fluff – just what actually works.
I show how techniques work together: XML tags + Chain-of-thought + Truth Optimization patterns = More reliable AI systems.
Built to share knowledge openly. Your contributions and corrections make this better for everyone.
Techniques I've tested across GPT (OpenAI), Claude (Anthropic), Gemini (Google), and DeepSeek.
```
Pineapple_Lab_OS_Docs/
│
├── README.md                                               # You are here
│
├── Core Learning Path
│   ├── Starting_Guide_for_Newbies_Prompt_Engineering.md    # Start here (beginners)
│   ├── Prompt_Engineering_Blueprint.md                     # Master framework
│   └── Pineapple_Lab_TOE_Context_Engineering_for_Agents.md # Production systems
│
├── Advanced Technique Guides
│   ├── XML_Tag_Structuring_Guide.md                        # Advanced organization
│   └── Advanced_Logic_Building_Guide.md                    # Sophisticated reasoning
│
└── Community
    ├── Contribution_Guide.md                               # How to contribute
    └── LICENSE                                             # Apache 2.0
```
Want immediate improvements? Try these:
Before: "Write marketing copy"
After: "You are an experienced email marketing specialist. Write marketing copy..."
Add to any prompt: "Use [CONFIDENT] for things you're sure about and [UNCERTAIN] for things you're not."
```xml
<instructions>
What you want done
</instructions>

<data>
The information to process
</data>

<format>
How you want the response
</format>
```

Add to any complex problem: "Let's think step by step:"
Before: "Analyze this"
After: "Analyze this and format as: Finding β Evidence β Recommendation"
→ Start with the Newbies Guide
→ Read the Prompt Engineering Blueprint
→ Explore TOE Context Engineering
→ Read the Contribution Guide
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
TL;DR: Use it freely. Modify it. Build on it. Just give attribution and don't sue us.
Questions? Open an issue on GitHub
Improvements? Submit a pull request
Collaboration? Reach out through GitHub discussions
To be completely transparent about what I've added vs. what I learned from others:
- Chain-of-Thought reasoning – Wei et al. (2022), Kojima et al. (2022)
- XML tag structuring – Anthropic official documentation
- Few-shot prompting – Brown et al. (2020)
- System prompts, clear instructions – Universal best practices from all major AI labs
- Prompt chaining concepts – ReAct paper, AutoGPT, and agent research
- Model calibration – Guo et al. (2017)
- Organization: Compiled scattered techniques from dozens of sources into one coherent framework
- Examples: Created 50+ specific examples from building PropTech AI agents (RITA, OS Brick)
- Patterns: Documented recurring patterns I've found effective in production
- Synthesis: Connected techniques from different sources into integrated workflows
- Practical focus: Emphasized what works in production vs. pure theory
- Domain application: Showed how to apply general techniques to specific domains
- Learning path: Created structured progression from beginner to advanced
- TOE Framework: Organized AI honesty techniques into a systematic approach
You could spend months reading Anthropic's docs, Google's docs, OpenAI's docs, and 20+ research papers.
Or you could read this guide where I've done that work and organized it for you.
That's the service I'm providing: curation, synthesis, and practical application guidance.
This is a practitioner's guide based on synthesis of existing work.
- Add 100+ more real-world examples from RITA and other Pineapple Lab products
- Create video tutorials demonstrating techniques
- Build community of practitioners sharing patterns
- Document failure cases and anti-patterns
- Complete comprehensive bibliography with all sources
- Conduct formal benchmarking with proper methodology
- Collaborate with researchers to validate claims with data
- Publish case studies with quantitative results from production
- Open-source example codebases showing techniques in action
- Present findings at conferences (as practitioner insights)
- If pursuing academic path: Partner with researchers on formal studies
- Develop novel techniques specific to my domain (if they emerge)
- Build tools that automate these patterns
- Publish peer-reviewed papers (if warranted by genuine novel findings)
I'm starting as a documenter and practitioner. My goal is to eventually contribute original research if I discover something genuinely novel. But I'm not there yet, and I won't pretend I am.
Honest positioning now → credible voice later.
Pineapple Lab – Learning in Public
At Pineapple Lab, I believe in building openly and learning collaboratively. These resources are made freely available so others can learn from what I'm discovering, use what's helpful, and improve upon it.
I'm not claiming to revolutionize AI. I'm just sharing what I learn as I build, hoping it helps others on the same journey.
Learn with me.
Last Updated: October 2025
Version: 2.1 (Honest repositioning - practitioner's guide, not research)
Previous: v2.0 (Beginner guide, XML structuring, advanced logic)
Repository: github.com/pineapple-lab/docs
Let's learn and build together, honestly.