Agentic Proposal Generator: Multi-agent AI system using Claude Agent SDK that converts B2B discovery call transcripts into business proposals.


Agentic Proposal Generator


Transform discovery call transcripts into professional business proposals using AI multi-agent orchestration.

This is a multi-agent AI system powered by Claude Agent SDK that uses specialist agents (Solutions Engineer, Solution Architect, Commercial Analyst, Proposal Writer, Quality Reviewer) working in parallel phases to create high-quality B2B proposals with built-in self-correction.


🌟 What Makes This Different?

Multi-Agent System with Parallel Execution:

  • Parallel execution - Independent agents work simultaneously
  • Self-correcting - Quality reviewer provides feedback, agents iterate
  • Specialist expertise - Each agent brings deep domain knowledge
  • Orchestrated workflow - Optimized phases with built-in quality loops

System Architecture

┌─────────────────────────────────────────────────────────────────┐
│                     MULTI-AGENT SYSTEM                          │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Phase 1: PARALLEL ANALYSIS                                     │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐           │
│  │  Solutions   │  │  Solution    │  │  Commercial  │           │
│  │  Engineer    │  │  Architect   │  │  Analyst     │           │
│  └──────────────┘  └──────────────┘  └──────────────┘           │
│         │                 │                  │                  │
│         └─────────────────┴──────────────────┘                  │
│                           │                                     │
│  Phase 2: SYNTHESIS                                             │
│                  ┌───────────────┐                              │
│                  │   Proposal    │                              │
│                  │   Writer      │                              │
│                  └───────────────┘                              │
│                           │                                     │
│  Phase 3: QUALITY REVIEW                                        │
│                  ┌───────────────┐                              │
│                  │   Quality     │                              │
│                  │   Reviewer    │ ←── Feedback Loop            │
│                  └───────────────┘                              │
│                           │                                     │
│                    Final Proposal                               │
│                                                                 │
│  🛠️  MCP TOOLS (Available to All Agents)                        │
│  • analyze_company_from_transcript  • calculate_roi             │
│  • assess_technical_risks           • check_proposal_consistency│
│  • check_claim_reasonableness       • extract_key_entities      │
└─────────────────────────────────────────────────────────────────┘

🎯 Key Features

✅ 5 Specialist AI Agents

  • Solutions Engineer: Analyzes discovery calls with technical and business expertise
  • Solution Architect: Designs implementation approach
  • Commercial Analyst: Creates pricing and ROI analysis
  • Proposal Writer: Synthesizes everything into cohesive narrative
  • Quality Reviewer: Validates quality with rejection authority

✅ MCP Tools Integration

  • 6 AI-powered tools for enhanced capabilities
  • Company research, ROI calculation, risk assessment
  • Consistency checking, fact verification, entity extraction
  • Smart tool use - Claude decides when to leverage each tool

✅ Human-in-the-Loop Review

  • Interactive approval after AI quality review
  • View, approve, or request changes to proposals
  • Iterative refinement with human feedback
  • Skippable for automated workflows (--skip-human-review)

✅ Self-Correction Loop

  • Quality threshold enforcement (default: 8/10)
  • Agents iterate based on reviewer feedback
  • Automatic consistency checking (timeline, budget, scope)
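The self-correction loop can be sketched as a threshold-gated iteration (a minimal illustration; the function names, iteration cap, and scoring scale are assumptions, not the project's actual API):

```python
import asyncio

QUALITY_THRESHOLD = 8.0  # default threshold, as documented above
MAX_ITERATIONS = 3       # hypothetical safety cap to avoid endless loops

async def review_loop(draft, review_fn, revise_fn, threshold=QUALITY_THRESHOLD):
    """Iterate until the reviewer's score meets the threshold."""
    for _ in range(MAX_ITERATIONS):
        score, feedback = await review_fn(draft)
        if score >= threshold:
            return draft, score                    # approved
        draft = await revise_fn(draft, feedback)   # agents revise from feedback
    return draft, score                            # best effort after the cap
```

Raising `--quality-threshold` tightens this gate at the cost of more revision rounds (and API calls).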

✅ Parallel Orchestration

  • Solutions Engineer, Solution Architect, Commercial Analyst run in parallel
  • Cost-optimized with transcript summarization

✅ Production-Ready

  • Built on Claude Agent SDK
  • Async/await patterns for performance
  • Comprehensive error handling
  • Full type hints and validation

🚀 Quick Start

Prerequisites

  • Python 3.10+
  • An Anthropic API key
  • UV (recommended) or pip

Installation

This project uses UV for fast, reliable dependency management.

# Clone repository
git clone https://github.com/griv32/agentic-proposal-generator.git
cd agentic-proposal-generator

# Install dependencies with UV (automatically creates venv)
uv sync

# Configure API key
echo "ANTHROPIC_API_KEY=your-key-here" > .env

Alternative (without UV):

# Traditional pip installation
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -e .

Generate Your First Proposal

Option 1: Use sample data

# With UV
uv run python example_usage.py

# Or activate venv first
source .venv/bin/activate
python example_usage.py

Option 2: Use your own transcript

# With UV
uv run agentic-proposal-generator --input your_transcript.txt --output ./proposals

# Or with activated venv
agentic-proposal-generator --input your_transcript.txt --output ./proposals

Done! Watch the agents collaborate in real-time and receive a professional proposal.


📖 How It Works

The Multi-Agent Process

1. Three specialist agents run in PARALLEL:
   → Solutions Engineer extracts requirements
   → Solution Architect designs approach
   → Commercial Analyst creates pricing/ROI
           ↓
2. Proposal Writer synthesizes all parallel inputs
           ↓
3. Quality Reviewer evaluates (score 1-10)
           ↓
4. If score < 8: Provide feedback → Agents revise → Re-review
   If score ≥ 8: Approve → Final proposal ready
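Phase 1 maps naturally onto asyncio: the three independent specialists can be awaited concurrently and their results handed to the writer. This is an illustrative sketch only; the agent names and function signatures here are placeholders, not the SDK's actual API:

```python
import asyncio

async def run_agent(name: str, transcript: str) -> str:
    # Placeholder for a real Claude Agent SDK call (I/O-bound API work)
    await asyncio.sleep(0)
    return f"{name} analysis"

async def phase_one(transcript: str) -> list[str]:
    # Phase 1: the three specialists run concurrently, not sequentially
    return await asyncio.gather(
        run_agent("solutions_engineer", transcript),
        run_agent("solution_architect", transcript),
        run_agent("commercial_analyst", transcript),
    )
```

Because the agents are I/O-bound API calls, `asyncio.gather` cuts wall-clock time roughly to the slowest agent's latency, though token costs stay the same.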

Example Agent Collaboration

Solutions Engineer: "Customer wants 6-month timeline, $200K budget"
Solution Architect: "Here's a 52-week implementation plan"
Quality Reviewer: "❌ REJECTED - Timeline mismatch (6 months ≠ 52 weeks)"
Solution Architect: "Revised to 24-week plan with parallel workstreams"
Quality Reviewer: "✅ APPROVED - Score 9/10"

This self-correction happens automatically!


💻 Usage

Command Line

# Basic usage (with UV) - includes human review
uv run agentic-proposal-generator --input transcript.txt

# Skip human review for automation
uv run agentic-proposal-generator --input transcript.txt --skip-human-review

# Custom output location
uv run agentic-proposal-generator --input transcript.txt --output ./my-proposals

# Adjust quality threshold
uv run agentic-proposal-generator --input transcript.txt --quality-threshold 9.0

# See all options
uv run agentic-proposal-generator --help

Python API

import asyncio
from agentic_proposal_generator import AgenticProposalGenerator

async def generate():
    generator = AgenticProposalGenerator()

    with open("transcript.txt") as f:
        transcript = f.read()

    async for message in generator.generate_proposal(transcript):
        print(message, end="", flush=True)

asyncio.run(generate())

πŸ—οΈ Architecture

Project Structure

agentic_proposal_generator/
├── .claude/
│   └── agents/               # Agent definitions (5 specialists)
│       ├── solutions_engineer.md
│       ├── solution_architect.md
│       ├── commercial_analyst.md
│       ├── proposal_writer.md
│       └── quality_reviewer.md
├── src/
│   └── agentic_proposal_generator/
│       ├── agents/                  # Agent implementations
│       ├── mcp/                     # MCP tools
│       │   ├── tools.py             # 6 AI-powered tools
│       │   └── __init__.py
│       ├── parallel_orchestrator.py # Parallel execution engine
│       ├── events.py                # Event system
│       ├── output_parser.py         # Terminal formatting
│       └── cli.py                   # Command-line interface
├── sample_data/
│   └── discovery_call_transcript.txt
└── HUMAN_REVIEW.md           # Human-in-the-loop docs

Agent Definitions

Each agent is defined in .claude/agents/[name].md with:

  • Clear role and expertise
  • Input/output specifications
  • Quality standards
  • Collaboration guidelines
  • Examples of good vs bad output

🎓 What You Get

Every generated proposal includes:

  • Executive Summary - Compelling, customer-specific value proposition
  • Customer Situation - Pain points and requirements from transcript
  • Proposed Solution - Technical approach and implementation plan
  • Implementation Phases - Detailed timeline with activities and deliverables
  • ROI Analysis - Credible, fact-based return on investment
  • Investment Summary - Pricing breakdown and payment terms
  • Next Steps - Clear path forward

Output Formats:

  • Markdown (.md) - Human-readable
  • JSON (.json) - Machine-parseable

🔧 Configuration

Required

# Anthropic API Key
ANTHROPIC_API_KEY=your-key-here

Optional

# Override default model
CLAUDE_MODEL=claude-sonnet-4-20250514
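A typical pattern for reading these variables at startup might look like this (a sketch only; the function name and default-handling are assumptions, not the project's actual config code):

```python
import os

DEFAULT_MODEL = "claude-sonnet-4-20250514"  # default shown above

def load_config() -> dict:
    """Read required and optional settings from the environment."""
    api_key = os.environ.get("ANTHROPIC_API_KEY")
    if not api_key:
        raise RuntimeError("ANTHROPIC_API_KEY is required")
    # CLAUDE_MODEL is optional; fall back to the documented default
    model = os.environ.get("CLAUDE_MODEL", DEFAULT_MODEL)
    return {"api_key": api_key, "model": model}
```

With a `.env` file, a loader such as python-dotenv can populate the environment before this runs.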

🧪 Development

Setup Development Environment

# Install with dev dependencies (UV handles this automatically)
uv sync

# Format code
uv run black src/ tests/
uv run isort src/ tests/

# Type checking
uv run mypy src/

# Lint
uv run ruff check src/ tests/

Testing

This project uses a comprehensive testing strategy with mocked tests for development and live tests for validation.

Mock Tests (Fast, Free, CI-Ready)

Mock tests use dummy Claude responses and make zero API calls. Run these during development:

# Run all mock tests (unit + integration)
uv run pytest tests/unit tests/integration -v

# Run only unit tests (individual agents)
uv run pytest tests/unit -v

# Run only integration tests (full workflow)
uv run pytest tests/integration -v

# Run with coverage report
uv run pytest tests/unit tests/integration --cov=src --cov-report=html

# Fast check before committing
uv run pytest tests/unit tests/integration -x  # Stop on first failure

Benefits:

  • ⚡ Fast: Complete test suite runs in ~5 seconds
  • 💰 Free: Zero API costs
  • 🔄 Reliable: No network dependencies or rate limits
  • 🚀 CI/CD Ready: Perfect for GitHub Actions
  • 🎯 Coverage: Tests all agents, events, cost tracking, and human review
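A mock test in this style stubs the Claude client so no API request is made. This is a hypothetical sketch using `unittest.mock.AsyncMock`; the project's real fixtures live in `tests/fixtures/`:

```python
import asyncio
from unittest.mock import AsyncMock

async def generate_summary(client, transcript: str) -> str:
    # Code under test: awaits the (mocked) Claude client
    response = await client.complete(prompt=f"Summarize: {transcript}")
    return response.strip()

def test_generate_summary_uses_mock():
    # AsyncMock stands in for the real API client: zero cost, no network
    client = AsyncMock()
    client.complete.return_value = "  Acme needs a CRM migration.  "
    result = asyncio.run(generate_summary(client, "call transcript"))
    assert result == "Acme needs a CRM migration."
    client.complete.assert_awaited_once()
```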

Live Tests (Real API, Costs Money)

Live tests make real Claude API calls and cost ~$0.15-$0.20 per run:

# ⚠️ WARNING: These tests cost money!
export ANTHROPIC_API_KEY='your-key-here'

# Run all live tests
uv run pytest tests/manual/ -v

# Run specific live test
uv run pytest tests/manual/test_parallel_orchestration_live.py -v

When to use live tests:

  • Before major releases
  • After significant agent logic changes
  • To validate real API integration
  • To benchmark actual costs

Test Structure

tests/
├── unit/                          # Individual agent tests
│   ├── test_solutions_engineer.py
│   ├── test_solution_architect.py
│   ├── test_commercial_analyst.py
│   ├── test_proposal_writer.py
│   └── test_quality_reviewer.py
├── integration/                   # End-to-end workflow tests
│   ├── test_full_workflow.py
│   ├── test_cost_tracking.py
│   └── test_human_review.py
├── manual/                        # Live tests (cost money!)
│   ├── test_parallel_orchestration_live.py
│   └── test_cost_optimization_live.py
├── fixtures/                      # Mock data and responses
│   ├── mock_responses.py
│   └── sample_data.py
└── conftest.py                    # Pytest configuration

Testing Agent Behavior

  1. Start with 3 agents (analyst, writer, reviewer)
  2. Test with simple transcript using mock tests
  3. Validate self-correction loop
  4. Add specialist agents incrementally
  5. Refine agent prompts based on output
  6. Run live tests before releasing

📚 Resources


🤝 Contributing

Contributions welcome! This project follows standard open-source contribution practices.

Adding New Agents

  1. Create agent definition in .claude/agents/[name].md
  2. Define role, expertise, and quality standards
  3. Update orchestrator to include new agent
  4. Test in isolation before integrating
  5. Update documentation
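A new agent definition might follow a shape like the skeleton below. This is a hypothetical layout; check the existing files in .claude/agents/ for the actual format before adding one:

```markdown
# Risk Analyst

## Role
Assess delivery and contractual risks in the proposed solution.

## Inputs
- Solution Architect's implementation plan
- Commercial Analyst's pricing model

## Outputs
- Ranked risk register with mitigations

## Quality Standards
- Every risk must cite evidence from the transcript or plan
```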

πŸ“ License

GPL v3 - See LICENSE file for details


⚠️ Important Notes

API Costs

  • Typical proposal: $0.15-$0.20 in API costs (with optimizations)
  • Cost optimization via transcript summarization (23-50% savings)
  • Parallel execution reduces time, not cost (same API usage)
  • Quality iterations may increase costs
  • Use quality threshold to balance cost vs quality

Limitations

  • Requires Claude API (not OpenAI compatible)
  • Python 3.10+ only
  • English-only (multi-language planned)
  • No persistent memory (stateless sessions)

Security

  • Never commit API keys to repository
  • Use .env file (added to .gitignore)
  • Review agent outputs before sending to customers
  • Human-in-the-loop review enabled by default
  • MCP tools analyze transcript data directly (no external APIs needed)
  • See SECURITY.md for comprehensive security guidelines

Disclaimer

The transcription files and text used in examples contain entirely fictional content generated for demonstration purposes only. All company names, person names, business scenarios, and details are completely fictitious and do not represent, reference, or relate to any real individuals, companies, or business situations. Any resemblance to actual persons, living or dead, or actual companies is purely coincidental.


💬 Support


Ready to transform your discovery calls into winning proposals with AI-powered multi-agent orchestration?

Get Started | View Examples | Human Review
