Transform discovery call transcripts into professional business proposals using AI multi-agent orchestration.
This is a multi-agent AI system powered by the Claude Agent SDK. Specialist agents (Solutions Engineer, Solution Architect, Commercial Analyst, Proposal Writer, Quality Reviewer) work in parallel phases to create high-quality B2B proposals with built-in self-correction.
Multi-Agent System with Parallel Execution:
- Parallel execution - Independent agents work simultaneously
- Self-correcting - Quality reviewer provides feedback, agents iterate
- Specialist expertise - Each agent brings deep domain knowledge
- Orchestrated workflow - Optimized phases with built-in quality loops
```
                        MULTI-AGENT SYSTEM

  Phase 1: PARALLEL ANALYSIS
   ┌──────────────┐    ┌──────────────┐    ┌──────────────┐
   │  Solutions   │    │  Solution    │    │  Commercial  │
   │  Engineer    │    │  Architect   │    │  Analyst     │
   └──────┬───────┘    └──────┬───────┘    └──────┬───────┘
          └───────────────────┼───────────────────┘
                              │
  Phase 2: SYNTHESIS          ▼
                     ┌─────────────────┐
                     │    Proposal     │
                     │     Writer      │
                     └────────┬────────┘
                              │
  Phase 3: QUALITY REVIEW     ▼
                     ┌─────────────────┐
                     │     Quality     │
                     │    Reviewer     │ ◄── Feedback Loop
                     └────────┬────────┘
                              │
                              ▼
                       Final Proposal

  MCP TOOLS (Available to All Agents)
  • analyze_company_from_transcript   • calculate_roi
  • assess_technical_risks            • check_proposal_consistency
  • check_claim_reasonableness       • extract_key_entities
```
### 5 Specialist AI Agents
- Solutions Engineer: Analyzes discovery calls with technical and business expertise
- Solution Architect: Designs implementation approach
- Commercial Analyst: Creates pricing and ROI analysis
- Proposal Writer: Synthesizes everything into cohesive narrative
- Quality Reviewer: Validates quality with rejection authority
### MCP Tools Integration
- 6 AI-powered tools for enhanced capabilities
- Company research, ROI calculation, risk assessment
- Consistency checking, fact verification, entity extraction
- Smart tool use - Claude decides when to leverage each tool
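The project's actual tool implementations live in `src/agentic_proposal_generator/mcp/tools.py`. As a rough, hypothetical illustration of the kind of deterministic helper an agent might call, a simplified ROI calculation could look like this (the real `calculate_roi` tool may work quite differently):

```python
# Hypothetical, simplified sketch of an ROI-style helper.
# Not the project's actual tool -- field names and formulas are illustrative.
def calculate_roi(annual_benefit: float, total_cost: float, years: int = 3) -> dict:
    """Return simple ROI metrics for a proposed investment."""
    if total_cost <= 0:
        raise ValueError("total_cost must be positive")
    total_benefit = annual_benefit * years
    roi_pct = (total_benefit - total_cost) / total_cost * 100
    payback_months = total_cost / (annual_benefit / 12)
    return {
        "roi_percent": round(roi_pct, 1),
        "payback_months": round(payback_months, 1),
    }

print(calculate_roi(annual_benefit=150_000, total_cost=200_000))
# → {'roi_percent': 125.0, 'payback_months': 16.0}
```

Wrapping deterministic math in a tool like this keeps the numbers exact; the agent decides when to call it rather than estimating figures in prose.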
### Human-in-the-Loop Review
- Interactive approval after AI quality review
- View, approve, or request changes to proposals
- Iterative refinement with human feedback
- Skippable for automated workflows (`--skip-human-review`)
### Self-Correction Loop
- Quality threshold enforcement (default: 8/10)
- Agents iterate based on reviewer feedback
- Automatic consistency checking (timeline, budget, scope)
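As a minimal sketch of the review-and-revise pattern described above: the real loop lives in the orchestrator and calls Claude-backed agents, whereas here `review` and `revise` are stand-in callables you supply.

```python
# Illustrative sketch only -- not the project's actual orchestrator code.
def refine_until_approved(draft, review, revise, threshold=8.0, max_rounds=3):
    """Iterate draft -> review -> revise until the score meets the threshold."""
    score, feedback = review(draft)
    rounds = 0
    while score < threshold and rounds < max_rounds:
        draft = revise(draft, feedback)   # agent revises using reviewer feedback
        score, feedback = review(draft)   # re-review the revised draft
        rounds += 1
    return draft, score
```

Capping the loop with `max_rounds` bounds cost: each extra iteration means more API calls, which is why the quality threshold is configurable.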
### Parallel Orchestration
- Solutions Engineer, Solution Architect, Commercial Analyst run in parallel
- Cost-optimized with transcript summarization
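The phase-1 fan-out can be sketched with `asyncio.gather`. This is a hypothetical illustration, not the real `parallel_orchestrator.py`, which streams output from Claude-backed agents instead of the stub used here:

```python
import asyncio

# Stand-in for a Claude API call; the real agents do LLM work here.
async def run_agent(name: str, transcript: str) -> str:
    await asyncio.sleep(0.01)
    return f"{name}: analysis of {len(transcript)} chars"

async def phase_one(transcript: str) -> list[str]:
    agents = ["Solutions Engineer", "Solution Architect", "Commercial Analyst"]
    # All three independent analyses run concurrently
    return await asyncio.gather(*(run_agent(a, transcript) for a in agents))

results = asyncio.run(phase_one("sample transcript"))
print(results)
```

Because the three analyses are independent, running them concurrently cuts wall-clock time roughly to the slowest agent's latency, without changing total API usage.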
### Production-Ready
- Built on Claude Agent SDK
- Async/await patterns for performance
- Comprehensive error handling
- Full type hints and validation
### Prerequisites
- Python 3.10 or higher
- Anthropic API key (get one here)
This project uses UV for fast, reliable dependency management.
```bash
# Clone repository
git clone https://github.com/griv32/agentic-proposal-generator.git
cd agentic-proposal-generator

# Install dependencies with UV (automatically creates venv)
uv sync

# Configure API key
echo "ANTHROPIC_API_KEY=your-key-here" > .env
```

Alternative (without UV):
```bash
# Traditional pip installation
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -e .
```

Option 1: Use sample data
```bash
# With UV
uv run python example_usage.py

# Or activate venv first
source .venv/bin/activate
python example_usage.py
```

Option 2: Use your own transcript
```bash
# With UV
uv run agentic-proposal-generator --input your_transcript.txt --output ./proposals

# Or with activated venv
agentic-proposal-generator --input your_transcript.txt --output ./proposals
```

Done! Watch the agents collaborate in real time and receive a professional proposal.
```
1. Three specialist agents run in PARALLEL:
   → Solutions Engineer extracts requirements
   → Solution Architect designs approach
   → Commercial Analyst creates pricing/ROI
        ↓
2. Proposal Writer synthesizes all parallel inputs
        ↓
3. Quality Reviewer evaluates (score 1-10)
        ↓
4. If score < 8: Provide feedback → Agents revise → Re-review
   If score ≥ 8: Approve → Final proposal ready
```
```
Solutions Engineer: "Customer wants 6-month timeline, $200K budget"
Solution Architect: "Here's a 52-week implementation plan"
Quality Reviewer:   "❌ REJECTED - Timeline mismatch (6 months ≠ 52 weeks)"
Solution Architect: "Revised to 24-week plan with parallel workstreams"
Quality Reviewer:   "✅ APPROVED - Score 9/10"
```

This self-correction happens automatically!
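The timeline check in the exchange above can be reduced to a toy rule. This is purely illustrative; the real `check_proposal_consistency` tool is LLM-assisted and covers budget and scope as well:

```python
# Toy illustration of a timeline consistency rule -- not the project's
# actual check_proposal_consistency tool.
def timeline_consistent(budget_months: int, plan_weeks: int,
                        tolerance_weeks: int = 2) -> bool:
    """Does the implementation plan fit the customer's stated timeline?"""
    allowed_weeks = budget_months * 52 / 12 + tolerance_weeks
    return plan_weeks <= allowed_weeks

print(timeline_consistent(6, 52))  # 52-week plan vs 6-month ask → False
print(timeline_consistent(6, 24))  # revised 24-week plan → True
```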
```bash
# Basic usage (with UV) - includes human review
uv run agentic-proposal-generator --input transcript.txt

# Skip human review for automation
uv run agentic-proposal-generator --input transcript.txt --skip-human-review

# Custom output location
uv run agentic-proposal-generator --input transcript.txt --output ./my-proposals

# Adjust quality threshold
uv run agentic-proposal-generator --input transcript.txt --quality-threshold 9.0

# See all options
uv run agentic-proposal-generator --help
```

Python API:

```python
import asyncio

from agentic_proposal_generator import AgenticProposalGenerator

async def generate():
    generator = AgenticProposalGenerator()
    with open("transcript.txt") as f:
        transcript = f.read()

    async for message in generator.generate_proposal(transcript):
        print(message, end="", flush=True)

asyncio.run(generate())
```

Project structure:

```
agentic_proposal_generator/
├── .claude/
│   └── agents/                      # Agent definitions (5 specialists)
│       ├── solutions_engineer.md
│       ├── solution_architect.md
│       ├── commercial_analyst.md
│       ├── proposal_writer.md
│       └── quality_reviewer.md
├── src/
│   └── agentic_proposal_generator/
│       ├── agents/                  # Agent implementations
│       ├── mcp/                     # MCP tools
│       │   ├── tools.py             # 6 AI-powered tools
│       │   └── __init__.py
│       ├── parallel_orchestrator.py # Parallel execution engine
│       ├── events.py                # Event system
│       ├── output_parser.py         # Terminal formatting
│       └── cli.py                   # Command-line interface
├── sample_data/
│   └── discovery_call_transcript.txt
└── HUMAN_REVIEW.md                  # Human-in-the-loop docs
```
Each agent is defined in `.claude/agents/[name].md` with:
- Clear role and expertise
- Input/output specifications
- Quality standards
- Collaboration guidelines
- Examples of good vs bad output
Every generated proposal includes:
- Executive Summary - Compelling, customer-specific value proposition
- Customer Situation - Pain points and requirements from transcript
- Proposed Solution - Technical approach and implementation plan
- Implementation Phases - Detailed timeline with activities and deliverables
- ROI Analysis - Credible, fact-based return on investment
- Investment Summary - Pricing breakdown and payment terms
- Next Steps - Clear path forward
Output Formats:
- Markdown (`.md`) - Human-readable
- JSON (`.json`) - Machine-parseable
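Emitting both formats from one structure might look like the following sketch. The field names and file names here are illustrative assumptions, not the project's actual output schema:

```python
import json
from pathlib import Path

# Hypothetical sketch of dual-format output; section names and file names
# are illustrative, not the project's actual schema.
def write_proposal(proposal: dict, out_dir: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Machine-parseable JSON
    (out / "proposal.json").write_text(json.dumps(proposal, indent=2))
    # Human-readable Markdown: one "## Section" per proposal section
    md = "\n\n".join(f"## {title}\n{body}" for title, body in proposal.items())
    (out / "proposal.md").write_text(md)
```

Keeping JSON as the source of truth and rendering Markdown from it guarantees the two files never drift apart.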
```
# Anthropic API Key
ANTHROPIC_API_KEY=your-key-here

# Override default model
CLAUDE_MODEL=claude-sonnet-4-20250514
```

```bash
# Install with dev dependencies (UV handles this automatically)
uv sync

# Format code
uv run black src/ tests/
uv run isort src/ tests/

# Type checking
uv run mypy src/

# Lint
uv run ruff check src/ tests/
```

This project uses a comprehensive testing strategy with mocked tests for development and live tests for validation.
Mock tests use dummy Claude responses and make zero API calls. Run these during development:
```bash
# Run all mock tests (unit + integration)
uv run pytest tests/unit tests/integration -v

# Run only unit tests (individual agents)
uv run pytest tests/unit -v

# Run only integration tests (full workflow)
uv run pytest tests/integration -v

# Run with coverage report
uv run pytest tests/unit tests/integration --cov=src --cov-report=html

# Fast check before committing
uv run pytest tests/unit tests/integration -x  # Stop on first failure
```

Benefits:
- Fast: Complete test suite runs in ~5 seconds
- Free: Zero API costs
- Reliable: No network dependencies or rate limits
- CI/CD Ready: Perfect for GitHub Actions
- Coverage: Tests all agents, events, cost tracking, and human review
Live tests make real Claude API calls and cost ~$0.15-$0.20 per run:
```bash
# ⚠️ WARNING: These tests cost money!
export ANTHROPIC_API_KEY='your-key-here'

# Run all live tests
uv run pytest tests/manual/ -v

# Run specific live test
uv run pytest tests/manual/test_parallel_orchestration_live.py -v
```

When to use live tests:
- Before major releases
- After significant agent logic changes
- To validate real API integration
- To benchmark actual costs
```
tests/
├── unit/                    # Individual agent tests
│   ├── test_solutions_engineer.py
│   ├── test_solution_architect.py
│   ├── test_commercial_analyst.py
│   ├── test_proposal_writer.py
│   └── test_quality_reviewer.py
├── integration/             # End-to-end workflow tests
│   ├── test_full_workflow.py
│   ├── test_cost_tracking.py
│   └── test_human_review.py
├── manual/                  # Live tests (cost money!)
│   ├── test_parallel_orchestration_live.py
│   └── test_cost_optimization_live.py
├── fixtures/                # Mock data and responses
│   ├── mock_responses.py
│   └── sample_data.py
└── conftest.py              # Pytest configuration
```
- Start with 3 agents (analyst, writer, reviewer)
- Test with simple transcript using mock tests
- Validate self-correction loop
- Add specialist agents incrementally
- Refine agent prompts based on output
- Run live tests before releasing
- Claude Agent SDK Documentation
- Alternative LangChain Implementation
- Article: Multi-Agent AI for B2B Sales
Contributions welcome! This project follows standard open-source contribution practices.
- Create agent definition in `.claude/agents/[name].md`
- Define role, expertise, and quality standards
- Update orchestrator to include new agent
- Test in isolation before integrating
- Update documentation
GPL v3 - See LICENSE file for details
- Typical proposal: $0.15-$0.20 in API costs (with optimizations)
- Cost optimization via transcript summarization (23-50% savings)
- Parallel execution reduces time, not cost (same API usage)
- Quality iterations may increase costs
- Use quality threshold to balance cost vs quality
- Requires Claude API (not OpenAI compatible)
- Python 3.10+ only
- English-only (multi-language planned)
- No persistent memory (stateless sessions)
- Never commit API keys to repository
- Use a `.env` file (added to `.gitignore`)
- Review agent outputs before sending to customers
- Human-in-the-loop review enabled by default
- MCP tools analyze transcript data directly (no external APIs needed)
- See SECURITY.md for comprehensive security guidelines
The transcription files and text used in examples contain entirely fictional content generated for demonstration purposes only. All company names, person names, business scenarios, and details are completely fictitious and do not represent, reference, or relate to any real individuals, companies, or business situations. Any resemblance to actual persons, living or dead, or actual companies is purely coincidental.
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Article: Read the full story
Ready to transform your discovery calls into winning proposals with AI-powered multi-agent orchestration?