LangChain Multi-Agent Content Pipeline

A multi-agent content pipeline powered by LangGraph and HuggingFace. Three specialized AI agents collaborate to research, write, and review articles on any topic — completely free.

Python 3.11+ License: MIT

How It Works

                          ┌──────────────────────┐
                          │    Enter a topic     │
                          └──────────┬───────────┘
                                     │
                          ┌──────────▼───────────┐
                          │   Researcher Agent   │
                          │   Gathers facts,     │
                          │   perspectives, data │
                          └──────────┬───────────┘
                                     │
                          ┌──────────▼───────────┐
                          │   Writer Agent       │
                          │  Produces a polished │
                          │  article draft       │
                          └──────────┬───────────┘
                                     │
                          ┌──────────▼───────────┐
                          │   Reviewer Agent     │
                          │   Evaluates quality  │
                          │   and accuracy       │
                          └──────────┬───────────┘
                                     │
                             ┌───────┴───────┐
                             │               │
                      APPROVE ▼          REVISE ▼
                    ┌─────────────┐  ┌────────────────┐
                    │  Finalize   │  │ Back to Writer │
                    │  & Save     │  │ with feedback  │──┐
                    └─────────────┘  └────────────────┘  │
                                            ▲            │
                                            └────────────┘
                                          (up to N rounds)

Three agents work together in a loop:

  1. Researcher — Gathers comprehensive research on the topic (facts, perspectives, examples, counterarguments)
  2. Writer — Produces a well-structured article from the research, or revises based on feedback
  3. Reviewer — Evaluates the draft on accuracy, clarity, structure, engagement, and completeness. Returns APPROVE or REVISE with actionable feedback

The revision loop continues until the Reviewer approves or the max revision count is reached.

Features

  • Interactive CLI — Just run python run.py, no command-line arguments needed
  • Live progress — Animated spinners show which agent is working in real time
  • Rich terminal UI — Colored panels, formatted Markdown output, styled menus
  • Auto-save — Articles are automatically saved to the output/ directory
  • Revision loop — Agents iterate to improve quality (configurable rounds)
  • Rate-limit resilience — Automatic retry with an escalating backoff on 429 errors
  • 100% free — Uses HuggingFace's free Inference API (no credit card required)
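
The retry behavior can be sketched as follows; the 5-attempt limit and 10s-per-attempt backoff are the defaults listed in the Configuration section, while `RateLimitError` and `with_retry` are illustrative names rather than the project's actual API:

```python
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 response from the inference API."""

def with_retry(call, max_retries: int = 5, base_delay: float = 10.0):
    """Run call(); on a rate-limit error, sleep base_delay * attempt and retry."""
    for attempt in range(1, max_retries + 1):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries:
                raise  # out of retries: surface the error
            time.sleep(base_delay * attempt)  # 10s, 20s, 30s, ... by default
```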

Quick Start

Prerequisites

  • Python 3.11 or newer
  • A free HuggingFace account and access token (from https://huggingface.co/settings/tokens)

Setup

# Clone the repo
git clone https://github.com/YOUR_USERNAME/langchain-multi-agent.git
cd langchain-multi-agent

# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -e .

# Add your HuggingFace token
cp .env.example .env
# Edit .env and paste your token from https://huggingface.co/settings/tokens

Run

python run.py

That's it! The interactive runner will guide you through the rest.

Usage

Interactive Mode (recommended)

python run.py

You'll see a welcome screen, enter a topic, and watch the agents work. After generation, a menu lets you:

  • [1] View the full article in the terminal (rendered Markdown)
  • [2] Save a copy to a custom file path
  • [3] Generate another article
  • [4] Quit

Module Mode

python -m src.main

The same interactive experience, useful when run.py isn't in your current directory.

Project Structure

langchain-multi-agent/
├── run.py                 # Entry point — just run this
├── pyproject.toml         # Dependencies and project metadata
├── .env.example           # Template for API token
├── .gitignore
│
├── src/
│   ├── main.py            # Interactive CLI with rich terminal UI
│   ├── config.py          # LLM setup, retry logic
│   ├── state.py           # Pipeline state (shared between agents)
│   ├── graph.py           # LangGraph workflow orchestration
│   │
│   └── agents/
│       ├── researcher.py  # Research agent — gathers information
│       ├── writer.py      # Writer agent — drafts and revises
│       └── reviewer.py    # Reviewer agent — evaluates and approves
│
└── output/                # Auto-saved articles (gitignored)
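
The shared state in src/state.py is what each agent reads from and writes to as the graph runs. A hypothetical sketch of its shape as a TypedDict (the field names here are assumptions, not the project's actual definitions):

```python
from typing import Optional, TypedDict

class PipelineState(TypedDict, total=False):
    """Hypothetical shape of the state shared by the three agents."""
    topic: str               # user-supplied subject
    research: str            # Researcher output
    draft: str               # current Writer draft
    feedback: Optional[str]  # Reviewer notes when the verdict is REVISE
    verdict: str             # "APPROVE" or "REVISE"
    revisions: int           # revision rounds completed

# Each agent node reads the fields it needs and writes its own updates.
state: PipelineState = {"topic": "urban beekeeping", "revisions": 0}
state["research"] = "notes gathered by the Researcher"
```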

Configuration

Setting                    Location             Default
LLM model                  src/config.py        Qwen/Qwen2.5-72B-Instruct
Max tokens                 src/config.py        4096
Max retries (rate limit)   src/config.py        5
Retry backoff              src/config.py        10s * attempt
Max revision rounds        Interactive prompt   2
Output directory           src/main.py          output/

Switching Models

Edit MODEL in src/config.py to any chat model available on HuggingFace's Inference API:

MODEL = "meta-llama/Llama-3.3-70B-Instruct"  # or any supported model
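
For context, wiring such a model into LangChain typically looks like the sketch below. The class names come from the langchain-huggingface package; this is an assumption about the project's internals, and the real src/config.py may differ.

```python
# Sketch only, assuming the langchain-huggingface package.
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

MODEL = "Qwen/Qwen2.5-72B-Instruct"  # default from the Configuration table

llm = HuggingFaceEndpoint(
    repo_id=MODEL,
    max_new_tokens=4096,                # "Max tokens" default
    huggingfacehub_api_token="hf_...",  # read from .env in practice
)
chat = ChatHuggingFace(llm=llm)         # chat wrapper used by the agents
```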

Tech Stack

  • Python 3.11+
  • LangGraph — multi-agent workflow orchestration
  • LangChain — LLM tooling
  • HuggingFace Inference API — free hosted models
  • Rich — terminal panels, spinners, and Markdown rendering

License

MIT
