Pulsing


中文文档 (Chinese documentation)

A lightweight distributed framework for high-performance AI applications, from multi-agent systems to datacenter-scale LLM inference.

🚀 Zero Dependencies: pure Rust + Tokio, no NATS/etcd/Redis
🌐 Auto Discovery: built-in Gossip protocol for cluster management
🔀 Location Transparent: same API for local and remote Actors
⚡ Streaming Ready: native support for LLM streaming responses
🤖 Agent Friendly: integrates with AutoGen and LangGraph out of the box

🚀 Get Started in 5 Minutes

Installation

pip install pulsing
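Pulsing requires Python 3.10 or newer.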

Your First Multi-Agent Application

import asyncio
from pulsing.actor import remote, resolve
from pulsing.agent import runtime

@remote
class Greeter:
    def __init__(self, display_name: str):
        self.display_name = display_name

    def greet(self, message: str) -> str:
        return f"[{self.display_name}] Received: {message}"

    async def chat_with(self, peer_name: str, message: str) -> str:
        peer = await resolve(peer_name)
        return await peer.greet(f"From {self.display_name}: {message}")

async def main():
    async with runtime():
        # Create two agents
        alice = await Greeter.spawn(display_name="Alice", name="alice")
        bob = await Greeter.spawn(display_name="Bob", name="bob")

        # Agent communication
        reply = await alice.chat_with("bob", "Hello!")
        print(reply)  # [Bob] Received: From Alice: Hello!

asyncio.run(main())

That's it! @remote turns a regular class into a distributed Actor, and resolve() enables agents to discover and communicate with each other.
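The handle returned by spawn() proxies method calls directly (that is how alice.chat_with() is invoked above), so resolve() is only needed when you hold a name rather than a handle. A minimal sketch reusing the Greeter class from the example:

# The spawn() handle proxies calls, so greet() can be awaited directly
# without going through resolve().
async def direct_call():
    async with runtime():
        bob = await Greeter.spawn(display_name="Bob", name="bob")
        print(await bob.greet("Hi"))  # [Bob] Received: Hi

asyncio.run(direct_call())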

💡 I want to...

Scenario                  | Example                    | Description
Quick start               | examples/quickstart/       | Get started in 10 lines
Multi-Agent collaboration | examples/agent/pulsing/    | AI debate, brainstorming, role-playing
Distributed LLM inference | pulsing actor router/vllm  | GPU cluster inference service
Integrate AutoGen         | examples/agent/autogen/    | One line to go distributed
Integrate LangGraph       | examples/agent/langgraph/  | Execute graphs across nodes

🎯 Core Capabilities

1. Multi-Agent Collaboration

Multiple AI Agents working in parallel and communicating:

from pulsing.agent import agent, runtime, llm

@agent(role="Researcher", goal="Deep analysis")
class Researcher:
    async def analyze(self, topic: str) -> str:
        client = await llm()
        return await client.ainvoke(f"Analyze: {topic}")

@agent(role="Reviewer", goal="Evaluate proposals")
class Reviewer:
    async def review(self, proposal: str) -> str:
        client = await llm()
        return await client.ainvoke(f"Review: {proposal}")

async with runtime():
    researcher = await Researcher.spawn(name="researcher")
    reviewer = await Reviewer.spawn(name="reviewer")

    # Collaborate in sequence: the reviewer evaluates the researcher's
    # analysis (a concurrent fan-out sketch follows below)
    analysis = await researcher.analyze("AI trends")
    feedback = await reviewer.review(analysis)

# Run MBTI personality discussion example
python examples/agent/pulsing/mbti_discussion.py --mock --group-size 6

# Run parallel idea generation example
python examples/agent/pulsing/parallel_ideas_async.py --mock --n-ideas 5
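The review step above is inherently sequential, but independent calls can run concurrently. A minimal fan-out sketch, assuming handle methods return plain awaitables as in the quickstart:

import asyncio
from pulsing.agent import agent, runtime, llm

@agent(role="Researcher", goal="Deep analysis")
class Researcher:
    async def analyze(self, topic: str) -> str:
        client = await llm()
        return await client.ainvoke(f"Analyze: {topic}")

async def main():
    async with runtime():
        r = await Researcher.spawn(name="researcher")
        # Fan out independent topics concurrently; gather preserves order.
        results = await asyncio.gather(
            r.analyze("AI trends"),
            r.analyze("Agent frameworks"),
        )
        print(results)

asyncio.run(main())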

2. One Line to Distributed

Develop locally, scale seamlessly to clusters:

# Standalone mode (development)
async with runtime():
    agent = await MyAgent.spawn(name="agent")

# Distributed mode (production) β€” just add address
async with runtime(addr="0.0.0.0:8001"):
    agent = await MyAgent.spawn(name="agent")

# Other nodes auto-discover
async with runtime(addr="0.0.0.0:8002", seeds=["node1:8001"]):
    agent = await resolve("agent")  # Cross-node transparent call
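The resolve("agent") call behaves identically whether the actor lives on the local node or a remote one: Gossip-based discovery (see Technical Features below) routes the call, so application code does not change between the three modes.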

3. LLM Inference Service

Out-of-the-box GPU cluster inference:

# Start Router (OpenAI-compatible API)
pulsing actor router --addr 0.0.0.0:8000 --http_port 8080 --model_name my-llm

# Start vLLM Worker (can have multiple)
pulsing actor vllm --model Qwen/Qwen2.5-0.5B --addr 0.0.0.0:8002 --seeds 127.0.0.1:8000

# Test
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "my-llm", "messages": [{"role": "user", "content": "Hello"}]}'

4. Agent Framework Integration

Have existing AutoGen/LangGraph code? One-line migration:

# AutoGen: Replace runtime
from pulsing.autogen import PulsingRuntime
runtime = PulsingRuntime(addr="0.0.0.0:8000")

# LangGraph: Wrap the graph
from pulsing.langgraph import with_pulsing
distributed_app = with_pulsing(app, seeds=["gpu-server:8001"])
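In both cases the agent and graph definitions stay untouched: for AutoGen only the runtime object is swapped, and for LangGraph the compiled graph is wrapped so its nodes can execute across the cluster.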

📚 Example Guide

examples/
├── quickstart/              # ⭐ 5-minute quickstart
│   └── hello_agent.py       #    First Agent
├── agent/
│   ├── pulsing/             # ⭐⭐ Multi-Agent apps
│   │   ├── mbti_discussion.py      # MBTI personality discussion
│   │   └── parallel_ideas_async.py # Parallel idea generation
│   ├── autogen/             # AutoGen integration
│   └── langgraph/           # LangGraph integration
├── python/                  # ⭐⭐ Basic examples
│   ├── ping_pong.py         #    Actor basics
│   ├── cluster.py           #    Cluster communication
│   └── ...
└── rust/                    # Rust examples

🔧 Technical Features

  • Zero external dependencies: Pure Rust + Tokio, no NATS/etcd/Redis needed
  • Gossip protocol: Built-in SWIM protocol for node discovery and failure detection
  • Location transparency: Same API for local and remote Actors
  • Streaming messages: Native support for streaming requests/responses (LLM-ready); see the sketch after this list
  • Type safety: Rust Behavior API provides compile-time message type checking
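To make the streaming bullet concrete, here is a deliberately hypothetical sketch; the async-generator style is an assumption about the Python surface, not confirmed Pulsing API (only @remote, spawn, and resolve appear in the examples above):

# HYPOTHETICAL: assumes a @remote async-generator method can be consumed
# as an async iterator through the remote handle.
from pulsing.actor import remote

@remote
class TokenStreamer:
    async def generate(self, prompt: str):
        for token in prompt.split():   # stand-in for real LLM token output
            yield token

# Inside `async with runtime():`
#   streamer = await TokenStreamer.spawn(name="streamer")
#   async for token in streamer.generate("hello world"):
#       print(token)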

📦 Project Structure

Pulsing/
├── crates/                   # Rust core
│   ├── pulsing-actor/        #   Actor System
│   └── pulsing-py/           #   Python bindings
├── python/pulsing/           # Python package
│   ├── actor/                #   Actor API
│   ├── agent/                #   Agent toolkit
│   ├── autogen/              #   AutoGen integration
│   └── langgraph/            #   LangGraph integration
├── examples/                 # Example code
└── docs/                     # Documentation

πŸ› οΈ Development

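# Prerequisites: a Rust toolchain plus maturin (pip install maturin)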
# Development build
maturin develop

# Run tests
pytest tests/python/
cargo test --workspace

📄 License

Apache-2.0
