Lightweight distributed framework designed for high-performance AI applications.
- **Zero Dependencies**: Pure Rust + Tokio, no NATS/etcd/Redis
- **Auto Discovery**: Built-in Gossip protocol for cluster management
- **Location Transparent**: Same API for local and remote Actors
- **Streaming Ready**: Native support for LLM streaming responses
- **Agent Friendly**: Integrates with AutoGen and LangGraph out of the box
```bash
pip install pulsing
```

```python
import asyncio

from pulsing.actor import remote, resolve
from pulsing.agent import runtime

@remote
class Greeter:
    def __init__(self, display_name: str):
        self.display_name = display_name

    def greet(self, message: str) -> str:
        return f"[{self.display_name}] Received: {message}"

    async def chat_with(self, peer_name: str, message: str) -> str:
        peer = await resolve(peer_name)
        return await peer.greet(f"From {self.display_name}: {message}")

async def main():
    async with runtime():
        # Create two agents
        alice = await Greeter.spawn(display_name="Alice", name="alice")
        bob = await Greeter.spawn(display_name="Bob", name="bob")

        # Agent communication
        reply = await alice.chat_with("bob", "Hello!")
        print(reply)  # [Bob] Received: From Alice: Hello!

asyncio.run(main())
```

That's it! `@remote` turns a regular class into a distributed Actor, and `resolve()` lets agents discover and call each other.
| Scenario | Example | Description |
|---|---|---|
| Quick start | `examples/quickstart/` | Get started in 10 lines |
| Multi-Agent collaboration | `examples/agent/pulsing/` | AI debate, brainstorming, role-playing |
| Distributed LLM inference | `pulsing actor router/vllm` | GPU cluster inference service |
| Integrate AutoGen | `examples/agent/autogen/` | One line to go distributed |
| Integrate LangGraph | `examples/agent/langgraph/` | Execute graphs across nodes |
Multiple AI Agents working in parallel and communicating:
```python
from pulsing.agent import agent, runtime, llm

@agent(role="Researcher", goal="Deep analysis")
class Researcher:
    async def analyze(self, topic: str) -> str:
        client = await llm()
        return await client.ainvoke(f"Analyze: {topic}")

@agent(role="Reviewer", goal="Evaluate proposals")
class Reviewer:
    async def review(self, proposal: str) -> str:
        client = await llm()
        return await client.ainvoke(f"Review: {proposal}")

async with runtime():
    researcher = await Researcher.spawn(name="researcher")
    reviewer = await Reviewer.spawn(name="reviewer")

    # Parallel work and collaboration
    analysis = await researcher.analyze("AI trends")
    feedback = await reviewer.review(analysis)
```
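In the snippet above, `review` depends on the output of `analyze`, so those two calls run one after the other; independent agents, however, can work truly in parallel with `asyncio.gather`. A minimal, self-contained sketch of the pattern (plain asyncio, with a stub coroutine standing in for a spawned Pulsing agent making an LLM call):

```python
import asyncio

# Stand-in for a spawned agent method; real code would do
# `client = await llm()` and `await client.ainvoke(...)` here.
async def analyze(topic: str) -> str:
    await asyncio.sleep(0.1)  # simulate an LLM round-trip
    return f"Analysis of {topic}"

async def main() -> list[str]:
    # Both coroutines are awaited concurrently, so the total wall
    # time is ~0.1s rather than ~0.2s; results keep argument order.
    return await asyncio.gather(
        analyze("AI trends"),
        analyze("Robotics"),
    )

results = asyncio.run(main())
print(results)  # ['Analysis of AI trends', 'Analysis of Robotics']
```

The same shape applies to real actors: gather the awaitables returned by independent agent calls, then feed the combined results to a downstream agent.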
```bash
# Run MBTI personality discussion example
python examples/agent/pulsing/mbti_discussion.py --mock --group-size 6

# Run parallel idea generation example
python examples/agent/pulsing/parallel_ideas_async.py --mock --n-ideas 5
```

Develop locally, scale seamlessly to clusters:
```python
# Standalone mode (development)
async with runtime():
    agent = await MyAgent.spawn(name="agent")

# Distributed mode (production): just add an address
async with runtime(addr="0.0.0.0:8001"):
    agent = await MyAgent.spawn(name="agent")

# Other nodes auto-discover
async with runtime(addr="0.0.0.0:8002", seeds=["node1:8001"]):
    agent = await resolve("agent")  # Transparent cross-node call
```

Out-of-the-box GPU cluster inference:
```bash
# Start Router (OpenAI-compatible API)
pulsing actor router --addr 0.0.0.0:8000 --http_port 8080 --model_name my-llm

# Start vLLM Worker (can have multiple)
pulsing actor vllm --model Qwen/Qwen2.5-0.5B --addr 0.0.0.0:8002 --seeds 127.0.0.1:8000

# Test
curl http://localhost:8080/v1/chat/completions \
  -d '{"model": "my-llm", "messages": [{"role": "user", "content": "Hello"}]}'
```

Have existing AutoGen/LangGraph code? One-line migration:
```python
# AutoGen: Replace the runtime
from pulsing.autogen import PulsingRuntime
runtime = PulsingRuntime(addr="0.0.0.0:8000")

# LangGraph: Wrap the graph
from pulsing.langgraph import with_pulsing
distributed_app = with_pulsing(app, seeds=["gpu-server:8001"])
```

```
examples/
├── quickstart/                      # 5-minute quickstart
│   └── hello_agent.py               # First Agent
├── agent/
│   ├── pulsing/                     # Multi-Agent apps
│   │   ├── mbti_discussion.py       # MBTI personality discussion
│   │   └── parallel_ideas_async.py  # Parallel idea generation
│   ├── autogen/                     # AutoGen integration
│   └── langgraph/                   # LangGraph integration
├── python/                          # Basic examples
│   ├── ping_pong.py                 # Actor basics
│   ├── cluster.py                   # Cluster communication
│   └── ...
└── rust/                            # Rust examples
```
- Zero external dependencies: Pure Rust + Tokio, no NATS/etcd/Redis needed
- Gossip protocol: Built-in SWIM protocol for node discovery and failure detection
- Location transparency: Same API for local and remote Actors
- Streaming messages: Native support for streaming requests/responses (LLM-ready)
- Type safety: Rust Behavior API provides compile-time message type checking
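The streaming feature maps naturally onto Python async generators: a producer yields chunks as they become available and the consumer handles each one immediately with `async for`, instead of waiting for the full response. A hedged sketch of that pattern (plain asyncio with a simulated token stream, not Pulsing's actual streaming API):

```python
import asyncio

async def stream_tokens(prompt: str):
    # Simulate an LLM emitting tokens one at a time.
    for token in ["Hello", " ", "world"]:
        await asyncio.sleep(0)  # yield control, as a network read would
        yield token

async def main() -> str:
    chunks = []
    async for chunk in stream_tokens("Say hi"):
        chunks.append(chunk)  # each chunk is usable as soon as it arrives
    return "".join(chunks)

reply = asyncio.run(main())
print(reply)  # Hello world
```

With a streaming actor, the `async for` loop would iterate over chunks arriving from a remote node rather than a local generator.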
```
Pulsing/
├── crates/                  # Rust core
│   ├── pulsing-actor/       # Actor System
│   └── pulsing-py/          # Python bindings
├── python/pulsing/          # Python package
│   ├── actor/               # Actor API
│   ├── agent/               # Agent toolkit
│   ├── autogen/             # AutoGen integration
│   └── langgraph/           # LangGraph integration
├── examples/                # Example code
└── docs/                    # Documentation
```
```bash
# Development build
maturin develop

# Run tests
pytest tests/python/
cargo test --workspace
```

Apache-2.0