+
+## π Overview
+
+The **Browser-Use Automator** is a production-grade template for AI-driven web navigation and data extraction. Unlike traditional scrapers that rely on fragile CSS selectors, this agent uses a vision-capable LLM to perceive the page through its Document Object Model (DOM) and screenshots. It interacts with elements (buttons, inputs, complex JS dropdowns) much like a human user, making it well suited to automating tasks on dynamic or authenticated websites.
+
+### Infrastructure Deployment
+* **Compute & Vision:** The agent requires an LLM with strong reasoning and vision capabilities (e.g., GPT-4o) to process visual element coordinates and DOM trees.
+* **Environment Isolation:** The template utilizes Playwright as the high-performance browser controller, supporting both **Headless** (background execution) and **Headed** (visible window) modes.
+* **Binary Provisioning:** To ensure cross-distribution compatibility (e.g., Kali, Ubuntu, Debian), the setup script automatically handles fallback Chromium and FFmpeg binary installations.
+
+---
+
+## β Prerequisites
+
+1. **Python 3.10+**: Required for the asynchronous automation loop.
+2. **OpenAI API Key**: Generate one via the [OpenAI Platform Dashboard](https://platform.openai.com/api-keys).
+3. **System Dependencies**: Ensure `libgbm-dev` and `libnss3` are installed on your Linux host.
+
+---
+
+## ποΈ Setup & Deployment
+
+**1. Environment Setup**
+```bash
+python3 -m venv venv
+source venv/bin/activate
+pip install -r requirements.txt
+python3 -m playwright install chromium
+```
+
+**2. Configure Secrets**
+```bash
+cp .env.example .env
+# Enter your OPENAI_API_KEY in the .env file
+```
+
+**3. Execution & Testing**
+The launcher script (`run.py`) supports dual-mode testing depending on your debugging requirements:
+```bash
+python run.py
+```
+
+---
+
+## π‘ Usage Guide
+
+* **Terminal CLI:** Optimized for high-speed data extraction and automated cron-job workflows.
+* **Streamlit UI:** Provides a "Headed" toggle to watch the browser actions in real-time. Use the text area to define natural language instructions.
+
+**Example Tasks:**
+* *"Go to Amazon, search for 'RTX 5090', and give me the price of the first result that is in stock."*
+* *"Navigate to the GitHub trending page and summarize the top 3 Python repositories today."*
+* *"Search for the latest stock price of NVIDIA and tell me the daily change percentage."*
+
+---
+
+## π Official Documentation & References
+
+* [Browser-Use Github Repository](https://github.com/browser-use/browser-use)
+* [Playwright Python Documentation](https://playwright.dev/python/docs/intro)
+* [Saturn Cloud Help Center](https://saturncloud.io/docs/)
diff --git a/examples/start-agents/browser_automator/app.py b/examples/start-agents/browser_automator/app.py
new file mode 100644
index 00000000..8e4e4396
--- /dev/null
+++ b/examples/start-agents/browser_automator/app.py
@@ -0,0 +1,51 @@
+import streamlit as st
+import asyncio
+import os
+from browser_use import Agent
+from browser_use.llm import ChatOpenAI
+from dotenv import load_dotenv
+
+load_dotenv()
+
+st.set_page_config(page_title="Browser-Use Automator", page_icon="π€", layout="wide")
+
+st.title("π Browser-Use Automator")
+st.markdown("---")
+
+with st.sidebar:
+ st.header("βοΈ Configuration")
+ model_name = st.selectbox("LLM Model", ["gpt-4o", "gpt-4o-mini"])
+ st.info("π‘ **Pro Tip:** Use gpt-4o for complex navigation tasks.")
+
+prompt = st.text_area("π― Enter Automation Task:", "Go to news.ycombinator.com and find the top story title.", height=100)
+
+if st.button("π Execute Automation", use_container_width=True):
+ if not prompt:
+ st.warning("Please enter a task first.")
+ else:
+ async def run_automation():
+ llm = ChatOpenAI(model=model_name)
+ agent = Agent(task=prompt, llm=llm)
+ return await agent.run()
+
+ with st.status("π€ Agent is navigating the web...", expanded=True) as status:
+ try:
+ st.write("Initializing Browser...")
+ result = asyncio.run(run_automation())
+ status.update(label="β Automation Complete!", state="complete", expanded=False)
+
+ st.subheader("π Final Result")
+ # Extract the final string answer from the history list
+ final_answer = result.final_result()
+ st.success(final_answer)
+
+ with st.expander("π οΈ View Technical Execution Logs (JSON)"):
+ st.json(result.model_dump())
+
+ except Exception as e:
+ st.error(f"β Automation failed: {e}")
diff --git a/examples/start-agents/browser_automator/automator_cli.py b/examples/start-agents/browser_automator/automator_cli.py
new file mode 100644
index 00000000..04d9b2f0
--- /dev/null
+++ b/examples/start-agents/browser_automator/automator_cli.py
@@ -0,0 +1,32 @@
+import asyncio
+import os
+from dotenv import load_dotenv
+from browser_use import Agent
+# Note: Switching from langchain_openai to the native browser_use wrapper
+from browser_use.llm import ChatOpenAI
+
+load_dotenv()
+
+async def main():
+ # The native wrapper handles all 'provider' and 'ainvoke' issues internally
+ llm = ChatOpenAI(model="gpt-4o")
+
+ task = "Go to https://news.ycombinator.com and tell me the title of the top post."
+
+ agent = Agent(
+ task=task,
+ llm=llm,
+ # We increase the failure limit slightly for complex sites
+ max_failures=5
+ )
+
+ print(f"π Running Production Task: {task}")
+ try:
+ result = await agent.run()
+ print("\nβ Final Report:")
+        print(result.final_result())
+ except Exception as e:
+ print(f"β Automation failed: {e}")
+
+if __name__ == "__main__":
+ asyncio.run(main())
diff --git a/examples/start-agents/browser_automator/requirements.txt b/examples/start-agents/browser_automator/requirements.txt
new file mode 100644
index 00000000..afaa2113
--- /dev/null
+++ b/examples/start-agents/browser_automator/requirements.txt
@@ -0,0 +1,5 @@
+browser-use
+playwright
+langchain-openai
+streamlit
+python-dotenv
\ No newline at end of file
diff --git a/examples/start-agents/browser_automator/run.py b/examples/start-agents/browser_automator/run.py
new file mode 100644
index 00000000..9159c45d
--- /dev/null
+++ b/examples/start-agents/browser_automator/run.py
@@ -0,0 +1,36 @@
+import subprocess
+import sys
+import os
+from dotenv import load_dotenv
+
+load_dotenv()
+
+def main():
+ print("==================================================")
+ print("π Browser-Use Automator Launcher")
+ print("==================================================")
+
+ if not os.getenv("OPENAI_API_KEY"):
+ print("β Error: OPENAI_API_KEY not found in .env")
+ sys.exit(1)
+
+ print("\nSelect Testing Mode:")
+ print("1. Streamlit Web UI (Visual Debugging)")
+ print("2. Terminal CLI (Headless Extraction)")
+
+ choice = input("\n> ")
+
+ try:
+ if choice == '1':
+ print("\nπ₯οΈ Launching Streamlit...")
+ subprocess.run([sys.executable, "-m", "streamlit", "run", "app.py"])
+ elif choice == '2':
+ print("\nπ Launching CLI...")
+ subprocess.run([sys.executable, "automator_cli.py"])
+ else:
+ print("Invalid choice. Exiting.")
+ except KeyboardInterrupt:
+ print("\nπ Shutdown complete.")
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/start-agents/cagent_AI_agent/.env.example b/examples/start-agents/cagent_AI_agent/.env.example
new file mode 100644
index 00000000..55af96a5
--- /dev/null
+++ b/examples/start-agents/cagent_AI_agent/.env.example
@@ -0,0 +1 @@
+OPENAI_API_KEY="sk-your-openai-api-key-here"
\ No newline at end of file
diff --git a/examples/start-agents/cagent_AI_agent/README.md b/examples/start-agents/cagent_AI_agent/README.md
new file mode 100644
index 00000000..510637d2
--- /dev/null
+++ b/examples/start-agents/cagent_AI_agent/README.md
@@ -0,0 +1,124 @@
+# π³ Docker Agent (Multi-Agent) Production Starter
+
+*Cloud deployment architecture verified for [Saturn Cloud](https://saturncloud.io/).*
+
+**Hardware:** CPU/GPU | **Resource:** YAML Configuration & Web App | **Tech Stack:** Docker Agent, Python, MCP, Streamlit
+
+
+
+
+
+
+
+
+## π Overview
+
+This template provides a production-grade multi-agent system. It utilizes **Docker Agent** as the declarative backend orchestration engine (handling memory, tool routing, and sub-agent delegation via YAML), and wraps it in a **Streamlit** web dashboard for end-user interaction.
+
+By decoupling the CLI runtime from the frontend using a headless execution pattern (`docker agent exec`), this architecture behaves like a modern AI microservice: the UI submits a task and renders the agents' final answer.
+
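The headless pattern reduces to: spawn the CLI as a subprocess, capture its stdout, and keep only the text after the last tool-call log line. A minimal sketch of that output-parsing step, mirroring the heuristic `app.py` uses (the `")\n"` delimiter is an observed log convention, not a documented API):

```python
def extract_final_answer(full_log: str) -> str:
    """Keep only the text after the last tool-call log line.

    Tool-call lines in the CLI output end with a closing parenthesis,
    so everything after the final ")\n" is treated as the final answer.
    """
    if ")\n" in full_log:
        return full_log.rpartition(")\n")[-1].strip()
    return full_log.strip()


log = "Calling analyze_text_metrics(text_input=...)\nThe text contains 18 words."
print(extract_final_answer(log))  # The text contains 18 words.
```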
+---
+
+## ποΈ Local Setup & Installation
+
+**1. Install Python Dependencies**
+Ensure you have Python installed for the MCP server and Streamlit frontend.
+```bash
+python -m venv venv
+source venv/bin/activate
+pip install -r requirements.txt
+
+```
+
+**2. Install Docker Agent**
+
+* **Mac/Windows:** Install **Docker Desktop 4.63+**, which includes the `docker agent` CLI plugin natively.
+* **Linux (or Podman users):** Download the standalone binary directly from the official repository to bypass local container runtime conflicts:
+```bash
+curl -L -o docker-agent https://github.com/docker/docker-agent/releases/latest/download/docker-agent-linux-amd64
+chmod +x docker-agent
+
+```
+
+
+
+---
+
+## π Environment Configuration
+
+Docker Agent runs as a compiled binary, meaning it reads environment variables directly from your active terminal session rather than automatically parsing `.env` files.
+
+**1. Create your `.env` file:**
+
+```bash
+cp .env.example .env
+# Edit .env and add your API keys (e.g., OPENAI_API_KEY or GOOGLE_API_KEY)
+
+```
+
+**2. Inject the variables into your terminal session:**
+Run the following command before starting the agent to auto-export your `.env` contents into your active shell environment:
+
+```bash
+set -a; source .env; set +a
+
+```
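The `set -a; source .env; set +a` idiom exports every assignment in `.env` into the shell. For illustration, a rough Python equivalent of that parsing (simplified; it ignores quoting edge cases and multi-line values):

```python
def export_env_file(text: str) -> dict:
    """Naive .env parser: KEY=VALUE lines, '#' comments skipped, quotes stripped."""
    exported = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        exported[key.strip()] = value.strip().strip('"').strip("'")
    return exported


env = export_env_file('OPENAI_API_KEY="sk-example"\n# a comment\n')
print(env["OPENAI_API_KEY"])  # sk-example
```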
+
+---
+
+## π Execution Methods
+
+This template supports two distinct ways to interact with the multi-agent system: an interactive terminal for debugging, and a web dashboard for production usage.
+
+### Method 1: Interactive Terminal (TUI)
+
+Great for debugging and watching the agents collaborate step-by-step in real-time.
+
+**Run the command:**
+*(If using Docker Desktop, use `docker agent run agent.yaml`)*
+
+```bash
+./docker-agent run agent.yaml
+
+```
+
+**Test Prompts:**
+Paste these into the terminal sequentially to verify the tools and memory:
+
+1. *"Please analyze the following text for me: 'Docker Agent allows developers to build complex multi-agent systems using a declarative YAML syntax. It is incredibly fast and modular.'"* (Tests Python MCP execution).
+2. *"From now on, I want you to remember a strict formatting preference. Whenever you write an analysis report, you must output the final result entirely in markdown bullet points, and add a short, funny haiku at the very end."* (Tests SQLite Memory database).
+
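Prompt 1 exercises the custom MCP tool defined in `mcp_server.py`; stripped of the FastMCP decorator, the underlying computation is just:

```python
def analyze_text_metrics(text_input: str) -> str:
    # Same logic as the @mcp.tool() in mcp_server.py, without the MCP wrapper
    words = len(text_input.split())
    chars = len(text_input)
    return f"Analysis complete: {words} words, {chars} characters."


print(analyze_text_metrics("Docker Agent allows developers to build multi-agent systems."))
```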
+### Method 2: Web Dashboard (Streamlit)
+
+Great for end-user interaction. The web server programmatically executes the Docker Agent headlessly in the background.
+
+**Run the command:**
+
+```bash
+streamlit run app.py
+
+```
+
+*The dashboard will automatically open in your browser at `http://localhost:8501`. Type your tasks into the chat box, and the UI will stream the final output once the backend agents complete their collaboration.*
+
+---
+
+## βοΈ Cloud Deployment
+
+To deploy this multi-agent web application to production, you can provision a resource on [Saturn Cloud](https://saturncloud.io/).
+
+**Deployment Specifications:**
+
+1. **Resource Type:** Streamlit Deployment / Python Server.
+2. **Environment Variables:** Inject your chosen model provider's API key directly into the Saturn Cloud secrets manager (do not commit your `.env` file).
+3. **Start Command:** `streamlit run app.py --server.port 8000 --server.address 0.0.0.0`
+4. **Binary Inclusion:** Ensure the Linux `docker-agent` binary is downloaded and made executable within the container workspace during the build phase so the Streamlit subprocess can successfully call it.
+
+---
+
+## π Official Documentation & References
+
+* **Deployment Infrastructure:** [Saturn Cloud Documentation](https://saturncloud.io/docs/)
+* **Docker Agent Repository:** [Docker Agent GitHub](https://github.com/docker/docker-agent)
+* **UI Framework:** [Streamlit Docs](https://docs.streamlit.io/)
+
diff --git a/examples/start-agents/cagent_AI_agent/agent.yaml b/examples/start-agents/cagent_AI_agent/agent.yaml
new file mode 100644
index 00000000..48891c67
--- /dev/null
+++ b/examples/start-agents/cagent_AI_agent/agent.yaml
@@ -0,0 +1,35 @@
+version: "2"
+agents:
+ root:
+ model: openai/gpt-4o-mini
+ description: "The primary orchestrator agent"
+ instruction: |
+ You are the coordinator of a data analysis team.
+ FIRST, use your 'todo' tool to create a checklist of tasks.
+ THEN, plan your execution using the 'think' tool,
+ delegate data processing to the 'analyzer', and formatting to the 'writer'.
+ Store important user preferences using the 'memory' tool.
+ Mark each todo as done.
+ toolsets:
+ - type: mcp
+ command: python
+ args: ["mcp_server.py"]
+ - type: think
+ - type: todo
+ - type: memory
+ path: agent_memory.db
+ sub_agents: [analyzer, writer]
+
+ analyzer:
+ model: openai/gpt-4o-mini
+ description: "Data analysis specialist"
+ instruction: |
+ You are a data analysis agent. Use your MCP tools to analyze the text provided
+ by the root agent and return the exact word and character counts.
+
+ writer:
+ model: openai/gpt-4o-mini
+ description: "Content generation specialist"
+ instruction: |
+ You are a technical writer. Take the raw metrics from the analyzer and format
+ them into a concise, highly professional summary report.
\ No newline at end of file
diff --git a/examples/start-agents/cagent_AI_agent/app.py b/examples/start-agents/cagent_AI_agent/app.py
new file mode 100644
index 00000000..472e52e1
--- /dev/null
+++ b/examples/start-agents/cagent_AI_agent/app.py
@@ -0,0 +1,73 @@
+import streamlit as st
+import subprocess
+import os
+
+# 1. Page Configuration
+st.set_page_config(page_title="Docker Agent UI", page_icon="π³", layout="centered")
+st.title("π³ Docker Multi-Agent Dashboard")
+st.markdown("A production web UI wrapping a declarative Docker Agent hierarchy.")
+
+# 2. Initialize Chat Memory in the Browser
+if "messages" not in st.session_state:
+ st.session_state.messages = []
+
+# 3. Render previous conversation
+for msg in st.session_state.messages:
+ with st.chat_message(msg["role"]):
+ st.write(msg["content"])
+
+# 4. Handle new web input
+user_input = st.chat_input("Enter a task for the agent team...")
+
+if user_input:
+ # Display the user's prompt in the UI
+ st.session_state.messages.append({"role": "user", "content": user_input})
+ with st.chat_message("user"):
+ st.write(user_input)
+
+ # Execute the backend agent
+ with st.chat_message("assistant"):
+
+ # Determine execution command
+        binary = "./docker-agent" if os.path.exists("./docker-agent") else "docker"
+        args = ["docker", "agent", "run", "agent.yaml", user_input] if binary == "docker" else [binary, "run", "agent.yaml", user_input]
+ env = os.environ.copy()
+
+        # 5. Real-time status streaming: surface CLI progress as UI notifications
+ full_log = ""
+ with st.status("Agent team is coordinating...", expanded=True) as status_box:
+
+ # Popen allows us to read the terminal output line-by-line as it happens
+ process = subprocess.Popen(
+ args,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.STDOUT,
+ text=True,
+ env=env
+ )
+
+ for line in process.stdout:
+ full_log += line
+
+ # Intercept CLI events and turn them into UI notifications
+ if line.startswith("Calling "):
+ tool_name = line.split("Calling ")[1].split("(")[0]
+ st.markdown(f"β Executing tool: **`{tool_name}`**")
+ elif line.startswith("--- Agent:"):
+ agent_name = line.replace("--- Agent:", "").replace("---", "").strip()
+ st.markdown(f"π Handing off to **{agent_name.capitalize()}**...")
+
+ # Wait for the process to finish and collapse the status box
+ process.wait()
+ status_box.update(label="Tasks completed successfully!", state="complete", expanded=False)
+
+ # 6. Extract and display ONLY the final text
+        # Heuristic: Docker Agent tool log lines end with a closing parenthesis and newline (")\n")
+ if ")\n" in full_log:
+ final_output = full_log.rpartition(")\n")[-1].strip()
+ else:
+ final_output = full_log.strip()
+
+ # Display the clean result to the user
+ st.write(final_output)
+ st.session_state.messages.append({"role": "assistant", "content": final_output})
\ No newline at end of file
diff --git a/examples/start-agents/cagent_AI_agent/mcp_server.py b/examples/start-agents/cagent_AI_agent/mcp_server.py
new file mode 100644
index 00000000..573ba3b3
--- /dev/null
+++ b/examples/start-agents/cagent_AI_agent/mcp_server.py
@@ -0,0 +1,15 @@
+from mcp.server.fastmcp import FastMCP
+
+# Initialize the MCP tool server
+mcp = FastMCP("DataAnalyzer")
+
+@mcp.tool()
+def analyze_text_metrics(text_input: str) -> str:
+ """A custom Python tool that analyzes string length and word count."""
+ words = len(text_input.split())
+ chars = len(text_input)
+ return f"Analysis complete: {words} words, {chars} characters."
+
+if __name__ == "__main__":
+ # Runs the server over standard I/O so cagent can communicate with it
+ mcp.run()
\ No newline at end of file
diff --git a/examples/start-agents/cagent_AI_agent/requirements.txt b/examples/start-agents/cagent_AI_agent/requirements.txt
new file mode 100644
index 00000000..0a52c63d
--- /dev/null
+++ b/examples/start-agents/cagent_AI_agent/requirements.txt
@@ -0,0 +1,3 @@
+mcp>=1.0.0
+fastmcp>=0.1.0
+streamlit>=1.32.0
\ No newline at end of file
diff --git a/examples/start-agents/k8s-kaos/README.md b/examples/start-agents/k8s-kaos/README.md
new file mode 100644
index 00000000..ddb2f43a
--- /dev/null
+++ b/examples/start-agents/k8s-kaos/README.md
@@ -0,0 +1,119 @@
+# Letta (MemGPT) Stateful Agent Starter
+
+*Cloud deployment architecture verified for [Saturn Cloud](https://saturncloud.io/).*
+
+**Hardware:** CPU/GPU | **Resource:** Docker Compose & Web App | **Tech Stack:** Letta, PostgreSQL (pgvector), Streamlit, Docker Compose
+
+
+
+## π Overview
+
+This template provides a production-ready, full-stack implementation of **Letta (formerly MemGPT)**.
+
+Traditional LLMs suffer from context-window limitations, causing them to forget information over time. Letta resolves this with an "LLM Operating System" architecture that divides memory into Core Memory (the immediate context) and Archival Memory (effectively unbounded vector storage). The agent autonomously edits, appends, and replaces its own memory blocks, resulting in a perpetually stateful AI companion.
+
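Core Memory is simply a set of labeled text blocks that the agent rewrites in place; this starter seeds two of them (mirroring `letta_cli.py` in this template):

```python
# Seed blocks for Core Memory: the agent edits these values as it learns facts
memory_blocks = [
    {"label": "human", "value": "User information is unknown. Ask the user for their name and preferences."},
    {"label": "persona", "value": "Your name is Echo. You are a personalized, highly empathetic AI companion."},
]

print([block["label"] for block in memory_blocks])  # ['human', 'persona']
```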
+### Infrastructure Deployment
+This template uses a decoupled microservice architecture for maximum scalability:
+* **The Backend (Docker Compose):** Manages the Letta API Server and a dedicated PostgreSQL database. It utilizes an `init.sql` script to automatically enable the `pgvector` extension on boot, ensuring the Archival Memory tables are created correctly.
+* **The Frontend (Python Launcher):** A lightweight client layer (`run.py`) that connects to the backend API, allowing you to interface with the agent via a headless Terminal CLI or a rich Streamlit Web Dashboard.
+
+---
+
+## β Prerequisites
+
+1. **Docker & Docker Compose:** Required to run the Letta backend and PostgreSQL database.
+2. **Python 3.10+:** Required for the frontend UI clients.
+3. **OpenAI API Key:** Required for the core LLM and Embedding models. Generate one at the [OpenAI Developer Platform](https://platform.openai.com/api-keys).
+
+---
+
+## ποΈ Setup & Deployment
+
+**1. Configure Environment Variables**
+```bash
+cp .env.example .env
+```
+Open `.env` and add your active `OPENAI_API_KEY`. Ensure `LETTA_PG_URI` is pointing to the Dockerized database (`postgresql+asyncpg://letta:letta@letta_db:5432/letta`).
+
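Note that the host in `LETTA_PG_URI` is the Compose service name (`letta_db`), not `localhost`: the Letta server resolves it on the Compose network. A quick sanity check of the URI's components:

```python
from urllib.parse import urlsplit

uri = "postgresql+asyncpg://letta:letta@letta_db:5432/letta"
parts = urlsplit(uri)
# The hostname must match the service name in docker-compose.yml
print(parts.hostname, parts.port, parts.path)  # letta_db 5432 /letta
```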
+**2. Spin Up the Backend Services**
+Launch the Letta Server and Postgres database. On the very first boot, the system will inject the `vector` extension and run ~150 Alembic schema migrations.
+```bash
+sudo docker compose up -d
+```
+*(Note: Wait ~15-30 seconds on the initial boot for the database tables to build before launching the frontend. You can verify readiness with `sudo docker logs letta-server`).*
+
+**3. Setup the Frontend Environment**
+Create a virtual environment and install the client dependencies:
+```bash
+python3 -m venv venv
+source venv/bin/activate
+pip install -r requirements.txt
+```
+
+**4. Launch the UI**
+Run the frontend orchestrator to connect to the backend:
+```bash
+python run.py
+```
+*(Select Option 1 for the Streamlit UI, or Option 2 for headless terminal testing).*
+
+---
+
+## π‘ Usage Guide
+
+Once the Streamlit Web Dashboard is running, you can interact with the agent and monitor its internal state in real time. Use the **Database Inspector** in the right-hand panel to watch the agent actively edit its own Postgres memory blocks.
+
+**Example Prompts to Test Stateful Memory:**
+* *"Hi, my name is Alex and I am severely allergic to peanuts."* (Click 'Refresh Database' to watch the agent rewrite its 'Human' core memory block).
+* *"I'm going to a baseball game today, what kind of snacks should I get?"* (The agent will proactively exclude peanuts based on its retained memory).
+* *"What was my name again, and what did I tell you about my diet?"*
+
+---
+
+## π Official Documentation & References
+
+* [Saturn Cloud Documentation](https://saturncloud.io/docs/)
+* [Letta Official Documentation](https://docs.letta.com/)
+* [pgvector Documentation](https://github.com/pgvector/pgvector)
diff --git a/examples/start-agents/letta_starter/app.py b/examples/start-agents/letta_starter/app.py
new file mode 100644
index 00000000..ddea50e3
--- /dev/null
+++ b/examples/start-agents/letta_starter/app.py
@@ -0,0 +1,110 @@
+import os
+import time
+import streamlit as st
+from dotenv import load_dotenv
+from letta_client import Letta
+
+load_dotenv()
+
+st.set_page_config(page_title="Letta Companion", page_icon="π§ ", layout="wide")
+
+if not os.getenv("OPENAI_API_KEY"):
+ st.error("β OPENAI_API_KEY not found in .env file.")
+ st.stop()
+
+@st.cache_resource
+def init_letta_client():
+ try:
+ # Connects to your Dockerized Letta Server
+ return Letta(base_url="http://localhost:8283")
+ except Exception:
+ return None
+
+client = init_letta_client()
+
+if not client:
+ st.error("β Cannot connect to Letta Server. Is Docker running?")
+ st.stop()
+
+@st.cache_resource
+def get_or_create_agent():
+ # Initializing the default memory for your agent
+ memory_blocks = [
+ {"label": "human", "value": "User information is unknown. Ask the user for their name and preferences."},
+ {"label": "persona", "value": "Your name is Echo. You are a personalized, highly empathetic AI companion. You actively update your database to remember facts about the user."}
+ ]
+ return client.agents.create(
+ name=f"echo-ui-{int(time.time())}",
+ model="openai/gpt-4o-mini",
+ embedding="openai/text-embedding-3-small",
+ memory_blocks=memory_blocks,
+ )
+
+agent_state = get_or_create_agent()
+
+# --- UI Setup ---
+st.title("π§ Letta (MemGPT) Companion")
+st.markdown("A stateful AI companion with perpetual memory. Watch it edit its own PostgreSQL database to remember what you tell it!")
+
+col1, col2 = st.columns([2, 1])
+
+with col1:
+ if "messages" not in st.session_state:
+ st.session_state.messages = []
+
+ if st.button("ποΈ Reset Conversation"):
+ st.session_state.messages = []
+ st.rerun()
+
+ for msg in st.session_state.messages:
+ with st.chat_message(msg["role"]):
+ st.markdown(msg["content"])
+
+ if prompt := st.chat_input("E.g., Hi, my name is Alex and my favorite color is blue..."):
+ st.session_state.messages.append({"role": "user", "content": prompt})
+ with st.chat_message("user"):
+ st.markdown(prompt)
+
+ with st.chat_message("assistant"):
+ with st.spinner("π€ Echo is thinking (and updating its Postgres database)..."):
+ try:
+ response = client.agents.messages.create(
+ agent_id=agent_state.id,
+ messages=[{"role": "user", "content": prompt}]
+ )
+
+ final_answer = ""
+ for msg in response.messages:
+ if hasattr(msg, 'message_type'):
+ # Capture internal thoughts
+ if msg.message_type == 'internal_monologue':
+ with st.expander("π Agent Thought Process"):
+ st.write(msg.content)
+ # Capture final spoken message
+ elif msg.message_type == 'assistant_message' and msg.content:
+ final_answer = msg.content
+ st.markdown(final_answer)
+ st.session_state.messages.append({"role": "assistant", "content": final_answer})
+ except Exception as e:
+ st.error(f"Error executing agent: {e}")
+
+with col2:
+ st.subheader("ποΈ Postgres DB Inspector")
+ st.info("Watch the agent edit its Core Memory in real-time!")
+
+ # We use a button to manually refresh the view of the database
+ if st.button("π Refresh Database"):
+ pass
+
+ try:
+ # Fetch live memory blocks directly from the Letta Server
+ human_block = client.agents.blocks.retrieve(agent_id=agent_state.id, block_label="human")
+ persona_block = client.agents.blocks.retrieve(agent_id=agent_state.id, block_label="persona")
+
+ st.markdown("**User (Human) Block:**")
+ st.code(human_block.value, language="text")
+
+ st.markdown("**Agent (Persona) Block:**")
+ st.code(persona_block.value, language="text")
+    except Exception:
+ st.warning("Memory blocks initializing...")
diff --git a/examples/start-agents/letta_starter/docker-compose.yml b/examples/start-agents/letta_starter/docker-compose.yml
new file mode 100644
index 00000000..0cd95816
--- /dev/null
+++ b/examples/start-agents/letta_starter/docker-compose.yml
@@ -0,0 +1,30 @@
+version: '3.8'
+
+services:
+ letta_db:
+ image: ankane/pgvector:latest
+ container_name: letta-postgres
+ environment:
+ - POSTGRES_USER=letta
+ - POSTGRES_PASSWORD=letta
+ - POSTGRES_DB=letta
+ ports:
+ - "5432:5432"
+ volumes:
+ - letta_db_data:/var/lib/postgresql/data
+ - ./init.sql:/docker-entrypoint-initdb.d/init.sql
+
+ letta_server:
+ image: letta/letta:latest
+ container_name: letta-server
+ depends_on:
+ - letta_db
+ ports:
+ - "8283:8283"
+ env_file:
+ - .env
+ environment:
+ - LETTA_PG_URI=postgresql+asyncpg://letta:letta@letta_db:5432/letta
+
+volumes:
+ letta_db_data:
diff --git a/examples/start-agents/letta_starter/init.sql b/examples/start-agents/letta_starter/init.sql
new file mode 100644
index 00000000..0aa0fc22
--- /dev/null
+++ b/examples/start-agents/letta_starter/init.sql
@@ -0,0 +1 @@
+CREATE EXTENSION IF NOT EXISTS vector;
diff --git a/examples/start-agents/letta_starter/letta_cli.py b/examples/start-agents/letta_starter/letta_cli.py
new file mode 100644
index 00000000..d9aceffe
--- /dev/null
+++ b/examples/start-agents/letta_starter/letta_cli.py
@@ -0,0 +1,61 @@
+import os
+import sys
+import time
+from dotenv import load_dotenv
+from letta_client import Letta
+
+load_dotenv()
+
+if not os.getenv("OPENAI_API_KEY"):
+ print("β Error: OPENAI_API_KEY not found in .env file.")
+ sys.exit(1)
+
+try:
+ client = Letta(base_url="http://localhost:8283")
+except Exception:
+ print("β Error: Could not connect to Letta server. Please run using `python run.py`")
+ sys.exit(1)
+
+print("==================================================")
+print("π§ Letta (MemGPT) CLI Companion")
+print("π‘ Type 'exit' to quit.")
+print("==================================================\n")
+
+memory_blocks = [
+ {"label": "human", "value": "User information is unknown. Ask the user for their name and preferences."},
+ {"label": "persona", "value": "Your name is Echo. You are a personalized, highly empathetic AI companion. You actively update your memory to remember facts about the user."}
+]
+
+agent_state = client.agents.create(
+ name=f"echo-cli-{int(time.time())}",
+ model="openai/gpt-4o-mini",
+ embedding="openai/text-embedding-3-small",
+ memory_blocks=memory_blocks,
+)
+
+while True:
+ try:
+ user_input = input("\nπ§ You: ")
+ if user_input.lower() in ['exit', 'quit']:
+ break
+ if not user_input.strip():
+ continue
+
+ print("π€ Echo is thinking...\n")
+
+ response = client.agents.messages.create(
+ agent_id=agent_state.id,
+ messages=[{"role": "user", "content": user_input}]
+ )
+
+ for msg in response.messages:
+ if hasattr(msg, 'message_type'):
+ if msg.message_type == 'internal_monologue':
+ print(f" [Thought: {msg.content}]")
+ elif msg.message_type == 'assistant_message' and msg.content:
+ print(f"\n⨠Echo: {msg.content}")
+
+ except KeyboardInterrupt:
+ break
+ except Exception as e:
+ print(f"\nβ An error occurred: {e}")
diff --git a/examples/start-agents/letta_starter/requirements.txt b/examples/start-agents/letta_starter/requirements.txt
new file mode 100644
index 00000000..026b728d
--- /dev/null
+++ b/examples/start-agents/letta_starter/requirements.txt
@@ -0,0 +1,5 @@
+letta
+letta-client
+python-dotenv
+streamlit
+asyncpg
\ No newline at end of file
diff --git a/examples/start-agents/letta_starter/run.py b/examples/start-agents/letta_starter/run.py
new file mode 100644
index 00000000..69e03cf3
--- /dev/null
+++ b/examples/start-agents/letta_starter/run.py
@@ -0,0 +1,37 @@
+import subprocess
+import sys
+import os
+
+def main():
+ print("==================================================")
+ print("π Letta (MemGPT) Frontend Launcher")
+ print("==================================================")
+
+ # Check if running in a Cloud environment
+ is_cloud = os.getenv("CLOUD_ENV", "false").lower() == "true"
+
+ try:
+ if is_cloud:
+ print("\nβοΈ Cloud environment detected. Auto-launching Streamlit...")
+ subprocess.run([sys.executable, "-m", "streamlit", "run", "app.py", "--server.port=8000", "--server.address=0.0.0.0"])
+ else:
+ print("\nWhich interface would you like to run?")
+ print("1. Streamlit Web Dashboard (UI Testing)")
+ print("2. Terminal CLI (Headless Testing)")
+
+ choice = input("\n> ")
+
+ if choice == '1':
+ print("\nπ Launching Streamlit Dashboard...")
+ subprocess.run([sys.executable, "-m", "streamlit", "run", "app.py"])
+ elif choice == '2':
+ print("\nπ₯οΈ Launching Terminal CLI...")
+ subprocess.run([sys.executable, "letta_cli.py"])
+ else:
+ print("Invalid choice. Shutting down.")
+
+ except KeyboardInterrupt:
+ print("\nβ Exiting frontend launcher...")
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/start-agents/mcp_sqlite_server/.env.example b/examples/start-agents/mcp_sqlite_server/.env.example
new file mode 100644
index 00000000..75b8c398
--- /dev/null
+++ b/examples/start-agents/mcp_sqlite_server/.env.example
@@ -0,0 +1,2 @@
+# The path to your local SQLite database file
+SQLITE_DB_PATH="local_data.db"
diff --git a/examples/start-agents/mcp_sqlite_server/README.md b/examples/start-agents/mcp_sqlite_server/README.md
new file mode 100644
index 00000000..635aa31c
--- /dev/null
+++ b/examples/start-agents/mcp_sqlite_server/README.md
@@ -0,0 +1,88 @@
+# ποΈ MCP SQLite Agent
+
+*Cloud deployment architecture verified for [Saturn Cloud](https://saturncloud.io/).*
+
+**Hardware:** CPU/GPU | **Resource:** Python Server | **Tech Stack:** MCP, SQLite, Python
+
+
+
+
+
+
+
+
+## π Overview
+
+The **MCP SQLite Agent** is a production-grade Model Context Protocol server. It acts as a bridge between local SQLite databases and AI models, allowing agents to perform natural language data analysis via structured SQL execution and schema reflection.
+
+### Infrastructure Deployment
+* **Hybrid Interface:** Supports both **Stdio Transport** (for production AI connections) and the **MCP Inspector** (a web UI for developer testing).
+* **Environment Isolation:** Uses a dedicated Python virtual environment to manage `aiosqlite` and the `mcp` SDK without system-level conflicts.
+* **Auto-Provisioning:** The orchestrator automatically initializes a sample `users` table if no database is detected, ensuring immediate "plug-and-play" functionality.
+
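`run.py` is not included in this excerpt, so the exact schema is an assumption, but the auto-provisioning step amounts to a standard idempotent bootstrap using the stdlib `sqlite3` module (column names here are illustrative):

```python
import sqlite3

# Illustrative bootstrap: create the sample table only if it does not exist yet
conn = sqlite3.connect(":memory:")  # run.py would open the SQLITE_DB_PATH file instead
conn.execute(
    "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, role TEXT)"
)
conn.execute("INSERT INTO users (name, role) VALUES ('Ada', 'Engineer')")

# Schema reflection of the kind the server's tools expose to the model
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
)]
print(tables)  # ['users']
print(conn.execute("SELECT * FROM users").fetchall())  # [(1, 'Ada', 'Engineer')]
```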
+---
+
+## β Prerequisites
+
+1. **Python 3.10+**: Core runtime.
+2. **Node.js & NPM**: Mandatory for the **MCP Inspector** UI.
+3. **Claude Desktop**: (Optional) Recommended client for production testing.
+
+---
+
+## ποΈ Setup & Deployment
+
+**1. System & Python Setup**
+```bash
+sudo apt update && sudo apt install -y nodejs npm
+python3 -m venv venv
+source venv/bin/activate
+pip install -r requirements.txt
+```
+
+**2. Secret Configuration**
+```bash
+cp .env.example .env
+# Ensure SQLITE_DB_PATH points to your desired .db file
+```
+
+**3. Launching the Orchestrator**
+```bash
+python run.py
+```
+
+---
+
+## π‘ Usage Guide
+
+### Mode 1: Testing with MCP Inspector (Web UI)
+Use this mode to verify your tools are working before connecting to an AI.
+1. Run `python run.py` and select **Option 2**.
+2. In the browser, click the **Connect** button (ensure command is `python3` and args is `server.py`).
+3. Navigate to the **Tools** tab.
+4. Select `query_db` and enter `SELECT * FROM users;` to test data retrieval.
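
If you'd rather sanity-check the demo data without the Inspector, the same query can be run directly with Python's stdlib `sqlite3`. This snippet recreates the `users` table that `run.py` auto-provisions, so it is self-contained and uses an in-memory database instead of your real `.db` file:

```python
import sqlite3

# Recreate the sample table that run.py provisions on first launch
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, role TEXT)")
conn.execute("INSERT INTO users (name, role) VALUES ('Admin', 'Superuser'), ('Dev', 'Engineer')")

# The same query suggested for the Inspector's Tools tab
rows = conn.execute("SELECT * FROM users;").fetchall()
print(rows)  # [(1, 'Admin', 'Superuser'), (2, 'Dev', 'Engineer')]
```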
+
+### Mode 2: Production (Claude Desktop)
+To give Claude "eyes" on your local database:
+1. Locate your Claude Desktop config: `~/Library/Application Support/Claude/claude_desktop_config.json` on macOS, or `%APPDATA%\Claude\claude_desktop_config.json` on Windows.
+2. Add the following entry to the `mcpServers` object:
+```json
+"mcp-sqlite": {
+ "command": "/path/to/your/venv/bin/python3",
+ "args": ["/path/to/your/mcp_sqlite_server/server.py"],
+ "env": { "SQLITE_DB_PATH": "/path/to/your/local_data.db" }
+}
+```
+3. Restart Claude and look for the π¨ icon.
+
+**Example Prompts:**
+* *"List the tables in my local database."*
+* *"Who are the users registered as 'Engineer'?"*
+
+---
+
+## π Official Documentation & References
+
+* [MCP Documentation](https://modelcontextprotocol.io/)
+* [FastMCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)
+* [aiosqlite Reference](https://aiosqlite.omnilib.dev/)
diff --git a/examples/start-agents/mcp_sqlite_server/requirements.txt b/examples/start-agents/mcp_sqlite_server/requirements.txt
new file mode 100644
index 00000000..2d3b9d23
--- /dev/null
+++ b/examples/start-agents/mcp_sqlite_server/requirements.txt
@@ -0,0 +1,4 @@
+mcp
+aiosqlite
+python-dotenv
+
diff --git a/examples/start-agents/mcp_sqlite_server/run.py b/examples/start-agents/mcp_sqlite_server/run.py
new file mode 100644
index 00000000..93de97f3
--- /dev/null
+++ b/examples/start-agents/mcp_sqlite_server/run.py
@@ -0,0 +1,43 @@
+import subprocess
+import sys
+import os
+from dotenv import load_dotenv
+
+load_dotenv()
+
+def main():
+    print("==================================================")
+    print("ποΈ MCP SQLite Server Orchestrator")
+    print("==================================================")
+
+    db_path = os.getenv("SQLITE_DB_PATH", "local_data.db")
+    if not os.path.exists(db_path):
+        print(f"β οΈ Warning: Database '{db_path}' not found. Creating a sample DB...")
+        import sqlite3
+        conn = sqlite3.connect(db_path)
+        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, role TEXT)")
+        conn.execute("INSERT INTO users (name, role) VALUES ('Admin', 'Superuser'), ('Dev', 'Engineer')")
+        conn.commit()
+        conn.close()
+
+    print("\nSelect Mode:")
+    print("1. Run MCP Server (Standard Stdio)")
+    print("2. Run MCP Inspector (Web UI for Testing Tools)")
+
+    choice = input("\n> ")
+
+    try:
+        if choice == '1':
+            print("π Server starting... (Connect your MCP client to this process)")
+            subprocess.run([sys.executable, "server.py"])
+        elif choice == '2':
+            print("π Launching MCP Inspector...")
+            # Requires Node.js/npm (see Prerequisites); npx fetches the Inspector on first run
+            subprocess.run(["npx", "@modelcontextprotocol/inspector", "python3", "server.py"])
+        else:
+            print("Invalid choice.")
+    except KeyboardInterrupt:
+        print("\nπ Shutdown complete.")
+
+if __name__ == "__main__":
+    main()
diff --git a/examples/start-agents/mcp_sqlite_server/server.py b/examples/start-agents/mcp_sqlite_server/server.py
new file mode 100644
index 00000000..36cf65c8
--- /dev/null
+++ b/examples/start-agents/mcp_sqlite_server/server.py
@@ -0,0 +1,40 @@
+import os
+import aiosqlite
+from mcp.server.fastmcp import FastMCP
+from dotenv import load_dotenv
+
+load_dotenv()
+
+# Initialize FastMCP Server
+mcp = FastMCP("SQLite-Local-Server")
+DB_PATH = os.getenv("SQLITE_DB_PATH", "local_data.db")
+
+@mcp.tool()
+async def query_db(sql: str):
+    """Execute a SQL query on the local SQLite database and return rows as dicts."""
+    async with aiosqlite.connect(DB_PATH) as db:
+        db.row_factory = aiosqlite.Row
+        async with db.execute(sql) as cursor:
+            rows = await cursor.fetchall()
+            return [dict(row) for row in rows]
+
+@mcp.tool()
+async def list_tables():
+    """List all tables available in the local database."""
+    async with aiosqlite.connect(DB_PATH) as db:
+        async with db.execute("SELECT name FROM sqlite_master WHERE type='table';") as cursor:
+            rows = await cursor.fetchall()
+            return [row[0] for row in rows]
+
+@mcp.tool()
+async def describe_table(table_name: str):
+    """Get the schema/columns for a specific table."""
+    async with aiosqlite.connect(DB_PATH) as db:
+        db.row_factory = aiosqlite.Row
+        async with db.execute(f"PRAGMA table_info({table_name});") as cursor:
+            rows = await cursor.fetchall()
+            return [dict(row) for row in rows]
+
+if __name__ == "__main__":
+    # Run the server using the MCP stdio transport
+    mcp.run()
diff --git a/examples/start-agents/meta_gpt_factory/Dockerfile b/examples/start-agents/meta_gpt_factory/Dockerfile
new file mode 100644
index 00000000..1520ab9a
--- /dev/null
+++ b/examples/start-agents/meta_gpt_factory/Dockerfile
@@ -0,0 +1,26 @@
+FROM python:3.11-slim
+
+# Install system dependencies
+RUN apt-get update && apt-get install -y \
+    git curl libnss3 libnspr4 libatk1.0-0 libatk-bridge2.0-0 \
+    libcups2 libdrm2 libxkbcommon0 libxcomposite1 libxdamage1 \
+    libxext6 libxfixes3 libxrandr2 libgbm1 libasound2 \
+    && rm -rf /var/lib/apt/lists/*
+
+WORKDIR /app
+
+# Step 1: Upgrade pip and install the pinned dependency anchors
+COPY requirements.txt .
+RUN pip install --no-cache-dir --upgrade pip && \
+    pip install --no-cache-dir -r requirements.txt
+
+# Step 2: Install Playwright Chromium
+RUN playwright install chromium
+
+RUN mkdir -p /app/config /app/workspace
+COPY . .
+
+EXPOSE 8501
+ENV METAGPT_CONFIG_PATH=/app/config/config2.yaml
+
+CMD ["streamlit", "run", "app.py", "--server.address=0.0.0.0"]
\ No newline at end of file
diff --git a/examples/start-agents/meta_gpt_factory/README.md b/examples/start-agents/meta_gpt_factory/README.md
new file mode 100644
index 00000000..708403d2
--- /dev/null
+++ b/examples/start-agents/meta_gpt_factory/README.md
@@ -0,0 +1,71 @@
+# π MetaGPT Software Factory
+
+*Cloud deployment architecture verified for [Saturn Cloud](https://saturncloud.io/).*
+
+**Hardware:** 4+ vCPU / 8GB+ RAM | **Resource:** Python Server | **Tech Stack:** MetaGPT, Docker, Python
+
+## π Overview
+
+The **MetaGPT Software Factory** is a production-grade multi-agent framework that simulates an entire software company. By assigning specialized rolesβProduct Manager, Architect, and Engineerβthe system transforms a single natural language requirement into a comprehensive repository including PRDs, system designs, and executable code.
+
+### Infrastructure Deployment
+* **Environment Isolation:** Containerized via **Docker** on a Python 3.11-slim base with pinned dependency anchors to prevent version conflicts and ensure a "clean room" execution environment.
+* **Persistent Workspace:** Uses Docker volume mounting to map internal agent outputs to the local `./workspace` directory, ensuring all generated assets remain persistent.
+* **Headless Research:** Integrated with **Playwright** to allow the Product Manager agent to perform real-time market research via an automated Chromium browser.
+
+---
+
+## β Prerequisites
+
+1. **OpenAI API Key**: Required for the agentic reasoning engine (`gpt-4o`). [Get Key](https://platform.openai.com/)
+2. **Docker & Docker Compose**: Mandatory for environment orchestration and isolation.
+3. **Kali Linux Users**: Run `export DOCKER_HOST=unix:///var/run/docker.sock` to ensure Docker takes priority over Podman.
+4. **Hardware Context**: Minimum 8GB RAM recommended for parallel agent processing.
+
+---
+
+## ποΈ Setup & Deployment
+
+**1. Secret Configuration**
+```bash
+cat << 'EOF' > config2.yaml
+llm:
+ api_type: "openai"
+ api_key: "sk-YOUR_KEY_HERE"
+ model: "gpt-4o"
+ base_url: "https://api.openai.com/v1"
+EOF
+```
+
+**2. Launching the Factory**
+```bash
+docker-compose up --build
+```
+
+---
+
+## π‘ Usage Guide
+
+### Mode 1: Streamlit Dashboard (Visual SOP)
+1. Run `docker-compose up` and navigate to `http://localhost:8501`.
+2. Input your software project idea.
+3. Click **"Start Production"** to trigger the waterfall process.
+
+### Mode 2: CLI Power-User
+```bash
+docker-compose run factory python run.py "Create a secure Flask API with JWT authentication"
+```
+
+## π Official Documentation & References
+
+* [MetaGPT Official Documentation](https://docs.deepwisdom.ai/main/en/)
+* [Docker Compose Specification](https://docs.docker.com/compose/)
+* [Playwright Python SDK](https://playwright.dev/python/docs/intro)
diff --git a/examples/start-agents/meta_gpt_factory/app.py b/examples/start-agents/meta_gpt_factory/app.py
new file mode 100644
index 00000000..23f942d7
--- /dev/null
+++ b/examples/start-agents/meta_gpt_factory/app.py
@@ -0,0 +1,36 @@
+import streamlit as st
+import asyncio
+from metagpt.team import Team
+from metagpt.roles import ProductManager, Architect, ProjectManager, Engineer
+
+# UI Setup
+st.set_page_config(page_title="MetaGPT Factory", layout="wide")
+st.title("ποΈ AI Software Factory")
+
+idea = st.text_area("What should the factory build?", placeholder="e.g. Create a CLI-based Password Manager in Python")
+
+if st.button("Start Production"):
+    if idea:
+        async def run_factory():
+            # The engine finds the config automatically via the Docker ENV variable
+            team = Team()
+            team.hire([ProductManager(), Architect(), ProjectManager(), Engineer()])
+            team.invest(3.0)
+
+            # run_project is a synchronous method in MetaGPT 0.8.2; it only queues the idea
+            team.run_project(idea)
+
+            # team.run() is what actually executes the agent rounds
+            await team.run(n_round=5)
+            st.success("Software Production Complete! Check your /workspace folder.")
+
+        with st.spinner("Agents are collaborating... check terminal for live logs."):
+            try:
+                # Use a fresh event loop to avoid conflicts with Streamlit's own loop
+                loop = asyncio.new_event_loop()
+                asyncio.set_event_loop(loop)
+                loop.run_until_complete(run_factory())
+            except Exception as e:
+                st.error(f"Execution Error: {e}")
+    else:
+        st.warning("Please enter a project idea.")
\ No newline at end of file
diff --git a/examples/start-agents/meta_gpt_factory/config2.yaml b/examples/start-agents/meta_gpt_factory/config2.yaml
new file mode 100644
index 00000000..8c1b754e
--- /dev/null
+++ b/examples/start-agents/meta_gpt_factory/config2.yaml
@@ -0,0 +1,5 @@
+llm:
+ api_type: "openai"
+ api_key: "sk-YOUR_KEY_HERE"
+ model: "gpt-4o"
+ base_url: "https://api.openai.com/v1"
diff --git a/examples/start-agents/meta_gpt_factory/docker-compose.yml b/examples/start-agents/meta_gpt_factory/docker-compose.yml
new file mode 100644
index 00000000..7755882b
--- /dev/null
+++ b/examples/start-agents/meta_gpt_factory/docker-compose.yml
@@ -0,0 +1,12 @@
+services:
+ factory:
+ build: .
+ container_name: software_factory
+ volumes:
+ - ./workspace:/app/workspace
+ - ./config2.yaml:/app/config/config2.yaml
+ ports:
+ - "8501:8501"
+ environment:
+ - PYTHONUNBUFFERED=1
+ - METAGPT_CONFIG_PATH=/app/config/config2.yaml
\ No newline at end of file
diff --git a/examples/start-agents/meta_gpt_factory/requirements.txt b/examples/start-agents/meta_gpt_factory/requirements.txt
new file mode 100644
index 00000000..c991c78d
--- /dev/null
+++ b/examples/start-agents/meta_gpt_factory/requirements.txt
@@ -0,0 +1,21 @@
+# --- PRIMARY ENGINES ---
+metagpt==0.8.2
+openai==1.39.0
+httpx==0.27.2
+pydantic==2.9.2
+
+# --- THE ANCHORS ---
+protobuf==4.25.3
+google-api-core==2.19.1
+google-auth==2.29.0
+googleapis-common-protos==1.63.0
+proto-plus==1.23.0
+opentelemetry-api==1.24.0
+opentelemetry-sdk==1.24.0
+importlib-metadata==7.0.0
+
+# --- UI & SUPPORT ---
+streamlit==1.32.0
+playwright
+beautifulsoup4
+python-docx
\ No newline at end of file
diff --git a/examples/start-agents/meta_gpt_factory/run.py b/examples/start-agents/meta_gpt_factory/run.py
new file mode 100644
index 00000000..fa53e957
--- /dev/null
+++ b/examples/start-agents/meta_gpt_factory/run.py
@@ -0,0 +1,25 @@
+import asyncio
+from metagpt.team import Team
+from metagpt.roles import ProductManager, Architect, ProjectManager, Engineer
+
+async def startup(idea: str):
+    # Create the Software Company Team
+    company = Team()
+
+    # Hire the standard production roles
+    company.hire([
+        ProductManager(),
+        Architect(),
+        ProjectManager(),
+        Engineer()
+    ])
+
+    # Run the production line
+    company.run_project(idea)
+    await company.run(n_round=5)
+    print("β Production Finished! Check your /workspace folder.")
+
+if __name__ == "__main__":
+    import sys
+    idea = " ".join(sys.argv[1:]) if len(sys.argv) > 1 else input("π Project Idea: ")
+    asyncio.run(startup(idea=idea))
\ No newline at end of file
diff --git a/examples/start-agents/microsoft_autogen/.env.example b/examples/start-agents/microsoft_autogen/.env.example
new file mode 100644
index 00000000..141ca06f
--- /dev/null
+++ b/examples/start-agents/microsoft_autogen/.env.example
@@ -0,0 +1 @@
+OPENAI_API_KEY="sk-your-openai-key-goes-here"
\ No newline at end of file
diff --git a/examples/start-agents/microsoft_autogen/README.md b/examples/start-agents/microsoft_autogen/README.md
new file mode 100644
index 00000000..816cf306
--- /dev/null
+++ b/examples/start-agents/microsoft_autogen/README.md
@@ -0,0 +1,98 @@
+# π€ Microsoft AutoGen Starter
+
+*Cloud deployment architecture verified for [Saturn Cloud](https://saturncloud.io/).*
+
+**Hardware:** CPU/GPU | **Resource:** Python Script & Web App | **Tech Stack:** AutoGen, Python, Streamlit
+
+## π Overview
+
+The **Microsoft AutoGen Starter** is a production-grade template for building multi-agent workflows with Microsoft's **AutoGen** framework. Conversable agents coordinate through an automated chat loop, decomposing a single natural language task into planning, code generation, and execution steps.
+
+### β¨ Key Capabilities
+* **Conversable Agents:** An LLM-powered `AssistantAgent` is paired with a `UserProxyAgent` that can execute generated code and feed the results back into the conversation.
+* **Dual Interface:** Run as a terminal Python script for scripted workflows, or through the **Streamlit** web app for interactive sessions.