This repository was archived by the owner on Jul 4, 2025. It is now read-only.

Commit 3eb7342

Merge pull request #353 from janhq/feat/autogen-docs
Chore: Nitro x Autogen documentation
2 parents 23856d0 + 2ea5b2f commit 3eb7342

7 files changed: +101 −1 lines changed

docs/docs/examples/autogen.md

Lines changed: 99 additions & 0 deletions
---
title: Nitro with Autogen
description: Nitro integration guide for using Autogen.
keywords: [Nitro, autogen, autogen studio, autogen 2.0, litellm, ollama, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
---

This guide demonstrates how to use Nitro with AutoGen to develop a multi-agent framework.

## What is AutoGen?

AutoGen makes developing multi-agent conversations a breeze. It's well suited to complex large language model (LLM) projects, offering flexible, interactive agents. These agents can work with LLMs, human input, and other tools in various combinations.

AutoGen Studio upgrades AutoGen with a user-friendly drag-and-drop interface. It simplifies creating and tweaking agents and workflows. You can start chat sessions, track chat history and files, and monitor time spent. It also lets users add extra skills to agents and share their projects easily, catering to all user levels.

## Setting Up

### Install AutoGen Studio

Just run:

```bash
pip install autogenstudio
```

### Launch AutoGen Studio

Use this command:

```bash
autogenstudio ui --port 8000
```

For more on AutoGen Studio, see the [announcement post](https://microsoft.github.io/autogen/blog/2023/12/01/AutoGenStudio/).

![Autogen Studio page](img/autogen_page.png)

## Using a Local Model with Nitro

**1. Start Nitro Server**

Open your terminal and run:

```bash
nitro
```

**2. Download Model**

To get the [Stealth 7B](https://huggingface.co/janhq/stealth-v1.3-GGUF) model, enter:

```bash title="Get a model"
mkdir model && cd model
wget -O stealth-7b-model.gguf https://huggingface.co/janhq/stealth-v1.3-GGUF/resolve/main/stealth-v1.3.Q4_K_M.gguf
```

> Explore more models at [The Bloke](https://huggingface.co/TheBloke).

**3. Load the Model**

Run this to load the model:

```bash title="Load model to the server"
curl http://localhost:3928/inferences/llamacpp/loadmodel \
  -H 'Content-Type: application/json' \
  -d '{
    "llama_model_path": "model/stealth-7b-model.gguf",
    "ctx_len": 512,
    "ngl": 100
  }'
```
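
The same load request can be issued from Python. Below is a minimal sketch using only the standard library; the endpoint and parameters mirror the curl call above, and it assumes the Nitro server is listening on the default port 3928:

```python
import json
import urllib.request

# Parameters mirror the curl example above; the model path is resolved
# relative to the directory where the Nitro server was started.
payload = {
    "llama_model_path": "model/stealth-7b-model.gguf",
    "ctx_len": 512,   # context window size
    "ngl": 100,       # number of layers to offload to the GPU
}

def load_model(base_url="http://localhost:3928"):
    """Send the load request; requires a running Nitro server."""
    req = urllib.request.Request(
        f"{base_url}/inferences/llamacpp/loadmodel",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```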

## Setting Up a Local Agent

In AutoGen Studio, go to the `Agent` tab and set up a new agent.

**Key setting:** In the `Model` section, set `Base URL` to `http://localhost:3928/v1`.

![Local LLM with AutoGen](img/autogen_localllm.png)
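
To verify that the endpoint the agent will use actually responds, you can call the OpenAI-compatible chat route directly. A minimal sketch, assuming the model loaded earlier; the `build_chat_payload` helper and the model name are illustrative, not part of the Nitro API:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3928/v1"  # same Base URL entered in AutoGen Studio

def build_chat_payload(prompt, model="stealth-7b-model.gguf"):
    # Standard OpenAI-style chat body; the model name is an assumption.
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def chat(prompt):
    """Requires the Nitro server to be running with a model loaded."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```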

## Crafting a Workflow

Navigate to the `Workflows` tab to create a new workflow. Change the `Sender` model to your Stealth model.

![Create local LLM workflow with AutoGen](img/autogen_workflow.png)

Make sure the `Receiver` uses the agent you just set up.

![Configure Receiver in AutoGen](img/autogen_receiver.png)

## Set a Dummy OpenAI API Key

Set a dummy environment variable for OpenAI:

```bash
export OPENAI_API_KEY=sk-***
```
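
The same placeholder can be set from Python before launching AutoGen Studio; a minimal sketch, where the `sk-local-dummy` value is an arbitrary assumption:

```python
import os

# AutoGen's OpenAI client only checks that a key is present; the local
# Nitro server never validates it, so any placeholder value works.
os.environ.setdefault("OPENAI_API_KEY", "sk-local-dummy")
```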

You're all set! Test your agent in the `Playground`.

![Example local LLM with Autogen](img/autogen_stealth.png)

docs/sidebars.js

Lines changed: 2 additions & 1 deletion
```diff
@@ -67,7 +67,8 @@ const sidebars = {
       "examples/openai-node",
       "examples/openai-python",
       "examples/colab",
-      "examples/chatboxgpt"
+      "examples/chatboxgpt",
+      "examples/autogen"
     ],
   },
   // {
```