
Commit bbe7e50

SonAIengine and claude committed:

> docs: add Gateway Tools (search_tools + call_tool) usage examples to README
>
> Complete overhaul of the LangChain section:
> - Gateway Tools: 62 tools → 2 meta-tools, 92% token reduction example
> - Auto-filter Agent: automatic per-turn filtering example
> - Pattern comparison table (Gateway vs Auto-filter vs Manual)
> - Added a LangChain Gateway row to At a Glance
> - Updated the Korean/Chinese/Japanese READMEs in lockstep
> - xgen-workflow simulation e2e test + token-reduction verification test
>
> Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

1 parent: 724567b

5 files changed: 724 additions & 11 deletions

README-ja.md (36 additions, 2 deletions)

````diff
@@ -373,12 +373,44 @@ from graph_tool_call.middleware import patch_anthropic
 patch_anthropic(client, graph=tg, top_k=5)
 ```
 
-### LangChain Integration
+### LangChain / LangGraph Integration
 
 ```bash
-pip install graph-tool-call[langchain]
+pip install graph-tool-call[langchain] langgraph
 ```
 
+#### Gateway Tools (recommended for large tool sets)
+
+Converts 50~500+ tools into **2 meta-tools** (`search_tools` + `call_tool`):
+
+```python
+from graph_tool_call.langchain import create_gateway_tools
+
+# Convert 62 tools into 2 gateway meta-tools
+gateway = create_gateway_tools(all_tools, top_k=10)
+
+# Pass only the 2 tools to the agent
+agent = create_react_agent(model=llm, tools=gateway)
+```
+
+| | All tools bound | Gateway (2 tools) |
+|---|:---:|:---:|
+| **62 tools** | ~6,090 tokens/turn | ~475 tokens/turn |
+| **Token reduction** | | **92%** |
+
+#### Auto-filtering Agent
+
+Automatically binds only the relevant tools to the LLM each turn:
+
+```python
+from graph_tool_call.langchain import create_agent
+
+agent = create_agent(llm, tools=all_200_tools, top_k=5)
+```
+
+<details>
+<summary>LangChain Retriever (returns Documents)</summary>
+
 ```python
 from graph_tool_call import ToolGraph
 from graph_tool_call.langchain import GraphToolRetriever
@@ -393,6 +425,8 @@ for doc in docs:
 print(doc.metadata["tags"])  # ["order"]
 ```
 
+</details>
+
 ---
 
 ## Benchmarks
````
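The gateway pattern this commit documents can be sketched without the library: two meta-tools closing over a registry of real tools, where the LLM first calls `search_tools` and then `call_tool`. The registry, scoring, and function names below are illustrative stand-ins, not graph-tool-call's actual implementation:

```python
from typing import Any, Callable

def make_gateway(tools: dict[str, tuple[str, Callable[..., Any]]], top_k: int = 10):
    """Build the two meta-tools over a registry of {name: (description, fn)}."""

    def search_tools(query: str) -> list[dict]:
        # Naive relevance stand-in: count query words found in name + description.
        words = query.lower().split()
        scored = sorted(
            tools.items(),
            key=lambda kv: -sum(w in (kv[0] + " " + kv[1][0]).lower() for w in words),
        )
        return [{"name": n, "description": d} for n, (d, _) in scored[:top_k]]

    def call_tool(tool_name: str, arguments: dict) -> Any:
        # Dispatch to the real tool by name with the LLM-provided arguments.
        _, fn = tools[tool_name]
        return fn(**arguments)

    return search_tools, call_tool

# Toy registry standing in for the 62 tools in the example above.
registry = {
    "cancel_order": ("Cancel an order by id",
                     lambda order_id: {"order_id": order_id, "status": "cancelled"}),
    "get_order": ("Fetch an order by id", lambda order_id: {"order_id": order_id}),
    "send_slack": ("Post a Slack message", lambda channel, text: "ok"),
}
search_tools, call_tool = make_gateway(registry, top_k=2)
```

However large the registry grows, only these two callables (and their short schemas) would ever be bound to the model.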

README-ko.md (38 additions, 3 deletions)

````diff
@@ -468,19 +468,52 @@ from graph_tool_call.middleware import patch_anthropic
 patch_anthropic(client, graph=tg, top_k=5)
 ```
 
-### LangChain Integration
+### LangChain / LangGraph Integration
 
 ```bash
-pip install graph-tool-call[langchain]
+pip install graph-tool-call[langchain] langgraph
+```
+
+#### Gateway Tools (recommended for large tool sets)
+
+Converts 50~500+ tools into **2 meta-tools**; the LLM searches, then executes:
+
+```python
+from graph_tool_call.langchain import create_gateway_tools
+
+# 62 tools from sources such as Slack, GitHub, Jira, MS365
+gateway = create_gateway_tools(all_tools, top_k=10)
+# → pass only the 2 tools [search_tools, call_tool] to the agent
+
+agent = create_react_agent(model=llm, tools=gateway)
+result = agent.invoke({"messages": [("user", "Move issue PROJ-123 to Done")]})
+```
+
+| | All tools bound | Gateway (2 tools) |
+|---|:---:|:---:|
+| **62 tools** | ~6,090 tokens/turn | ~475 tokens/turn |
+| **Token reduction** | | **92%** |
+
+#### Auto-filtering Agent
+
+Automatically binds only the relevant tools to the LLM each turn:
+
+```python
+from graph_tool_call.langchain import create_agent
+
+agent = create_agent(llm, tools=all_200_tools, top_k=5)
+# each turn, only ~5 tools are exposed based on the user message
 ```
 
+<details>
+<summary>LangChain Retriever (returns Documents)</summary>
+
 ```python
 from graph_tool_call import ToolGraph
 from graph_tool_call.langchain import GraphToolRetriever
 
 tg = ToolGraph.from_url("https://api.example.com/openapi.json")
 
-# LangChain retriever — works with any chain/agent
 retriever = GraphToolRetriever(tool_graph=tg, top_k=5)
 docs = retriever.invoke("cancel an order")
 
@@ -489,6 +522,8 @@ for doc in docs:
 print(doc.metadata["tags"])  # ["order"]
 ```
 
+</details>
+
 ---
 
 ## Benchmarks
````
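The 92% figure repeated in the tables above is plain arithmetic over the reported per-turn totals (~6,090 tokens with all 62 tools bound vs. ~475 tokens with the 2-tool gateway):

```python
def token_reduction(baseline: float, optimized: float) -> float:
    """Percentage of per-turn tokens saved relative to the baseline."""
    return (1 - optimized / baseline) * 100

# Per-turn totals from the README tables.
saved = token_reduction(6090, 475)
print(f"{saved:.0f}%")  # → 92%
```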

README-zh_CN.md (36 additions, 2 deletions)

````diff
@@ -372,12 +372,44 @@ from graph_tool_call.middleware import patch_anthropic
 patch_anthropic(client, graph=tg, top_k=5)
 ```
 
-### LangChain Integration
+### LangChain / LangGraph Integration
 
 ```bash
-pip install graph-tool-call[langchain]
+pip install graph-tool-call[langchain] langgraph
 ```
 
+#### Gateway Tools (recommended for large tool sets)
+
+Convert 50~500+ tools into **2 meta-tools** (`search_tools` + `call_tool`):
+
+```python
+from graph_tool_call.langchain import create_gateway_tools
+
+# Convert 62 tools into 2 gateway meta-tools
+gateway = create_gateway_tools(all_tools, top_k=10)
+
+# Pass only the 2 tools to the agent
+agent = create_react_agent(model=llm, tools=gateway)
+```
+
+| | All tools bound | Gateway (2 tools) |
+|---|:---:|:---:|
+| **62 tools** | ~6,090 tokens/turn | ~475 tokens/turn |
+| **Token reduction** | | **92%** |
+
+#### Auto-filtering Agent
+
+Automatically binds only the relevant tools to the LLM each turn:
+
+```python
+from graph_tool_call.langchain import create_agent
+
+agent = create_agent(llm, tools=all_200_tools, top_k=5)
+```
+
+<details>
+<summary>LangChain Retriever (returns Documents)</summary>
+
 ```python
 from graph_tool_call import ToolGraph
 from graph_tool_call.langchain import GraphToolRetriever
@@ -392,6 +424,8 @@ for doc in docs:
 print(doc.metadata["tags"])  # ["order"]
 ```
 
+</details>
+
 ---
 
 ## Benchmarks
````
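The auto-filtering agent added above re-selects tools from the latest user message on every turn. A minimal stand-in for that per-turn selection step, using simple word overlap in place of the library's graph-based retrieval (all names below are hypothetical, not graph-tool-call's API):

```python
def filter_by_message(message: str, tool_descriptions: dict[str, str],
                      top_k: int = 5) -> list[str]:
    """Pick the top_k tool names whose name + description best overlap the message."""
    words = set(message.lower().split())

    def score(item: tuple[str, str]) -> int:
        name, desc = item
        # Treat underscores as word separators so "cancel_order" matches "cancel".
        return len(words & set((name.replace("_", " ") + " " + desc).lower().split()))

    ranked = sorted(tool_descriptions.items(), key=score, reverse=True)
    return [name for name, _ in ranked[:top_k]]

tools = {
    "cancel_order": "cancel an existing order",
    "create_issue": "create a jira issue",
    "send_email": "send an email message",
}
print(filter_by_message("please cancel my order", tools, top_k=1))  # → ['cancel_order']
```

In the real agent this selection runs before each model call, so only the surviving tools' schemas are bound for that turn.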

README.md (56 additions, 4 deletions)

````diff
@@ -53,6 +53,7 @@ LLM agents need tools. But as tool count grows, two things break:
 | **Zero dependencies** | Core runs on Python stdlib only — add extras as needed |
 | **Any tool source** | Auto-ingest from OpenAPI / Swagger / MCP / Python functions |
 | **History-aware** | Previously called tools are demoted; next-step tools are boosted |
+| **LangChain Gateway** | 62 tools → 2 meta-tools, **92% token reduction** per turn |
 | **MCP Proxy** | 172 tools across servers → 3 meta-tools, saving ~1,200 tokens/turn |
 
 ---
@@ -524,7 +525,53 @@ toolkit.graph.enable_embedding("ollama/qwen3-embedding:0.6b")
 pip install graph-tool-call[langchain] langgraph
 ```
 
-**Drop-in agent** — pass all your tools, graph-tool-call **automatically filters per turn**:
+Three integration patterns — pick the one that fits your architecture:
+
+#### Gateway Tools (recommended for large tool sets)
+
+Convert 50~500+ tools into **2 meta-tools** (`search_tools` + `call_tool`).
+The LLM searches first, then calls — no tool definitions bloat the context.
+
+```python
+from graph_tool_call.langchain import create_gateway_tools
+
+# 62 tools from Slack, GitHub, Jira, MS365, custom APIs...
+all_tools = slack_tools + github_tools + jira_tools + ms365_tools + api_tools
+
+# Convert to 2 gateway meta-tools
+gateway = create_gateway_tools(all_tools, top_k=10)
+# → [search_tools, call_tool]
+
+# Use with any LangChain agent — only 2 tools in context
+agent = create_react_agent(model=llm, tools=gateway)
+result = agent.invoke({"messages": [("user", "Move issue PROJ-123 to Done")]})
+```
+
+**How it works** — the LLM drives the search:
+
+```text
+User: "Cancel order #500"
+
+LLM calls search_tools(query="cancel order")
+  → returns: cancel_order, get_order, process_refund (with parameter info)
+
+LLM calls call_tool(tool_name="cancel_order", arguments={"order_id": 500})
+  → returns: {"order_id": 500, "status": "cancelled"}
+
+LLM: "Order #500 has been cancelled."
+```
+
+| | All tools bound | Gateway (2 tools) |
+|---|:---:|:---:|
+| **62 tools** | ~6,090 tokens/turn | ~475 tokens/turn |
+| **Token reduction** | | **92%** |
+| **Accuracy** (qwen3.5:4b) | | 70% (100% with GPT-4o) |
+
+> Works with **any existing LangChain agent setup**. Just replace `tools=all_tools` with `tools=create_gateway_tools(all_tools)`.
+
+#### Auto-filtering Agent (transparent per-turn filtering)
+
+The agent automatically filters tools each turn — the LLM never sees the full list:
 
 ```python
 from graph_tool_call.langchain import create_agent
@@ -537,11 +584,16 @@ result = agent.invoke({"messages": [("user", "cancel my order")]})
 # Turn 2: LLM sees [next relevant tools based on conversation]
 ```
 
-Each turn, the latest user message is used to retrieve relevant tools via ToolGraph,
-and only those are bound to the model — **saving tokens automatically**.
+#### Which to use?
+
+| Pattern | Best for | How it works |
+|---------|----------|--------------|
+| **Gateway** | 50+ tools, existing agents | LLM explicitly searches → calls |
+| **Auto-filter** | New agents, simple setup | Transparent per-turn tool swap |
+| **Manual** | Full control | You call `filter_tools()` yourself |
 
 <details>
-<summary>Manual filtering (more control)</summary>
+<summary>Manual filtering</summary>
 
 ```python
 from graph_tool_call import filter_tools
````