@@ -53,6 +53,7 @@ LLM agents need tools. But as tool count grows, two things break:
 | **Zero dependencies** | Core runs on Python stdlib only — add extras as needed |
 | **Any tool source** | Auto-ingest from OpenAPI / Swagger / MCP / Python functions |
 | **History-aware** | Previously called tools are demoted; next-step tools are boosted |
+| **LangChain Gateway** | 62 tools → 2 meta-tools, **92% token reduction** per turn |
 | **MCP Proxy** | 172 tools across servers → 3 meta-tools, saving ~1,200 tokens/turn |
 
 ---
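The `search_tools` / `call_tool` gateway pattern this change introduces can be sketched in plain Python. This is a hypothetical illustration, not the graph-tool-call implementation: the tool registry, the keyword matcher, and both tool functions are invented for the example (a real gateway would retrieve via the tool graph or embeddings).

```python
# Minimal sketch of the gateway meta-tool pattern: two functions stand
# in for search_tools / call_tool over a plain registry of callables.
# All names here are illustrative, not the library's API.
import inspect

def cancel_order(order_id: int) -> dict:
    """Cancel an order by id."""
    return {"order_id": order_id, "status": "cancelled"}

def get_order(order_id: int) -> dict:
    """Fetch an order by id."""
    return {"order_id": order_id, "status": "open"}

REGISTRY = {f.__name__: f for f in (cancel_order, get_order)}

def search_tools(query: str, top_k: int = 10) -> list[dict]:
    """Naive keyword match over tool names and docstrings."""
    words = query.lower().split()
    hits = [
        {"name": name,
         "signature": str(inspect.signature(fn)),
         "doc": fn.__doc__}
        for name, fn in REGISTRY.items()
        if any(w in (name + " " + (fn.__doc__ or "")).lower() for w in words)
    ]
    return hits[:top_k]

def call_tool(tool_name: str, arguments: dict):
    """Dispatch a call to a registered tool by name."""
    return REGISTRY[tool_name](**arguments)

# The LLM drives this loop: search first, then call.
print(search_tools("cancel order"))
print(call_tool("cancel_order", {"order_id": 500}))
# → {'order_id': 500, 'status': 'cancelled'}
```

The point of the pattern: only these two small tool definitions ever enter the model's context, regardless of how many tools sit in the registry.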
@@ -524,7 +525,53 @@ toolkit.graph.enable_embedding("ollama/qwen3-embedding:0.6b")
 pip install graph-tool-call[langchain] langgraph
 ```
 
-**Drop-in agent** — pass all your tools, graph-tool-call **automatically filters per turn**:
+Three integration patterns — pick the one that fits your architecture:
+
+#### Gateway Tools (recommended for large tool sets)
+
+Convert 50–500+ tools into **2 meta-tools** (`search_tools` + `call_tool`).
+The LLM searches first, then calls — no tool-definition bloat in context.
+
+```python
+from graph_tool_call.langchain import create_gateway_tools
+
+# 62 tools from Slack, GitHub, Jira, MS365, custom APIs...
+all_tools = slack_tools + github_tools + jira_tools + ms365_tools + api_tools
+
+# Convert to 2 gateway meta-tools
+gateway = create_gateway_tools(all_tools, top_k=10)
+# → [search_tools, call_tool]
+
+# Use with any LangChain agent — only 2 tools in context
+agent = create_react_agent(model=llm, tools=gateway)
+result = agent.invoke({"messages": [("user", "Move issue PROJ-123 to Done")]})
+```
+
+**How it works** — the LLM drives the search:
+
+```text
+User: "Cancel order #500"
+  ↓
+LLM calls search_tools(query="cancel order")
+  → returns: cancel_order, get_order, process_refund (with parameter info)
+  ↓
+LLM calls call_tool(tool_name="cancel_order", arguments={"order_id": 500})
+  → returns: {"order_id": 500, "status": "cancelled"}
+  ↓
+LLM: "Order #500 has been cancelled."
+```
+
+| | All tools bound | Gateway (2 tools) |
+|---|:---:|:---:|
+| **62 tools** | ~6,090 tokens/turn | ~475 tokens/turn |
+| **Token reduction** | — | **92%** |
+| **Accuracy** (qwen3.5:4b) | — | 70% (100% with GPT-4o) |
+
+> Works with **any existing LangChain agent setup**. Just replace `tools=all_tools` with `tools=create_gateway_tools(all_tools)`.
+
+#### Auto-filtering Agent (transparent per-turn filtering)
+
+The agent automatically filters tools each turn — the LLM never sees the full list:
 
 ```python
 from graph_tool_call.langchain import create_agent
@@ -537,11 +584,16 @@ result = agent.invoke({"messages": [("user", "cancel my order")]})
 # Turn 2: LLM sees [next relevant tools based on conversation]
 ```
 
-Each turn, the latest user message is used to retrieve relevant tools via ToolGraph,
-and only those are bound to the model — **saving tokens automatically**.
+#### Which to use?
+
+| Pattern | Best for | How it works |
+|---------|----------|--------------|
+| **Gateway** | 50+ tools, existing agents | LLM explicitly searches → calls |
+| **Auto-filter** | New agents, simple setup | Transparent per-turn tool swap |
+| **Manual** | Full control | You call `filter_tools()` yourself |
 
 <details>
-<summary>Manual filtering (more control)</summary>
+<summary>Manual filtering</summary>
 
 ```python
 from graph_tool_call import filter_tools