52 changes: 52 additions & 0 deletions docs/source/components/agents/agent-spec.md
@@ -0,0 +1,52 @@
<!--
SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Agent Spec Workflow

This workflow allows running an [Agent Spec] configuration inside NeMo Agent Toolkit by converting it to a LangGraph component via the Agent Spec → LangGraph adapter.

⚠️ Potential issue | 🟡 Minor

Use the full product name on first mention and correct casing of “toolkit.”

First mention should be “NVIDIA NeMo Agent toolkit”, with lowercase “toolkit” in body text. As per coding guidelines, please adjust this line.

🔧 Suggested edit
-This workflow allows running an [Agent Spec] configuration inside NeMo Agent Toolkit by converting it to a LangGraph component via the Agent Spec → LangGraph adapter.
+This workflow allows running an [Agent Spec] configuration inside NVIDIA NeMo Agent toolkit by converting it to a LangGraph component via the Agent Spec → LangGraph adapter.
🤖 Prompt for AI Agents
In `@docs/source/components/agents/agent-spec.md` at line 20, Change the first
mention of the product from "NeMo Agent Toolkit" to the full product name
"NVIDIA NeMo Agent toolkit" in the sentence starting "This workflow allows
running an [Agent Spec] configuration inside NeMo Agent Toolkit..." and ensure
all subsequent occurrences use lowercase "toolkit"; update the sentence text
accordingly and scan the document for any other instances of "NeMo Agent
Toolkit" to correct their casing to "NeMo Agent toolkit".

Member

The fundamental issue here is that all of the LLM configuration is embedded in the agentspec definition, so you cannot rely on reuse of any component. Agents often share components, and the agentspec specification doesn't address that.
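
For context, a typical toolkit configuration declares shared components once and references them by name; the sketch below is purely illustrative (the `llms:` entry and model name are placeholders, not from this PR) and shows the kind of reuse the comment is pointing at, which an embedded Agent Spec definition bypasses:

```yaml
# Illustrative only: a shared LLM declared once and referenced by name,
# rather than re-declared inside each embedded Agent Spec definition.
llms:
  shared_llm:
    _type: nim
    model_name: meta/llama-3.1-70b-instruct

workflow:
  _type: agent_spec
  llm_name: shared_llm            # reuse the shared component by reference
  agentspec_path: path/to/agent_spec.yaml
```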


## Install

- Install optional extra:

```bash
pip install 'nvidia-nat[agentspec]'
```

## Example configuration

```yaml
workflow:
  _type: agent_spec
  description: Agent Spec workflow
  <!-- path-check-skip-next-line -->
  agentspec_path: path/to/agent_spec.yaml # or agentspec_yaml / agentspec_json
  tool_names: [pretty_formatting]
  max_history: 15
  verbose: true
```

Exactly one of `agentspec_yaml`, `agentspec_json`, or `agentspec_path` must be provided.
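
For example, a small spec can be embedded inline instead of referenced by path; the fragment below is illustrative only, reusing the minimal spec from the repository's smoke-test script:

```yaml
workflow:
  _type: agent_spec
  description: Inline Agent Spec workflow
  agentspec_yaml: |
    component_type: Agent
    name: echo-agent
    description: echo
```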

## Notes and limitations

- Tools: NeMo Agent toolkit built-in tools provided in `tool_names` are exposed to the adapter `tool_registry` by name. If the Agent Spec also defines tools, the registries are merged; duplicate names are overwritten by built-in tools.
- I/O: Inputs are standard `ChatRequest` messages; the workflow returns a `ChatResponse`.
- Streaming: Non supported.

⚠️ Potential issue | 🟡 Minor

Fix typo: “Non supported” → “Not supported.”

This is a simple spelling/grammar correction. As per coding guidelines, fix the typo.

🔧 Suggested edit
-- Streaming: Non supported. 
+- Streaming: Not supported.
🧰 Tools
🪛 LanguageTool

[grammar] ~49-~49: Ensure spelling is correct
Context: ... returns a ChatResponse. - Streaming: Non supported. - Checkpointing: Not suppor...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

🤖 Prompt for AI Agents
In `@docs/source/components/agents/agent-spec.md` at line 49, Replace the typo
string "Streaming: Non supported." with the correct phrasing "Streaming: Not
supported." in the agent specification docs (look for the exact text "Streaming:
Non supported." in agent-spec.md) so the sentence reads correctly with proper
grammar and punctuation.

- Checkpointing: Not supported.

[Agent Spec]: https://github.com/oracle/agent-spec
1 change: 1 addition & 0 deletions pyproject.toml
@@ -80,6 +80,7 @@ crewai = ["nvidia-nat-crewai"]
data-flywheel = ["nvidia-nat-data-flywheel"]
ingestion = ["nvidia-nat-ingestion"] # meta-package
langchain = ["nvidia-nat-langchain"]
agentspec = ["nvidia-nat-langchain", "pyagentspec>=0.1", "langgraph-agentspec-adapter>=0.1"] # TODO: How do we actually reference the langgraph adapter as a dependency?
Author

To be updated with pyagentspec[langgraph] 26.1

Member

TODO: How do we actually reference the langgraph adapter as a dependency?

This would best be done as a separate package rather than folded into the core library, e.g. nvidia_nat_agentspec, which could have appropriate dependencies on nvidia_nat_langchain and anything else it would need.

This really should not be an extra, because it introduces hard package requirements just by existing. We are in the process of fully isolating all agents from the core library for this exact reason.

llama-index = ["nvidia-nat-llama-index"]
Comment on lines +83 to 84

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed:

# First, examine the current state of pyproject.toml around lines 83-84
head -n 90 pyproject.toml | tail -n 20

# Check for the existence of these packages as workspace dependencies
fd -t d "nvidia-nat-langchain|pyagentspec|langgraph-agentspec-adapter" packages/

# Search for these packages in uv.lock to see if they're already listed
rg -n "pyagentspec|langgraph-agentspec-adapter|nvidia-nat-langchain" uv.lock | head -20

# Check the structure of packages directory to understand workspace deps
ls -la packages/ | head -20

# Search for pyagentspec and langgraph-agentspec-adapter in all pyproject.toml files
rg -n "pyagentspec|langgraph-agentspec-adapter" --type toml

# Check the full optional-dependencies section for context
sed -n '76,115p' pyproject.toml


Update agentspec optional dependencies to use correct version format and alphabetical ordering, then sync uv.lock.

External dependencies pyagentspec and langgraph-agentspec-adapter must use ~=0.1 instead of >=0.1 per project versioning standards, and should be alphabetically ordered within the extra list. These packages also need to be added to uv.lock via uv pip install <pkg> --sync.

🔧 Suggested edit
- agentspec = ["nvidia-nat-langchain", "pyagentspec>=0.1", "langgraph-agentspec-adapter>=0.1"]  # TODO: How do we actually reference the langgraph adapter as a dependency?
+ agentspec = ["nvidia-nat-langchain", "langgraph-agentspec-adapter~=0.1", "pyagentspec~=0.1"]  # TODO: How do we actually reference the langgraph adapter as a dependency?
🤖 Prompt for AI Agents
In `@pyproject.toml` around lines 83 - 84, Update the agentspec extra list to use
the patch-compatible version specifier "~=0.1" for pyagentspec and
langgraph-agentspec-adapter and reorder the entries alphabetically by package
name (e.g., langgraph-agentspec-adapter, nvidia-nat-langchain, pyagentspec) in
the agentspec extra; then run uv pip install langgraph-agentspec-adapter~=0.1
pyagentspec~=0.1 --sync (or run separate uv pip install <pkg> --sync for each)
to update uv.lock so the lockfile is in sync with the pyproject extras.

mcp = ["nvidia-nat-mcp"]
mem0ai = ["nvidia-nat-mem0ai"]
73 changes: 73 additions & 0 deletions scripts/agentspec_smoke.py
@@ -0,0 +1,73 @@
# SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import asyncio
import sys
import types


async def main():
    # Ensure NAT src is importable if running from repo root
    import os
    repo_root = os.path.dirname(os.path.abspath(__file__))
    src_dir = os.path.join(os.path.dirname(repo_root), "src")
    if src_dir not in sys.path:
        sys.path.insert(0, src_dir)

    # Force registration imports
    import nat.agent.register  # noqa: F401

    # Create a fake adapter module that returns a stub component
    class StubComponent:

        async def ainvoke(self, value):
            if isinstance(value, dict) and "messages" in value:
                msgs = value["messages"]
                last_user = next((m.get("content") for m in reversed(msgs) if m.get("role") == "user"), "")
                return {"output": last_user}
            return {"output": str(value)}

    class StubLoader:

        def __init__(self, *args, **kwargs):
            pass

        def load_yaml(self, _):
            return StubComponent()

    fake_mod = types.ModuleType("langgraph_agentspec_adapter.agentspecloader")
    fake_mod.AgentSpecLoader = StubLoader
    sys.modules["langgraph_agentspec_adapter"] = types.ModuleType("langgraph_agentspec_adapter")
    sys.modules["langgraph_agentspec_adapter.agentspecloader"] = fake_mod

    # Import registers agent workflows (including Agent Spec)
    import nat.agent.agentspec.register  # noqa: F401
    from nat.agent.agentspec.config import AgentSpecWorkflowConfig
    from nat.builder.workflow_builder import WorkflowBuilder

    spec_yaml = """
component_type: Agent
name: echo-agent
description: echo
"""

    cfg = AgentSpecWorkflowConfig(llm_name="dummy", agentspec_yaml=spec_yaml, tool_names=[])
    async with WorkflowBuilder() as builder:
        fn = await builder.set_workflow(cfg)
        out = await fn.acall_invoke(input_message="hello agentspec")
        print("OK:", out)


if __name__ == "__main__":
    asyncio.run(main())
64 changes: 64 additions & 0 deletions src/nat/agent/agentspec/config.py
@@ -0,0 +1,64 @@
# SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from pathlib import Path

from pydantic import Field
from pydantic import model_validator

from nat.data_models.agent import AgentBaseConfig
from nat.data_models.component_ref import FunctionGroupRef
from nat.data_models.component_ref import FunctionRef


class AgentSpecWorkflowConfig(AgentBaseConfig, name="agent_spec"):
    """
    NAT function that executes an Agent Spec configuration via the LangGraph adapter.

    Provide exactly one of agentspec_yaml, agentspec_json, or agentspec_path.
    Optionally supply tool_names to make NAT/LC tools available to the Agent Spec runtime.
    """

    description: str = Field(default="Agent Spec Workflow", description="Description of this workflow.")

    agentspec_yaml: str | None = Field(default=None, description="Inline Agent Spec YAML content")
    agentspec_json: str | None = Field(default=None, description="Inline Agent Spec JSON content")
Comment on lines +35 to +36
Member

Neither a YAML string nor a JSON string makes sense for a YAML configuration object.

I could understand wanting the entire spec defined in the configuration, but structured data is the correct way to do this.
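
For illustration, a structured (non-string) embedding might look something like the sketch below; the `agentspec` mapping shown here is hypothetical and is not a field the current config defines:

```yaml
workflow:
  _type: agent_spec
  # Hypothetical: the spec embedded as structured YAML rather than as a string,
  # so it can be validated and composed like any other config section.
  agentspec:
    component_type: Agent
    name: echo-agent
    description: echo
```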

    agentspec_path: str | None = Field(default=None, description="Path to an Agent Spec YAML/JSON file")

    tool_names: list[FunctionRef | FunctionGroupRef] = Field(
        default_factory=list, description="Optional list of tool names/groups to expose to the Agent Spec runtime.")

    max_history: int = Field(default=15, description="Maximum number of messages to keep in conversation history.")

    @model_validator(mode="after")
    def _validate_sources(self):
        provided = [self.agentspec_yaml, self.agentspec_json, self.agentspec_path]
        cnt = sum(1 for v in provided if v)
        if cnt != 1:
            raise ValueError("Exactly one of agentspec_yaml, agentspec_json, or agentspec_path must be provided")
        return self
Comment on lines +44 to +50

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed:

cat -n src/nat/agent/agentspec/config.py | head -60
rg "def _" src/nat/agent/agentspec/ -A 1 | head -40
rg "@model_validator" src/nat/data_models/ -A 3 | head -60
cat src/nat/data_models/agent.py | head -100

Add an explicit return type for _validate_sources method.

As a validator returning self, add the return type annotation -> Self for consistency with type-checking requirements and to keep the method properly typed.

🔧 Suggested edit
+from typing import Self
@@
-    def _validate_sources(self):
+    def _validate_sources(self) -> Self:
         provided = [self.agentspec_yaml, self.agentspec_json, self.agentspec_path]
         cnt = sum(1 for v in provided if v)
         if cnt != 1:
             raise ValueError("Exactly one of agentspec_yaml, agentspec_json, or agentspec_path must be provided")
         return self
🧰 Tools
🪛 Ruff (0.14.13)

49-49: Avoid specifying long messages outside the exception class

(TRY003)

🤖 Prompt for AI Agents
In `@src/nat/agent/agentspec/config.py` around lines 44 - 50, The
_validate_sources validator currently returns self but lacks a type annotation;
update the method signature of _validate_sources (the function decorated with
`@model_validator`) to include an explicit return type -> Self and ensure Self is
imported (from typing or typing_extensions as appropriate for the project),
keeping the body and the final "return self" unchanged so static type checkers
recognize the correct return type.



def read_agentspec_payload(config: AgentSpecWorkflowConfig) -> tuple[str, str]:
    """Return (format, payload_str) where format is 'yaml' or 'json'."""
    if config.agentspec_yaml:
        return ("yaml", config.agentspec_yaml)
    if config.agentspec_json:
        return ("json", config.agentspec_json)
    assert config.agentspec_path
    path = Path(config.agentspec_path)
    text = path.read_text(encoding="utf-8")
    ext = path.suffix.lower()
    fmt = "json" if ext == ".json" else "yaml"
    return (fmt, text)
Comment on lines +53 to +64
Member

This should not live in the config file here. It is an implementation detail.

154 changes: 154 additions & 0 deletions src/nat/agent/agentspec/register.py
@@ -0,0 +1,154 @@
# SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import io
import logging
from typing import Any

from nat.builder.framework_enum import LLMFrameworkEnum
from nat.builder.function_info import FunctionInfo
from nat.cli.register_workflow import register_function
from nat.data_models.api_server import ChatRequest
from nat.data_models.api_server import ChatRequestOrMessage
from nat.data_models.api_server import ChatResponse
from nat.data_models.api_server import Usage
from nat.utils.type_converter import GlobalTypeConverter

from .config import AgentSpecWorkflowConfig
from .config import read_agentspec_payload

logger = logging.getLogger(__name__)


def _to_plain_messages(messages: list[Any]) -> list[dict[str, Any]]:
    plain: list[dict[str, Any]] = []
    for m in messages:
        # Accept either NAT Message models or LangChain BaseMessage dicts
        role = None
        content = None
        if isinstance(m, dict):
            role = m.get("role")
            content = m.get("content")
        else:
            # Try NAT Message model
            if hasattr(m, "role"):
                role = getattr(m.role, "value", None) or str(getattr(m, "role"))
            # Various content shapes
            if hasattr(m, "content"):
                c = getattr(m, "content")
                if isinstance(c, str):
                    content = c
                else:
                    try:
                        buf = io.StringIO()
                        for part in c:
                            if hasattr(part, "text"):
                                buf.write(str(getattr(part, "text")))
                            else:
                                buf.write(str(part))
                        content = buf.getvalue()
                    except Exception:
                        content = str(c)
            # Fallback: LangChain BaseMessage has .type
            if role is None and hasattr(m, "type"):
                role = str(getattr(m, "type"))
            if content is None and hasattr(m, "content"):
                content = str(getattr(m, "content"))
        plain.append({"role": role or "user", "content": content or ""})
    return plain


@register_function(config_type=AgentSpecWorkflowConfig, framework_wrappers=[LLMFrameworkEnum.LANGCHAIN])
async def agent_spec_workflow(config: AgentSpecWorkflowConfig, builder):
    # Lazy import to make the dependency optional unless this workflow is used
    try:
        from langgraph_agentspec_adapter.agentspecloader import AgentSpecLoader  # type: ignore
    except Exception as e:  # pragma: no cover - import error path
        raise ImportError("Agent Spec adapter not installed. Install with: pip install 'nvidia-nat[agentspec]'") from e

    # Build tool registry from NAT tool names if provided
    tools = await builder.get_tools(tool_names=config.tool_names, wrapper_type=LLMFrameworkEnum.LANGCHAIN)
    tool_registry = {getattr(t, "name", f"tool_{i}"): t for i, t in enumerate(tools)} if tools else {}

    fmt, payload = read_agentspec_payload(config)
    loader = AgentSpecLoader(tool_registry=tool_registry, checkpointer=None, config=None)

    # Compile Agent Spec to a LangGraph component
    if fmt == "yaml":
        component = loader.load_yaml(payload)
    else:
        component = loader.load_json(payload)

    async def _response_fn(chat_request_or_message: ChatRequestOrMessage) -> ChatResponse | str:
        from langchain_core.messages import trim_messages  # lazy import with LANGCHAIN wrapper

        from nat.agent.base import AGENT_LOG_PREFIX

        try:
            message = GlobalTypeConverter.get().convert(chat_request_or_message, to_type=ChatRequest)

            # Trim message history
            trimmed = trim_messages(messages=[m.model_dump() for m in message.messages],
                                    max_tokens=config.max_history,
                                    strategy="last",
                                    token_counter=len,
                                    start_on="human",
                                    include_system=True)

            # Best-effort: pass messages in a generic shape expected by adapter graphs
            input_state: dict[str, Any] = {"messages": _to_plain_messages(trimmed)}

            result: Any
            result = await component.ainvoke(input_state)

            # Heuristic extraction of assistant content
            content: str | None = None
            if isinstance(result, dict):
                msgs = result.get("messages")
                if isinstance(msgs, list) and msgs:
                    for entry in reversed(msgs):
                        # LangChain BaseMessage objects have `.type` (e.g., 'ai', 'human') and `.content`
                        if hasattr(entry, "type") and hasattr(entry, "content"):
                            role = getattr(entry, "type", None)
                            if role in ("ai", "assistant", "system"):
                                content = str(getattr(entry, "content", ""))
                                break
                        # Dict-shaped message
                        if isinstance(entry, dict):
                            role = entry.get("role")
                            if role in ("assistant", "system", "ai"):
                                content = str(entry.get("content", ""))
                                break
                if content is None and "output" in result:
                    content = str(result.get("output"))
            if content is None and isinstance(result, str):
                content = result
            if content is None:
                content = str(result)

            prompt_tokens = sum(len(str(msg.content).split()) for msg in message.messages)
            completion_tokens = len(content.split()) if content else 0
            usage = Usage(prompt_tokens=prompt_tokens,
                          completion_tokens=completion_tokens,
                          total_tokens=prompt_tokens + completion_tokens)
            response = ChatResponse.from_string(content, usage=usage)
            if chat_request_or_message.is_string:
                return GlobalTypeConverter.get().convert(response, to_type=str)
            return response
        except Exception as ex:  # pragma: no cover - surface original exception
            logger.error("%s Agent Spec workflow failed: %s", AGENT_LOG_PREFIX, str(ex))
            raise

    yield FunctionInfo.from_fn(_response_fn, description=config.description)
1 change: 1 addition & 0 deletions src/nat/agent/register.py
@@ -23,3 +23,4 @@
from .responses_api_agent import register as responses_api_agent
from .rewoo_agent import register as rewoo_agent
from .tool_calling_agent import register as tool_calling_agent
from .agentspec import register as agentspec
4 changes: 3 additions & 1 deletion tests/conftest.py
@@ -53,7 +53,9 @@
PROJECT_DIR = os.path.dirname(TESTS_DIR)
SRC_DIR = os.path.join(PROJECT_DIR, "src")
EXAMPLES_DIR = os.path.join(PROJECT_DIR, "examples")
sys.path.append(SRC_DIR)
# Prepend local src so tests run against workspace code rather than any installed package
if SRC_DIR not in sys.path:
    sys.path.insert(0, SRC_DIR)

os.environ.setdefault("DASK_DISTRIBUTED__WORKER__PYTHON", sys.executable)
