diff --git a/examples/frameworks/microsoft_agent_framework_demo/README.md b/examples/frameworks/microsoft_agent_framework_demo/README.md
new file mode 100644
index 0000000000..6a474d25c7
--- /dev/null
+++ b/examples/frameworks/microsoft_agent_framework_demo/README.md
@@ -0,0 +1,104 @@
+
+
+# Microsoft Agent Framework Example
+
+A minimal example using the Microsoft Agent Framework showcasing a multi-agent travel planning system where an Itinerary Agent creates a travel schedule, a Budget Agent ensures cost compliance, and a Summarizer Agent formats the final itinerary. **Please note that only OpenAI and Azure OpenAI models are currently supported**.
+
+## Table of Contents
+
+- [Key Features](#key-features)
+- [Installation and Setup](#installation-and-setup)
+  - [Install this Workflow](#install-this-workflow)
+  - [Set Up API Keys](#set-up-api-keys)
+- [Adding Long-Term Memory](#adding-long-term-memory)
+
+## Key Features
+
+- **Microsoft Agent Framework Integration:** Demonstrates NeMo Agent toolkit support for the Microsoft Agent Framework (MAF) alongside other frameworks like LangChain/LangGraph.
+- **Multi-Agent Travel Planning:** Shows three specialized agents working together - an Itinerary Agent for schedule creation, a Budget Agent for cost management, and a Summarizer Agent for final formatting.
+- **Cross-Agent Coordination:** Demonstrates how different agents can collaborate on a complex task, with each agent contributing its specialized capabilities to the overall workflow.
+- **Long-Term Memory Integration:** Includes optional Mem0 platform integration for persistent memory, allowing agents to remember user preferences (like vegan dining or luxury hotel preferences) across sessions.
+- **OpenAI Model Support:** Showcases NeMo Agent toolkit compatibility with OpenAI models through the Microsoft Agent Framework integration.
+
+## Installation and Setup
+
+If you have not already done so, follow the instructions in the [Install Guide](../../../docs/source/quick-start/installing.md#install-from-source) to create the development environment and install NeMo Agent toolkit.
+
+### Install this Workflow
+
+From the root directory of the NeMo Agent toolkit library, run the following command:
+
+```bash
+uv pip install -e examples/frameworks/microsoft_agent_framework_demo
+```
+
+### Set Up API Keys
+
+You need to set your OpenAI API key as an environment variable to access OpenAI services:
+
+```bash
+export OPENAI_API_KEY=
+```
+
+## Adding Long-Term Memory
+
+With NeMo Agent toolkit, adding long-term memory (LTM) is as simple as adding a new section in the configuration file.
+
+Once you add the LTM configuration, export your Mem0 API key, which is a prerequisite for using the LTM service. To create an API key, refer to the instructions in the [Mem0 Platform Guide](https://docs.mem0.ai/platform/quickstart).
+
+Once you have your API key, export it as follows:
+
+```bash
+export MEM0_API_KEY=
+```
+
+Then, you can run the workflow with the LTM configuration as follows:
+
+```bash
+nat run --config_file examples/frameworks/microsoft_agent_framework_demo/configs/config.yml --input "Create a 3-day travel itinerary for Tokyo in April, suggest hotels within a USD 2000 budget. I like staying at expensive hotels and am vegan"
+```
+
+**Expected Workflow Output**
+The workflow produces a large amount of output; the end of the output should contain something similar to the following:
+
+```console
+Workflow Result:
+['Below is your final 3-day Tokyo itinerary along with a cost breakdown and special notes based on your preferences for upscale accommodations and vegan dining options. 
This plan keeps your overall USD 2000 budget in mind while highlighting luxury experiences and convenience.\n\n──────────────────────────────\nItinerary Overview\n──────────────────────────────\n• Trip dates: April 15 – April 18, 2024 (3 nights)\n• Location: Tokyo, Japan\n• Focus: Upscale hotel experience and vegan-friendly dining/activities\n• Estimated Total Budget: USD 2000\n\n──────────────────────────────\nDay 1 – Arrival & Check-In\n──────────────────────────────\n• Arrive in Tokyo and transfer to your hotel.\n• Check in at the Luxury Penthouse (approx. USD 250 per night). \n - 3-night cost: ~USD 750.\n• Spend the evening settling in and reviewing your itinerary.\n• Budget note: Approximately USD 1250 remains for transportation, meals (vegan options), and other expenses.\n\n──────────────────────────────\nDay 2 – Exploring Tokyo\n──────────────────────────────\n• Morning:\n - Enjoy a leisurely breakfast at a nearby vegan-friendly café.\n - Visit local attractions (e.g., upscale districts like Ginza or cultural areas such as Asakusa).\n• Afternoon:\n - Explore boutique shopping, art galleries, or gardens.\n - Alternatively, join a guided tour that includes stops at renowned cultural spots.\n• Evening:\n - Dine at a well-reviewed vegan restaurant.\n - Return to your hotel for a relaxing night.\n• Budget note: Allocate funds carefully for either private tours or special dining spots that cater to vegan diets.\n\n──────────────────────────────\nDay 3 – Final Day & Departure\n──────────────────────────────\n• Morning:\n - Enjoy a hearty vegan breakfast.\n - Visit any remaining attractions or enjoy some leisure time shopping.\n• Afternoon:\n - Return to your hotel to check out.\n - Ensure your remaining funds cover any last-minute transit for departure.\n• Evening:\n - Depart for the airport, completing your upscale Tokyo experience.\n\n──────────────────────────────\nCost Breakdown\n──────────────────────────────\n• Hotel (Luxury Penthouse): USD 250 per night × 3 
= ~USD 750\n• Remaining Budget:\n - Transportation, meals (vegan options), and incidental expenses: ~USD 1250\n - This allows flexibility for private tours, upscale experiences, and vegan dining experiences.\n• Overall Estimated Expenditure: Within USD 2000\n\n──────────────────────────────\nAdditional Notes\n──────────────────────────────\n• Your preference for expensive or upscale stays has been prioritized with the Luxury Penthouse option.\n• Vegan dining suggestions can be explored further by researching local vegan-friendly restaurants or booking a specialized food tour.\n• If you’d like more detailed recommendations on transit options, precise activity booking, or additional upscale experiences (e.g., fine dining, traditional cultural performances), please let me know!\n\nThis plan gives you a luxury Tokyo experience within your budget while accommodating your vegan lifestyle. Enjoy your trip!']
+```
+
+Please note that it is normal to see the LLM produce some errors on occasion as it handles complex structured tool calls. The workflow will automatically attempt to correct and retry the failed tool calls.
+
+Assuming we've successfully added our preference for vegan restaurants in the last prompt to the agent, let us attempt to retrieve a more personalized itinerary with vegan dining options:
+
+```bash
+nat run --config_file examples/frameworks/microsoft_agent_framework_demo/configs/config.yml --input "On a 1-day travel itinerary for Tokyo in April, suggest restaurants I would enjoy."
+```
+
+**Expected Workflow Output**
+```console
+Workflow Result:
+['Here’s your final one-day Tokyo itinerary for April, with high-quality vegan-friendly dining recommendations that blend seamlessly with your sightseeing plans, along with a cost breakdown:\n\n───────────────────────────── \nItinerary Overview\n\nMorning/Breakfast – Ain Soph. Journey \n• Start your day with a creative vegan breakfast. 
Enjoy dishes like hearty vegan pancakes or fresh smoothie bowls in a cozy atmosphere – an ideal energizer before hitting the city. \n• Location: Options available in vibrant neighborhoods like Shinjuku or Ginza.\n\nMidday/Lunch – T’s Restaurant \n• Savor a bowl of vegan ramen and other Japanese-inspired dishes. This spot is conveniently located near major transit hubs and popular attractions like the Imperial Palace, making it a perfect lunch stop. \n• Location: Near Tokyo Station and central attractions.\n\nAfternoon Snack – Seasonal Cafe near Cherry Blossoms \n• While sightseeing, particularly near parks like Ueno or along the Meguro River, take a break at a local boutique cafe. Enjoy a refreshing herbal tea and a light plant-based treat, complemented by the beautiful bloom of cherry blossoms. \n• Location: In the vicinity of your chosen park or river stroll.\n\nEvening/Dinner – AIN SOPH. Soar (or Similar Venue) \n• Conclude your day with an elegant dining experience. Indulge in innovative vegan courses that creatively reimagine traditional flavors, in a serene setting ideal for unwinding after a busy day. \n• Location: Commonly found in stylish districts like Shinjuku.\n\n───────────────────────────── \nCost Breakdown (Estimates per Person)\n\n1. Breakfast at Ain Soph. Journey: ¥1,000–¥1,500 \n2. Lunch at T’s Restaurant: ¥800–¥1,300 \n3. Afternoon Snack at a Seasonal Cafe: ¥300–¥500 \n4. Dinner at AIN SOPH. Soar: ¥1,500–¥2,000 \n\nTotal Estimated Daily Dining Cost: Approximately ¥3,600–¥5,300 per person\n\n───────────────────────────── \nAdditional Notes\n\n• Timing Tip: Plan your park visits for early morning or later afternoon to enjoy the cherry blossoms with fewer crowds and ideal light. \n• Transportation: Utilize Tokyo’s efficient subway system to seamlessly move between Shinjuku, Ginza, Ueno, or other districts, ensuring you maximize your day. \n• Reservations: It is advisable to reserve tables at popular spots like Ain Soph. Journey and AIN SOPH. 
Soar during the busy cherry blossom season. \n• Dietary Focus: Each restaurant has been selected for its innovation with vegan-friendly menus, ensuring that each dining experience complements your travel itinerary.\n\n───────────────────────────── \nEnjoy your one-day trip in Tokyo this April with delicious, thoughtfully curated dining stops and memorable sightseeing opportunities!'] +``` + +The above output demonstrates that the agent was able to draw from memory to provide vegan-friendly recommendations. + +Note: The long-term memory feature relies on LLM-based tool invocation, which can occasionally be non-deterministic. If you notice that the memory functionality isn't working as expected (e.g., the agent doesn't remember your preferences), try these solutions: +* Re-run your first and second inputs to ensure proper tool invocation +* Fine-tune the `long_term_memory_instructions` section in `config.yml` to better guide the agent's memory usage + +These steps will help ensure your preferences are correctly stored and retrieved by the agent. 
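+For reference, the LTM wiring in this example's `config.yml` is just a `memory` section plus the two memory tools registered as functions. A minimal sketch (mirroring the configuration shipped with this example; the tool `description` fields are omitted here for brevity) looks like this:
+
+```yaml
+# Declares a Mem0-backed memory store and the two tools that read/write it
+memory:
+  saas_memory:
+    _type: mem0_memory
+
+functions:
+  add_memory:
+    _type: add_memory
+    memory: saas_memory
+  get_memory:
+    _type: get_memory
+    memory: saas_memory
+```
+
+The tools are then listed in the workflow's `tool_names` so the agents can call them.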
diff --git a/examples/frameworks/microsoft_agent_framework_demo/configs b/examples/frameworks/microsoft_agent_framework_demo/configs
new file mode 120000
index 0000000000..937e6b6459
--- /dev/null
+++ b/examples/frameworks/microsoft_agent_framework_demo/configs
@@ -0,0 +1 @@
+src/microsoft_agent_framework_demo/configs
\ No newline at end of file
diff --git a/examples/frameworks/microsoft_agent_framework_demo/data b/examples/frameworks/microsoft_agent_framework_demo/data
new file mode 120000
index 0000000000..6fa1e2a682
--- /dev/null
+++ b/examples/frameworks/microsoft_agent_framework_demo/data
@@ -0,0 +1 @@
+src/microsoft_agent_framework_demo/data
\ No newline at end of file
diff --git a/examples/frameworks/microsoft_agent_framework_demo/pyproject.toml b/examples/frameworks/microsoft_agent_framework_demo/pyproject.toml
new file mode 100644
index 0000000000..71a19e1d86
--- /dev/null
+++ b/examples/frameworks/microsoft_agent_framework_demo/pyproject.toml
@@ -0,0 +1,25 @@
+[build-system]
+build-backend = "setuptools.build_meta"
+requires = ["setuptools >= 64", "setuptools-scm>=8"]
+
+[tool.setuptools_scm]
+git_describe_command = "git describe --long --first-parent"
+root = "../../.."
+ +[project] +name = "nat_microsoft_agent_framework_demo" +dynamic = ["version"] +dependencies = [ + "nvidia-nat[maf]~=1.4", + "usearch==2.21.0", +] +requires-python = ">=3.11,<3.14" +description = "Microsoft Agent Framework Example" +keywords = ["ai", "rag", "agents"] +classifiers = ["Programming Language :: Python"] + +[tool.uv.sources] +nvidia-nat = { path = "../../..", editable = true } + +[project.entry-points.'nat.components'] +nat_microsoft_agent_framework_demo = "microsoft_agent_framework_demo.register" diff --git a/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/__init__.py b/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/configs/config.yml b/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/configs/config.yml new file mode 100644 index 0000000000..3897017762 --- /dev/null +++ b/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/configs/config.yml @@ -0,0 +1,121 @@ +# SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# SPDX-License-Identifier: Apache-2.0 +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+
+general:
+  front_end:
+    _type: fastapi
+    cors:
+      allow_origins: ['*']
+    step_adaptor:
+      mode: default
+
+memory:
+  saas_memory:
+    _type: mem0_memory
+
+functions:
+  hotel_price_maf:
+    _type: microsoft_agent_framework_demo/hotel_price_maf
+  local_events_maf:
+    _type: microsoft_agent_framework_demo/local_events_maf
+  add_memory:
+    _type: add_memory
+    memory: saas_memory
+    description: |
+      Add any facts about user preferences to long term memory. Always use this if users mention a preference.
+      The input to this tool should be a string that describes the user's preference, not the question or answer.
+
+  get_memory:
+    _type: get_memory
+    memory: saas_memory
+    description: |
+      Always call this tool before calling any other tools, even if the user does not mention to use it.
+      The question should be about user preferences, which will help you format your response.
+      For example: "How does the user like responses formatted?"
+
+llms:
+
+  aoai_llm:
+    _type: azure_openai
+    model_name: gpt-4.1
+    api_key: 
+    api_version: 2025-01-01-preview
+    base_url: https://.cognitiveservices.azure.com
+    deployment_name: gpt-4.1
+
+workflow:
+  _type: maf
+  tool_names: [hotel_price_maf, local_events_maf, add_memory, get_memory]
+  llm_name: aoai_llm
+  verbose: true
+  itinerary_expert_name: ItineraryExpert
+  itinerary_expert_instructions: |
+    You are an itinerary expert specializing in creating detailed travel plans.
+    Focus on the attractions, best times to visit, and other important logistics.
+    Avoid discussing costs or budgets; leave that to the Budget Advisor.
+
+  budget_advisor_name: BudgetAdvisor
+  budget_advisor_instructions: |
+    You are a budget advisor skilled at estimating costs for travel plans.
+    Your job is to provide detailed pricing estimates, optimize for cost-effectiveness,
+    and ensure all travel costs fit within a reasonable budget.
+    Avoid giving travel advice or suggesting activities.
+ + summarize_agent_name: Summarizer + summarize_agent_instructions: | + You will summarize and create the final plan and format the output. + If the total cost is not within a provided budget, provide options or ask for more information + Compile information into a clear, well-structured, user-friendly travel plan. Include sections for the itinerary, + cost breakdown, and any notes from the budget advisor. Avoid duplicating information. + + long_term_memory_instructions: | + You have access to long term memory. + IMPORTANT MEMORY TOOL REQUIREMENTS: + 1. You MUST call get_memory tool FIRST, before calling any other tools + 2. You MUST use user_id "user_1" for all memory operations + 3. You MUST include ALL required parameters when calling memory tools + 4. When calling add_memory or get_memory, you MUST use the exact format as below, don't include any other content, + and make sure the input is a valid JSON object. + + For get_memory tool, you MUST use this exact format: + { + "query": "user preferences", + "top_k": 1, + "user_id": "user_1" + } + + For add_memory tool, you MUST use this exact format: + { + "conversation": [ + { + "role": "user", + "content": "Hi, I'm Alex. I'm looking for a trip to New York" + }, + { + "role": "assistant", + "content": "Hello Alex! I've noted you are looking for a trip to New York." + } + ], + "user_id": "user_1", + "metadata": { + "key_value_pairs": { + "type": "travel", + "relevance": "high" + } + }, + "memory": "User is looking for a trip to New York." 
+ } diff --git a/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/data/hotel_prices.json b/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/data/hotel_prices.json new file mode 100644 index 0000000000..73b7cd05b3 --- /dev/null +++ b/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/data/hotel_prices.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7bdc639461909b2c298528db842d8bf148f1fdd60dab55785e2ca009df6f6c77 +size 280 diff --git a/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/data/local_events.json b/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/data/local_events.json new file mode 100644 index 0000000000..cb5d3f5ab1 --- /dev/null +++ b/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/data/local_events.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9fe6cd6942cfb26412ca3e176e85cd9760a0339763f0480f69597cb0bcbc44d +size 422 diff --git a/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/hotel_price_tool.py b/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/hotel_price_tool.py new file mode 100644 index 0000000000..676c03166e --- /dev/null +++ b/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/hotel_price_tool.py @@ -0,0 +1,80 @@ +# SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# SPDX-License-Identifier: Apache-2.0 +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from pydantic import BaseModel
+
+from nat.builder.builder import Builder
+from nat.builder.function_info import FunctionInfo
+from nat.cli.register_workflow import register_function
+from nat.data_models.function import FunctionBaseConfig
+
+
+class HotelPriceToolConfig(FunctionBaseConfig, name="hotel_price_maf"):
+    data_path: str = "examples/frameworks/microsoft_agent_framework_demo/data/hotel_prices.json"
+    date_format: str = "%Y-%m-%d"
+
+
+class HotelOffer(BaseModel):
+    name: str
+    price_per_night: float
+    total_price: float
+    city: str
+    checkin: str
+    checkout: str
+
+
+class HotelOffersResponse(BaseModel):
+    offers: list[HotelOffer]
+
+
+@register_function(config_type=HotelPriceToolConfig)
+async def hotel_price_maf(tool_config: HotelPriceToolConfig, builder: Builder):
+
+    import json
+
+    with open(tool_config.data_path, encoding='utf-8') as f:
+        hotel_prices = json.load(f)
+
+    search_date_format = tool_config.date_format
+
+    async def _get_hotel_price(city: str, checkin: str, checkout: str) -> HotelOffersResponse:
+        from datetime import datetime
+
+        base_hotels = hotel_prices
+
+        # Parse the checkin and checkout dates using the configured date format
+        checkin_dt = datetime.strptime(checkin, search_date_format)
+        checkout_dt = datetime.strptime(checkout, search_date_format)
+        nights = (checkout_dt - checkin_dt).days
+
+        offers = []
+        for hotel in base_hotels:
+            total_price = hotel["price_per_night"] * nights
+ offers.append( + HotelOffer(name=hotel["name"], + price_per_night=hotel["price_per_night"], + total_price=total_price, + city=city, + checkin=checkin, + checkout=checkout)) + + return HotelOffersResponse(offers=offers) + + yield FunctionInfo.from_fn( + _get_hotel_price, + description=( + "This tool returns a list of hotels and nightly prices for the given city and checkin/checkout dates.")) diff --git a/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/local_events_tool.py b/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/local_events_tool.py new file mode 100644 index 0000000000..288c768332 --- /dev/null +++ b/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/local_events_tool.py @@ -0,0 +1,53 @@ +# SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# SPDX-License-Identifier: Apache-2.0 +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+from pydantic import BaseModel
+
+from nat.builder.builder import Builder
+from nat.builder.function_info import FunctionInfo
+from nat.cli.register_workflow import register_function
+from nat.data_models.function import FunctionBaseConfig
+
+
+class LocalEvent(BaseModel):
+    name: str
+    cost: float
+    city: str
+
+
+class LocalEventsResponse(BaseModel):
+    events: list[LocalEvent]
+
+
+class LocalEventsToolConfig(FunctionBaseConfig, name="local_events_maf"):
+    data_path: str = "examples/frameworks/microsoft_agent_framework_demo/data/local_events.json"
+
+
+@register_function(config_type=LocalEventsToolConfig)
+async def local_events_maf(tool_config: LocalEventsToolConfig, builder: Builder):
+
+    import json
+
+    with open(tool_config.data_path, encoding='utf-8') as f:
+        events = LocalEventsResponse.model_validate({"events": json.load(f)}).events
+
+    async def _local_events(city: str) -> LocalEventsResponse:
+        return LocalEventsResponse(events=[e for e in events if e.city == city])
+
+    yield FunctionInfo.from_fn(
+        _local_events,
+        description=("This tool provides information and costs for local events and activities in a city."))
diff --git a/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/register.py b/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/register.py
new file mode 100644
index 0000000000..ca65a11b98
--- /dev/null
+++ b/examples/frameworks/microsoft_agent_framework_demo/src/microsoft_agent_framework_demo/register.py
@@ -0,0 +1,168 @@
+# SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import logging + +from pydantic import Field + +from nat.builder.builder import Builder +from nat.builder.framework_enum import LLMFrameworkEnum +from nat.builder.function_info import FunctionInfo +from nat.cli.register_workflow import register_function +from nat.data_models.component_ref import FunctionRef +from nat.data_models.component_ref import LLMRef +from nat.data_models.function import FunctionBaseConfig + +from agent_framework import GroupChatBuilder, GroupChatStateSnapshot + +from . import hotel_price_tool # noqa: F401, pylint: disable=unused-import +from . import local_events_tool # noqa: F401, pylint: disable=unused-import + +logger = logging.getLogger(__name__) + + +class SKTravelPlanningWorkflowConfig(FunctionBaseConfig, name="maf"): + tool_names: list[FunctionRef] = Field(default_factory=list, + description="The list of tools to provide to the Microsoft Agent Framework.") + llm_name: LLMRef = Field(description="The LLM model to use with the Microsoft Agent Framework.") + verbose: bool = Field(default=False, description="Set the verbosity of the Microsoft Agent Framework's logging.") + itinerary_expert_name: str = Field(description="The name of the itinerary expert.") + itinerary_expert_instructions: str = Field(description="The instructions for the itinerary expert.") + budget_advisor_name: str = Field(description="The name of the budget advisor.") + budget_advisor_instructions: str = Field(description="The instructions for the budget advisor.") + summarize_agent_name: str = Field(description="The name of the summarizer agent.") + 
summarize_agent_instructions: str = Field(description="The instructions for the summarizer agent.")
+    long_term_memory_instructions: str = Field(default="",
+                                               description="The instructions for using the long term memory.")
+
+
+@register_function(config_type=SKTravelPlanningWorkflowConfig, framework_wrappers=[LLMFrameworkEnum.MAF])
+async def maf_travel_planning_workflow(config: SKTravelPlanningWorkflowConfig, builder: Builder):
+
+    from agent_framework import ChatAgent
+
+    chat_service = await builder.get_llm(config.llm_name, wrapper_type=LLMFrameworkEnum.MAF)
+
+    tools = await builder.get_tools(config.tool_names, wrapper_type=LLMFrameworkEnum.MAF)
+
+    itinerary_expert_name = config.itinerary_expert_name
+    itinerary_expert_instructions = config.itinerary_expert_instructions + config.long_term_memory_instructions
+    budget_advisor_name = config.budget_advisor_name
+    budget_advisor_instructions = config.budget_advisor_instructions + config.long_term_memory_instructions
+    summarize_agent_name = config.summarize_agent_name
+    summarize_agent_instructions = config.summarize_agent_instructions + config.long_term_memory_instructions
+
+    agent_itinerary = ChatAgent(
+        name=itinerary_expert_name,
+        chat_client=chat_service,
+        instructions=itinerary_expert_instructions,
+        tools=tools
+    )
+
+    agent_budget = ChatAgent(
+        name=budget_advisor_name,
+        chat_client=chat_service,
+        instructions=budget_advisor_instructions,
+        tools=tools
+    )
+
+    agent_summary = ChatAgent(
+        name=summarize_agent_name,
+        chat_client=chat_service,
+        instructions=summarize_agent_instructions,
+        tools=tools
+    )
+
+    agents = [agent_itinerary, agent_budget, agent_summary]
+
+    current_agent_idx = -1
+
+    def round_robin_speaker(state: GroupChatStateSnapshot) -> str | None:
+        nonlocal current_agent_idx
+
+        # Wrap around to the first agent once the last agent has spoken
+        if current_agent_idx >= len(agents) - 1:
+            current_agent_idx = -1
+
+        history = state["history"]
+
+        if not history:
+            return None
+
+        # Stop once an agent has produced the final plan or asked for more information
+        is_final = any(keyword in history[-1].message.text.lower()
+                       for keyword in ["final plan", "total cost", "more information"])
+
+        if is_final:
+            return None
+
+        current_agent_idx += 1
+        return agents[current_agent_idx].name
+
+    # Build the group chat workflow
+    chat = (
+        GroupChatBuilder()
+        .select_speakers(round_robin_speaker, display_name="Orchestrator")
+        .participants(agents)
+        .build()
+    )
+
+    async def _response_fn(input_message: str) -> str:
+
+        from agent_framework import AgentRunUpdateEvent, WorkflowOutputEvent
+
+        task = input_message
+        final_text = ""
+
+        print(f"Task: {task}\n")
+        print("=" * 80)
+        # Run the workflow and stream events as they arrive
+        async for event in chat.run_stream(task):
+            if isinstance(event, AgentRunUpdateEvent):
+                # Print streaming agent updates
+                print(f"[{event.executor_id}]: {event.data}", end="", flush=True)
+            elif isinstance(event, WorkflowOutputEvent):
+                # Workflow completed; capture the final message text
+                final_message = event.data
+                author = getattr(final_message, "author_name", "System")
+                final_text = getattr(final_message, "text", str(final_message))
+                print(f"\n\n[{author}]\n{final_text}")
+                print("-" * 80)
+
+        return final_text
+
+    def convert_dict_to_str(response: dict) -> str:
+        return response["output"]
+
+    try:
+        yield FunctionInfo.create(single_fn=_response_fn, converters=[convert_dict_to_str])
+    except GeneratorExit:
+        logger.exception("Exited early!")
+    finally:
+        logger.debug("Cleaning up")
\ No newline at end of file
diff --git a/examples/frameworks/microsoft_agent_framework_demo/tests/test_semantic_kernel_workflow.py b/examples/frameworks/microsoft_agent_framework_demo/tests/test_semantic_kernel_workflow.py
new file mode 100644
index 0000000000..bdd2eba2a0
--- /dev/null
+++ b/examples/frameworks/microsoft_agent_framework_demo/tests/test_semantic_kernel_workflow.py
@@ -0,0 +1,34 @@
+# SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from pathlib import Path
+
+import pytest
+
+
+@pytest.mark.usefixtures("mem0_api_key", "openai_api_key")
+@pytest.mark.integration
+async def test_full_workflow():
+    from nat.test.utils import locate_example_config
+    from nat.test.utils import run_workflow
+    from microsoft_agent_framework_demo.register import SKTravelPlanningWorkflowConfig
+
+    config_file: Path = locate_example_config(SKTravelPlanningWorkflowConfig)
+
+    await run_workflow(
+        config_file=config_file,
+        question=("Create a 3-day travel itinerary for Tokyo in April, covering hotels and activities within a USD "
+                  "2000 budget."),
+        expected_answer="budget")
diff --git a/packages/nvidia_nat_maf/LICENSE-3rd-party.txt b/packages/nvidia_nat_maf/LICENSE-3rd-party.txt
new file mode 120000
index 0000000000..bab0d1f8a7
--- /dev/null
+++ b/packages/nvidia_nat_maf/LICENSE-3rd-party.txt
@@ -0,0 +1 @@
+../../LICENSE-3rd-party.txt
\ No newline at end of file
diff --git a/packages/nvidia_nat_maf/LICENSE.md b/packages/nvidia_nat_maf/LICENSE.md
new file mode 120000
index 0000000000..f0608a63ae
--- /dev/null
+++ b/packages/nvidia_nat_maf/LICENSE.md
@@ -0,0 +1 @@
+../../LICENSE.md
\ No newline at end of file
diff --git a/packages/nvidia_nat_maf/pyproject.toml b/packages/nvidia_nat_maf/pyproject.toml
new file mode 100644
index 0000000000..06bb93001d
--- /dev/null
+++ b/packages/nvidia_nat_maf/pyproject.toml
@@ -0,0 +1,53 @@
+[build-system]
+build-backend = "setuptools.build_meta" +requires = ["setuptools >= 64", "setuptools-scm>=8"] + + +[tool.setuptools.packages.find] +where = ["src"] +include = ["nat.*"] + + +[tool.setuptools_scm] +git_describe_command = "git describe --long --first-parent" +root = "../.." + + +[project] +name = "nvidia-nat-maf" +dynamic = ["version"] +dependencies = [ + # Keep package version constraints as open as possible to avoid conflicts with other packages. Always define a minimum + # version when adding a new package. If unsure, default to using `~=` instead of `==`. Does not apply to nvidia-nat packages. + # Keep sorted!!! +] +requires-python = ">=3.11,<3.14" +description = "Subpackage for Microsoft Agent Framework (MAF) integration in NeMo Agent toolkit" +readme = "src/nat/meta/pypi.md" +keywords = ["ai", "rag", "agents"] +license = { text = "Apache-2.0" } +authors = [{ name = "NVIDIA Corporation" }] +maintainers = [{ name = "NVIDIA Corporation" }] +classifiers = [ + "Programming Language :: Python", + "Programming Language :: Python :: 3.11", + "Programming Language :: Python :: 3.12", + "Programming Language :: Python :: 3.13", +] + +[project.urls] +documentation = "https://docs.nvidia.com/nemo/agent-toolkit/latest/" +source = "https://github.com/NVIDIA/NeMo-Agent-Toolkit" + + +[tool.uv] +managed = true +config-settings = { editable_mode = "compat" } + + +[tool.uv.sources] +nvidia-nat = { workspace = true } + + +[project.entry-points.'nat.components'] +nat_maf = "nat.plugins.maf.register" diff --git a/packages/nvidia_nat_maf/src/nat/meta/pypi.md b/packages/nvidia_nat_maf/src/nat/meta/pypi.md new file mode 100644 index 0000000000..23628beecf --- /dev/null +++ b/packages/nvidia_nat_maf/src/nat/meta/pypi.md @@ -0,0 +1,23 @@ + + +![NVIDIA NeMo Agent Toolkit](https://media.githubusercontent.com/media/NVIDIA/NeMo-Agent-Toolkit/refs/heads/main/docs/source/_static/banner.png "NeMo Agent toolkit banner image") + +# NVIDIA NeMo Agent Toolkit Subpackage +This is a subpackage for Microsoft Agent 
Framework (MAF) integration in NeMo Agent toolkit. + +For more information about the NVIDIA NeMo Agent toolkit, please visit the [NeMo Agent toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit). diff --git a/packages/nvidia_nat_maf/src/nat/plugins/maf/__init__.py b/packages/nvidia_nat_maf/src/nat/plugins/maf/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/packages/nvidia_nat_maf/src/nat/plugins/maf/llm.py b/packages/nvidia_nat_maf/src/nat/plugins/maf/llm.py new file mode 100644 index 0000000000..b67fec9da6 --- /dev/null +++ b/packages/nvidia_nat_maf/src/nat/plugins/maf/llm.py @@ -0,0 +1,80 @@ +# SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# SPDX-License-Identifier: Apache-2.0 +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from typing import TypeVar + +from nat.builder.builder import Builder +from nat.builder.framework_enum import LLMFrameworkEnum +from nat.cli.register_workflow import register_llm_client +from nat.data_models.common import get_secret_value +from nat.data_models.llm import LLMBaseConfig +from nat.data_models.retry_mixin import RetryMixin +from nat.data_models.thinking_mixin import ThinkingMixin +from nat.llm.azure_openai_llm import AzureOpenAIModelConfig +from nat.llm.openai_llm import OpenAIModelConfig +from nat.llm.utils.thinking import BaseThinkingInjector +from nat.llm.utils.thinking import FunctionArgumentWrapper +from nat.llm.utils.thinking import patch_with_thinking +from nat.utils.exception_handlers.automatic_retries import patch_with_retry +from nat.utils.responses_api import validate_no_responses_api +from nat.utils.type_utils import override + +ModelType = TypeVar("ModelType") + + +def _patch_llm_based_on_config(client: ModelType, llm_config: LLMBaseConfig) -> ModelType: + + # TODO: Implement retry and thinking patching for MAF clients (see the + # RetryMixin/ThinkingMixin imports above); the client is returned unpatched for now. + + return client + + +@register_llm_client(config_type=AzureOpenAIModelConfig, wrapper_type=LLMFrameworkEnum.SEMANTIC_KERNEL) +async def azure_openai_semantic_kernel(llm_config: AzureOpenAIModelConfig, _builder: Builder): + + validate_no_responses_api(llm_config, LLMFrameworkEnum.SEMANTIC_KERNEL) + + from agent_framework.azure import AzureOpenAIChatClient + + llm = AzureOpenAIChatClient( + # Use 'endpoint' for the API base/URL + endpoint=llm_config.azure_endpoint, + # API key authentication + api_key=get_secret_value(llm_config.api_key), + # Specify the deployment name + deployment_name=llm_config.azure_deployment, + # For this client, the model_name argument is less critical than deployment_name + 
model_name=llm_config.azure_deployment + ) + + yield _patch_llm_based_on_config(llm, llm_config) + + +@register_llm_client(config_type=OpenAIModelConfig, wrapper_type=LLMFrameworkEnum.SEMANTIC_KERNEL) +async def openai_semantic_kernel(llm_config: OpenAIModelConfig, _builder: Builder): + + validate_no_responses_api(llm_config, LLMFrameworkEnum.SEMANTIC_KERNEL) + + from agent_framework.openai import OpenAIChatClient + + # Use the chat completions client; the Responses API is rejected above. + # The API key is read from the OPENAI_API_KEY environment variable when + # it is not supplied explicitly. + llm = OpenAIChatClient(model_id=llm_config.model_name) + + yield _patch_llm_based_on_config(llm, llm_config) diff --git a/packages/nvidia_nat_maf/src/nat/plugins/maf/register.py b/packages/nvidia_nat_maf/src/nat/plugins/maf/register.py new file mode 100644 index 0000000000..e06722d34f --- /dev/null +++ b/packages/nvidia_nat_maf/src/nat/plugins/maf/register.py @@ -0,0 +1,22 @@ +# SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# SPDX-License-Identifier: Apache-2.0 +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# flake8: noqa +# isort:skip_file + +# Import any providers which need to be automatically registered here + +from . import llm +from . import tool_wrapper diff --git a/packages/nvidia_nat_maf/src/nat/plugins/maf/tool_wrapper.py b/packages/nvidia_nat_maf/src/nat/plugins/maf/tool_wrapper.py new file mode 100644 index 0000000000..3b6adf2188 --- /dev/null +++ b/packages/nvidia_nat_maf/src/nat/plugins/maf/tool_wrapper.py @@ -0,0 +1,149 @@ +# SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. 
All rights reserved. +# SPDX-License-Identifier: Apache-2.0 +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import logging +import types +from dataclasses import is_dataclass +from typing import Any +from typing import Union +from typing import get_args +from typing import get_origin + +from pydantic import BaseModel + +from nat.builder.builder import Builder +from nat.builder.framework_enum import LLMFrameworkEnum +from nat.builder.function import Function +from nat.cli.register_workflow import register_tool_wrapper + +logger = logging.getLogger(__name__) + + +def get_type_info(field_type): + """Maps Python types to the JSON Schema type names used in function calling schemas.""" + json_types = {'str': 'string', 'int': 'integer', 'float': 'number', 'bool': 'boolean'} + origin = get_origin(field_type) + if origin is None: + # It's a simple type + name = getattr(field_type, "__name__", str(field_type)).lower() + if name in json_types: + return json_types[name] + return 'object' # Default for complex types like dataclasses/Pydantic models + + # Handle Union types specially + if origin in (Union, types.UnionType): + # Pick the first type that isn't NoneType + non_none = [arg for arg in get_args(field_type) if arg is not type(None)] + if non_none: + return get_type_info(non_none[0]) + + return 'string' # fallback + + # For other generics (e.g., List, Dict), use the origin's name or 'array'/'object' + name = getattr(origin, "__name__", str(origin)).lower() + if name in ('list', 'array'): + return 'array' + return 
'object' + + +def resolve_type(t): + """Resolves Unions to their non-None type or returns the type itself.""" + origin = get_origin(t) + if origin in (Union, types.UnionType): + # Pick the first type that isn't NoneType + for arg in get_args(t): + if arg is not type(None): + return arg + + return t # fallback + return t + + +@register_tool_wrapper(wrapper_type=LLMFrameworkEnum.MICROSOFT_AGENT_FRAMEWORK) +def microsoft_agent_framework_tool_wrapper(name: str, fn: Function, builder: Builder): + """ + Wraps a NAT Function into an OpenAI-compatible function schema + for use with the Microsoft Agent Framework. + """ + + # MAF/OpenAI tools expect a callable and a schema (dict). + # We define the callable here and the schema below. + async def callable_ainvoke(*args, **kwargs): + """Standard asynchronous tool invocation.""" + return await fn.acall_invoke(*args, **kwargs) + + # MAF tools typically don't stream directly; they are usually defined as + # single-call functions in the schema, so only the single-invoke path is wrapped. + + def generate_openai_function_schema(nat_function: Function, function_name: str) -> dict[str, Any]: + """ + Generates an OpenAI-compatible function schema dictionary. + This is the format MAF tools expect. + """ + + # Extract properties for the function schema + input_schema = nat_function.input_schema.model_fields + required_params = [] + properties = {} + + for arg_name, annotation in input_schema.items(): + type_obj = resolve_type(annotation.annotation) + + # MAF/OpenAI schemas don't support complex nested types well; + # warn but still generate the schema for the root function. + if isinstance(type_obj, type) and (issubclass(type_obj, BaseModel) or is_dataclass(type_obj)): + logger.warning( + "Nested non-native model detected in input schema for parameter: %s. " + "This may not be fully supported by the MAF model.", + arg_name) + + param_type = get_type_info(annotation.annotation) + + # Map Python/Pydantic type info to OpenAI schema properties + param_def = { + "type": param_type, + # Description comes from the Pydantic field's description when available + "description": annotation.description or f"Parameter {arg_name} of type {param_type}.", + } + properties[arg_name] = param_def + + if annotation.is_required(): + required_params.append(arg_name) + + schema = { + "type": "function", + "function": { + "name": function_name, + "description": nat_function.description, + "parameters": { + "type": "object", + "properties": properties, + "required": required_params, + } + } + } + return schema + + # Generate the tool schema and return it alongside the callable so the MAF + # side can both advertise and invoke the tool. The exact container expected + # by the MAF wrapper may differ; adjust as the integration solidifies. + tool_schema = generate_openai_function_schema(nat_function=fn, function_name=name) + + return {name: (callable_ainvoke, tool_schema)} \ No newline at end of file diff --git a/packages/nvidia_nat_maf/tests/test_llm_sk.py b/packages/nvidia_nat_maf/tests/test_llm_sk.py new file mode 100644 index 0000000000..c2ae8983a8 --- /dev/null +++ b/packages/nvidia_nat_maf/tests/test_llm_sk.py @@ -0,0 +1,82 @@ +# SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# SPDX-License-Identifier: Apache-2.0 +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# pylint: disable=unused-argument, not-async-context-manager + +from unittest.mock import MagicMock +from unittest.mock import patch + +import pytest + +from nat.builder.builder import Builder +from nat.builder.framework_enum import LLMFrameworkEnum +from nat.data_models.llm import APITypeEnum +from nat.llm.openai_llm import OpenAIModelConfig +from nat.plugins.semantic_kernel.llm import openai_semantic_kernel + +# --------------------------------------------------------------------------- +# OpenAI → Semantic-Kernel wrapper tests +# --------------------------------------------------------------------------- + + +class TestOpenAISemanticKernel: + """Tests for the openai_semantic_kernel wrapper.""" + + @pytest.fixture + def mock_builder(self) -> Builder: + return MagicMock(spec=Builder) + + @pytest.fixture + def oa_cfg(self): + return OpenAIModelConfig(model_name="gpt-4o") + + @pytest.fixture + def oa_cfg_responses(self): + # Using the RESPONSES API must be rejected by the wrapper. 
+ return OpenAIModelConfig(model_name="gpt-4o", api_type=APITypeEnum.RESPONSES) + + @patch("semantic_kernel.connectors.ai.open_ai.OpenAIChatCompletion") + async def test_basic_creation(self, mock_sk, oa_cfg, mock_builder): + """Ensure the wrapper instantiates OpenAIChatCompletion with the right model id.""" + async with openai_semantic_kernel(oa_cfg, mock_builder) as llm_obj: + mock_sk.assert_called_once() + assert mock_sk.call_args.kwargs["ai_model_id"] == "gpt-4o" + assert llm_obj is mock_sk.return_value + + @patch("semantic_kernel.connectors.ai.open_ai.OpenAIChatCompletion") + async def test_responses_api_blocked(self, mock_sk, oa_cfg_responses, mock_builder): + """Selecting APIType.RESPONSES must raise a ValueError.""" + with pytest.raises(ValueError, match="Responses API is not supported"): + async with openai_semantic_kernel(oa_cfg_responses, mock_builder): + pass + mock_sk.assert_not_called() + + +# --------------------------------------------------------------------------- +# Registration decorator sanity check +# --------------------------------------------------------------------------- + + +@patch("nat.cli.type_registry.GlobalTypeRegistry") +def test_decorator_registration(mock_global_registry): + """Verify that register_llm_client decorated the Semantic-Kernel wrapper.""" + registry = MagicMock() + mock_global_registry.get.return_value = registry + + # Pretend decorator execution populated the map. 
+ registry._llm_client_map = { + (OpenAIModelConfig, LLMFrameworkEnum.SEMANTIC_KERNEL): openai_semantic_kernel, + } + + assert (registry._llm_client_map[(OpenAIModelConfig, LLMFrameworkEnum.SEMANTIC_KERNEL)] is openai_semantic_kernel) diff --git a/packages/nvidia_nat_maf/tests/test_sk_decorator.py b/packages/nvidia_nat_maf/tests/test_sk_decorator.py new file mode 100644 index 0000000000..8a74cc0ba8 --- /dev/null +++ b/packages/nvidia_nat_maf/tests/test_sk_decorator.py @@ -0,0 +1,206 @@ +# SPDX-FileCopyrightText: Copyright (c) 2024-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# SPDX-License-Identifier: Apache-2.0 +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from pydantic import BaseModel + +# Import the semantic_kernel_tool_wrapper from tool_wrapper.py +from nat.plugins.semantic_kernel.tool_wrapper import semantic_kernel_tool_wrapper + +# ---------------------------- +# Dummy Models for Testing +# ---------------------------- + + +class DummyInput(BaseModel): + value: int + + +class DummyOutput(BaseModel): + result: int + + +# Models for nested type testing +class InnerModel(BaseModel): + x: int + + +class OuterModel(BaseModel): + inner: InnerModel + y: str + + +class NestedOutput(BaseModel): + result: int + + +# ---------------------------- +# Dummy Function Implementations +# ---------------------------- + + +class DummyFunction: + """Dummy function with simple input/output.""" + + def __init__(self): + self.description = "Dummy description" + # Create a simple config object with attribute 'type' + self.config = type('Config', (), {'type': 'dummy_func'}) + self.has_single_output = True + self.has_streaming_output = False + self.input_schema = DummyInput + self.single_output_schema = DummyOutput + self.streaming_output_schema = None + + async def acall_invoke(self, *args, **kwargs): + # For testing, simply multiply the input value by 2 + input_obj = args[0] + return DummyOutput(result=input_obj.value * 2) + + +class DummyNestedFunction: + """Dummy function using a nested BaseModel for input.""" + + def __init__(self): + self.description = "Nested function" + self.config = type('Config', (), {'type': 'nested_func'}) + self.has_single_output = True + self.has_streaming_output = False + self.input_schema = OuterModel + self.single_output_schema = NestedOutput + self.streaming_output_schema = None + + async def acall_invoke(self, *args, **kwargs): + # For testing, sum inner.x and the length of y + outer = args[0] + return NestedOutput(result=outer.inner.x + len(outer.y)) + + +class DummyStreamingFunction: + """Dummy function that simulates a streaming output.""" + + def __init__(self): + self.description = "Streaming 
function" + self.config = type('Config', (), {'type': 'streaming_func'}) + self.has_single_output = False + self.has_streaming_output = True + self.input_schema = DummyInput + self.streaming_output_schema = DummyOutput + self.single_output_schema = None + + async def acall_stream(self, *args, **kwargs): + # For simplicity, return the first value from the streaming generator + async for item in self._astream(args[0]): + yield item + + async def _astream(self, value): + for i in range(3): + yield DummyOutput(result=value.value + i) + + +# ---------------------------- +# Pytest Unit Tests +# ---------------------------- + + +async def test_semantic_kernel_tool_wrapper_simple_arguments(): + """Test the tool wrapper with a function that has simple arguments.""" + dummy_fn = DummyFunction() + # Invoke the semantic kernel tool wrapper + wrapper = semantic_kernel_tool_wrapper('dummy_func', dummy_fn, builder=None) + + # Ensure the wrapper returns a dictionary with our function name as key + assert 'dummy_func' in wrapper + decorated_func = wrapper['dummy_func'] + + # Check that kernel function attributes are set + assert hasattr(decorated_func, '__kernel_function__') + assert decorated_func.__kernel_function__ is True + assert decorated_func.__kernel_function_name__ == dummy_fn.config.type + assert decorated_func.__kernel_function_description__ == dummy_fn.description + + # Check that __kernel_function_parameters__ contains the expected parameter + params = getattr(decorated_func, '__kernel_function_parameters__') + # DummyInput has one field 'value' + assert isinstance(params, list) + assert any(param['name'] == 'value' for param in params) + + # Check the __kernel_function_streaming__ attribute (should be False for single output) + assert getattr(decorated_func, '__kernel_function_streaming__') is False + + # Call the decorated function with a simple DummyInput + dummy_input = DummyInput(value=5) + result = await decorated_func(dummy_input) + # Expect the output to be 
value * 2 + assert result.result == 10 + + # Also check return type info (for DummyOutput, field 'result' is int) + return_type = getattr(decorated_func, '__kernel_function_return_type__') + assert return_type == 'int' + + +async def test_semantic_kernel_tool_wrapper_nested_base_model(): + """Test the tool wrapper with a function that uses nested BaseModel types in its input.""" + dummy_fn = DummyNestedFunction() + wrapper = semantic_kernel_tool_wrapper('nested_func', dummy_fn, builder=None) + + assert 'nested_func' in wrapper + decorated_func = wrapper['nested_func'] + + # Extract kernel function parameters + params = getattr(decorated_func, '__kernel_function_parameters__') + # OuterModel has two fields: 'inner' (a nested BaseModel) and 'y' (a simple type) + inner_param = next(param for param in params if param['name'] == 'inner') + y_param = next(param for param in params if param['name'] == 'y') + + # For nested BaseModel fields, include_in_function_choices should be False + assert inner_param['include_in_function_choices'] is False + # For simple types (like str), it should remain True + assert y_param['include_in_function_choices'] is True + + # Check the __kernel_function_streaming__ attribute (should be False for single output) + assert getattr(decorated_func, '__kernel_function_streaming__') is False + + # Test function invocation + dummy_input = OuterModel(inner=InnerModel(x=3), y='test') + result = await decorated_func(dummy_input) + # Expected: inner.x (3) + length of 'test' (4) = 7 + assert result.result == 7 + + # Check return type info + return_type = getattr(decorated_func, '__kernel_function_return_type__') + assert return_type == 'int' + + +async def test_semantic_kernel_tool_wrapper_streaming(): + """Test the tool wrapper with a function that has streaming output.""" + dummy_fn = DummyStreamingFunction() + wrapper = semantic_kernel_tool_wrapper('streaming_func', dummy_fn, builder=None) + + assert 'streaming_func' in wrapper + decorated_func = 
wrapper['streaming_func'] + + # For streaming functions, __kernel_function_streaming__ should be True + assert getattr(decorated_func, '__kernel_function_streaming__') is True + + dummy_input = DummyInput(value=10) + results = [] + async for item in decorated_func(dummy_input): + results.append(item) + # Verify that we get the complete streaming output from the generator + # For DummyStreamingFunction, _astream yields three items with result values: value + 0, value + 1, and value + 2 + assert len(results) == 3 + assert results[0].result == 10 + assert results[1].result == 11 + assert results[2].result == 12