45 changes: 42 additions & 3 deletions app/actions.tsx
@@ -12,7 +12,7 @@ import type { FeatureCollection } from 'geojson'
import { Spinner } from '@/components/ui/spinner'
import { Section } from '@/components/section'
import { FollowupPanel } from '@/components/followup-panel'
import { inquire, researcher, taskManager, querySuggestor, resolutionSearch } from '@/lib/agents'
import { inquire, researcher, taskManager, querySuggestor, resolutionSearch, toolCoordinator, executeToolPlan, aggregateToolResults } from '@/lib/agents'
// Removed import of useGeospatialToolMcp as it no longer exists and was incorrectly used here.
// The geospatialTool (if used by agents like researcher) now manages its own MCP client.
import { writer } from '@/lib/agents/writer'
@@ -125,6 +125,7 @@ async function submit(formData?: FormData, skip?: boolean) {

const groupeId = nanoid()
const useSpecificAPI = process.env.USE_SPECIFIC_API_FOR_WRITER === 'true'
const useToolCoordinator = process.env.USE_TOOL_COORDINATOR === 'true'
const maxMessages = useSpecificAPI ? 5 : 10
messages.splice(0, Math.max(messages.length - maxMessages, 0))

@@ -319,17 +320,55 @@
const streamText = createStreamableValue<string>()
uiStream.update(<Spinner />)

let finalMessages = messages

if (useToolCoordinator) {
uiStream.update(<div><Spinner /> Planning tool execution...</div>)
try {
const plan = await toolCoordinator(messages)
uiStream.update(<div><Spinner /> Executing tool plan...</div>)
const results = await executeToolPlan(plan, { uiStream, fullResponse: answer })
toolOutputs = results
const summary = aggregateToolResults(results, plan)

// Add the summary to the messages for the final synthesis agent
finalMessages = [
...messages,
{
id: nanoid(),
role: 'tool',
content: summary,
type: 'tool_coordinator_summary'
} as any // Cast to any to satisfy CoreMessage type for custom type
]
Comment on lines +335 to +343
⚠️ Potential issue | 🟡 Minor

Type mismatch: CoreMessage doesn't support custom type field.

The object constructed here uses role: 'tool' with a custom type: 'tool_coordinator_summary' field. The CoreMessage type from the ai package expects content to be ToolResultPart[] for tool messages, not a string. The as any cast masks this incompatibility.

Consider using role: 'assistant' instead, since this is a summary text for the synthesis agent:

       finalMessages = [
         ...messages,
         {
-          id: nanoid(),
-          role: 'tool',
+          role: 'assistant',
           content: summary,
-          type: 'tool_coordinator_summary'
-        } as any
+        }
       ]
🤖 Prompt for AI Agents
In app/actions.tsx around lines 335-343, the created message uses role 'tool'
with a custom type and is force-cast to any, which mismatches CoreMessage
expectations; change the message to use role 'assistant' (so content can be a
plain summary string and fit CoreMessage), remove the `as any` cast, and ensure
the message object conforms to the CoreMessage shape (use the assistant role and
a string content field or otherwise build a proper ToolResultPart[] if you must
keep role 'tool').


// Stream a message to the user about the tool execution completion
uiStream.append(
<BotMessage content={createStreamableValue("Tool execution complete. Synthesizing final answer...").value} />
)
Comment on lines +346 to +348
⚠️ Potential issue | 🟡 Minor

Streamable value is never closed.

The createStreamableValue() on line 347 is created inline but never has .done() called on it. This could lead to a resource leak. Either store the reference and call .done() after appending, or use a pre-closed value.

-        uiStream.append(
-          <BotMessage content={createStreamableValue("Tool execution complete. Synthesizing final answer...").value} />
-        )
+        const completionMessage = createStreamableValue<string>()
+        completionMessage.done("Tool execution complete. Synthesizing final answer...")
+        uiStream.append(
+          <BotMessage content={completionMessage.value} />
+        )
🤖 Prompt for AI Agents
In app/actions.tsx around lines 346 to 348, the StreamableValue created by
createStreamableValue(...) is created inline and never closed, which can leak
resources; change the code to store the StreamableValue in a local variable,
append its .value to the UI (or append the component that references that
value), then call .done() on the StreamableValue immediately after appending (or
alternatively create and pass an already-closed value), ensuring .done() is
always invoked even on early returns or errors.
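A minimal sketch of one way to satisfy that, assuming the same BotMessage component and the ai/rsc createStreamableValue API already used in this file; the try/finally ensures the streamable is closed even if appending throws:

const completionMessage = createStreamableValue<string>()
try {
  // Append the message component that consumes the streamable value.
  uiStream.append(<BotMessage content={completionMessage.value} />)
} finally {
  // .done() sets the final value and closes the stream, avoiding the leak.
  completionMessage.done('Tool execution complete. Synthesizing final answer...')
}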

} catch (e) {
console.error('Tool Coordinator failed:', e)
uiStream.append(
<BotMessage content="Tool Coordinator failed. Falling back to streaming researcher." />
)
// Fallback: continue with the original messages and let the researcher handle it
finalMessages = messages
}
}

while (
useSpecificAPI
? answer.length === 0
: answer.length === 0 && !errorOccurred
) {
// If coordinator was used, pass finalMessages and disable tools for researcher
const { fullResponse, hasError, toolResponses } = await researcher(
currentSystemPrompt,
uiStream,
streamText,
messages,
useSpecificAPI
finalMessages,
useSpecificAPI,
!useToolCoordinator // Pass a flag to disable tools if coordinator was used
)
answer = fullResponse
toolOutputs = toolResponses
7 changes: 3 additions & 4 deletions bun.lock
@@ -1,5 +1,6 @@
{
"lockfileVersion": 1,
"configVersion": 0,
"workspaces": {
"": {
"name": "QCX",
@@ -10,7 +11,7 @@
"@ai-sdk/openai": "^1.3.24",
"@ai-sdk/xai": "^1.2.18",
"@heroicons/react": "^2.2.0",
"@hookform/resolvers": "^5.0.1",
"@hookform/resolvers": "^3.3.4",
"@mapbox/mapbox-gl-draw": "^1.5.0",
"@modelcontextprotocol/sdk": "^1.13.0",
"@radix-ui/react-alert-dialog": "^1.1.10",
@@ -276,7 +277,7 @@

"@heroicons/react": ["@heroicons/react@2.2.0", "", { "peerDependencies": { "react": ">= 16 || ^19.0.0-rc" } }, "sha512-LMcepvRaS9LYHJGsF0zzmgKCUim/X3N/DQKc4jepAXJ7l8QxJ1PmxJzqplF2Z3FE4PqBAIGyJAQ/w4B5dsqbtQ=="],

"@hookform/resolvers": ["@hookform/resolvers@5.2.2", "", { "dependencies": { "@standard-schema/utils": "^0.3.0" }, "peerDependencies": { "react-hook-form": "^7.55.0" } }, "sha512-A/IxlMLShx3KjV/HeTcTfaMxdwy690+L/ZADoeaTltLx+CVuzkeVIPuybK3jrRfw7YZnmdKsVVHAlEPIAEUNlA=="],
"@hookform/resolvers": ["@hookform/resolvers@3.10.0", "", { "peerDependencies": { "react-hook-form": "^7.0.0" } }, "sha512-79Dv+3mDF7i+2ajj7SkypSKHhl1cbln1OGavqrsF7p6mbUv11xpqpacPsGDCTRvCSjEEIez2ef1NveSVL3b0Ag=="],

"@humanwhocodes/config-array": ["@humanwhocodes/config-array@0.13.0", "", { "dependencies": { "@humanwhocodes/object-schema": "^2.0.3", "debug": "^4.3.1", "minimatch": "^3.0.5" } }, "sha512-DZLEEqFWQFiyK6h5YIeynKx7JlvCYWL0cImfSRXZ9l4Sg2efkFGTuFf6vzXjK1cq6IYkU+Eg/JizXw+TD2vRNw=="],

@@ -652,8 +653,6 @@

"@so-ric/colorspace": ["@so-ric/colorspace@1.1.6", "", { "dependencies": { "color": "^5.0.2", "text-hex": "1.0.x" } }, "sha512-/KiKkpHNOBgkFJwu9sh48LkHSMYGyuTcSFK/qMBdnOAlrRJzRSXAOFB5qwzaVQuDl8wAvHVMkaASQDReTahxuw=="],

"@standard-schema/utils": ["@standard-schema/utils@0.3.0", "", {}, "sha512-e7Mew686owMaPJVNNLs55PUvgz371nKgwsc4vxE49zsODpJEnxgxRo2y/OKrqueavXgZNMDVj3DdHFlaSAeU8g=="],

"@supabase/auth-js": ["@supabase/auth-js@2.75.0", "", { "dependencies": { "@supabase/node-fetch": "2.6.15" } }, "sha512-J8TkeqCOMCV4KwGKVoxmEBuDdHRwoInML2vJilthOo7awVCro2SM+tOcpljORwuBQ1vHUtV62Leit+5wlxrNtw=="],

"@supabase/functions-js": ["@supabase/functions-js@2.75.0", "", { "dependencies": { "@supabase/node-fetch": "2.6.15" } }, "sha512-18yk07Moj/xtQ28zkqswxDavXC3vbOwt1hDuYM3/7xPnwwpKnsmPyZ7bQ5th4uqiJzQ135t74La9tuaxBR6e7w=="],
1 change: 1 addition & 0 deletions lib/agents/index.tsx
@@ -3,3 +3,4 @@ export * from './inquire'
export * from './query-suggestor'
export * from './researcher'
export * from './resolution-search'
export * from './tool-coordinator'
5 changes: 3 additions & 2 deletions lib/agents/researcher.tsx
@@ -78,7 +78,8 @@ export async function researcher(
uiStream: ReturnType<typeof createStreamableUI>,
streamText: ReturnType<typeof createStreamableValue<string>>,
messages: CoreMessage[],
useSpecificModel?: boolean
useSpecificModel?: boolean,
useTools: boolean = true
) {
let fullResponse = ''
let hasError = false
@@ -101,7 +102,7 @@
maxTokens: 4096,
system: systemPromptToUse,
messages,
tools: getTools({ uiStream, fullResponse }),
tools: useTools ? getTools({ uiStream, fullResponse }) : undefined,
})

uiStream.update(null) // remove spinner
197 changes: 197 additions & 0 deletions lib/agents/tool-coordinator.tsx
@@ -0,0 +1,197 @@
import { generateObject } from 'ai'
import { z } from 'zod'
import { CoreMessage } from 'ai'
import { getTools } from '@/lib/agents/tools'
import { ToolResultPart } from '@/lib/types'
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify ToolResultPart import path.

ToolResultPart is imported from @/lib/types, but it's also exported from the ai package (as seen in app/actions.tsx line 9). Ensure the local type matches the ai package's definition to avoid type mismatches.


🏁 Script executed:

#!/bin/bash
# Check if ToolResultPart is defined in lib/types and compare with ai package usage
echo "=== Checking lib/types for ToolResultPart ==="
fd -t f -e ts -e tsx 'types' lib --exec grep -l 'ToolResultPart' {} \;

echo "=== Checking ToolResultPart definition ==="
rg -n 'export.*ToolResultPart|interface ToolResultPart|type ToolResultPart' lib/

echo "=== Checking ai package import in actions.tsx ==="
rg -n 'ToolResultPart' app/actions.tsx

Length of output: 366


Now let me check what lib/types actually exports, to confirm whether it re-exports ToolResultPart from the ai package or whether the import path is simply wrong:


🏁 Script executed:

#!/bin/bash
# Find lib/types files and check their content
echo "=== Finding lib/types file(s) ==="
fd -t f 'types\.' lib/

echo "=== Content of lib/types file ==="
fd -t f 'types\.' lib/ --exec cat {} \;

Length of output: 67


🏁 Script executed:

#!/bin/bash
# Check if lib/types.ts or lib/types.tsx or similar exists
echo "=== Searching for lib/types files ==="
find lib -name "*types*" -type f 2>/dev/null | head -20

echo "=== Checking lib/types.ts/tsx directly ==="
cat lib/types.ts lib/types.tsx 2>/dev/null || echo "No lib/types found in standard locations"

echo "=== Checking lib/index.ts if it exists ==="
cat lib/index.ts 2>/dev/null || echo "No lib/index.ts"

Length of output: 180


Let me verify the exact import in tool-coordinator.tsx and explore the lib directory structure:


🏁 Script executed:

#!/bin/bash
# Check the actual imports in tool-coordinator.tsx
echo "=== First 15 lines of lib/agents/tool-coordinator.tsx ==="
head -15 lib/agents/tool-coordinator.tsx

echo ""
echo "=== All files in lib/ directory (recursive) ==="
find lib -type f | sort

echo ""
echo "=== Checking what '@/lib' path alias resolves to ==="
grep -r "alias.*@/lib\|paths.*@/lib" . --include="*.json" --include="*.config.*" 2>/dev/null | head -5

Length of output: 1370


Now let me check what lib/types/index.ts actually exports:


🏁 Script executed:

#!/bin/bash
echo "=== Contents of lib/types/index.ts ==="
cat lib/types/index.ts

Length of output: 1801


Fix import source: ToolResultPart is from the ai package, not @/lib/types.

Verification confirms the issue. The file lib/types/index.ts exists but does not export ToolResultPart. The type is only available from the ai package, as shown in app/actions.tsx. The import on line 5 must be corrected to import from ai instead of @/lib/types.

Change line 5 from:

import { ToolResultPart } from '@/lib/types'

to:

import { ToolResultPart } from 'ai'
🤖 Prompt for AI Agents
In lib/agents/tool-coordinator.tsx around line 5, the import for ToolResultPart
incorrectly references '@/lib/types' but that type is exported from the 'ai'
package; update the import to pull ToolResultPart from 'ai' instead of
'@/lib/types' so the correct source is used.


// ——————————————————————————————————————
// Fallbacks if the original files don't exist yet
// ——————————————————————————————————————

let getModel: () => any
let createStreamableUI: () => any

try {
// Try the most common real locations first
const models = require('@/lib/models')
getModel = models.getModel || models.default || (() => null)
} catch {
try {
const mod = require('@/lib/ai/models')
getModel = mod.getModel || mod.default
} catch {
getModel = () => {
throw new Error('getModel not available — check your @/lib/models setup')
}
}
}
Comment on lines +14 to +27
⚠️ Potential issue | 🟠 Major

Fragile dynamic require() fallbacks; actual getModel is in lib/utils.

The fallback chain tries @/lib/models and @/lib/ai/models, but the provided code snippet shows getModel is exported from lib/utils/index.ts. This will likely fall through to the error-throwing fallback.

Consider importing directly from the known location:

-let getModel: () => any
-let createStreamableUI: () => any
-
-try {
-  // Try the most common real locations first
-  const models = require('@/lib/models')
-  getModel = models.getModel || models.default || (() => null)
-} catch {
-  try {
-    const mod = require('@/lib/ai/models')
-    getModel = mod.getModel || mod.default
-  } catch {
-    getModel = () => {
-      throw new Error('getModel not available — check your @/lib/models setup')
-    }
-  }
-}
+import { getModel } from '@/lib/utils'
+import { createStreamableUI } from 'ai/rsc'

This also applies to createStreamableUI, which is already imported in researcher.tsx from 'ai/rsc'.

🤖 Prompt for AI Agents
In lib/agents/tool-coordinator.tsx around lines 14 to 27, the dynamic require()
fallback chain misses the actual export location (getModel lives in lib/utils),
making the final fallback throw; replace the fragile require logic with a direct
import from the known module path (import getModel from '@/lib/utils' or from
the precise export path) and similarly import createStreamableUI from its stable
location (as used in researcher.tsx, e.g. 'ai/rsc'), removing the try/catch
require blocks so the real exports are loaded reliably.


try {
const streamable = require('@/lib/streamable')
createStreamableUI = streamable.createStreamableUI || streamable.default
} catch {
try {
const s = require('@/lib/ui/streamable')
createStreamableUI = s.createStreamableUI
} catch {
// Minimal no-op version that won't break tool calling
createStreamableUI = () => ({
append: () => {},
update: () => {},
done: () => {},
value: null
})
}
}

// ——————————————————————————————————————
// Schemas
// ——————————————————————————————————————

const toolStepSchema = z.object({
toolName: z.string(),
toolArgs: z.record(z.any()),
dependencyIndices: z.array(z.number()).optional(),
purpose: z.string()
})

const toolPlanSchema = z.object({
reasoning: z.string(),
steps: z.array(toolStepSchema)
})

export type ToolPlan = z.infer<typeof toolPlanSchema>
export type ToolStep = z.infer<typeof toolStepSchema>

// ——————————————————————————————————————
// 1. Plan Generation
// ——————————————————————————————————————

export async function toolCoordinator(messages: CoreMessage[]): Promise<ToolPlan> {
const model = getModel()

const toolsObj = getTools({
uiStream: createStreamableUI(),
fullResponse: ''
})

const toolDescriptions = Object.values(toolsObj).map(tool => ({
name: tool.toolName,
description: tool.description,
parameters: tool.parameters
}))
Comment on lines +78 to +82
⚠️ Potential issue | 🔴 Critical

Tool objects don't have a toolName property - this will produce undefined names.

Based on lib/agents/tools/index.tsx, getTools() returns an object like { search: toolObj, retrieve: toolObj, geospatialQueryTool: toolObj }. The tool objects have description, parameters, and execute, but no toolName property. Accessing tool.toolName will return undefined.

Use Object.entries() to capture both the key (tool name) and value:

- const toolDescriptions = Object.values(toolsObj).map(tool => ({
-   name: tool.toolName,
-   description: tool.description,
-   parameters: tool.parameters
- }))
+ const toolDescriptions = Object.entries(toolsObj).map(([name, tool]) => ({
+   name,
+   description: tool.description,
+   parameters: tool.parameters
+ }))
🤖 Prompt for AI Agents
In lib/agents/tool-coordinator.tsx around lines 78 to 82, the code uses
tool.toolName which is undefined because tool objects don't have a toolName
property; replace the Object.values mapping with Object.entries(toolsObj) and
use the entry key as the tool name (e.g. map ([name, tool]) => ({ name,
description: tool.description, parameters: tool.parameters })), updating types
if necessary so the resulting toolDescriptions array contains the correct name,
description and parameters.


const systemPrompt = `You are an expert Tool Coordinator. Create a precise multi-step plan using only these tools.
Rules:
- Use exact toolName from the list.
- Use dependencyIndices (0-based) when a step needs prior results.
- Output must be valid JSON matching the schema.
Available Tools:
${JSON.stringify(toolDescriptions, null, 2)}
`

const { object } = await generateObject({
model,
system: systemPrompt,
messages,
schema: toolPlanSchema
})

return object
}

// ——————————————————————————————————————
// 2. Execution
// ——————————————————————————————————————

interface ExecutionContext {
uiStream: any
fullResponse: string
}

export async function executeToolPlan(
plan: ToolPlan,
context: ExecutionContext
): Promise<ToolResultPart[]> {
const { uiStream, fullResponse } = context

const toolsObj = getTools({ uiStream, fullResponse })
const toolMap = new Map(Object.values(toolsObj).map(t => [t.toolName, t]))
Comment on lines +120 to +121
⚠️ Potential issue | 🔴 Critical

Same issue: t.toolName is undefined, resulting in an empty/broken toolMap.

This has the same bug as the plan generation. The tool map will be keyed by undefined values, causing all tool lookups to fail with "Tool not found".

- const toolMap = new Map(Object.values(toolsObj).map(t => [t.toolName, t]))
+ const toolMap = new Map(Object.entries(toolsObj).map(([name, t]) => [name, t]))
🤖 Prompt for AI Agents
In lib/agents/tool-coordinator.tsx around lines 120-121, the map is being keyed
by t.toolName, which is undefined; build the map from Object.entries(toolsObj)
instead so it is keyed by each tool's registry name (new Map(Object.entries(toolsObj))),
and add a guard that skips and logs any entry whose key is missing so a broken
key doesn't produce an undefined map entry.


const results = new Map<number, any>()
const toolResults: ToolResultPart[] = []

const resolveDeps = (indices: number[] = []) =>
indices.map(i => {
if (!results.has(i)) throw new Error(`Dependency step ${i} missing`)
return results.get(i)
})
Comment on lines +126 to +130
🧹 Nitpick | 🔵 Trivial

Consider validating that dependency indices reference prior steps only.

The resolveDeps function correctly checks if results exist, but doesn't validate that dependencyIndices only reference earlier steps (indices < current step). An LLM-generated plan could theoretically include a forward reference (e.g., step 1 depends on step 3), which would always fail at runtime.

- const resolveDeps = (indices: number[] = []) =>
-   indices.map(i => {
-     if (!results.has(i)) throw new Error(`Dependency step ${i} missing`)
-     return results.get(i)
-   })
+ const resolveDeps = (indices: number[] = [], currentStepIndex: number) =>
+   indices.map(depIndex => {
+     if (depIndex >= currentStepIndex) throw new Error(`Forward reference: step ${currentStepIndex} cannot depend on future step ${depIndex}`)
+     if (!results.has(depIndex)) throw new Error(`Dependency step ${depIndex} missing`)
+     return results.get(depIndex)
+   })

(The call site in the execution loop would then pass the current index: resolveDeps(step.dependencyIndices, i).)

Alternatively, validate the plan structure upfront before execution.


🤖 Prompt for AI Agents
In lib/agents/tool-coordinator.tsx around lines 126-130, resolveDeps currently
only checks existence of results but not that dependency indices reference prior
steps; update it to validate that each dependency index is less than the current
step index (or add a currentStep parameter) and throw a clear error if any dep
>= currentStep, or alternatively implement an upfront plan validation pass
before execution that ensures every dependency index is < its step index and
that all referenced indices exist; make the error messages descriptive (e.g.,
"Forward reference: step X depends on future step Y") so failed plans are caught
early.
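A minimal sketch of the upfront-validation alternative, assuming the ToolPlan shape defined by toolPlanSchema in this file; validatePlan is a hypothetical helper, not part of the PR:

// Hypothetical helper: reject unknown tools and invalid or forward dependency
// references before any tool runs, so broken plans fail fast.
function validatePlan(plan: ToolPlan, availableTools: Set<string>): void {
  plan.steps.forEach((step, index) => {
    if (!availableTools.has(step.toolName)) {
      throw new Error(`Step ${index} references unknown tool "${step.toolName}"`)
    }
    for (const dep of step.dependencyIndices ?? []) {
      if (!Number.isInteger(dep) || dep < 0) {
        throw new Error(`Step ${index} has invalid dependency index ${dep}`)
      }
      if (dep >= index) {
        throw new Error(`Forward reference: step ${index} depends on future step ${dep}`)
      }
    }
  })
}

executeToolPlan could then call validatePlan(plan, new Set(toolMap.keys())) before entering the execution loop.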


for (let i = 0; i < plan.steps.length; i++) {
const step = plan.steps[i]
const tool = toolMap.get(step.toolName)

let result: any = { error: `Tool "${step.toolName}" not found` }

try {
if (!tool) throw new Error(`Tool not found: ${step.toolName}`)

const deps = step.dependencyIndices ? resolveDeps(step.dependencyIndices) : []
const args = {
...step.toolArgs,
...(deps.length > 0 && { _dependencyResults: deps })
}

console.log(`[ToolCoordinator] Step ${i}: ${step.toolName}`)
result = await tool.execute(args)
} catch (err: any) {
const msg = err?.message || String(err)
console.error(`[ToolCoordinator] Step ${i} failed:`, msg)
result = { error: msg }
}

results.set(i, result)
toolResults.push({
toolName: step.toolName,
toolCallId: `coord-${i}`,
result
})
}

return toolResults
}

// ——————————————————————————————————————
// 3. Aggregation
// ——————————————————————————————————————

export function aggregateToolResults(toolResults: ToolResultPart[], plan: ToolPlan): string {
let out = `# Tool Coordinator Results
### Plan
${plan.reasoning}
### Steps
`

toolResults.forEach((tr, i) => {
const step = plan.steps[i]
const hasError = tr.result && typeof tr.result === 'object' && 'error' in tr.result

out += `\n#### Step ${i + 1}: ${step.purpose} (\`${step.toolName}\`)`

if (hasError) {
out += `\n**Status:** Failed\n**Error:** ${tr.result.error}`
} else {
const json = JSON.stringify(tr.result, null, 2)
const truncated = json.length > 600 ? json.slice(0, 600) + '...' : json
out += `\n**Status:** Success\n**Result:**\n\`\`\`json\n${truncated}\n\`\`\``
}
})

out += `\n\n---\n**INSTRUCTION:** Write a natural, helpful final answer using only the information above. Do not mention tools, steps, or internal process.`

return out
}
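For orientation, a hypothetical plan object that toolPlanSchema would accept; the tool names follow the keys getTools() is described as returning in the review above, while the arguments are illustrative only:

// Illustrative only: a two-step plan where step 1 consumes step 0's result.
const examplePlan: ToolPlan = {
  reasoning: 'Geocode the destination first, then ask for directions to it.',
  steps: [
    {
      toolName: 'search', // must match a key of the object returned by getTools()
      toolArgs: { query: 'Eiffel Tower address' },
      purpose: 'Find the destination coordinates'
    },
    {
      toolName: 'geospatialQueryTool',
      toolArgs: { queryType: 'directions' },
      dependencyIndices: [0], // 0-based index of the step whose output this step needs
      purpose: 'Compute a route to the geocoded destination'
    }
  ]
}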
15 changes: 13 additions & 2 deletions lib/agents/tools/geospatial.tsx
@@ -217,10 +217,21 @@ Uses the Mapbox Search Box Text Search API endpoint to power searching for and g

,
parameters: geospatialQuerySchema,
execute: async (params: z.infer<typeof geospatialQuerySchema>) => {
const { queryType, includeMap = true } = params;
execute: async (params: z.infer<typeof geospatialQuerySchema> & { _dependencyResults?: any[] }) => {
const { queryType, includeMap = true, _dependencyResults } = params;
console.log('[GeospatialTool] Execute called with:', params);

if (_dependencyResults && _dependencyResults.length > 0) {
console.log('[GeospatialTool] Processing dependency results:', _dependencyResults);
// Logic to process dependency results can be added here.
// For example, if a previous step was a search, the result might contain coordinates
// that can be used as input for a subsequent directions query.
// Since the full logic for dependency injection is complex and depends on the
// specific tool schema, we will log it for now and ensure the tool can handle it.
// The LLM planning step is responsible for generating the correct 'params'
// based on the dependency results. The tool only needs to be aware of them.
}
Comment on lines +220 to +233
🧹 Nitpick | 🔵 Trivial

Dependency results are accepted but not utilized.

The _dependencyResults parameter is destructured and logged, but the placeholder block (lines 224-233) doesn't actually process or use the dependency data. For the tool coordinator's multi-step workflows (e.g., geocode → directions) to work correctly, this tool would need to extract coordinates from prior step results and use them.

Is this intentional scaffolding for a future implementation? If the LLM planning step is expected to pre-populate toolArgs with resolved values, this may be acceptable. However, if tools should dynamically resolve dependencies at runtime, this needs implementation.

Would you like me to generate a sample implementation that extracts coordinates from geocode results for use in directions queries?

🤖 Prompt for AI Agents
In lib/agents/tools/geospatial.tsx around lines 220-233 the _dependencyResults
are only logged and not applied to the current params; implement runtime
dependency resolution so multi-step flows work: inspect _dependencyResults for
prior tool outputs (e.g., a geocode result containing latitude/longitude or a
geometry/coordinates field), extract the coordinate pair (prefer last successful
result or first valid one), validate types, and merge those coordinates into the
current params (e.g., set origin/destination or lat/lng fields) only if they
aren’t already provided by the caller; ensure safe guards for missing/invalid
data and remove or keep console logs as appropriate.
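A hedged sketch of the runtime resolution described above, assuming the prior step exposes coordinates either directly or as a GeoJSON-style features array; the origin field name is an assumption, not the tool's real schema:

// Hypothetical: merge coordinates from the most recent dependency result,
// but only when the caller did not already supply an origin.
let resolvedParams: Record<string, any> = { ...params }
if (_dependencyResults && _dependencyResults.length > 0) {
  const prior = _dependencyResults[_dependencyResults.length - 1]
  const coords = prior?.coordinates ?? prior?.features?.[0]?.geometry?.coordinates
  const isLngLatPair =
    Array.isArray(coords) && coords.length === 2 && coords.every((c: unknown) => typeof c === 'number')
  if (isLngLatPair && resolvedParams.origin === undefined) {
    resolvedParams = { ...resolvedParams, origin: { longitude: coords[0], latitude: coords[1] } }
  }
}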


const uiFeedbackStream = createStreamableValue<string>();
uiStream.append(<BotMessage content={uiFeedbackStream.value} />);

2 changes: 1 addition & 1 deletion package.json
@@ -17,7 +17,7 @@
"@ai-sdk/openai": "^1.3.24",
"@ai-sdk/xai": "^1.2.18",
"@heroicons/react": "^2.2.0",
"@hookform/resolvers": "^5.0.1",
"@hookform/resolvers": "^3.3.4",
"@mapbox/mapbox-gl-draw": "^1.5.0",
"@modelcontextprotocol/sdk": "^1.13.0",
"@radix-ui/react-alert-dialog": "^1.1.10",