fix: enable json output for workflows and update logs #238
base: develop
```diff
@@ -291,7 +291,7 @@ async def execute_message_processor(
         ),
     )

-    required_inputs = inputs['required']
+    required_inputs = inputs.get('required') or []
```
Validate that `input_schema.required` is a list.

Line 294 now treats a missing `required` key as an empty list, but `inputs.get('required') or []` still lets a non-list value (e.g. a string) through, and the loop below would iterate over it element by element.

Proposed fix:

```diff
-    required_inputs = inputs.get('required') or []
+    required_inputs = inputs.get('required')
+    if not isinstance(required_inputs, list):
+        return JSONResponse(
+            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
+            content=response_formatter.buildErrorResponse(
+                "Invalid processor YAML: input_schema.required must be a list"
+            ),
+        )
```
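The pitfall this comment points at can be sketched outside the handler. A minimal sketch, assuming only a plain dict schema; the helper name `get_required_inputs` and the error message are illustrative, not taken from the PR:

```python
# Illustrative helper, not from the PR: shows why `.get('required') or []`
# alone is not enough when the value can be a non-list.

def get_required_inputs(inputs: dict) -> list:
    """Return the schema's declared required inputs, rejecting non-lists."""
    required = inputs.get('required')
    if required is None:
        return []  # a missing key simply means "no required inputs"
    if not isinstance(required, list):
        # A string like "name" is iterable too, so `or []` alone would let
        # the loop below iterate over its characters instead of failing fast.
        raise ValueError("input_schema.required must be a list")
    return required


print(get_required_inputs({'required': ['name']}))  # ['name']
print(get_required_inputs({}))                      # []
```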
```diff
     execution_inputs = {}
     for input in required_inputs:
         if input not in payload.input_data.keys():
@@ -301,7 +301,7 @@ async def execute_message_processor(
                 f'Input `{input}` is required but not provided'
             ),
         )
-        execution_inputs[input] = payload.input_data[input]
+    execution_inputs = payload.input_data
```
Don't pass undeclared inputs straight to Hermes.

Line 304 now forwards the entire `payload.input_data` dict to the processor instead of only the inputs declared in `required_inputs`, so undeclared or unexpected keys flow straight through to Hermes.
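The behavior this comment asks for — validate the declared inputs, then forward only those — can be sketched as a standalone helper. The function name `build_execution_inputs` and the `KeyError` are illustrative; the real handler returns a `JSONResponse` instead of raising:

```python
# Illustrative helper, not from the PR: check that every declared input is
# present, then forward only declared keys so undeclared ones are dropped.

def build_execution_inputs(required: list, input_data: dict) -> dict:
    """Validate declared inputs and drop everything undeclared."""
    missing = [name for name in required if name not in input_data]
    if missing:
        raise KeyError(f"Inputs {missing} are required but not provided")
    # Forward only declared inputs; undeclared keys never reach the backend.
    return {name: input_data[name] for name in required}


print(build_execution_inputs(['city'], {'city': 'Oslo', 'debug': True}))
# {'city': 'Oslo'}
```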
```diff
     try:
         result = await processor_service.execute_message_processor(
```
Repository: rootflo/wavefront
Add an identity guard to prevent stale cleanup from clearing the active controller.

`cleanup()` unconditionally nulls `abortControllerRef.current` at line 310. If request A is aborted and request B stores a new controller before A's `finally` block runs, A's cleanup will clear B's ref, making the active stream unabortable and causing resource leaks.

Proposed fix:
```diff
 const handleSSEInference = async (
   inputs: string | Array<{ role: 'user' | 'assistant'; content: ChatMessageContent }>,
   variables: Record<string, unknown>
 ) => {
   if (!id) return;
+  let controller: AbortController | null = null;
   try {
     setIsStreaming(true);
     setStreamingEvents([]); // Clear previous events immediately
-    const controller = new AbortController();
+    controller = new AbortController();
     abortControllerRef.current = controller;
     // ... rest of function ...
   function cleanup() {
     setRunningInference(false);
     setIsStreaming(false);
-    abortControllerRef.current = null;
+    if (abortControllerRef.current === controller) {
+      abortControllerRef.current = null;
+    }
   }
 };
```