
feat: add structured LLM output support (response_format)#169

Open
Ridwannurudeen wants to merge 1 commit into OpenGradient:main from Ridwannurudeen:feat/structured-llm-output

Conversation


@Ridwannurudeen Ridwannurudeen commented Feb 24, 2026

Summary

Adds response_format parameter to chat() and completion() methods, enabling JSON schema enforcement for predictable, machine-readable LLM output. Follows the OpenAI structured outputs specification.

  • Add ResponseFormat dataclass to types.py with to_dict() serialization
  • Thread response_format through all 5 LLM methods (2 public + 3 internal)
  • Add --response-format and --response-format-file CLI options for both chat and completion commands
  • Export ResponseFormat from the opengradient package
  • Add examples/llm_structured_output.py demonstrating sentiment analysis with schema enforcement
  • Add 6 unit tests covering serialization, passthrough, and all 3 LLM paths (completion, chat, streaming)
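Based on the behavior described above and in the test plan (a `to_dict()` serializer that omits `json_schema` when it is not provided), the `ResponseFormat` dataclass could look roughly like this sketch; field names and exact typing are inferred, so the actual `types.py` implementation may differ:

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional


@dataclass
class ResponseFormat:
    """OpenAI-style response format: "json_object" or "json_schema"."""

    type: str
    json_schema: Optional[Dict[str, Any]] = None

    def to_dict(self) -> Dict[str, Any]:
        # Omit the json_schema key entirely when not provided,
        # so a plain {"type": "json_object"} payload stays minimal.
        d: Dict[str, Any] = {"type": self.type}
        if self.json_schema is not None:
            d["json_schema"] = self.json_schema
        return d
```

A plain dict with the same shape is also accepted (dict passthrough, per the test plan), so the dataclass is a convenience rather than a requirement.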

Usage

import opengradient as og

# Assumes `client` is an already-initialized OpenGradient client
result = client.llm.chat(
    model=og.TEE_LLM.GPT_4O,
    messages=[{"role": "user", "content": "Analyze: 'Great product!'"}],
    max_tokens=200,
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "sentiment",
            "schema": {
                "type": "object",
                "properties": {
                    "label": {"type": "string", "enum": ["positive", "negative", "neutral"]},
                    "score": {"type": "number"},
                },
                "required": ["label", "score"],
            },
        },
    },
)

# Or using the typed helper:
fmt = og.ResponseFormat(type="json_schema", json_schema={...})
result = client.llm.chat(..., response_format=fmt)
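Because the schema is enforced, the model's output is valid JSON matching it and can be parsed directly. A minimal sketch of the consuming side (the raw string here is a hypothetical stand-in for the response content; the exact attribute holding it depends on the SDK's result type):

```python
import json

# Hypothetical raw model output conforming to the sentiment schema above
raw = '{"label": "positive", "score": 0.97}'

parsed = json.loads(raw)

# The schema guarantees both required keys and the enum constraint on label
assert parsed["label"] in {"positive", "negative", "neutral"}
assert isinstance(parsed["score"], (int, float))
```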

CLI

opengradient chat --model openai/gpt-4o \
  --messages '[{"role":"user","content":"Classify this text"}]' \
  --response-format '{"type":"json_schema","json_schema":{...}}'

# Or from file:
opengradient chat --model openai/gpt-4o \
  --messages-file msgs.json \
  --response-format-file schema.json

Test plan

  • Verify ResponseFormat.to_dict() serializes correctly (json_object and json_schema)
  • Verify json_schema key is omitted when not provided
  • Test completion() forwards response_format to internal method
  • Test chat() non-streaming forwards response_format to internal method
  • Test chat() streaming forwards response_format to internal method
  • Verify dict passthrough works (no wrapping)
  • Verify no breaking changes (all 21 tests pass)

Closes #155

Add response_format parameter to chat() and completion() methods,
enabling JSON schema enforcement for predictable, machine-readable
LLM output. Follows the OpenAI structured outputs specification.

Changes:
- Add ResponseFormat dataclass to types.py
- Thread response_format through all LLM methods (public + internal)
- Add --response-format and --response-format-file CLI options
- Export ResponseFormat from opengradient package
- Add llm_structured_output.py example with sentiment analysis demo

Closes OpenGradient#155
Ridwannurudeen force-pushed the feat/structured-llm-output branch from 0f617e8 to 06d3d92 on February 24, 2026 at 18:43


Linked issue: Add support for structured LLM output (#155)