
[bot] OpenAI: Video generation API (client.videos.create()) not instrumented #304

@braintrust-bot

Summary

The OpenAI Video generation API (client.videos.create()) is not instrumented. Calls that generate, edit, extend, or remix videos with Sora models produce no Braintrust traces. Videos are a GA resource in the OpenAI Python SDK for AI video generation, and the video counterpart to client.images.generate(), which is instrumented in this repo.

What is missing

| OpenAI resource | Methods | Instrumented? |
| --- | --- | --- |
| client.images | generate(), edit(), create_variation() | Yes |
| client.videos | create() | No |
| client.videos | create_and_poll() | No |
| client.videos | edit() | No |
| client.videos | extend() | No |
| client.videos | remix() | No |
| client.videos | retrieve(), list(), delete(), poll() | No (CRUD, lower priority) |
| client.videos | download_content() | No (download, lower priority) |

The generative-execution-relevant surfaces are create(), edit(), extend(), and remix() — all of which submit video generation jobs to Sora models. The create_and_poll() convenience method wraps create() with polling.
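For reference, the polling behavior that create_and_poll() layers on top of create() can be approximated by a generic loop like the sketch below. The retrieve callback, the status strings, and the timeout handling are assumptions for illustration, not the SDK's exact implementation; the relevant point for instrumentation is that the total-generation-latency metric would time this entire loop, not just the initial submission.

```python
import time


def poll_until_done(retrieve, video_id, interval=2.0, timeout=600.0,
                    terminal=("completed", "failed")):
    """Poll a video job until it reaches a terminal status.

    `retrieve` stands in for a videos.retrieve-style callable returning a
    dict with a "status" field; real status names may differ.
    """
    deadline = time.time() + timeout
    while True:
        video = retrieve(video_id)
        if video["status"] in terminal:
            return video
        if time.time() > deadline:
            raise TimeoutError(f"video {video_id} still {video['status']}")
        time.sleep(interval)
```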

At minimum, instrumentation for create() and create_and_poll() should create a span capturing:

  • Input: prompt text, model name, reference image/video assets
  • Output: video ID, status, duration, resolution
  • Metrics: submission latency (for create()), total generation latency (for create_and_poll())
  • Metadata: model, resolution, aspect ratio, duration config

This mirrors the pattern already used by the Images integration (ImagesPatcher / _WrapImages in py/src/braintrust/integrations/openai/patchers.py), which creates spans for generate(), edit(), and create_variation().
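A minimal sketch of what such a wrapper could look like, mirroring the Images pattern, is below. The span interface here (start_span, span.log, and the field names) is a generic stand-in, not Braintrust's exact API, and the logged parameter names (prompt, model, size, seconds) are assumptions about the videos.create signature.

```python
import time
from functools import wraps


def wrap_videos_create(create_fn, start_span):
    """Wrap a videos.create-style callable so each call records a span.

    `start_span` is any callable returning a context-manager span with a
    `log(**fields)` method, standing in for the tracer used by the
    existing Images wrappers.
    """
    @wraps(create_fn)
    def traced_create(*args, **kwargs):
        with start_span(name="openai.videos.create", type="function") as span:
            # Input and metadata: prompt, model, and generation config.
            span.log(
                input={k: kwargs[k] for k in ("prompt", "model") if k in kwargs},
                metadata={k: kwargs[k] for k in ("size", "seconds") if k in kwargs},
            )
            t0 = time.time()
            video = create_fn(*args, **kwargs)
            # Output: video job id and status; metric: submission latency.
            span.log(
                output={"id": getattr(video, "id", None),
                        "status": getattr(video, "status", None)},
                metrics={"submission_latency": time.time() - t0},
            )
            return video
    return traced_create
```

A real VideosPatcher would install this over client.videos.create the same way ImagesPatcher wraps the image methods, and a create_and_poll wrapper would keep the span open across the polling loop.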

Video API capabilities

The OpenAI Videos API supports:

  • create() — text-to-video and image-to-video generation
  • edit() — editing a source video with a new prompt
  • extend() — extending a completed video
  • remix() — remixing a video with a refreshed prompt
  • create_character() — creating a character from an uploaded video

Both sync (Videos) and async (AsyncVideos) client variants exist in the OpenAI SDK. The resource is in the main namespace (not beta).

Braintrust docs status

Not found. The OpenAI integration page documents chat completions, structured outputs, function calling, images, audio, and streaming, with no mention of video generation or Sora.

Upstream sources

  • OpenAI Python SDK Videos class: openai/resources/videos.py — defines create(), create_and_poll(), edit(), extend(), remix(), retrieve(), list(), delete(), download_content()
  • The resource file is auto-generated from OpenAI's official OpenAPI spec (Stainless)
  • The SDK exposes both Videos and AsyncVideos classes

Local files inspected

  • py/src/braintrust/integrations/openai/patchers.py — defines patchers for ChatCompletions, Embeddings, Moderations, Audio (Speech, Transcriptions, Translations), Images, Responses; zero references to video or Videos
  • py/src/braintrust/integrations/openai/tracing.py — wrapper functions for chat, embeddings, moderations, audio, images, responses; no video wrappers
  • py/src/braintrust/integrations/openai/integration.py — integration class registers patchers; no VideosPatcher
  • py/src/braintrust/integrations/openai/test_openai.py — no video test cases
  • py/noxfile.py includes pytest_openai sessions, but none cover video
