## Summary

The OpenAI Video generation API (`client.videos.create()`) is not instrumented. Calls that generate, edit, extend, or remix videos with Sora models produce no Braintrust tracing. The Videos resource is GA in the OpenAI Python SDK and is the video counterpart to `client.images.generate()`, which **is** instrumented in this repo.
## What is missing

| OpenAI Resource | Method | Instrumented? |
|---|---|---|
| `client.images` | `generate()`, `edit()`, `create_variation()` | Yes |
| `client.videos` | `create()` | No |
| `client.videos` | `create_and_poll()` | No |
| `client.videos` | `edit()` | No |
| `client.videos` | `extend()` | No |
| `client.videos` | `remix()` | No |
| `client.videos` | `retrieve()`, `list()`, `delete()`, `poll()` | No (CRUD — lower priority) |
| `client.videos` | `download_content()` | No (download — lower priority) |
The generative-execution-relevant surfaces are `create()`, `edit()`, `extend()`, and `remix()` — all of which submit video generation jobs to Sora models. The `create_and_poll()` convenience method wraps `create()` with polling.
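The `create()`/`create_and_poll()` relationship can be sketched as follows. This is an illustrative reconstruction, not the SDK's actual implementation: the status values (`"queued"`, `"in_progress"`, `"completed"`) and the `retrieve()`-based polling loop are assumptions.

```python
import time

# Hypothetical sketch of what create_and_poll() does conceptually:
# submit the job, then poll retrieve() until a terminal status.
def create_and_poll_sketch(videos, poll_interval_seconds=1.0, **params):
    video = videos.create(**params)          # submit the generation job
    while video.status in ("queued", "in_progress"):
        time.sleep(poll_interval_seconds)    # wait between polls
        video = videos.retrieve(video.id)    # refresh job state
    return video                             # terminal: completed / failed


# Tiny in-memory stub so the sketch is runnable without the real SDK.
class _StubVideos:
    def __init__(self):
        self._polls = 0

    def create(self, **params):
        return type("Video", (), {"id": "video_123", "status": "queued"})()

    def retrieve(self, video_id):
        self._polls += 1
        status = "completed" if self._polls >= 2 else "in_progress"
        return type("Video", (), {"id": video_id, "status": status})()


result = create_and_poll_sketch(_StubVideos(), poll_interval_seconds=0.01, prompt="a cat")
print(result.status)  # completed
```

The important consequence for instrumentation: a span around `create()` measures only submission latency, while a span around `create_and_poll()` covers the full generation time.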
At minimum, instrumentation for `create()` and `create_and_poll()` should create a span capturing:

- Input: prompt text, model name, reference image/video assets
- Output: video ID, status, duration, resolution
- Metrics: submission latency (for `create()`), total generation latency (for `create_and_poll()`)
- Metadata: model, resolution, aspect ratio, duration config
This mirrors the pattern already used by the Images integration (`ImagesPatcher` / `_WrapImages` in `py/src/braintrust/integrations/openai/patchers.py`), which creates spans for `generate()`, `edit()`, and `create_variation()`.
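A hedged sketch of what a `VideosPatcher`-style wrapper could capture. The span field names, the parameter names (`input_reference`, `size`, `seconds`), the model name, and the recording mechanism below are assumptions for illustration, not Braintrust's actual API:

```python
import time
from typing import Any, Callable


def wrap_videos_create(create_fn: Callable[..., Any],
                       record_span: Callable[[dict], None]) -> Callable[..., Any]:
    """Wrap a videos.create-style callable and record a span dict per call."""
    def traced_create(*args, **kwargs):
        start = time.time()
        span = {
            # Input and metadata pulled from the call's keyword arguments.
            "input": {k: kwargs.get(k) for k in ("prompt", "model", "input_reference")},
            "metadata": {k: kwargs.get(k) for k in ("model", "size", "seconds")},
        }
        result = create_fn(*args, **kwargs)
        # Output fields read defensively off the returned video job object.
        span["output"] = {
            "id": getattr(result, "id", None),
            "status": getattr(result, "status", None),
        }
        span["metrics"] = {"submission_latency": time.time() - start}
        record_span(span)
        return result
    return traced_create


# Exercise the wrapper against a fake create() so the sketch runs offline.
spans = []

def fake_create(**kwargs):
    return type("Video", (), {"id": "video_1", "status": "queued"})()

traced = wrap_videos_create(fake_create, spans.append)
video = traced(prompt="a red fox at dawn", model="sora-2", seconds=4)
```

In the real integration this logic would live in a patcher alongside the existing ones, monkey-patching `Videos.create` (and the async variant) at install time rather than wrapping a bare function.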
## Video API capabilities

The OpenAI Videos API supports:

- `create()` — text-to-video and image-to-video generation
- `edit()` — editing a source video with a new prompt
- `extend()` — extending a completed video
- `remix()` — remixing with a refreshed prompt
- `create_character()` — creating a character from an uploaded video

Both sync (`Videos`) and async (`AsyncVideos`) client variants exist in the OpenAI SDK. The resource is in the main namespace (not `beta`).
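Because both `Videos` and `AsyncVideos` exist, any patcher needs an async counterpart as well. A minimal sketch of the async side, runnable against a stub (the wrapper shape and stub are illustrative, not Braintrust's implementation):

```python
import asyncio


def wrap_async_create(create_fn, record_span):
    """Async analogue of a videos.create tracing wrapper."""
    async def traced_create(*args, **kwargs):
        result = await create_fn(*args, **kwargs)
        record_span({"output_id": getattr(result, "id", None)})
        return result
    return traced_create


class _StubAsyncVideos:
    async def create(self, **params):
        return type("Video", (), {"id": "video_async_1", "status": "queued"})()


spans = []
stub = _StubAsyncVideos()
stub.create = wrap_async_create(stub.create, spans.append)  # patch in place
video = asyncio.run(stub.create(prompt="a dog surfing"))
print(video.id, len(spans))  # video_async_1 1
```

The existing patchers already handle the sync/async split for chat and images, so the same dispatch approach should carry over.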
## Braintrust docs status

`not_found` — the OpenAI integration page documents chat completions, structured outputs, function calling, images, audio, and streaming. No mention of video generation or Sora.
## Upstream sources

- OpenAI Python SDK `Videos` class: `openai/resources/videos.py` — defines `create()`, `create_and_poll()`, `edit()`, `extend()`, `remix()`, `retrieve()`, `list()`, `delete()`, `download_content()`
- The resource file is auto-generated from OpenAI's official OpenAPI spec (Stainless)
- The SDK exposes both `Videos` and `AsyncVideos` classes
## Local files inspected

- `py/src/braintrust/integrations/openai/patchers.py` — defines patchers for ChatCompletions, Embeddings, Moderations, Audio (Speech, Transcriptions, Translations), Images, Responses; zero references to video or `Videos`
- `py/src/braintrust/integrations/openai/tracing.py` — wrapper functions for chat, embeddings, moderations, audio, images, responses; no video wrappers
- `py/src/braintrust/integrations/openai/integration.py` — integration class registers patchers; no `VideosPatcher`
- `py/src/braintrust/integrations/openai/test_openai.py` — no video test cases
- `py/noxfile.py` — `test_openai` sessions exist but no video coverage
## Relationship to existing issues
- #236 (`models.generate_videos()` not instrumented) tracks the same gap for Google GenAI
- The Images integration (`ImagesPatcher` with `generate()`, `edit()`, `create_variation()`) establishes the pattern and precedent for generative media API tracing in this integration