Merged
2 changes: 1 addition & 1 deletion .release-please-manifest.json
Original file line number Diff line number Diff line change
@@ -1,3 +1,3 @@
{
".": "0.50.0"
".": "0.51.0"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 147
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-a0aa54a302fbd7fff4ed7ad8a8547587d37b63324fc4af652bfa685ee9f8da44.yml
openapi_spec_hash: e45c5af19307cfc8b9baa4b8f8e865a0
config_hash: 2cbce279be85ff86a2fabbc85d62b011
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-6bfe886b5ded0fe3bf37ca672698814e16e0836a093ceef65dac37ae44d1ad6b.yml
openapi_spec_hash: 6b1344a59044318e824c8d1af96033c7
config_hash: 7f49c38fa3abe9b7038ffe62262c4912
14 changes: 14 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,19 @@
# Changelog

## 0.51.0 (2026-02-24)

Full Changelog: [v0.50.0...v0.51.0](https://github.com/openai/openai-ruby/compare/v0.50.0...v0.51.0)

### Features

* **api:** add phase ([656f591](https://github.com/openai/openai-ruby/commit/656f5917f52cfbe388f9d706ab372b958301985e))


### Bug Fixes

* **api:** fix phase enum ([c055d7b](https://github.com/openai/openai-ruby/commit/c055d7b2fe06a49ad64db6dc880e30f41d42e22d))
* **api:** phase docs ([766b70c](https://github.com/openai/openai-ruby/commit/766b70cc83834acf57f1d2c31ea5026ff0d70b7f))

## 0.50.0 (2026-02-24)

Full Changelog: [v0.49.1...v0.50.0](https://github.com/openai/openai-ruby/compare/v0.49.1...v0.50.0)
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -11,7 +11,7 @@ GIT
PATH
remote: .
specs:
openai (0.50.0)
openai (0.51.0)
base64
cgi
connection_pool
2 changes: 1 addition & 1 deletion README.md
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
<!-- x-release-please-start-version -->

```ruby
gem "openai", "~> 0.50.0"
gem "openai", "~> 0.51.0"
```

<!-- x-release-please-end -->
33 changes: 32 additions & 1 deletion lib/openai/models/responses/easy_input_message.rb
@@ -18,13 +18,24 @@ class EasyInputMessage < OpenAI::Internal::Type::BaseModel
# @return [Symbol, OpenAI::Models::Responses::EasyInputMessage::Role]
required :role, enum: -> { OpenAI::Responses::EasyInputMessage::Role }

# @!attribute phase
# The phase of an assistant message.
#
# Use `commentary` for an intermediate assistant message and `final_answer` for
# the final assistant message. For follow-up requests with models like
# `gpt-5.3-codex` and later, preserve and resend phase on all assistant messages.
# Omitting it can degrade performance. Not used for user messages.
#
# @return [Symbol, OpenAI::Models::Responses::EasyInputMessage::Phase, nil]
optional :phase, enum: -> { OpenAI::Responses::EasyInputMessage::Phase }, nil?: true

# @!attribute type
# The type of the message input. Always `message`.
#
# @return [Symbol, OpenAI::Models::Responses::EasyInputMessage::Type, nil]
optional :type, enum: -> { OpenAI::Responses::EasyInputMessage::Type }

# @!method initialize(content:, role:, type: nil)
# @!method initialize(content:, role:, phase: nil, type: nil)
# Some parameter documentations has been truncated, see
# {OpenAI::Models::Responses::EasyInputMessage} for more details.
#
@@ -38,6 +49,8 @@ class EasyInputMessage < OpenAI::Internal::Type::BaseModel
#
# @param role [Symbol, OpenAI::Models::Responses::EasyInputMessage::Role] The role of the message input. One of `user`, `assistant`, `system`, or
#
# @param phase [Symbol, OpenAI::Models::Responses::EasyInputMessage::Phase, nil] The phase of an assistant message.
#
# @param type [Symbol, OpenAI::Models::Responses::EasyInputMessage::Type] The type of the message input. Always `message`.

# Text, image, or audio input to the model, used to generate a response. Can also
@@ -74,6 +87,24 @@ module Role
# @return [Array<Symbol>]
end

# The phase of an assistant message.
#
# Use `commentary` for an intermediate assistant message and `final_answer` for
# the final assistant message. For follow-up requests with models like
# `gpt-5.3-codex` and later, preserve and resend phase on all assistant messages.
# Omitting it can degrade performance. Not used for user messages.
#
# @see OpenAI::Models::Responses::EasyInputMessage#phase
module Phase
extend OpenAI::Internal::Type::Enum

COMMENTARY = :commentary
Review comment (P2): Add final_answer to phase enum. The new phase support is incomplete: EasyInputMessage::Phase (and similarly ResponseOutputMessage::Phase) only declares :commentary, but the field documentation in these same models says messages can be tagged as "commentary" or "final_answer". When responses include phase: "final_answer", this value will not match the enum module checks and won't be exposed as a typed enum value, which breaks the intended "preserve and resend phase" workflow for follow-up requests.

FINAL_ANSWER = :final_answer

# @!method self.values
# @return [Array<Symbol>]
end

# The type of the message input. Always `message`.
#
# @see OpenAI::Models::Responses::EasyInputMessage#type
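The `phase` documentation above describes a preserve-and-resend contract for follow-up turns. A minimal sketch of that contract using plain Ruby hashes (the hash shapes mirror the EasyInputMessage fields in this diff; the conversation contents are illustrative):

```ruby
# Sketch of the preserve-and-resend contract for `phase`, using plain hashes
# in place of the SDK's typed EasyInputMessage model.
history = [
  { type: :message, role: :user, content: "Plan the refactor." },
  { type: :message, role: :assistant, phase: :commentary,
    content: "Surveying the modules first..." },
  { type: :message, role: :assistant, phase: :final_answer,
    content: "Split the client into transport and models layers." }
]

# When building a follow-up request, keep `phase` on every assistant message;
# per the field docs, omitting it can degrade performance on models like
# gpt-5.3-codex and later.
follow_up_input = history + [
  { type: :message, role: :user, content: "Estimate the effort." }
]

# Sanity check: no assistant message should have dropped its phase tag.
untagged = follow_up_input.select { |m| m[:role] == :assistant && !m.key?(:phase) }
```

User messages carry no `phase` at all, matching the "Not used for user messages" note in the docstring.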
10 changes: 9 additions & 1 deletion lib/openai/models/responses/response_compact_params.rb
@@ -42,7 +42,13 @@ class ResponseCompactParams < OpenAI::Internal::Type::BaseModel
# @return [String, nil]
optional :previous_response_id, String, nil?: true

# @!method initialize(model:, input: nil, instructions: nil, previous_response_id: nil, request_options: {})
# @!attribute prompt_cache_key
# A key to use when reading from or writing to the prompt cache.
#
# @return [String, nil]
optional :prompt_cache_key, String, nil?: true

# @!method initialize(model:, input: nil, instructions: nil, previous_response_id: nil, prompt_cache_key: nil, request_options: {})
# Some parameter documentations has been truncated, see
# {OpenAI::Models::Responses::ResponseCompactParams} for more details.
#
@@ -54,6 +60,8 @@ class ResponseCompactParams < OpenAI::Internal::Type::BaseModel
#
# @param previous_response_id [String, nil] The unique ID of the previous response to the model. Use this to create multi-tu
#
# @param prompt_cache_key [String, nil] A key to use when reading from or writing to the prompt cache.
#
# @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}]

# Model ID used to generate the response, like `gpt-5` or `o3`. OpenAI offers a
33 changes: 32 additions & 1 deletion lib/openai/models/responses/response_output_message.rb
@@ -36,7 +36,18 @@ class ResponseOutputMessage < OpenAI::Internal::Type::BaseModel
# @return [Symbol, :message]
required :type, const: :message

# @!method initialize(id:, content:, status:, role: :assistant, type: :message)
# @!attribute phase
# The phase of an assistant message.
#
# Use `commentary` for an intermediate assistant message and `final_answer` for
# the final assistant message. For follow-up requests with models like
# `gpt-5.3-codex` and later, preserve and resend phase on all assistant messages.
# Omitting it can degrade performance. Not used for user messages.
#
# @return [Symbol, OpenAI::Models::Responses::ResponseOutputMessage::Phase, nil]
optional :phase, enum: -> { OpenAI::Responses::ResponseOutputMessage::Phase }, nil?: true

# @!method initialize(id:, content:, status:, phase: nil, role: :assistant, type: :message)
# Some parameter documentations has been truncated, see
# {OpenAI::Models::Responses::ResponseOutputMessage} for more details.
#
@@ -48,6 +59,8 @@ class ResponseOutputMessage < OpenAI::Internal::Type::BaseModel
#
# @param status [Symbol, OpenAI::Models::Responses::ResponseOutputMessage::Status] The status of the message input. One of `in_progress`, `completed`, or
#
# @param phase [Symbol, OpenAI::Models::Responses::ResponseOutputMessage::Phase, nil] The phase of an assistant message.
#
# @param role [Symbol, :assistant] The role of the output message. Always `assistant`.
#
# @param type [Symbol, :message] The type of the output message. Always `message`.
@@ -82,6 +95,24 @@ module Status
# @!method self.values
# @return [Array<Symbol>]
end

# The phase of an assistant message.
#
# Use `commentary` for an intermediate assistant message and `final_answer` for
# the final assistant message. For follow-up requests with models like
# `gpt-5.3-codex` and later, preserve and resend phase on all assistant messages.
# Omitting it can degrade performance. Not used for user messages.
#
# @see OpenAI::Models::Responses::ResponseOutputMessage#phase
module Phase
extend OpenAI::Internal::Type::Enum

COMMENTARY = :commentary
FINAL_ANSWER = :final_answer

# @!method self.values
# @return [Array<Symbol>]
end
end
end
end
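Since ResponseOutputMessage now carries the same `phase` values as EasyInputMessage, a follow-up turn can copy them straight from the previous output into the next input. A hedged sketch, with plain hashes standing in for the typed models:

```ruby
# Sketch: round-tripping `phase` from output messages back into the next
# request's input. Hashes stand in for ResponseOutputMessage /
# EasyInputMessage; field names follow this diff, contents are illustrative.
outputs = [
  { role: :assistant, phase: :commentary,   content: "Comparing two approaches..." },
  { role: :assistant, phase: :final_answer, content: "Use a connection pool." }
]

# Pick the user-facing answer by phase instead of position or content matching.
final = outputs.find { |m| m[:phase] == :final_answer }

# Re-tag each output as an input message, preserving phase verbatim, as the
# field docs recommend for follow-up requests.
next_input = outputs.map do |m|
  { type: :message, role: m[:role], phase: m[:phase], content: m[:content] }
end
```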
4 changes: 3 additions & 1 deletion lib/openai/resources/responses.rb
@@ -473,7 +473,7 @@ def cancel(response_id, params = {})
# For ZDR-compatible compaction details, see
# [Compaction (advanced)](https://platform.openai.com/docs/guides/conversation-state#compaction-advanced).
#
# @overload compact(model:, input: nil, instructions: nil, previous_response_id: nil, request_options: {})
# @overload compact(model:, input: nil, instructions: nil, previous_response_id: nil, prompt_cache_key: nil, request_options: {})
#
# @param model [Symbol, String, OpenAI::Models::Responses::ResponseCompactParams::Model, nil] Model ID used to generate the response, like `gpt-5` or `o3`. OpenAI offers a wi
#
@@ -483,6 +483,8 @@ def cancel(response_id, params = {})
#
# @param previous_response_id [String, nil] The unique ID of the previous response to the model. Use this to create multi-tu
#
# @param prompt_cache_key [String, nil] A key to use when reading from or writing to the prompt cache.
#
# @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}, nil]
#
# @return [OpenAI::Models::Responses::CompactedResponse]
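`prompt_cache_key` is an opaque string; passing the same key on related compact calls lets them share prompt-cache entries. A hypothetical helper that derives a stable key per conversation (the "compact-" prefix and 16-hex-digit truncation are invented for illustration, not an API requirement):

```ruby
require "digest"

# Hypothetical helper: derive a stable, opaque prompt_cache_key per
# conversation so repeated compact calls for the same thread reuse the same
# prompt-cache entry. The key format here is an illustrative choice.
def prompt_cache_key_for(conversation_id)
  "compact-#{Digest::SHA256.hexdigest(conversation_id)[0, 16]}"
end

key_a = prompt_cache_key_for("thread-42")
key_b = prompt_cache_key_for("thread-42")
key_c = prompt_cache_key_for("thread-43")
# Same conversation yields the same key; different conversations diverge.
```

The resulting key would then be passed as `prompt_cache_key:` to `responses.compact` alongside `model:` and, for multi-turn compaction, `previous_response_id:`.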
2 changes: 1 addition & 1 deletion lib/openai/version.rb
@@ -1,5 +1,5 @@
# frozen_string_literal: true

module OpenAI
VERSION = "0.50.0"
VERSION = "0.51.0"
end
59 changes: 59 additions & 0 deletions rbi/openai/models/responses/easy_input_message.rbi
Expand Up @@ -22,6 +22,19 @@ module OpenAI
sig { returns(OpenAI::Responses::EasyInputMessage::Role::OrSymbol) }
attr_accessor :role

# The phase of an assistant message.
#
# Use `commentary` for an intermediate assistant message and `final_answer` for
# the final assistant message. For follow-up requests with models like
# `gpt-5.3-codex` and later, preserve and resend phase on all assistant messages.
# Omitting it can degrade performance. Not used for user messages.
sig do
returns(
T.nilable(OpenAI::Responses::EasyInputMessage::Phase::OrSymbol)
)
end
attr_accessor :phase

# The type of the message input. Always `message`.
sig do
returns(
@@ -44,6 +57,8 @@ module OpenAI
params(
content: OpenAI::Responses::EasyInputMessage::Content::Variants,
role: OpenAI::Responses::EasyInputMessage::Role::OrSymbol,
phase:
T.nilable(OpenAI::Responses::EasyInputMessage::Phase::OrSymbol),
type: OpenAI::Responses::EasyInputMessage::Type::OrSymbol
).returns(T.attached_class)
end
@@ -54,6 +69,13 @@ module OpenAI
# The role of the message input. One of `user`, `assistant`, `system`, or
# `developer`.
role:,
# The phase of an assistant message.
#
# Use `commentary` for an intermediate assistant message and `final_answer` for
# the final assistant message. For follow-up requests with models like
# `gpt-5.3-codex` and later, preserve and resend phase on all assistant messages.
# Omitting it can degrade performance. Not used for user messages.
phase: nil,
# The type of the message input. Always `message`.
type: nil
)
@@ -64,6 +86,8 @@ module OpenAI
{
content: OpenAI::Responses::EasyInputMessage::Content::Variants,
role: OpenAI::Responses::EasyInputMessage::Role::OrSymbol,
phase:
T.nilable(OpenAI::Responses::EasyInputMessage::Phase::OrSymbol),
type: OpenAI::Responses::EasyInputMessage::Type::OrSymbol
}
)
@@ -134,6 +158,41 @@ module OpenAI
end
end

# The phase of an assistant message.
#
# Use `commentary` for an intermediate assistant message and `final_answer` for
# the final assistant message. For follow-up requests with models like
# `gpt-5.3-codex` and later, preserve and resend phase on all assistant messages.
# Omitting it can degrade performance. Not used for user messages.
module Phase
extend OpenAI::Internal::Type::Enum

TaggedSymbol =
T.type_alias do
T.all(Symbol, OpenAI::Responses::EasyInputMessage::Phase)
end
OrSymbol = T.type_alias { T.any(Symbol, String) }

COMMENTARY =
T.let(
:commentary,
OpenAI::Responses::EasyInputMessage::Phase::TaggedSymbol
)
FINAL_ANSWER =
T.let(
:final_answer,
OpenAI::Responses::EasyInputMessage::Phase::TaggedSymbol
)

sig do
override.returns(
T::Array[OpenAI::Responses::EasyInputMessage::Phase::TaggedSymbol]
)
end
def self.values
end
end

# The type of the message input. Always `message`.
module Type
extend OpenAI::Internal::Type::Enum
8 changes: 8 additions & 0 deletions rbi/openai/models/responses/response_compact_params.rbi
Expand Up @@ -54,6 +54,10 @@ module OpenAI
sig { returns(T.nilable(String)) }
attr_accessor :previous_response_id

# A key to use when reading from or writing to the prompt cache.
sig { returns(T.nilable(String)) }
attr_accessor :prompt_cache_key

sig do
params(
model:
@@ -69,6 +73,7 @@ module OpenAI
),
instructions: T.nilable(String),
previous_response_id: T.nilable(String),
prompt_cache_key: T.nilable(String),
request_options: OpenAI::RequestOptions::OrHash
).returns(T.attached_class)
end
@@ -91,6 +96,8 @@ module OpenAI
# [conversation state](https://platform.openai.com/docs/guides/conversation-state).
# Cannot be used in conjunction with `conversation`.
previous_response_id: nil,
# A key to use when reading from or writing to the prompt cache.
prompt_cache_key: nil,
request_options: {}
)
end
@@ -111,6 +118,7 @@ module OpenAI
),
instructions: T.nilable(String),
previous_response_id: T.nilable(String),
prompt_cache_key: T.nilable(String),
request_options: OpenAI::RequestOptions
}
)