diff --git a/README.md b/README.md
index a5e3755..a86eb3c 100644
--- a/README.md
+++ b/README.md
@@ -43,7 +43,7 @@ If you make multiple calls to the LLM, you can call `stub_response` more than on
 
 ```ruby
 it 'returns multiple stubbed responses' do
-  RubyLLM::Test.stub_responses(['Hello, world!', 'How are you?'])
+  RubyLLM::Test.stub_responses('Hello, world!', 'How are you?')
 
   response1 = MyLLMClient.call('Hello?')
   response2 = MyLLMClient.call('How are you?')
@@ -52,6 +52,34 @@ it 'returns multiple stubbed responses' do
 end
 ```
 
+### Stubbing with a Message
+
+If you stub with a string, it will be wrapped in a `RubyLLM::Message` with the role of `:assistant`. If you want more control over the message, you can stub with a `RubyLLM::Message` directly. For example:
+
+```ruby
+it 'returns a stubbed message' do
+  message = RubyLLM::Message.new(role: :assistant, content: 'Hello, world!')
+  RubyLLM::Test.stub_response(message)
+
+  response = MyLLMClient.call('Hello?')
+  expect(response).to eq(message)
+end
+```
+
+### Stubbing with a Hash Returns JSON
+
+If you stub with a hash, it will be converted to a `RubyLLM::Message` with the content set to the JSON representation of the hash. For example:
+
+```ruby
+it 'returns a stubbed JSON message' do
+  hash = { key: 'value' }
+  RubyLLM::Test.stub_response(hash)
+
+  response = MyLLMClient.call('Hello?')
+  expect(response.content).to eq(hash.to_json)
+end
+```
+
 ### Resetting Stubs
 
 Make sure to reset stubs after each test to avoid interference before or between tests.
@@ -66,7 +94,7 @@ You can also stub responses in a block, which handles the setup and teardown of stubs.
 
 ```ruby
-RubyLLM::Test.stub_response('Hello, world!') do
+RubyLLM::Test.with_responses('Hello, world!') do
   response = MyLLMClient.call('Hello?')
   expect(response).to eq('Hello, world!')
 end
 ```
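
As a reading aid for the two new sections, here is a plain-Ruby sketch of the normalization rules they describe: a string stub becomes an assistant message, a hash stub becomes an assistant message whose content is the hash's JSON, and a message passes through untouched. `StubMessage` and `normalize_stub` are hypothetical names for illustration only; the gem's real `RubyLLM::Message` and its internals may differ.

```ruby
require 'json'

# Hypothetical stand-in for RubyLLM::Message, used only for this sketch.
StubMessage = Struct.new(:role, :content, keyword_init: true)

# Coerce a stubbed value into a message, per the rules described above.
def normalize_stub(value)
  case value
  when StubMessage then value # already a message: use as-is
  when Hash        then StubMessage.new(role: :assistant, content: value.to_json)
  when String      then StubMessage.new(role: :assistant, content: value)
  end
end

normalize_stub('Hello, world!').content # => "Hello, world!"
normalize_stub(key: 'value').content    # => '{"key":"value"}'
```

The case analysis mirrors the documented behavior: only non-message values are wrapped, which is why stubbing with a `RubyLLM::Message` gives full control over role and content.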