Add batch request support for generating API payloads #342
marckohlbrugge wants to merge 1 commit into crmne:main
Conversation
This feature allows users to generate request payloads for batch processing
without actually making API calls. Currently only OpenAI supports batch
requests, with other providers raising NotImplementedError.
Key changes:
- Add `for_batch_request` method to Chat class for enabling batch mode
- Add `render_payload_for_batch_request` to Provider base (raises NotImplementedError)
- Implement batch request formatting for OpenAI provider
- Override method in all OpenAI-based providers to raise NotImplementedError
- Add comprehensive test coverage
- Add documentation for batch request usage
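A minimal sketch of the flag-based design described above. The class structure and method bodies here are hypothetical simplifications, not RubyLLM's actual internals: `for_batch_request` flips a flag, and `complete` returns the rendered payload instead of performing the HTTP call.

```ruby
# Hypothetical sketch (not RubyLLM's real implementation) of the
# for_batch_request flag described in the PR.
class Chat
  def initialize
    @messages = []
    @batch_request = false
  end

  # Enable batch mode; return self so calls can be chained.
  def for_batch_request
    @batch_request = true
    self
  end

  # Queue a user message; return self for chaining.
  def ask(content)
    @messages << { role: "user", content: content }
    self
  end

  # In batch mode, return the payload instead of calling the API.
  def complete
    payload = render_payload
    return payload if @batch_request
    # ...otherwise perform the real API call here
  end

  private

  # Stand-in for the provider's payload rendering.
  def render_payload
    { model: "gpt-4o", messages: @messages }
  end
end
```

The chainable `self` returns match the `RubyLLM.chat.for_batch_request` style shown in the usage example below.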
Usage:
```ruby
chat = RubyLLM.chat.for_batch_request
chat.ask("What's 2 + 2?")
payload = chat.complete # Returns payload instead of making API call
```
This provides the foundation (step 1) for batch processing, allowing users
to implement the remaining steps (combining requests, submitting to batch
endpoints, polling status, processing results) based on their needs.
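For illustration, the "combining requests" step could look roughly like this. The helper name `to_batch_jsonl` is hypothetical, and the payload shape is assumed to be a chat-completion request body; the `custom_id`/`method`/`url` wrapper is the JSONL line format OpenAI's Batch API expects in its input file.

```ruby
require "json"

# Hypothetical helper: combine generated payloads into a JSONL string
# suitable for upload as an OpenAI Batch API input file. Each line wraps
# one request body with a unique custom_id, HTTP method, and endpoint URL.
def to_batch_jsonl(payloads, url: "/v1/chat/completions")
  payloads.each_with_index.map do |body, i|
    JSON.generate({
      custom_id: "request-#{i + 1}",
      method: "POST",
      url: url,
      body: body
    })
  end.join("\n")
end
```

The resulting string would then be uploaded via the Files API and referenced when creating the batch — the submission, polling, and result-processing steps this PR deliberately leaves to users.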
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
I named it … I also explored a more flexible … We could also consider a more flexible …
Hey @marckohlbrugge, thanks for kicking off this feature! I've been evaluating RubyLLM, and the batch support is super helpful for my use case. Are you planning to keep working on it?
Would love to see this feature!
Summary
- Add `for_batch_request` method to generate request payloads without making API calls
- Providers without batch support raise `NotImplementedError`

Motivation
As discussed, this implements step 1 of batch request support: generating the API payloads. The remaining steps (combining requests, submitting to batch endpoints, polling, processing results) can be implemented by users based on their specific needs.
Implementation Details

Core Changes
- Add `for_batch_request` method to `Chat` class that sets a flag to generate payloads instead of making API calls
- Add `render_payload_for_batch_request` to `Provider` base class (raises `NotImplementedError` by default)
- Other providers raise `NotImplementedError`

Usage
(See the usage example earlier in the thread.)
Provider Support
- OpenAI: batch payload rendering implemented
- All other providers raise `NotImplementedError` with a clear message

Testing
- Comprehensive test coverage in `spec/ruby_llm/chat_batch_request_spec.rb`

Documentation
- Added `docs/batch_requests.md` with usage examples and notes

🤖 Generated with Claude Code