Replaced `bytes.Split(rawJSON, []byte("\n"))` and `bufio.Scanner` with a manual zero-allocation byte slice loop using `bytes.IndexByte(..., '\n')` in:
- `ConvertClaudeResponseToOpenAINonStream`
- `ConvertClaudeResponseToGeminiNonStream`
- `ConvertClaudeResponseToOpenAIResponsesNonStream`
This eliminates massive buffer allocations (e.g. 50MB scanner buffers) and intermediate slice allocations when processing large Claude Code API responses.
Co-authored-by: rschumann <360788+rschumann@users.noreply.github.com>
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
💡 What: Refactored the non-streaming Claude response converters (`ConvertClaudeResponseToOpenAINonStream`, `ConvertClaudeResponseToGeminiNonStream`, and `ConvertClaudeResponseToOpenAIResponsesNonStream`) to use a manual, zero-allocation byte slice loop instead of `bytes.Split` or `bufio.Scanner`.

🎯 Why: To eliminate massive memory allocations. The previous implementations used `bytes.Split(rawJSON, []byte("\n"))`, which allocates a backing slice-of-slices plus a slice header per line, or `bufio.NewScanner` with a pre-allocated 50MB buffer. The new pattern processes the byte slice in place using `bytes.IndexByte`, drastically reducing memory pressure and GC pauses during response processing.

📊 Impact: Significantly reduces per-request allocations, eliminating the fixed 50MB buffer for the Gemini/Responses translators and the per-line slice-header allocations for the OpenAI translators. This should measurably improve API gateway throughput and latency for large non-streaming Claude responses.
🔬 Measurement: `go test ./internal/translator/... -bench=. -benchmem` and `go test ./... -short` confirm no functional regressions and improved processing speed.

PR created automatically by Jules for task 3709167231679972496 started by @rschumann