⚡ Bolt: Optimize path normalization with 1-item cache #224
Conversation
Implements a 1-item cache for `normalizePath` and `relativeFilePath` processing in `SearchEngine`. This reduces redundant string operations when indexing multiple items from the same file. Benchmarks show a ~8-25% improvement in indexing time for large batches of items sharing file paths.

Co-authored-by: AhmmedSamier <17784876+AhmmedSamier@users.noreply.github.com>
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
💡 What: Implemented a 1-item cache (last-input memoization) for `normalizePath` and `relativeFilePath` normalization in `SearchEngine`.

🎯 Why: Indexing involves processing thousands of items. Items are often grouped by file (e.g. 50 symbols from File A, then 50 from File B). Redundant calls to `path.normalize` and `replaceAll` for the same file path were consuming unnecessary CPU cycles.

📊 Impact: Reduces indexing time by ~8-25% in synthetic benchmarks (963 ms vs. 1279 ms baseline for 102k items).
🔬 Measurement: Confirmed via `benchmarks/indexing_repro.ts`.

PR created automatically by Jules for task 14170975379687895300, started by @AhmmedSamier.