⚡ Optimize EPUB image extraction using ZipFile random access#33
⚡ Optimize EPUB image extraction using ZipFile random access#33
Conversation
Replaced inefficient linear scan of ZipInputStream with ZipFile random access for EPUB image retrieval. This improves lookup performance from O(N) to O(1) per image. For content:// URIs, the EPUB is now cached to a local file to enable random access usage, with thread-safe atomic file renaming to prevent corruption during concurrent access. Benchmark results showed a reduction from ~45ms to ~1ms for lookups in a test file with 5000 entries. Co-authored-by: Aatricks <113598245+Aatricks@users.noreply.github.com>
|
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task. |
Optimized
ContentRepository.getEpubImageto usejava.util.zip.ZipFileinstead ofZipInputStream.ZipInputStreamrequires scanning the entire stream to find a specific entry, which is O(N) where N is the number of files in the EPUB.ZipFilereads the central directory and allows O(1) (or close to it) access to specific entries.Since
ZipFilerequires a localFileand cannot work onInputStream(e.g., fromcontent://URIs), I implemented a caching mechanism that copies the stream to a temporary local file inepub_cache. This copy happens only once per session/cache lifecycle.Implemented thread-safe file caching using atomic rename (
renameTo) to handle concurrent requests (e.g., multiple images loading from the same EPUB simultaneously). Added robust error handling to clean up corrupted cache files.Performance benchmark demonstrated a 45x improvement (45ms -> 1ms) for a 5000-entry archive.
PR created automatically by Jules for task 16254206562520747317 started by @Aatricks