Optimize file upload performance and fix estimated upload speed #2496
base: master
Conversation
- Remove Base64 encoding overhead (33% size reduction)
- Increase chunk size from 2MB to 20MB
- Implement raw binary upload via XMLHttpRequest
- Add X-CSRF-Token header support in local_prepend.php
- Fix upload speed calculation formula
- Add HTTP status code validation and timeout handling
- Maintain backward compatibility with legacy Base64 method
- Use hash_equals() for timing-attack safe CSRF validation

Performance: Achieved 92 MB/s upload speed (vs ~22 MB/s baseline)
Security: OWASP-compliant header-based CSRF, strict null checking
Compatibility: Works with existing Base64 uploads, no nginx changes

Fixes unraid#2495
Walkthrough: Client upload was rewritten to send 20MB binary chunks with native XMLHttpRequest and CSRF header support; the server now accepts both legacy base64 and raw binary uploads.
Sequence Diagram

sequenceDiagram
participant Browser as Client (Browser)
participant XHR as XMLHttpRequest
participant Server as Control.php
participant FS as File System
rect rgba(220,235,255,0.6)
Note over Browser,XHR: startUpload() resets cancel, stores currentXhr
Browser->>XHR: open POST (chunk binary)\nHeaders: Content-Type: application/octet-stream, X-CSRF-Token
end
loop Per chunk (20MB)
XHR->>Server: POST (php://input)
Server->>Server: validate CSRF, mode, file, start/cancel\ncheck chunk size
alt cancel requested
Server->>FS: delete temp/partial file
Server-->>XHR: "stop"
else write chunk
Server->>FS: append chunk to file
alt write success
Server-->>XHR: "ok"
else write failure
Server->>FS: delete temp file
Server-->>XHR: "error:write"
end
end
XHR-->>Browser: response ("ok"/"stop"/"error:*")
Browser->>Browser: update progress/speed\nadvance to next chunk or stop
end
rect rgba(220,255,220,0.6)
Note over Browser,Server: Completion/cleanup
Browser->>Browser: finalize UI state
Server->>FS: final file present or cleaned up
end
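To make the client-side flow in the diagram concrete, here is a minimal sketch of a chunked binary upload loop. It is illustrative only, not the PR's actual Browse.page code; uploadUrl, csrfToken, and onDone are placeholder names introduced for the example.

```js
// Minimal sketch: upload one file in 20MB raw binary chunks via XHR.
// uploadUrl, csrfToken and onDone are placeholders, not names from the PR.
function uploadInChunks(file, uploadUrl, csrfToken, onDone) {
  var chunkSize = 20 * 1024 * 1024; // 20MB per request

  function sendChunk(start) {
    var blob = file.slice(start, start + chunkSize); // raw bytes, no Base64
    var xhr = new XMLHttpRequest();
    xhr.open('POST', uploadUrl + '?file=' + encodeURIComponent(file.name) + '&start=' + start, true);
    xhr.setRequestHeader('Content-Type', 'application/octet-stream');
    xhr.setRequestHeader('X-CSRF-Token', csrfToken); // validated server-side
    xhr.onload = function() {
      if (xhr.status < 200 || xhr.status >= 300 || xhr.responseText !== 'ok') {
        onDone(new Error('upload failed: ' + xhr.status + ' ' + xhr.responseText));
      } else if (start + chunkSize < file.size) {
        sendChunk(start + chunkSize);   // advance to the next chunk
      } else {
        onDone(null);                   // whole file transferred
      }
    };
    xhr.onerror = function() { onDone(new Error('network error')); };
    xhr.send(blob);                     // request body is the raw chunk
  }

  sendChunk(0);
}
```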
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🔧 PR Test Plugin Available
A test plugin has been generated for this PR that includes the modified files.
Version:
📥 Installation Instructions: Install via Unraid Web UI
Alternative: Direct Download
Actionable comments posted: 2
🧹 Nitpick comments (2)
emhttp/plugins/dynamix/include/Control.php (1)
49-52: Consider strict comparison for the $start parameter.

The loose comparison $start==0 will match empty strings, which could cause unintended behavior if the parameter is missing or malformed. Consider using strict comparison or explicit type casting.

🔎 Proposed fix

- $start  = $_POST['start'] ?? $_GET['start'] ?? 0;
+ $start  = (int)($_POST['start'] ?? $_GET['start'] ?? 0);
  $cancel = $_POST['cancel'] ?? $_GET['cancel'] ?? 0;
  $local  = "/var/tmp/".basename($file).".tmp";
- if ($start==0) {
+ if ($start === 0) {

emhttp/plugins/dynamix/Browse.page (1)
940-945: Unused FileReader - dead code from previous implementation.

The reader variable and FileReader instantiation are no longer used since the upload now uses xhr.send(blob) directly. Consider removing this dead code.

🔎 Proposed fix

-var reader = {};
 var cancel = 0;
 function startUpload(files) {
   if (files.length == 0) return;
-  reader = new FileReader();
   window.onbeforeunload = function(e){return '';};
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- emhttp/plugins/dynamix/Browse.page
- emhttp/plugins/dynamix/include/Control.php
- emhttp/plugins/dynamix/include/local_prepend.php
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-06-18T17:09:35.579Z
Learnt from: Squidly271
Repo: unraid/webgui PR: 2264
File: emhttp/plugins/dynamix/include/.login.php:0-0
Timestamp: 2025-06-18T17:09:35.579Z
Learning: In the Unraid webGUI login system (emhttp/plugins/dynamix/include/.login.php), the cooldown timer restarting on page reload during the cooldown period is intentional behavior, not a bug. When a form is resubmitted during cooldown, the timer should restart to provide consistent user feedback.
Applied to files:
emhttp/plugins/dynamix/include/local_prepend.php
🔇 Additional comments (5)
emhttp/plugins/dynamix/include/local_prepend.php (1)
33-50: LGTM! Solid CSRF token validation improvements.

The changes properly implement OWASP-compliant header-based CSRF with:
- Null-safe token extraction from POST or the X-CSRF-Token header
- Timing-attack resistant comparison using hash_equals()
- Explicit null check before comparison to prevent type errors
This correctly supports both legacy form submissions and the new XHR binary uploads.
emhttp/plugins/dynamix/include/Control.php (1)
67-78: Good backward-compatible dual upload support.

The logic correctly prioritizes legacy base64 POST data while falling back to raw binary via php://input. The FILE_APPEND semantics are correct for chunked writes.

One consideration: for very large chunks (20MB), file_get_contents('php://input') loads the entire chunk into memory. This is acceptable given the server context, but ensure memory_limit in php.ini can accommodate this.

emhttp/plugins/dynamix/Browse.page (3)
896-901: Good XHR configuration with proper security headers.

The implementation correctly:
- Uses application/octet-stream for raw binary uploads
- Includes the CSRF token via header for authentication
- Sets a reasonable 10-minute timeout for large chunks
903-935: Robust error handling with proper status validation.

The implementation correctly handles:
- HTTP status codes outside the 200-299 range
- Server-side error responses ('stop', 'error')
- Network errors via onerror
- Timeout scenarios via ontimeout

All error paths properly clean up via stopUpload().
892-892: Good choice increasing chunk size to 20MB.

This aligns with the PR objective to reduce per-request overhead. Combined with removing Base64 encoding (~33% size reduction), this should significantly improve upload throughput. The 10-minute timeout per chunk (Line 901) is appropriately sized for this chunk size on slower connections.
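For reference, the ~33% figure follows directly from Base64 producing 4 output characters for every 3 input bytes. A quick standalone check (not code from the PR):

```js
// Illustration of the ~33% Base64 overhead the PR removes (not code from the PR).
// A Base64 string uses 4 output characters for every 3 input bytes.
function base64Length(byteLength) {
  return 4 * Math.ceil(byteLength / 3); // padding included
}

var chunk = 20 * 1024 * 1024;                 // 20MB raw chunk
console.log(base64Length(chunk));             // 27962028 bytes on the wire
console.log(base64Length(chunk) / chunk - 1); // ~0.333 -> about 33% larger
```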
- Fix variable shadowing in progress calculation (use 'completed' instead of reusing 'start')
- Add error handling when temp file doesn't exist (e.g., after server restart)
Actionable comments posted: 0
🧹 Nitpick comments (3)
emhttp/plugins/dynamix/Browse.page (2)
892-902: Consider the timeout duration for slow connections.

The 10-minute timeout (line 901) requires a minimum upload speed of ~34 kB/s for 20MB chunks. While reasonable for most scenarios, this may cause timeouts on very slow or congested networks.
💡 Optional: Make timeout configurable or increase buffer
Consider either increasing the timeout to 15-20 minutes for better tolerance of slow connections, or making it configurable based on chunk size:
- xhr.timeout = 600000; // 10 minutes per chunk
+ xhr.timeout = 1200000; // 20 minutes per chunk (more tolerant of slow connections)

Or calculate dynamically:

- xhr.timeout = 600000; // 10 minutes per chunk
+ xhr.timeout = Math.max(600000, slice / 1024 * 60); // ~1 minute per MB, minimum 10 minutes
930-936: Error handling is functional but could provide more specific user feedback.

Both onerror and ontimeout handlers call stopUpload(file.name, true), which shows a generic error message (line 877). Distinguishing between network errors and timeouts could improve user experience.

💡 Optional: Provide more specific error messages

Consider passing different error types to provide clearer feedback:

 xhr.onerror = function() {
-  stopUpload(file.name,true);
+  stopUpload(file.name, true, 'network');
 };
 xhr.ontimeout = function() {
-  stopUpload(file.name,true);
+  stopUpload(file.name, true, 'timeout');
 };

Then update stopUpload to show appropriate messages based on error type.

emhttp/plugins/dynamix/include/Control.php (1)
67-78: Excellent backward compatibility with dual upload support.

The dual upload mechanism (lines 71-78) elegantly maintains backward compatibility:
- Legacy clients can continue using Base64-encoded $_POST['data'] (line 74)
- New clients use raw binary via php://input (line 77)

This achieves the PR objective of "maintains backward compatibility with legacy Base64 uploads; no nginx changes required."
💡 Optional: Add chunk size validation
Consider adding validation to prevent excessively large chunks from filling disk space:
 } else {
   // New raw binary upload method (read from request body)
   $chunk = file_get_contents('php://input');
+  if (strlen($chunk) > 21000000) { // slightly more than 20MB to allow overhead
+    die('error');
+  }
 }

This provides defense-in-depth, though PHP's post_max_size may already limit this.
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- emhttp/plugins/dynamix/Browse.page
- emhttp/plugins/dynamix/include/Control.php
🔇 Additional comments (5)
emhttp/plugins/dynamix/Browse.page (2)
903-928: LGTM! Upload progress calculation correctly fixed.

The HTTP status validation (lines 904-907) properly handles error responses, and the progress calculation (lines 913-922) correctly uses the completed variable to track bytes from previous files, avoiding the variable shadowing issue flagged in the previous review.

The formula bytesTransferred = completed + next (line 919) accurately computes total bytes transferred across all files.
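As a sketch of the corrected math (illustrative, not the actual Browse.page code; reportProgress and its parameters are hypothetical, reusing the variable names discussed above):

```js
// Sketch of the corrected progress/speed math (illustrative only).
// 'completed' = bytes of files already finished, 'next' = end offset of the chunk
// just acknowledged in the current file, 'startTime' = Date.now() at upload start.
function reportProgress(completed, next, totalBytes, startTime) {
  var bytesTransferred = completed + next;                 // across all files
  var elapsedSeconds = (Date.now() - startTime) / 1000;
  var percent = Math.min(100, bytesTransferred / totalBytes * 100);
  var speedMBs = bytesTransferred / elapsedSeconds / (1024 * 1024);
  return percent.toFixed(1) + '% at ' + speedMBs.toFixed(1) + ' MB/s';
}

// Example: a 50MB file finished earlier, plus two 20MB chunks of the current file,
// one second into a 200MB total upload.
console.log(reportProgress(50 * 1024 * 1024, 2 * 20 * 1024 * 1024,
                           200 * 1024 * 1024, Date.now() - 1000));
```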
938-938: Efficient raw binary upload.

Sending the raw blob directly eliminates the ~33% Base64 encoding overhead, achieving the performance goals stated in the PR objectives.
emhttp/plugins/dynamix/include/Control.php (3)
45-52: Good backward-compatible parameter handling.

The GET fallback for the mode, file, start, and cancel parameters (lines 45, 47, 49-50) properly supports the new XMLHttpRequest-based upload flow where metadata is passed in the URL query string and file data in the request body.

The condition change from $_POST['start']==0 to $start==0 (line 52) correctly handles both POST and GET parameter sources.

Note: File paths in URL query parameters may appear in web server logs, but this is an acceptable trade-off for this chunked upload mechanism.
62-66: LGTM! Temp file error handling correctly implemented.

The check for missing temp file (lines 63-66) properly addresses the issue flagged in the previous review. If the temp file doesn't exist (e.g., after server restart or timeout), the code now returns an explicit error instead of failing silently.
79-83: Correct chunked upload implementation.

The FILE_APPEND flag (line 79) properly appends chunks to the target file, and error handling (lines 79-82) ensures cleanup if the write fails.
- Add dynamic timeout calculation based on chunk size (~1 minute per MB, minimum 10 minutes)
- Add specific error messages for timeout, network, and HTTP errors
- Add chunk size validation (max 21MB) for defense-in-depth
- Fix temp file cleanup for multi-file uploads
- Fix type casting for start/cancel parameters to work with strict comparison
- Add temp file cleanup on write errors
- Add detailed error codes for debugging (tempfile, chunksize, write)
- Remove unused FileReader variable (dead code cleanup)
- Use strict comparison for start parameter check
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
emhttp/plugins/dynamix/Browse.page (1)
952-961: Critical bug: Cancel variable not reset between uploads.

The global cancel variable is initialized to 0 at line 952 and set to 1 when the user clicks the cancel button (line 957). However, it's never reset back to 0 after an upload completes or is cancelled. This means:
- User starts upload (cancel = 0)
- User clicks "Cancel" (cancel = 1)
- Upload stops
- User starts new upload (cancel still = 1)
- First chunk includes cancel=1 in the URL (line 902)
- Server responds with 'stop' (checked at line 914)
- New upload immediately fails
Users must refresh the page to upload again after canceling.
🔎 Proposed fix
Reset cancel to 0 when starting a new upload:

 function startUpload(files) {
   if (files.length == 0) return;
+  cancel = 0;
   window.onbeforeunload = function(e){return '';};
   $('#dfm_uploadButton').val("_(Cancel)_").prop('onclick',null).off('click').click(function(){cancel=1;});
   dfm.running = true;
   const d = new Date();
   uploadFile(files,0,0,d.getTime());
 }
♻️ Duplicate comments (1)
emhttp/plugins/dynamix/Browse.page (1)
920-933: LGTM: Progress calculation correctly fixed.

The progress calculation now correctly computes bytesTransferred = completed + next, where completed tracks bytes from previous files and next is the current position in the current file. This properly addresses the past review comment about variable shadowing and incorrect offset calculation.
🧹 Nitpick comments (1)
emhttp/plugins/dynamix/Browse.page (1)
895-950: Consider aborting XHR immediately on cancel.

When the user clicks the cancel button, the current XHR request continues until completion before the next chunk checks the cancel flag. For 20MB chunks on slower connections, this could result in a noticeable delay before the upload actually stops.

💡 Suggested improvement
Store the XHR object and abort it when cancel is clicked:
var cancel = 0;
var currentXhr = null;

function uploadFile(files,index,start,time) {
  var file = files[index];
  var slice = 20971520;
  var next = start + slice;
  var blob = file.slice(start, next);
  currentXhr = new XMLHttpRequest();
  var url = '/webGui/include/Control.php?mode=upload&file=' + encodeURIComponent(dir + '/' + dfm_htmlspecialchars(file.name)) + '&start=' + start + '&cancel=' + cancel;
  currentXhr.open('POST', url, true);
  // ... rest of XHR setup
}

function startUpload(files) {
  if (files.length == 0) return;
  cancel = 0;
  window.onbeforeunload = function(e){return '';};
  $('#dfm_uploadButton').val("_(Cancel)_").prop('onclick',null).off('click').click(function(){
    cancel=1;
    if (currentXhr) currentXhr.abort();
  });
  dfm.running = true;
  const d = new Date();
  uploadFile(files,0,0,d.getTime());
}
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- emhttp/plugins/dynamix/Browse.page
- emhttp/plugins/dynamix/include/Control.php
🚧 Files skipped from review as they are similar to previous changes (1)
- emhttp/plugins/dynamix/include/Control.php
🔇 Additional comments (3)
emhttp/plugins/dynamix/Browse.page (3)
897-897: LGTM: Chunk size increase.

The 20MB chunk size (vs previous 2MB) combined with raw binary upload significantly reduces overhead and improves throughput, aligning with the PR objective to optimize upload performance.
906-906: Verify timeout calculation.

The timeout formula Math.max(600000, slice / 1024 * 60) calculates approximately 1 minute per MB with a 10-minute minimum. For a 20MB chunk, this yields ~20.5 minutes. While this seems reasonable per chunk, verify this doesn't cause issues on slower connections or with larger files.

For reference, the timeout calculation:
- 20971520 bytes ÷ 1024 = 20480 KB
- 20480 KB × 60 ms/KB = 1,228,800 ms ≈ 20.5 minutes per 20MB chunk
- Minimum timeout: 600,000 ms = 10 minutes
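The same arithmetic expressed as a standalone check (not part of the PR's code):

```js
// Standalone check of the review's timeout arithmetic (not part of the PR).
var slice = 20971520;                                 // 20MB chunk in bytes
var timeoutMs = Math.max(600000, slice / 1024 * 60);  // 60 ms per KB, 10-minute floor
console.log(timeoutMs);                               // 1228800 ms
console.log(timeoutMs / 60000);                       // ~20.5 minutes per 20MB chunk
```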
934-938: LGTM: Multi-file temp file cleanup.

The code correctly cleans up the temp file for each completed upload (lines 935-936) before starting the next file. This addresses the multi-file temp file cleanup mentioned in the PR objectives and prevents temp file accumulation.
- Store xhr reference and call abort() to immediately stop request
- Add onabort handler for clean UI reset without error dialog
- Check cancel flag in onerror to prevent error on user-initiated abort
- Reset cancel flag on each upload start
- Add HTTP status code to error messages for better debugging
- Move cancel check before file creation to prevent partial files
- Wait for cancel cleanup to complete before UI reset (callback)
Actionable comments posted: 0
🧹 Nitpick comments (1)
emhttp/plugins/dynamix/Browse.page (1)
943-953: Add error handling to the cancellation POST request.

If the cleanup POST fails (e.g., the network issue that prompted the cancel), stopUpload is never called, leaving the UI in a stuck state with the "Cancel" button and dfm.running = true.

🔎 Proposed fix

 xhr.onabort = function() {
   // User cancelled upload - trigger deletion via cancel=1 parameter
   $.post('/webGui/include/Control.php', {
     mode: 'upload',
     file: dir + '/' + dfm_htmlspecialchars(file.name),
     start: 0,
     cancel: 1
-  }, function() {
-    stopUpload(file.name, false);
-  });
+  })
+  .always(function() {
+    stopUpload(file.name, false);
+  });
 };

Using .always() ensures the UI resets regardless of whether the server cleanup succeeded.
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- emhttp/plugins/dynamix/Browse.page
- emhttp/plugins/dynamix/include/Control.php
🚧 Files skipped from review as they are similar to previous changes (1)
- emhttp/plugins/dynamix/include/Control.php
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-28T15:54:58.673Z
Learnt from: mgutt
Repo: unraid/webgui PR: 2496
File: emhttp/plugins/dynamix/Browse.page:901-906
Timestamp: 2025-12-28T15:54:58.673Z
Learning: In the unraid/webgui codebase, local_prepend.php is configured as auto_prepend_file in /etc/php.ini, which means it automatically executes before every PHP script (including Control.php). CSRF validation and other security checks are performed in local_prepend.php, so individual PHP files like Control.php don't need to implement their own CSRF validation.
Applied to files:
emhttp/plugins/dynamix/Browse.page
🔇 Additional comments (4)
emhttp/plugins/dynamix/Browse.page (4)
869-884: LGTM! Error handling now differentiates timeout, network, and HTTP errors.

The HTTP error extraction using errorType.indexOf('http') === 0 with substring(5) correctly parses the 'http:STATUS' format passed from line 912. This addresses the previous review feedback about missing HTTP error messages.
896-941: Binary upload implementation looks solid.

The rewrite correctly implements:
- Raw binary transfer via XHR with application/octet-stream
- CSRF token passed via header (validated by local_prepend.php per retrieved learnings)
- Dynamic timeout (~1 min/MB, minimum 10 min)
- Fixed progress calculation using completed + next
- HTTP status validation before processing the response
Minor observation: Line 938's async cleanup POST before starting the next file could theoretically race, but this is an acceptable performance tradeoff.
955-963: Error handlers correctly differentiate user cancellation from actual errors.

The cancel === 1 check in onerror prevents showing an error dialog when the user intentionally aborted, while ontimeout correctly routes to the timeout-specific error message.
968-982: Cancel/abort flow is well-implemented.

The pattern of storing the XHR reference globally, resetting the cancel flag on new uploads, and conditionally calling abort() ensures proper lifecycle management. The guard if (currentXhr) handles edge cases where the reference might be null.
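The handler split described in these comments can be sketched as follows. This is illustrative only; stopUpload, cancel, and filePath are the names discussed above, but the wrapper function and its parameters are assumptions, not the PR's actual code (jQuery is assumed, as in the rest of the page).

```js
// Sketch of the onload/ontimeout/onerror/onabort split described above (illustrative).
// xhr, file, filePath, the cancel flag and stopUpload are passed in rather than
// being globals as in the real code.
function attachUploadHandlers(xhr, file, filePath, getCancelFlag, stopUpload) {
  xhr.onload = function() {
    if (xhr.status < 200 || xhr.status >= 300) {
      stopUpload(file.name, true, 'http:' + xhr.status);   // HTTP-level failure
    }
  };
  xhr.ontimeout = function() {
    stopUpload(file.name, true, 'timeout');                // chunk took too long
  };
  xhr.onerror = function() {
    if (getCancelFlag() !== 1) {
      stopUpload(file.name, true, 'network');              // skip the dialog on user abort
    }
  };
  xhr.onabort = function() {
    // User pressed Cancel: ask the server to delete the partial file,
    // then reset the UI whether or not that cleanup request succeeded.
    $.post('/webGui/include/Control.php', {mode: 'upload', file: filePath, start: 0, cancel: 1})
      .always(function() { stopUpload(file.name, false); });
  };
}
```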
Consistent with existing unset of POST csrf_token field to minimize token exposure after validation completes.
Use .always() instead of success callback to ensure stopUpload() is called even if the cleanup POST request fails, preventing UI from getting stuck in cancel state.
Actionable comments posted: 0
🧹 Nitpick comments (2)
emhttp/plugins/dynamix/Browse.page (2)
904-904: Potential double-slash in file path if dir ends with /.

If dir already ends with a trailing slash, the constructed path will contain //. While most servers handle this gracefully, it's worth ensuring consistency.

🔎 Proposed fix

- var url = '/webGui/include/Control.php?mode=upload&file=' + encodeURIComponent(dir + '/' + dfm_htmlspecialchars(file.name)) + '&start=' + start + '&cancel=' + cancel;
+ var filePath = dir.replace(/\/+$/, '') + '/' + dfm_htmlspecialchars(file.name);
+ var url = '/webGui/include/Control.php?mode=upload&file=' + encodeURIComponent(filePath) + '&start=' + start + '&cancel=' + cancel;
969-970: Consider clearing the currentXhr reference after upload completion.

The global currentXhr retains a reference to the last XHR object after uploads complete. While not a memory leak (it's just one object), setting it to null in stopUpload would be cleaner.

🔎 Proposed fix

 function stopUpload(file,error,errorType) {
   window.onbeforeunload = null;
+  currentXhr = null;
   $.post('/webGui/include/Control.php',{mode:'stop',file:encodeURIComponent(dfm_htmlspecialchars(file.name))});
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
emhttp/plugins/dynamix/Browse.page
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-28T15:54:58.673Z
Learnt from: mgutt
Repo: unraid/webgui PR: 2496
File: emhttp/plugins/dynamix/Browse.page:901-906
Timestamp: 2025-12-28T15:54:58.673Z
Learning: In the unraid/webgui codebase, local_prepend.php is configured as auto_prepend_file in /etc/php.ini, which means it automatically executes before every PHP script (including Control.php). CSRF validation and other security checks are performed in local_prepend.php, so individual PHP files like Control.php don't need to implement their own CSRF validation.
Applied to files:
emhttp/plugins/dynamix/Browse.page
🔇 Additional comments (5)
emhttp/plugins/dynamix/Browse.page (5)
869-884: LGTM! Error handling improvements are well-implemented.

The extended stopUpload function now properly handles different error types (timeout, network, http) with user-friendly messages. The HTTP status extraction using errorType.substring(5) correctly parses the 'http:XXX' format passed from the XHR handler.
922-935: Progress calculation is now correct.

The previous variable shadowing issue has been fixed. Using completed for bytes from previous files and next (which represents start + slice) for the current file offset correctly calculates bytesTransferred.
943-954: Good use of .always() for robust cleanup.

Using .always() ensures stopUpload runs regardless of whether the cancellation POST succeeds or fails, preventing the UI from getting stuck in a cancel state. This matches the PR commit message intention.
898-908: Chunk size and timeout configuration look appropriate.

The 20MB chunk size with raw binary transfer aligns with PR objectives (~92 MB/s vs ~22 MB/s baseline). The timeout formula provides generous margins for slow connections while the 10-minute minimum prevents premature timeouts on smaller chunks.
976-979: Cancel button handling is clean and effective.

The cancel flow properly:
- Sets the cancel flag
- Aborts the current XHR (triggering onabort)
- The onabort handler then cleans up server-side and UI state

This approach correctly handles mid-transfer cancellation.
- Strip trailing slashes from dir before constructing file path
- Clear currentXhr reference in stopUpload for clean state
- Use consistent filePath variable in both upload and cancel
Actionable comments posted: 0
🧹 Nitpick comments (1)
emhttp/plugins/dynamix/Browse.page (1)
910-910: Consider clarifying the timeout calculation comment.

The timeout formula slice / 1024 * 60 correctly implements "~1 minute per MB" (60 ms per KB, which works out to roughly one minute per MB), yielding ~20 minutes for a 20MB chunk. However, the calculation path (bytes → KB → milliseconds) isn't immediately obvious.

🔎 Optional: More explicit timeout calculation

- xhr.timeout = Math.max(600000, slice / 1024 * 60); // ~1 minute per MB, minimum 10 minutes
+ xhr.timeout = Math.max(600000, (slice / (1024*1024)) * 60000); // 1 minute per MB, minimum 10 minutes

This makes the intent clearer: divide by 1024² to get MB, then multiply by 60,000 ms per MB.
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
emhttp/plugins/dynamix/Browse.page
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-28T15:54:58.673Z
Learnt from: mgutt
Repo: unraid/webgui PR: 2496
File: emhttp/plugins/dynamix/Browse.page:901-906
Timestamp: 2025-12-28T15:54:58.673Z
Learning: In the unraid/webgui codebase, local_prepend.php is configured as auto_prepend_file in /etc/php.ini, which means it automatically executes before every PHP script (including Control.php). CSRF validation and other security checks are performed in local_prepend.php, so individual PHP files like Control.php don't need to implement their own CSRF validation.
Applied to files:
emhttp/plugins/dynamix/Browse.page
🔇 Additional comments (4)
emhttp/plugins/dynamix/Browse.page (4)
869-885: LGTM! Enhanced error handling.

The function signature extension and error categorization (timeout, network, HTTP) provide clear user feedback. Clearing currentXhr ensures clean state on upload completion or cancellation.
924-942: LGTM! Progress calculation and cleanup fixed.

The progress calculation correctly uses completed (sum of previous files) plus next (current position in the current file) to compute total bytes transferred. This addresses the variable shadowing issue from the previous review.

The temp file cleanup (line 940) between file uploads ensures proper resource management when multiple files are uploaded sequentially.
971-985: LGTM! Clean cancellation mechanism.

The currentXhr global enables graceful abort via the Cancel button. Resetting cancel at the start of each upload ensures clean state. The button handler correctly triggers xhr.abort(), which flows to the onabort handler for cleanup.
945-956: The onabort cancellation flow is correctly implemented. Control.php properly handles both POST patterns:
- mode='upload' with cancel=1 (from onabort): Deletes both the temporary file and any partial actual file created during upload (Control.php lines 54-58).
- mode='stop' (from stopUpload): Deletes only the temporary file (Control.php lines 153-154).

The divergence is intentional: user cancellation needs to remove partially-created files, while normal completion only needs to clean up the temporary staging file. Both patterns are correctly handled.
Performance: Achieved 92 MB/s upload speed (vs ~22 MB/s baseline)
Security: OWASP-compliant header-based CSRF, strict null checking
Compatibility: Works with existing Base64 uploads, no nginx changes. The legacy path was kept because I was not sure whether anyone uses the File Manager's upload feature as an "API".
Before (wrong output, real upload was ~22 MB/s):

After:

Fixes #2495