
Conversation


@mgutt mgutt commented Dec 28, 2025

  • Remove Base64 encoding overhead (33% size reduction)
  • Increase chunk size from 2MB to 20MB
  • Implement raw binary upload via XMLHttpRequest
  • Add X-CSRF-Token header support in local_prepend.php
  • Fix upload speed calculation formula
  • Add HTTP status code validation and timeout handling
  • Maintain backward compatibility with legacy Base64 method
  • Use hash_equals() for timing-attack safe CSRF validation
  • Optimize cleanup when multiple files are uploaded (previously only the last created tmp file was deleted)

Performance: Achieved 92 MB/s upload speed (vs ~22 MB/s baseline)
Security: OWASP-compliant header-based CSRF, strict null checking
Compatibility: Works with existing Base64 uploads, no nginx changes. Legacy support was kept because I was not sure whether anyone uses the File Manager's upload feature as an "API".
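
For context, the server-side handling of each chunk conceptually reduces to the following (a simplified sketch, not the exact Control.php code):

  // Simplified sketch (illustrative only, not the shipped Control.php code)
  if (isset($_POST['data'])) {
    // legacy path: Base64-encoded chunk posted as form data
    $chunk = base64_decode($_POST['data']);
  } else {
    // new path: raw binary chunk read straight from the request body
    $chunk = file_get_contents('php://input');
  }
  // $chunk is then appended to the temporary file with FILE_APPEND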

Before (wrong output, real upload was ~22 MB/s):
[screenshot]

After:
[screenshot]

Fixes #2495

Summary by CodeRabbit

  • New Features
    • Cancel now reliably aborts active uploads; upload button toggles to cancel during transfers
    • Switch to direct binary chunked uploads (larger 20MB chunks) for faster transfers with fewer round-trips
  • Bug Fixes
    • Distinct error messages for timeouts, network, HTTP and cancel events
    • More accurate progress and speed reporting using real elapsed time
    • CSRF tokens accepted via header or form; stronger upload validation and safer cancellation handling



coderabbitai bot commented Dec 28, 2025

Walkthrough

Client upload was rewritten to send 20MB binary chunks with native XMLHttpRequest and CSRF header support; server now accepts both legacy base64 and raw php://input, enforces a ~21MB chunk guard, supports cancellation, and returns explicit chunk-level responses.

Changes

Frontend Upload Refactor (emhttp/plugins/dynamix/Browse.page):
Replaced FileReader+Base64 with per-chunk binary XMLHttpRequest (20MB chunks); added currentXhr; updated startUpload/stopUpload(file,error,errorType) signatures; implemented abort/onabort handling; progress and speed computed from elapsed time and accumulated bytes.

Backend Upload Handler (emhttp/plugins/dynamix/include/Control.php):
Added $_POST/$_GET fallbacks for mode, file, start, cancel; pre-create cancellation that removes partial files; dual-path input support (legacy $_POST['data'] base64 and new php://input raw); enforces chunk-size guard (~21MB); improved tempfile and write error handling and explicit error replies.

CSRF Token Validation (emhttp/plugins/dynamix/include/local_prepend.php):
Accept CSRF token from $_POST['csrf_token'] or HTTP_X_CSRF_TOKEN header; null-safe checks and hash_equals() used for comparison; retains termination and token-unset behavior on failure.

Sequence Diagram

sequenceDiagram
    participant Browser as Client (Browser)
    participant XHR as XMLHttpRequest
    participant Server as Control.php
    participant FS as File System

    rect rgba(220,235,255,0.6)
    Note over Browser,XHR: startUpload() resets cancel, stores currentXhr
    Browser->>XHR: open POST (chunk binary)\nHeaders: Content-Type: application/octet-stream, X-CSRF-Token
    end

    loop Per chunk (20MB)
        XHR->>Server: POST (php://input)
        Server->>Server: validate CSRF, mode, file, start/cancel\ncheck chunk size
        alt cancel requested
            Server->>FS: delete temp/partial file
            Server-->>XHR: "stop"
        else write chunk
            Server->>FS: append chunk to file
            alt write success
                Server-->>XHR: "ok"
            else write failure
                Server->>FS: delete temp file
                Server-->>XHR: "error:write"
            end
        end
        XHR-->>Browser: response ("ok"/"stop"/"error:*")
        Browser->>Browser: update progress/speed\nadvance to next chunk or stop
    end

    rect rgba(220,255,220,0.6)
    Note over Browser,Server: Completion/cleanup
    Browser->>Browser: finalize UI state
    Server->>FS: final file present or cleaned up
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 I hop in twenty-megabyte strides,

Binary crumbs across the tides.
A CSRF hat keeps traffic tight,
Abort the leap — I end the flight.
Files stitch up, and bytes take flight.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (4 passed)
  • Title check: ✅ Passed. The title accurately summarizes the main changes: optimizing upload performance through chunk size increase and binary transfer, and fixing the upload speed calculation formula.
  • Linked Issues check: ✅ Passed. The PR fully addresses issue #2495 by fixing upload speed calculation, increasing chunk size from 2MB to 20MB, replacing Base64 with binary transfer, implementing CSRF header support, and reducing HTTP request overhead.
  • Out of Scope Changes check: ✅ Passed. All changes are scoped to addressing issue #2495: upload speed display, chunk optimization, binary transfer, CSRF validation, and temporary file cleanup are all directly related objectives.
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.


github-actions bot commented Dec 28, 2025

🔧 PR Test Plugin Available

A test plugin has been generated for this PR that includes the modified files.

Version: 2025.12.29.0835
Build: View Workflow Run

📥 Installation Instructions:

Install via Unraid Web UI:

  1. Go to Plugins → Install Plugin
  2. Copy and paste this URL:
https://preview.dl.unraid.net/pr-plugins/pr-2496/webgui-pr-2496.plg
  3. Click Install

Alternative: Direct Download

⚠️ Important Notes:

  • Testing only: This plugin is for testing PR changes
  • Backup included: Original files are automatically backed up
  • Easy removal: Files are restored when plugin is removed
  • Conflicts: Remove this plugin before installing production updates

📝 Modified Files:

emhttp/plugins/dynamix/Browse.page
emhttp/plugins/dynamix/include/Control.php
emhttp/plugins/dynamix/include/local_prepend.php

🔄 To Remove:

Navigate to Plugins → Installed Plugins and remove webgui-pr-2496, or run:

plugin remove webgui-pr-2496

🤖 This comment is automatically generated and will be updated with each new push to this PR.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
emhttp/plugins/dynamix/include/Control.php (1)

49-52: Consider strict comparison for $start parameter.

The loose comparison $start==0 will match empty strings, which could cause unintended behavior if the parameter is missing or malformed. Consider using strict comparison or explicit type casting.

🔎 Proposed fix
-  $start = $_POST['start'] ?? $_GET['start'] ?? 0;
+  $start = (int)($_POST['start'] ?? $_GET['start'] ?? 0);
   $cancel = $_POST['cancel'] ?? $_GET['cancel'] ?? 0;
   $local = "/var/tmp/".basename($file).".tmp";
-  if ($start==0) {
+  if ($start === 0) {
emhttp/plugins/dynamix/Browse.page (1)

940-945: Unused FileReader - dead code from previous implementation.

The reader variable and FileReader instantiation are no longer used since the upload now uses xhr.send(blob) directly. Consider removing this dead code.

🔎 Proposed fix
-var reader = {};
 var cancel = 0;

 function startUpload(files) {
   if (files.length == 0) return;
-  reader = new FileReader();
   window.onbeforeunload = function(e){return '';};
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 987fa42 and fba4112.

📒 Files selected for processing (3)
  • emhttp/plugins/dynamix/Browse.page
  • emhttp/plugins/dynamix/include/Control.php
  • emhttp/plugins/dynamix/include/local_prepend.php
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-06-18T17:09:35.579Z
Learnt from: Squidly271
Repo: unraid/webgui PR: 2264
File: emhttp/plugins/dynamix/include/.login.php:0-0
Timestamp: 2025-06-18T17:09:35.579Z
Learning: In the Unraid webGUI login system (emhttp/plugins/dynamix/include/.login.php), the cooldown timer restarting on page reload during the cooldown period is intentional behavior, not a bug. When a form is resubmitted during cooldown, the timer should restart to provide consistent user feedback.

Applied to files:

  • emhttp/plugins/dynamix/include/local_prepend.php
🔇 Additional comments (5)
emhttp/plugins/dynamix/include/local_prepend.php (1)

33-50: LGTM! Solid CSRF token validation improvements.

The changes properly implement OWASP-compliant header-based CSRF with:

  • Null-safe token extraction from POST or X-CSRF-Token header
  • Timing-attack resistant comparison using hash_equals()
  • Explicit null check before comparison to prevent type errors

This correctly supports both legacy form submissions and the new XHR binary uploads.
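
For reference, the described check amounts to roughly the following (an illustrative sketch; $expectedToken stands in for however local_prepend.php loads the server-side token, and the failure handling is abbreviated):

  $token = $_POST['csrf_token'] ?? $_SERVER['HTTP_X_CSRF_TOKEN'] ?? null;
  if ($token === null || !hash_equals($expectedToken, $token)) {
    // reject: the real code also unsets the token and terminates on failure
    exit;
  }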

emhttp/plugins/dynamix/include/Control.php (1)

67-78: Good backward-compatible dual upload support.

The logic correctly prioritizes legacy base64 POST data while falling back to raw binary via php://input. The FILE_APPEND semantics are correct for chunked writes.

One consideration: for very large chunks (20MB), file_get_contents('php://input') loads the entire chunk into memory. This is acceptable given the server context, but ensure memory_limit in php.ini can accommodate this.
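
If memory headroom ever becomes a concern, the raw-input path could be streamed instead of buffered; a rough sketch of that option (not part of this PR):

  // Stream the request body into the temp file without holding the whole chunk in memory
  $in  = fopen('php://input', 'rb');
  $out = fopen($local, 'ab');   // $local is the temp file path already used by Control.php
  stream_copy_to_stream($in, $out);
  fclose($in);
  fclose($out);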

emhttp/plugins/dynamix/Browse.page (3)

896-901: Good XHR configuration with proper security headers.

The implementation correctly:

  • Uses application/octet-stream for raw binary uploads
  • Includes the CSRF token via header for authentication
  • Sets a reasonable 10-minute timeout for large chunks

903-935: Robust error handling with proper status validation.

The implementation correctly handles:

  • HTTP status codes outside 200-299 range
  • Server-side error responses ('stop', 'error')
  • Network errors via onerror
  • Timeout scenarios via ontimeout

All error paths properly clean up via stopUpload().


892-892: Good choice increasing chunk size to 20MB.

This aligns with the PR objective to reduce per-request overhead. Combined with removing Base64 encoding (~33% size reduction), this should significantly improve upload throughput. The 10-minute timeout per chunk (Line 901) is appropriately sized for this chunk size on slower connections.

- Fix variable shadowing in progress calculation (use 'completed' instead of reusing 'start')
- Add error handling when temp file doesn't exist (e.g., after server restart)

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
emhttp/plugins/dynamix/Browse.page (2)

892-902: Consider the timeout duration for slow connections.

The 10-minute timeout (line 901) requires a minimum upload speed of ~34 kB/s for 20MB chunks. While reasonable for most scenarios, this may cause timeouts on very slow or congested networks.

💡 Optional: Make timeout configurable or increase buffer

Consider either increasing the timeout to 15-20 minutes for better tolerance of slow connections, or making it configurable based on chunk size:

- xhr.timeout = 600000; // 10 minutes per chunk
+ xhr.timeout = 1200000; // 20 minutes per chunk (more tolerant of slow connections)

Or calculate dynamically:

- xhr.timeout = 600000; // 10 minutes per chunk
+ xhr.timeout = Math.max(600000, slice / 1024 * 60); // ~1 minute per MB, minimum 10 minutes

930-936: Error handling is functional but could provide more specific user feedback.

Both onerror and ontimeout handlers call stopUpload(file.name, true), which shows a generic error message (line 877). Distinguishing between network errors and timeouts could improve user experience.

💡 Optional: Provide more specific error messages

Consider passing different error types to provide clearer feedback:

  xhr.onerror = function() {
-   stopUpload(file.name,true);
+   stopUpload(file.name, true, 'network');
  };
  
  xhr.ontimeout = function() {
-   stopUpload(file.name,true);
+   stopUpload(file.name, true, 'timeout');
  };

Then update stopUpload to show appropriate messages based on error type.

emhttp/plugins/dynamix/include/Control.php (1)

67-78: Excellent backward compatibility with dual upload support.

The dual upload mechanism (lines 71-78) elegantly maintains backward compatibility:

  • Legacy clients can continue using Base64-encoded $_POST['data'] (line 74)
  • New clients use raw binary via php://input (line 77)

This achieves the PR objective of "maintains backward compatibility with legacy Base64 uploads; no nginx changes required."

💡 Optional: Add chunk size validation

Consider adding validation to prevent excessively large chunks from filling disk space:

  } else {
    // New raw binary upload method (read from request body)
    $chunk = file_get_contents('php://input');
+   if (strlen($chunk) > 21000000) { // slightly more than 20MB to allow overhead
+     die('error');
+   }
  }

This provides defense-in-depth, though PHP's post_max_size may already limit this.

📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fba4112 and 1c6bfac.

📒 Files selected for processing (2)
  • emhttp/plugins/dynamix/Browse.page
  • emhttp/plugins/dynamix/include/Control.php
🔇 Additional comments (5)
emhttp/plugins/dynamix/Browse.page (2)

903-928: LGTM! Upload progress calculation correctly fixed.

The HTTP status validation (lines 904-907) properly handles error responses, and the progress calculation (lines 913-922) correctly uses the completed variable to track bytes from previous files, avoiding the variable shadowing issue flagged in the previous review.

The formula bytesTransferred = completed + next (line 919) accurately computes total bytes transferred across all files.
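
Spelled out, with $t_0$ the start time passed into uploadFile, the reporting described here is:

$$\text{bytesTransferred} = \text{completed} + \text{next}, \qquad \text{speed} = \frac{\text{bytesTransferred}}{t_{\text{now}} - t_0}$$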


938-938: Efficient raw binary upload.

Sending the raw blob directly eliminates the ~33% Base64 encoding overhead, achieving the performance goals stated in the PR objectives.

emhttp/plugins/dynamix/include/Control.php (3)

45-52: Good backward-compatible parameter handling.

The GET fallback for mode, file, start, and cancel parameters (lines 45, 47, 49-50) properly supports the new XMLHttpRequest-based upload flow where metadata is passed in the URL query string and file data in the request body.

The condition change from $_POST['start']==0 to $start==0 (line 52) correctly handles both POST and GET parameter sources.

Note: File paths in URL query parameters may appear in web server logs, but this is an acceptable trade-off for this chunked upload mechanism.
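
Concretely, the fallback pattern being described is essentially the following sketch (defaults are illustrative; the casts reflect the later strict-comparison commit):

  $mode   = $_POST['mode']   ?? $_GET['mode']   ?? '';
  $file   = $_POST['file']   ?? $_GET['file']   ?? '';
  $start  = (int)($_POST['start']  ?? $_GET['start']  ?? 0);
  $cancel = (int)($_POST['cancel'] ?? $_GET['cancel'] ?? 0);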


62-66: LGTM! Temp file error handling correctly implemented.

The check for missing temp file (lines 63-66) properly addresses the issue flagged in the previous review. If the temp file doesn't exist (e.g., after server restart or timeout), the code now returns an explicit error instead of failing silently.


79-83: Correct chunked upload implementation.

The FILE_APPEND flag (line 79) properly appends chunks to the target file, and error handling (lines 79-82) ensures cleanup if the write fails.
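
Put together, the write path these comments describe looks roughly like this (illustrative sketch; the error codes come from the commit notes, and the exact ordering of checks may differ):

  if (!file_exists($local)) die('error:tempfile');        // temp file missing, e.g. after a server restart
  if (strlen($chunk) > 21000000) die('error:chunksize');  // guard slightly above the 20MB chunk size
  if (file_put_contents($local, $chunk, FILE_APPEND) === false) {
    @unlink($local);                                      // clean up the partial temp file
    die('error:write');
  }
  echo 'ok';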

- Add dynamic timeout calculation based on chunk size (~1 minute per MB, minimum 10 minutes)
- Add specific error messages for timeout, network, and HTTP errors
- Add chunk size validation (max 21MB) for defense-in-depth
- Fix temp file cleanup for multi-file uploads
- Fix type casting for start/cancel parameters to work with strict comparison
- Add temp file cleanup on write errors
- Add detailed error codes for debugging (tempfile, chunksize, write)
- Remove unused FileReader variable (dead code cleanup)
- Use strict comparison for start parameter check

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
emhttp/plugins/dynamix/Browse.page (1)

952-961: Critical bug: Cancel variable not reset between uploads.

The global cancel variable is initialized to 0 at line 952 and set to 1 when the user clicks the cancel button (line 957). However, it's never reset back to 0 after an upload completes or is cancelled. This means:

  1. User starts upload (cancel = 0)
  2. User clicks "Cancel" (cancel = 1)
  3. Upload stops
  4. User starts new upload (cancel still = 1)
  5. First chunk includes cancel=1 in URL (line 902)
  6. Server responds with 'stop' (checked at line 914)
  7. New upload immediately fails

Users must refresh the page to upload again after canceling.

🔎 Proposed fix

Reset cancel to 0 when starting a new upload:

 function startUpload(files) {
   if (files.length == 0) return;
+  cancel = 0;
   window.onbeforeunload = function(e){return '';};
   $('#dfm_uploadButton').val("_(Cancel)_").prop('onclick',null).off('click').click(function(){cancel=1;});
   dfm.running = true;
   const d = new Date();
   uploadFile(files,0,0,d.getTime());
 }
♻️ Duplicate comments (1)
emhttp/plugins/dynamix/Browse.page (1)

920-933: LGTM: Progress calculation correctly fixed.

The progress calculation now correctly computes bytesTransferred = completed + next, where completed tracks bytes from previous files and next is the current position in the current file. This properly addresses the past review comment about variable shadowing and incorrect offset calculation.

🧹 Nitpick comments (1)
emhttp/plugins/dynamix/Browse.page (1)

895-950: Consider aborting XHR immediately on cancel.

When the user clicks the cancel button, the current XHR request continues until completion before the next chunk checks the cancel flag. For 20MB chunks on slower connections, this could result in a noticeable delay before the upload actually stops.

💡 Suggested improvement

Store the XHR object and abort it when cancel is clicked:

var cancel = 0;
var currentXhr = null;

function uploadFile(files,index,start,time) {
  var file = files[index];
  var slice = 20971520;
  var next = start + slice;
  var blob = file.slice(start, next);
  
  currentXhr = new XMLHttpRequest();
  var url = '/webGui/include/Control.php?mode=upload&file=' + encodeURIComponent(dir + '/' + dfm_htmlspecialchars(file.name)) + '&start=' + start + '&cancel=' + cancel;
  currentXhr.open('POST', url, true);
  // ... rest of XHR setup
}

function startUpload(files) {
  if (files.length == 0) return;
  cancel = 0;
  window.onbeforeunload = function(e){return '';};
  $('#dfm_uploadButton').val("_(Cancel)_").prop('onclick',null).off('click').click(function(){
    cancel=1;
    if (currentXhr) currentXhr.abort();
  });
  dfm.running = true;
  const d = new Date();
  uploadFile(files,0,0,d.getTime());
}
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1c6bfac and ebb4ad4.

📒 Files selected for processing (2)
  • emhttp/plugins/dynamix/Browse.page
  • emhttp/plugins/dynamix/include/Control.php
🚧 Files skipped from review as they are similar to previous changes (1)
  • emhttp/plugins/dynamix/include/Control.php
🔇 Additional comments (3)
emhttp/plugins/dynamix/Browse.page (3)

897-897: LGTM: Chunk size increase.

The 20MB chunk size (vs previous 2MB) combined with raw binary upload significantly reduces overhead and improves throughput, aligning with the PR objective to optimize upload performance.


906-906: Verify timeout calculation.

The timeout formula Math.max(600000, slice / 1024 * 60) calculates approximately 1 minute per MB with a 10-minute minimum. For a 20MB chunk, this yields ~20.5 minutes. While this seems reasonable per chunk, verify this doesn't cause issues on slower connections or with larger files.

For reference, the timeout calculation:

  • 20971520 bytes ÷ 1024 = 20480 KB
  • 20480 KB × 60 ms/KB = 1,228,800 ms ≈ 20.5 minutes per 20MB chunk
  • Minimum timeout: 600,000 ms = 10 minutes

934-938: LGTM: Multi-file temp file cleanup.

The code correctly cleans up the temp file for each completed upload (lines 935-936) before starting the next file. This addresses the multi-file temp file cleanup mentioned in the PR objectives and prevents temp file accumulation.

- Store xhr reference and call abort() to immediately stop request
- Add onabort handler for clean UI reset without error dialog
- Check cancel flag in onerror to prevent error on user-initiated abort
- Reset cancel flag on each upload start
- Add HTTP status code to error messages for better debugging
- Move cancel check before file creation to prevent partial files
- Wait for cancel cleanup to complete before UI reset (callback)

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
emhttp/plugins/dynamix/Browse.page (1)

943-953: Add error handling to the cancellation POST request.

If the cleanup POST fails (e.g., network issue that prompted the cancel), stopUpload is never called—leaving the UI in a stuck state with the "Cancel" button and dfm.running = true.

🔎 Proposed fix
   xhr.onabort = function() {
     // User cancelled upload - trigger deletion via cancel=1 parameter
     $.post('/webGui/include/Control.php', {
       mode: 'upload',
       file: dir + '/' + dfm_htmlspecialchars(file.name),
       start: 0,
       cancel: 1
-    }, function() {
-      stopUpload(file.name, false);
-    });
+    })
+    .always(function() {
+      stopUpload(file.name, false);
+    });
   };

Using .always() ensures the UI resets regardless of whether the server cleanup succeeded.

📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1c6bfac and 76937d0.

📒 Files selected for processing (2)
  • emhttp/plugins/dynamix/Browse.page
  • emhttp/plugins/dynamix/include/Control.php
🚧 Files skipped from review as they are similar to previous changes (1)
  • emhttp/plugins/dynamix/include/Control.php
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-28T15:54:58.673Z
Learnt from: mgutt
Repo: unraid/webgui PR: 2496
File: emhttp/plugins/dynamix/Browse.page:901-906
Timestamp: 2025-12-28T15:54:58.673Z
Learning: In the unraid/webgui codebase, local_prepend.php is configured as auto_prepend_file in /etc/php.ini, which means it automatically executes before every PHP script (including Control.php). CSRF validation and other security checks are performed in local_prepend.php, so individual PHP files like Control.php don't need to implement their own CSRF validation.

Applied to files:

  • emhttp/plugins/dynamix/Browse.page
🔇 Additional comments (4)
emhttp/plugins/dynamix/Browse.page (4)

869-884: LGTM! Error handling now differentiates timeout, network, and HTTP errors.

The HTTP error extraction using errorType.indexOf('http') === 0 with substring(5) correctly parses the 'http:STATUS' format passed from line 912. This addresses the previous review feedback about missing HTTP error messages.


896-941: Binary upload implementation looks solid.

The rewrite correctly implements:

  • Raw binary transfer via XHR with application/octet-stream
  • CSRF token passed via header (validated by local_prepend.php per retrieved learnings)
  • Dynamic timeout (~1 min/MB, minimum 10 min)
  • Fixed progress calculation using completed + next
  • HTTP status validation before processing response

Minor observation: Line 938's async cleanup POST before starting the next file could theoretically race, but this is an acceptable performance tradeoff.


955-963: Error handlers correctly differentiate user cancellation from actual errors.

The cancel === 1 check in onerror prevents showing an error dialog when the user intentionally aborted, while ontimeout correctly routes to the timeout-specific error message.


968-982: Cancel/abort flow is well-implemented.

The pattern of storing the XHR reference globally, resetting the cancel flag on new uploads, and conditionally calling abort() ensures proper lifecycle management. The guard if (currentXhr) handles edge cases where the reference might be null.

Consistent with existing unset of POST csrf_token field to
minimize token exposure after validation completes.
Use .always() instead of success callback to ensure stopUpload()
is called even if the cleanup POST request fails, preventing UI
from getting stuck in cancel state.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
emhttp/plugins/dynamix/Browse.page (2)

904-904: Potential double-slash in file path if dir ends with /.

If dir already ends with a trailing slash, the constructed path will contain //. While most servers handle this gracefully, it's worth ensuring consistency.

🔎 Proposed fix
-  var url = '/webGui/include/Control.php?mode=upload&file=' + encodeURIComponent(dir + '/' + dfm_htmlspecialchars(file.name)) + '&start=' + start + '&cancel=' + cancel;
+  var filePath = dir.replace(/\/+$/, '') + '/' + dfm_htmlspecialchars(file.name);
+  var url = '/webGui/include/Control.php?mode=upload&file=' + encodeURIComponent(filePath) + '&start=' + start + '&cancel=' + cancel;

969-970: Consider clearing currentXhr reference after upload completion.

The global currentXhr retains a reference to the last XHR object after uploads complete. While not a memory leak (it's just one object), setting it to null in stopUpload would be cleaner.

🔎 Proposed fix
 function stopUpload(file,error,errorType) {
   window.onbeforeunload = null;
+  currentXhr = null;
   $.post('/webGui/include/Control.php',{mode:'stop',file:encodeURIComponent(dfm_htmlspecialchars(file.name))});
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0769780 and 3b0cfe7.

📒 Files selected for processing (1)
  • emhttp/plugins/dynamix/Browse.page
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-28T15:54:58.673Z
Learnt from: mgutt
Repo: unraid/webgui PR: 2496
File: emhttp/plugins/dynamix/Browse.page:901-906
Timestamp: 2025-12-28T15:54:58.673Z
Learning: In the unraid/webgui codebase, local_prepend.php is configured as auto_prepend_file in /etc/php.ini, which means it automatically executes before every PHP script (including Control.php). CSRF validation and other security checks are performed in local_prepend.php, so individual PHP files like Control.php don't need to implement their own CSRF validation.

Applied to files:

  • emhttp/plugins/dynamix/Browse.page
🔇 Additional comments (5)
emhttp/plugins/dynamix/Browse.page (5)

869-884: LGTM! Error handling improvements are well-implemented.

The extended stopUpload function now properly handles different error types (timeout, network, http) with user-friendly messages. The HTTP status extraction using errorType.substring(5) correctly parses the 'http:XXX' format passed from the XHR handler.


922-935: Progress calculation is now correct.

The previous variable shadowing issue has been fixed. Using completed for bytes from previous files and next (which represents start + slice) for the current file offset correctly calculates bytesTransferred.


943-954: Good use of .always() for robust cleanup.

Using .always() ensures stopUpload runs regardless of whether the cancellation POST succeeds or fails, preventing the UI from getting stuck in a cancel state. This matches the PR commit message intention.


898-908: Chunk size and timeout configuration look appropriate.

The 20MB chunk size with raw binary transfer aligns with PR objectives (~92 MB/s vs ~22 MB/s baseline). The timeout formula provides generous margins for slow connections while the 10-minute minimum prevents premature timeouts on smaller chunks.


976-979: Cancel button handling is clean and effective.

The cancel flow properly:

  1. Sets the cancel flag
  2. Aborts the current XHR (triggering onabort)
  3. The onabort handler then cleans up server-side and UI state

This approach correctly handles mid-transfer cancellation.

- Strip trailing slashes from dir before constructing file path
- Clear currentXhr reference in stopUpload for clean state
- Use consistent filePath variable in both upload and cancel

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
emhttp/plugins/dynamix/Browse.page (1)

910-910: Consider clarifying the timeout calculation comment.

The timeout formula slice / 1024 * 60 works out to 60 ms per KB, i.e. roughly one minute per MB, yielding ~20 minutes for a 20MB chunk. However, the calculation path (bytes → KB → milliseconds) isn't immediately obvious.

🔎 Optional: More explicit timeout calculation
-  xhr.timeout = Math.max(600000, slice / 1024 * 60); // ~1 minute per MB, minimum 10 minutes
+  xhr.timeout = Math.max(600000, (slice / (1024*1024)) * 60000); // 1 minute per MB, minimum 10 minutes

This makes the intent clearer: divide by 1024² to get MB, then multiply by 60,000 ms (one minute per MB) to get the timeout in milliseconds.

📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3b0cfe7 and f23d234.

📒 Files selected for processing (1)
  • emhttp/plugins/dynamix/Browse.page
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-28T15:54:58.673Z
Learnt from: mgutt
Repo: unraid/webgui PR: 2496
File: emhttp/plugins/dynamix/Browse.page:901-906
Timestamp: 2025-12-28T15:54:58.673Z
Learning: In the unraid/webgui codebase, local_prepend.php is configured as auto_prepend_file in /etc/php.ini, which means it automatically executes before every PHP script (including Control.php). CSRF validation and other security checks are performed in local_prepend.php, so individual PHP files like Control.php don't need to implement their own CSRF validation.

Applied to files:

  • emhttp/plugins/dynamix/Browse.page
🔇 Additional comments (4)
emhttp/plugins/dynamix/Browse.page (4)

869-885: LGTM! Enhanced error handling.

The function signature extension and error categorization (timeout, network, HTTP) provide clear user feedback. Clearing currentXhr ensures clean state on upload completion or cancellation.


924-942: LGTM! Progress calculation and cleanup fixed.

The progress calculation correctly uses completed (sum of previous files) plus next (current position in current file) to compute total bytes transferred. This addresses the variable shadowing issue from the previous review.

The temp file cleanup (line 940) between file uploads ensures proper resource management when multiple files are uploaded sequentially.


971-985: LGTM! Clean cancellation mechanism.

The currentXhr global enables graceful abort via the Cancel button. Resetting cancel at the start of each upload ensures clean state. The button handler correctly triggers xhr.abort(), which flows to the onabort handler for cleanup.


945-956: The onabort cancellation flow is correctly implemented. Control.php properly handles both POST patterns:

  • mode='upload' with cancel=1 (from onabort): Deletes both the temporary file and any partial actual file created during upload (Control.php lines 54-58).
  • mode='stop' (from stopUpload): Deletes only the temporary file (Control.php lines 153-154).

The divergence is intentional: user cancellation needs to remove partially-created files, while normal completion only needs to clean up the temporary staging file. Both patterns are correctly handled.
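
In other words, the cancel branch reduces to roughly the following sketch (illustrative; the exact Control.php code may differ):

  if ($cancel) {
    @unlink($local);  // temporary staging file, e.g. /var/tmp/<name>.tmp
    @unlink($file);   // partially written target file
    die('stop');      // the client treats a 'stop' response as cancellation
  }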

Development

Successfully merging this pull request may close these issues.

File manager: Wrong estimated upload speed / upload is slower than possible
