
333 feat hr prioritization #347

Open
Feramance wants to merge 32 commits into master from
333-feat-hr-prioritization

Conversation

Owner

@Feramance Feramance commented Mar 24, 2026

Summary

Added a config option to sort torrents by tracker priority in qBittorrent.

Testing

Describe how you validated this change.

  • Added or updated automated tests
  • Performed manual verification (describe below)
  • Not applicable (explain why)

Manual test notes:

Checklist

  • Linked the related issue or discussed scope with maintainers
  • Ran linters / formatting tools relevant to the touched code
  • Updated documentation or release notes where it makes sense
  • Confirmed there are no sensitive secrets or credentials committed

Additional Notes

Optional context for reviewers (follow-up work, screenshots, rollout plan, etc.).


Note

Medium Risk
Adds periodic qBittorrent queue reordering and a new pyarr compatibility shim, which can affect torrent scheduling behavior and Arr API interactions across versions. Risk is moderate due to increased API calls/ordering logic and new error-handling paths, but changes are gated by config and mostly additive.

Overview
Introduces an optional per-tracker SortTorrents flag that, when enabled on any configured tracker, reorders torrents in each qBittorrent instance every processing cycle so higher-priority trackers float to the top (seeding/downloading queues handled separately, with cache + no-op detection to reduce churn).
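The ordering described above can be sketched as follows (a simplified illustration; the function and field names here are hypothetical, not qBitrr's actual internals):

```python
# Sketch of tracker-priority ordering: torrents whose tracker has the highest
# configured Priority float to the top; torrents with no configured tracker
# sink to the bottom via a low sentinel priority.

UNKNOWN_PRIORITY = -100  # assumed sentinel for unconfigured trackers


def tracker_priority(torrent: dict, configured: dict[str, int]) -> int:
    """Return the highest configured Priority among this torrent's trackers."""
    priorities = [configured[url] for url in torrent["trackers"] if url in configured]
    return max(priorities, default=UNKNOWN_PRIORITY)


def sort_queue(torrents: list[dict], configured: dict[str, int]) -> list[dict]:
    """Order torrents by tracker priority, highest first (stable sort)."""
    return sorted(torrents, key=lambda t: tracker_priority(t, configured), reverse=True)
```

In the real implementation the sorted order is then applied to qBittorrent's queue (which is why Torrent Queuing must be enabled), rather than returned as a list.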

Adds qBitrr.pyarr_compat to support both pyarr v5 and v6 client APIs/exceptions, updates backend/WebUI imports and connection-error handling accordingly, and pins pyarr to >=5.2,<7.

Documentation updates include new SortTorrents configuration guidance, dev notes on pyarr version support, a SECURITY.md policy, and an extra startup warning when WebUI binds publicly with auth disabled.
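The public-bind check behind that startup warning amounts to something like this (a sketch under assumed semantics; qBitrr's actual check may differ):

```python
import ipaddress


def binds_publicly(host: str) -> bool:
    """True if the configured bind address exposes the WebUI beyond localhost."""
    if host in ("0.0.0.0", "::"):  # wildcard binds listen on all interfaces
        return True
    try:
        return not ipaddress.ip_address(host).is_loopback
    except ValueError:
        # Not an IP literal; treat hostnames other than "localhost" as public.
        return host != "localhost"
```

When this returns true and authentication is disabled, warning at startup is cheap insurance against accidentally exposing an unauthenticated WebUI.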

Written by Cursor Bugbot for commit 521da1f. This will update automatically on new commits.

@Feramance Feramance linked an issue Mar 24, 2026 that may be closed by this pull request
Contributor

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

Autofix Details

Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: PlaceHolderArr missing cache clear and sort logic
    • Added tracker cache clearing and conditional tracker-priority sorting to PlaceHolderArr.process_torrents to match Arr behavior each cycle.
Preview (ab4f099322)
diff --git a/docs/configuration/seeding.md b/docs/configuration/seeding.md
--- a/docs/configuration/seeding.md
+++ b/docs/configuration/seeding.md
@@ -51,6 +51,32 @@
 
 ---
 
+### SortTorrents
+
+**Type:** Boolean (per-tracker)
+**Default:** `false`
+
+Set on individual tracker entries in `[[qBit.Trackers]]` or `[[<Arr>.Torrent.Trackers]]`, **right under [Priority](#priority)**.
+
+When `true` on **any** configured tracker, qBitrr reorders torrents in the qBittorrent queue each processing cycle so that torrents are at the top of the queue in order of their **tracker priority** (highest first). Torrents whose trackers are not in your configured trackers list are assigned the lowest priority and appear at the bottom.
+
+**Requirements:**
+
+- **qBittorrent Torrent Queuing** must be enabled (Options → BitTorrent → Torrent Queuing).
+
+**Example:**
+
+```toml
+[[Radarr-Movies.Torrent.Trackers]]
+Name = "BeyondHD"
+URI = "https://tracker.beyond-hd.me/announce"
+Priority = 10
+SortTorrents = true
+MaxUploadRatio = 1.0
+```
+
+---
+
 ## Global Seeding Settings
 
 ### Complete Example

diff --git a/qBitrr/arss.py b/qBitrr/arss.py
--- a/qBitrr/arss.py
+++ b/qBitrr/arss.py
@@ -329,6 +329,7 @@
         self._normalized_bad_tracker_msgs: set[str] = {
             msg.lower() for msg in self.seeding_mode_global_bad_tracker_msg if isinstance(msg, str)
         }
+        self.sort_torrents = any(i.get("SortTorrents", False) for i in self.monitored_trackers)
 
         if (
             self.auto_delete is True
@@ -594,6 +595,7 @@
         self.downloads_with_bad_error_message_blocklist = set()
         self.needs_cleanup = False
         self._warned_no_seeding_limits = False
+        self._torrent_important_trackers_cache: dict[str, tuple[set[str], set[str]]] = {}
 
         self.last_search_description: str | None = None
         self.last_search_timestamp: str | None = None
@@ -4870,6 +4872,44 @@
         )
         return all_torrents
 
+    def _sort_torrents_by_tracker_priority(
+        self,
+        torrents_with_instances: list[tuple[str, qbittorrentapi.TorrentDictionary]],
+    ) -> None:
+        """
+        Reorder torrents in each qBittorrent instance by tracker priority (highest first).
+        Requires qBittorrent Torrent Queuing to be enabled.
+        """
+        by_instance: dict[str, list[qbittorrentapi.TorrentDictionary]] = defaultdict(list)
+        for instance_name, torrent in torrents_with_instances:
+            by_instance[instance_name].append(torrent)
+
+        qbit_manager = self.manager.qbit_manager
+        for instance_name, torrent_list in by_instance.items():
+            client = qbit_manager.get_client(instance_name)
+            if client is None:
+                continue
+            try:
+                sorted_torrents = sorted(
+                    torrent_list,
+                    key=self._get_torrent_tracker_priority,
+                    reverse=True,
+                )
+                if len(sorted_torrents) > 1:
+                    # qBittorrent may ignore hash input ordering in batch topPrio calls.
+                    # Move torrents one-by-one (lowest first) to enforce tracker-priority order.
+                    for torrent in reversed(sorted_torrents):
+                        client.torrents_top_priority(torrent_hashes=[torrent.hash])
+            except (
+                qbittorrentapi.exceptions.APIError,
+                qbittorrentapi.exceptions.APIConnectionError,
+            ) as e:
+                self.logger.warning(
+                    "Failed to sort torrents by tracker priority on instance '%s': %s",
+                    instance_name,
+                    e,
+                )
+
     def process_torrents(self):
         try:
             try:
@@ -4889,6 +4929,7 @@
                 ]
                 self._warned_no_seeding_limits = False
                 self.category_torrent_count = len(torrents_with_instances)
+                self._torrent_important_trackers_cache.clear()
                 if not len(torrents_with_instances):
                     raise DelayLoopException(length=LOOP_SLEEP_TIMER, error_type="no_downloads")
 
@@ -4931,6 +4972,8 @@
 
                 self.api_calls()
                 self.refresh_download_queue()
+                if self.sort_torrents:
+                    self._sort_torrents_by_tracker_priority(torrents_with_instances)
                 # Multi-instance: Process torrents from all instances
                 for instance_name, torrent in torrents_with_instances:
                     with contextlib.suppress(qbittorrentapi.NotFound404Error):
@@ -5728,8 +5771,12 @@
         )
 
     def _get_torrent_important_trackers(
-        self, torrent: qbittorrentapi.TorrentDictionary
+        self, torrent: qbittorrentapi.TorrentDictionary, *, use_cache: bool = True
     ) -> tuple[set[str], set[str]]:
+        torrent_hash = getattr(torrent, "hash", "")
+        if use_cache and torrent_hash:
+            if cached := self._torrent_important_trackers_cache.get(torrent_hash):
+                return cached
         try:
             current_tracker_urls = {
                 i.url.rstrip("/") for i in torrent.trackers if hasattr(i, "url")
@@ -5759,7 +5806,10 @@
             if _extract_tracker_host(uri) not in current_hosts
         }
         monitored_trackers = monitored_trackers.union(need_to_be_added)
-        return need_to_be_added, monitored_trackers
+        result = (need_to_be_added, monitored_trackers)
+        if use_cache and torrent_hash:
+            self._torrent_important_trackers_cache[torrent_hash] = result
+        return result
 
     @staticmethod
     def __return_max(x: dict):
@@ -5782,6 +5832,14 @@
         max_item = max(new_list, key=self.__return_max) if new_list else {}
         return max_item, set(itertools.chain.from_iterable(_list_of_tags))
 
+    def _get_torrent_tracker_priority(self, torrent: qbittorrentapi.TorrentDictionary) -> int:
+        """Return the tracker Priority for this torrent's most important monitored tracker."""
+        _, monitored_trackers = self._get_torrent_important_trackers(torrent)
+        most_important_tracker, _ = self._get_most_important_tracker_and_tags(
+            monitored_trackers, {}
+        )
+        return most_important_tracker.get("Priority", -100)
+
     def _resolve_hnr_clear_mode(self, tracker_or_config: dict) -> str:
         """Resolve HnR mode from single HitAndRunMode key: 'and' | 'or' | 'disabled'."""
         raw = tracker_or_config.get("HitAndRunMode")
@@ -5958,8 +6016,10 @@
         self.tracker_delay.add(torrent.hash)
         _remove_urls = set()
         need_to_be_added, monitored_trackers = self._get_torrent_important_trackers(torrent)
+        tracker_set_changed = False
         if need_to_be_added:
             torrent.add_trackers(need_to_be_added)
+            tracker_set_changed = True
         with contextlib.suppress(BaseException):
             for tracker in torrent.trackers:
                 tracker_url = getattr(tracker, "url", None)
@@ -5987,6 +6047,9 @@
             )
             with contextlib.suppress(qbittorrentapi.Conflict409Error):
                 torrent.remove_trackers(_remove_urls)
+            tracker_set_changed = True
+        if tracker_set_changed:
+            self._torrent_important_trackers_cache.pop(torrent.hash, None)
         most_important_tracker, unique_tags = self._get_most_important_tracker_and_tags(
             monitored_trackers, _remove_urls
         )
@@ -7441,6 +7504,7 @@
         self.downloads_with_bad_error_message_blocklist = set()
         self.needs_cleanup = False
         self._warned_no_seeding_limits = False
+        self._torrent_important_trackers_cache: dict[str, tuple[set[str], set[str]]] = {}
         self.custom_format_unmet_search = False
         self.do_not_remove_slow = False
         self.maximum_eta = CONFIG.get_duration("Settings.Torrent.MaximumETA", fallback=86400)
@@ -7594,6 +7658,7 @@
         self._remove_tracker_hosts = {
             h for u in self._remove_trackers_if_exists if (h := _extract_tracker_host(u))
         }
+        self.sort_torrents = any(i.get("SortTorrents", False) for i in self.monitored_trackers)
         self.logger.debug(
             "Applied qBit seeding config from section '%s' for category '%s': "
             "RemoveTorrent=%s, StalledDelay=%s",
@@ -7829,6 +7894,7 @@
                 ]
                 self._warned_no_seeding_limits = False
                 self.category_torrent_count = len(torrents_with_instances)
+                self._torrent_important_trackers_cache.clear()
                 if not torrents_with_instances:
                     raise DelayLoopException(length=LOOP_SLEEP_TIMER, error_type="no_downloads")
 
@@ -7838,6 +7904,8 @@
                 if self.manager.qbit_manager.should_delay_torrent_scan:
                     raise DelayLoopException(length=NO_INTERNET_SLEEP_TIMER, error_type="delay")
 
+                if self.sort_torrents:
+                    self._sort_torrents_by_tracker_priority(torrents_with_instances)
                 for instance_name, torrent in torrents_with_instances:
                     with contextlib.suppress(qbittorrentapi.NotFound404Error):
                         self._process_single_torrent(torrent, instance_name=instance_name)

diff --git a/webui/package-lock.json b/webui/package-lock.json
--- a/webui/package-lock.json
+++ b/webui/package-lock.json
@@ -1,12 +1,8 @@
 {
+  "lockfileVersion": 3,
   "name": "webui",
-  "version": "0.0.0",
-  "lockfileVersion": 3,
-  "requires": true,
   "packages": {
     "": {
-      "name": "webui",
-      "version": "0.0.0",
       "dependencies": {
         "@mantine/core": "^8.3.15",
         "@mantine/dates": "^8.3.17",
@@ -46,26 +42,24 @@
       "engines": {
         "node": ">=20.0.0",
         "npm": ">=9.0.0"
-      }
+      },
+      "name": "webui",
+      "version": "0.0.0"
     },
     "node_modules/@alloc/quick-lru": {
-      "version": "5.2.0",
-      "resolved": "https://registry.npmjs.org/@alloc/quick-lru/-/quick-lru-5.2.0.tgz",
-      "integrity": "sha512-UrcABB+4bUrFABwbluTIBErXwvbsU/V7TZWfmbgJfbkwiBuziS9gxdODUyuiecfdGQ85jglMW6juS3+z5TsKLw==",
       "dev": true,
-      "license": "MIT",
       "engines": {
         "node": ">=10"
       },
       "funding": {
         "url": "https://github.com/sponsors/sindresorhus"
-      }
+      },
+      "integrity": "sha512-UrcABB+4bUrFABwbluTIBErXwvbsU/V7TZWfmbgJfbkwiBuziS9gxdODUyuiecfdGQ85jglMW6juS3+z5TsKLw==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@alloc/quick-lru/-/quick-lru-5.2.0.tgz",
+      "version": "5.2.0"
     },
     "node_modules/@babel/code-frame": {
-      "version": "7.29.0",
-      "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.29.0.tgz",
-      "integrity": "sha512-9NhCeYjq9+3uxgdtp20LSiJXJvN0FeCtNGpJxuMFZ1Kv3cWUNb6DOhJwUvcVCzKGR66cw4njwM6hrJLqgOwbcw==",
-      "license": "MIT",
       "dependencies": {
         "@babel/helper-validator-identifier": "^7.28.5",
         "js-tokens": "^4.0.0",
@@ -73,24 +67,23 @@
       },
       "engines": {
         "node": ">=6.9.0"
-      }
+      },
+      "integrity": "sha512-9NhCeYjq9+3uxgdtp20LSiJXJvN0FeCtNGpJxuMFZ1Kv3cWUNb6DOhJwUvcVCzKGR66cw4njwM6hrJLqgOwbcw==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.29.0.tgz",
+      "version": "7.29.0"
     },
     "node_modules/@babel/compat-data": {
-      "version": "7.29.0",
-      "resolved": "https://registry.npmjs.org/@babel/compat-data/-/compat-data-7.29.0.tgz",
-      "integrity": "sha512-T1NCJqT/j9+cn8fvkt7jtwbLBfLC/1y1c7NtCeXFRgzGTsafi68MRv8yzkYSapBnFA6L3U2VSc02ciDzoAJhJg==",
       "dev": true,
-      "license": "MIT",
       "engines": {
         "node": ">=6.9.0"
-      }
+      },
+      "integrity": "sha512-T1NCJqT/j9+cn8fvkt7jtwbLBfLC/1y1c7NtCeXFRgzGTsafi68MRv8yzkYSapBnFA6L3U2VSc02ciDzoAJhJg==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/compat-data/-/compat-data-7.29.0.tgz",
+      "version": "7.29.0"
     },
     "node_modules/@babel/core": {
-      "version": "7.29.0",
-      "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.29.0.tgz",
-      "integrity": "sha512-CGOfOJqWjg2qW/Mb6zNsDm+u5vFQ8DxXfbM09z69p5Z6+mE1ikP2jUXw+j42Pf1XTYED2Rni5f95npYeuwMDQA==",
-      "dev": true,
-      "license": "MIT",
       "dependencies": {
         "@babel/code-frame": "^7.29.0",
         "@babel/generator": "^7.29.0",
@@ -108,19 +101,20 @@
         "json5": "^2.2.3",
         "semver": "^6.3.1"
       },
+      "dev": true,
       "engines": {
         "node": ">=6.9.0"
       },
       "funding": {
         "type": "opencollective",
         "url": "https://opencollective.com/babel"
-      }
+      },
+      "integrity": "sha512-CGOfOJqWjg2qW/Mb6zNsDm+u5vFQ8DxXfbM09z69p5Z6+mE1ikP2jUXw+j42Pf1XTYED2Rni5f95npYeuwMDQA==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.29.0.tgz",
+      "version": "7.29.0"
     },
     "node_modules/@babel/generator": {
-      "version": "7.29.1",
-      "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.29.1.tgz",
-      "integrity": "sha512-qsaF+9Qcm2Qv8SRIMMscAvG4O3lJ0F1GuMo5HR/Bp02LopNgnZBC/EkbevHFeGs4ls/oPz9v+Bsmzbkbe+0dUw==",
-      "license": "MIT",
       "dependencies": {
         "@babel/parser": "^7.29.0",
         "@babel/types": "^7.29.0",
@@ -130,14 +124,13 @@
       },
       "engines": {
         "node": ">=6.9.0"
-      }
+      },
+      "integrity": "sha512-qsaF+9Qcm2Qv8SRIMMscAvG4O3lJ0F1GuMo5HR/Bp02LopNgnZBC/EkbevHFeGs4ls/oPz9v+Bsmzbkbe+0dUw==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.29.1.tgz",
+      "version": "7.29.1"
     },
     "node_modules/@babel/helper-compilation-targets": {
-      "version": "7.28.6",
-      "resolved": "https://registry.npmjs.org/@babel/helper-compilation-targets/-/helper-compilation-targets-7.28.6.tgz",
-      "integrity": "sha512-JYtls3hqi15fcx5GaSNL7SCTJ2MNmjrkHXg4FSpOA/grxK8KwyZ5bubHsCq8FXCkua6xhuaaBit+3b7+VZRfcA==",
-      "dev": true,
-      "license": "MIT",
       "dependencies": {
         "@babel/compat-data": "^7.28.6",
         "@babel/helper-validator-option": "^7.27.1",
@@ -145,163 +138,164 @@
         "lru-cache": "^5.1.1",
         "semver": "^6.3.1"
       },
+      "dev": true,
       "engines": {
         "node": ">=6.9.0"
-      }
+      },
+      "integrity": "sha512-JYtls3hqi15fcx5GaSNL7SCTJ2MNmjrkHXg4FSpOA/grxK8KwyZ5bubHsCq8FXCkua6xhuaaBit+3b7+VZRfcA==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/helper-compilation-targets/-/helper-compilation-targets-7.28.6.tgz",
+      "version": "7.28.6"
     },
     "node_modules/@babel/helper-globals": {
-      "version": "7.28.0",
-      "resolved": "https://registry.npmjs.org/@babel/helper-globals/-/helper-globals-7.28.0.tgz",
+      "engines": {
+        "node": ">=6.9.0"
+      },
       "integrity": "sha512-+W6cISkXFa1jXsDEdYA8HeevQT/FULhxzR99pxphltZcVaugps53THCeiWA8SguxxpSp3gKPiuYfSWopkLQ4hw==",
       "license": "MIT",
-      "engines": {
-        "node": ">=6.9.0"
-      }
+      "resolved": "https://registry.npmjs.org/@babel/helper-globals/-/helper-globals-7.28.0.tgz",
+      "version": "7.28.0"
     },
     "node_modules/@babel/helper-module-imports": {
-      "version": "7.28.6",
-      "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.28.6.tgz",
-      "integrity": "sha512-l5XkZK7r7wa9LucGw9LwZyyCUscb4x37JWTPz7swwFE/0FMQAGpiWUZn8u9DzkSBWEcK25jmvubfpw2dnAMdbw==",
-      "license": "MIT",
       "dependencies": {
         "@babel/traverse": "^7.28.6",
         "@babel/types": "^7.28.6"
       },
       "engines": {
         "node": ">=6.9.0"
-      }
+      },
+      "integrity": "sha512-l5XkZK7r7wa9LucGw9LwZyyCUscb4x37JWTPz7swwFE/0FMQAGpiWUZn8u9DzkSBWEcK25jmvubfpw2dnAMdbw==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.28.6.tgz",
+      "version": "7.28.6"
     },
     "node_modules/@babel/helper-module-transforms": {
-      "version": "7.28.6",
-      "resolved": "https://registry.npmjs.org/@babel/helper-module-transforms/-/helper-module-transforms-7.28.6.tgz",
-      "integrity": "sha512-67oXFAYr2cDLDVGLXTEABjdBJZ6drElUSI7WKp70NrpyISso3plG9SAGEF6y7zbha/wOzUByWWTJvEDVNIUGcA==",
-      "dev": true,
-      "license": "MIT",
       "dependencies": {
         "@babel/helper-module-imports": "^7.28.6",
         "@babel/helper-validator-identifier": "^7.28.5",
         "@babel/traverse": "^7.28.6"
       },
+      "dev": true,
       "engines": {
         "node": ">=6.9.0"
       },
+      "integrity": "sha512-67oXFAYr2cDLDVGLXTEABjdBJZ6drElUSI7WKp70NrpyISso3plG9SAGEF6y7zbha/wOzUByWWTJvEDVNIUGcA==",
+      "license": "MIT",
       "peerDependencies": {
         "@babel/core": "^7.0.0"
-      }
+      },
+      "resolved": "https://registry.npmjs.org/@babel/helper-module-transforms/-/helper-module-transforms-7.28.6.tgz",
+      "version": "7.28.6"
     },
     "node_modules/@babel/helper-plugin-utils": {
-      "version": "7.28.6",
-      "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.28.6.tgz",
-      "integrity": "sha512-S9gzZ/bz83GRysI7gAD4wPT/AI3uCnY+9xn+Mx/KPs2JwHJIz1W8PZkg2cqyt3RNOBM8ejcXhV6y8Og7ly/Dug==",
       "dev": true,
-      "license": "MIT",
       "engines": {
         "node": ">=6.9.0"
-      }
+      },
+      "integrity": "sha512-S9gzZ/bz83GRysI7gAD4wPT/AI3uCnY+9xn+Mx/KPs2JwHJIz1W8PZkg2cqyt3RNOBM8ejcXhV6y8Og7ly/Dug==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.28.6.tgz",
+      "version": "7.28.6"
     },
     "node_modules/@babel/helper-string-parser": {
-      "version": "7.27.1",
-      "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.27.1.tgz",
+      "engines": {
+        "node": ">=6.9.0"
+      },
       "integrity": "sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA==",
       "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.27.1.tgz",
+      "version": "7.27.1"
+    },
+    "node_modules/@babel/helper-validator-identifier": {
       "engines": {
         "node": ">=6.9.0"
-      }
-    },
-    "node_modules/@babel/helper-validator-identifier": {
-      "version": "7.28.5",
-      "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.28.5.tgz",
+      },
       "integrity": "sha512-qSs4ifwzKJSV39ucNjsvc6WVHs6b7S03sOh2OcHF9UHfVPqWWALUsNUVzhSBiItjRZoLHx7nIarVjqKVusUZ1Q==",
       "license": "MIT",
-      "engines": {
-        "node": ">=6.9.0"
-      }
+      "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.28.5.tgz",
+      "version": "7.28.5"
     },
     "node_modules/@babel/helper-validator-option": {
-      "version": "7.27.1",
-      "resolved": "https://registry.npmjs.org/@babel/helper-validator-option/-/helper-validator-option-7.27.1.tgz",
-      "integrity": "sha512-YvjJow9FxbhFFKDSuFnVCe2WxXk1zWc22fFePVNEaWJEu8IrZVlda6N0uHwzZrUM1il7NC9Mlp4MaJYbYd9JSg==",
       "dev": true,
-      "license": "MIT",
       "engines": {
         "node": ">=6.9.0"
-      }
+      },
+      "integrity": "sha512-YvjJow9FxbhFFKDSuFnVCe2WxXk1zWc22fFePVNEaWJEu8IrZVlda6N0uHwzZrUM1il7NC9Mlp4MaJYbYd9JSg==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/helper-validator-option/-/helper-validator-option-7.27.1.tgz",
+      "version": "7.27.1"
     },
     "node_modules/@babel/helpers": {
-      "version": "7.28.6",
-      "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.28.6.tgz",
-      "integrity": "sha512-xOBvwq86HHdB7WUDTfKfT/Vuxh7gElQ+Sfti2Cy6yIWNW05P8iUslOVcZ4/sKbE+/jQaukQAdz/gf3724kYdqw==",
-      "dev": true,
-      "license": "MIT",
       "dependencies": {
         "@babel/template": "^7.28.6",
         "@babel/types": "^7.28.6"
       },
+      "dev": true,
       "engines": {
         "node": ">=6.9.0"
-      }
+      },
+      "integrity": "sha512-xOBvwq86HHdB7WUDTfKfT/Vuxh7gElQ+Sfti2Cy6yIWNW05P8iUslOVcZ4/sKbE+/jQaukQAdz/gf3724kYdqw==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.28.6.tgz",
+      "version": "7.28.6"
     },
     "node_modules/@babel/parser": {
-      "version": "7.29.0",
-      "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.29.0.tgz",
-      "integrity": "sha512-IyDgFV5GeDUVX4YdF/3CPULtVGSXXMLh1xVIgdCgxApktqnQV0r7/8Nqthg+8YLGaAtdyIlo2qIdZrbCv4+7ww==",
-      "license": "MIT",
+      "bin": {
+        "parser": "bin/babel-parser.js"
+      },
       "dependencies": {
         "@babel/types": "^7.29.0"
       },
-      "bin": {
-        "parser": "bin/babel-parser.js"
-      },
       "engines": {
         "node": ">=6.0.0"
-      }
+      },
+      "integrity": "sha512-IyDgFV5GeDUVX4YdF/3CPULtVGSXXMLh1xVIgdCgxApktqnQV0r7/8Nqthg+8YLGaAtdyIlo2qIdZrbCv4+7ww==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.29.0.tgz",
+      "version": "7.29.0"
     },
     "node_modules/@babel/plugin-transform-react-jsx-self": {
-      "version": "7.27.1",
-      "resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-jsx-self/-/plugin-transform-react-jsx-self-7.27.1.tgz",
-      "integrity": "sha512-6UzkCs+ejGdZ5mFFC/OCUrv028ab2fp1znZmCZjAOBKiBK2jXD1O+BPSfX8X2qjJ75fZBMSnQn3Rq2mrBJK2mw==",
-      "dev": true,
-      "license": "MIT",
       "dependencies": {
         "@babel/helper-plugin-utils": "^7.27.1"
       },
+      "dev": true,
       "engines": {
         "node": ">=6.9.0"
       },
+      "integrity": "sha512-6UzkCs+ejGdZ5mFFC/OCUrv028ab2fp1znZmCZjAOBKiBK2jXD1O+BPSfX8X2qjJ75fZBMSnQn3Rq2mrBJK2mw==",
+      "license": "MIT",
       "peerDependencies": {
         "@babel/core": "^7.0.0-0"
-      }
+      },
+      "resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-jsx-self/-/plugin-transform-react-jsx-self-7.27.1.tgz",
+      "version": "7.27.1"
     },
     "node_modules/@babel/plugin-transform-react-jsx-source": {
-      "version": "7.27.1",
-      "resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-jsx-source/-/plugin-transform-react-jsx-source-7.27.1.tgz",
-      "integrity": "sha512-zbwoTsBruTeKB9hSq73ha66iFeJHuaFkUbwvqElnygoNbj/jHRsSeokowZFN3CZ64IvEqcmmkVe89OPXc7ldAw==",
-      "dev": true,
-      "license": "MIT",
       "dependencies": {
         "@babel/helper-plugin-utils": "^7.27.1"
       },
+      "dev": true,
       "engines": {
         "node": ">=6.9.0"
       },
+      "integrity": "sha512-zbwoTsBruTeKB9hSq73ha66iFeJHuaFkUbwvqElnygoNbj/jHRsSeokowZFN3CZ64IvEqcmmkVe89OPXc7ldAw==",
+      "license": "MIT",
       "peerDependencies": {
         "@babel/core": "^7.0.0-0"
-      }
+      },
+      "resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-jsx-source/-/plugin-transform-react-jsx-source-7.27.1.tgz",
+      "version": "7.27.1"
     },
     "node_modules/@babel/runtime": {
-      "version": "7.28.6",
-      "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.28.6.tgz",
+      "engines": {
+        "node": ">=6.9.0"
+      },
       "integrity": "sha512-05WQkdpL9COIMz4LjTxGpPNCdlpyimKppYNoJ5Di5EUObifl8t4tuLuUBBZEpoLYOmfvIWrsp9fCl0HoPRVTdA==",
       "license": "MIT",
-      "engines": {
-        "node": ">=6.9.0"
-      }
+      "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.28.6.tgz",
+      "version": "7.28.6"
     },
     "node_modules/@babel/template": {
-      "version": "7.28.6",
-      "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.28.6.tgz",
-      "integrity": "sha512-YA6Ma2KsCdGb+WC6UpBVFJGXL58MDA6oyONbjyF/+5sBgxY/dwkhLogbMT2GXXyU84/IhRw/2D1Os1B/giz+BQ==",
-      "license": "MIT",
       "dependencies": {
         "@babel/code-frame": "^7.28.6",
         "@babel/parser": "^7.28.6",
@@ -309,13 +303,13 @@
       },
       "engines": {
         "node": ">=6.9.0"
-      }
+      },
+      "integrity": "sha512-YA6Ma2KsCdGb+WC6UpBVFJGXL58MDA6oyONbjyF/+5sBgxY/dwkhLogbMT2GXXyU84/IhRw/2D1Os1B/giz+BQ==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.28.6.tgz",
+      "version": "7.28.6"
     },
     "node_modules/@babel/traverse": {
-      "version": "7.29.0",
-      "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.29.0.tgz",
-      "integrity": "sha512-4HPiQr0X7+waHfyXPZpWPfWL/J7dcN1mx9gL6WdQVMbPnF3+ZhSMs8tCxN7oHddJE9fhNE7+lxdnlyemKfJRuA==",
-      "license": "MIT",
       "dependencies": {
         "@babel/code-frame": "^7.29.0",
         "@babel/generator": "^7.29.0",
@@ -327,26 +321,26 @@
       },
       "engines": {
         "node": ">=6.9.0"
-      }
+      },
+      "integrity": "sha512-4HPiQr0X7+waHfyXPZpWPfWL/J7dcN1mx9gL6WdQVMbPnF3+ZhSMs8tCxN7oHddJE9fhNE7+lxdnlyemKfJRuA==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.29.0.tgz",
+      "version": "7.29.0"
     },
     "node_modules/@babel/types": {
-      "version": "7.29.0",
-      "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.29.0.tgz",
-      "integrity": "sha512-LwdZHpScM4Qz8Xw2iKSzS+cfglZzJGvofQICy7W7v4caru4EaAmyUuO6BGrbyQ2mYV11W0U8j5mBhd14dd3B0A==",
-      "license": "MIT",
       "dependencies": {
         "@babel/helper-string-parser": "^7.27.1",
         "@babel/helper-validator-identifier": "^7.28.5"
       },
       "engines": {
         "node": ">=6.9.0"
-      }
+      },
+      "integrity": "sha512-LwdZHpScM4Qz8Xw2iKSzS+cfglZzJGvofQICy7W7v4caru4EaAmyUuO6BGrbyQ2mYV11W0U8j5mBhd14dd3B0A==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.29.0.tgz",
+      "version": "7.29.0"
     },
     "node_modules/@emotion/babel-plugin": {
-      "version": "11.13.5",
-      "resolved": "https://registry.npmjs.org/@emotion/babel-plugin/-/babel-plugin-11.13.5.tgz",
-      "integrity": "sha512-pxHCpT2ex+0q+HH91/zsdHkw/lXd468DIN2zvfvLtPKLLMo6gQj7oLObq8PhkrxOZb/gGCq03S3Z7PDhS8pduQ==",
-      "license": "MIT",
       "dependencies": {
         "@babel/helper-module-imports": "^7.16.7",
         "@babel/runtime": "^7.18.3",
@@ -359,44 +353,44 @@
         "find-root": "^1.1.0",
         "source-map": "^0.5.7",
         "stylis": "4.2.0"
-      }
+      },
+      "integrity": "sha512-pxHCpT2ex+0q+HH91/zsdHkw/lXd468DIN2zvfvLtPKLLMo6gQj7oLObq8PhkrxOZb/gGCq03S3Z7PDhS8pduQ==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@emotion/babel-plugin/-/babel-plugin-11.13.5.tgz",
+      "version": "11.13.5"
     },
     "node_modules/@emotion/babel-plugin/node_modules/convert-source-map": {
-      "version": "1.9.0",
+      "integrity": "sha512-ASFBup0Mz1uyiIjANan1jzLQami9z1PoYSZCiiYW2FczPbenXc45FZdBZLzOT+r6+iciuEModtmCti+hjaAk0A==",
+      "license": "MIT",
       "resolved": "https://registry.npmjs.org/convert-source-map/-/convert-source-map-1.9.0.tgz",
-      "integrity": "sha512-ASFBup0Mz1uyiIjANan1jzLQami9z1PoYSZCiiYW2FczPbenXc45FZdBZLzOT+r6+iciuEModtmCti+hjaAk0A==",
-      "license": "MIT"
+      "version": "1.9.0"
     },
     "node_modules/@emotion/cache": {
-      "version": "11.14.0",
-      "resolved": "https://registry.npmjs.org/@emotion/cache/-/cache-11.14.0.tgz",
-      "integrity": "sha512-L/B1lc/TViYk4DcpGxtAVbx0ZyiKM5ktoIyafGkH6zg/tj+mA+NE//aPYKG0k8kCHSHVJrpLpcAlOBEXQ3SavA==",
-      "license": "MIT",
       "dependencies": {
         "@emotion/memoize": "^0.9.0",
         "@emotion/sheet": "^1.4.0",
         "@emotion/utils": "^1.4.2",
         "@emotion/weak-memoize": "^0.4.0",
         "stylis": "4.2.0"
-      }
+      },
+      "integrity": "sha512-L/B1lc/TViYk4DcpGxtAVbx0ZyiKM5ktoIyafGkH6zg/tj+mA+NE//aPYKG0k8kCHSHVJrpLpcAlOBEXQ3SavA==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@emotion/cache/-/cache-11.14.0.tgz",
+      "version": "11.14.0"
     },
     "node_modules/@emotion/hash": {
-      "version": "0.9.2",
+      "integrity": "sha512-MyqliTZGuOm3+5ZRSaaBGP3USLw6+EGykkwZns2EPC5g8jJ4z9OrdZY9apkl3+UP9+sdz76YYkwCKP5gh8iY3g==",
+      "license": "MIT",
       "resolved": "https://registry.npmjs.org/@emotion/hash/-/hash-0.9.2.tgz",
-      "integrity": "sha512-MyqliTZGuOm3+5ZRSaaBGP3USLw6+EGykkwZns2EPC5g8jJ4z9OrdZY9apkl3+UP9+sdz76YYkwCKP5gh8iY3g==",
-      "license": "MIT"
+      "version": "0.9.2"
     },
     "node_modules/@emotion/memoize": {
-      "version": "0.9.0",
+      "integrity": "sha512-30FAj7/EoJ5mwVPOWhAyCX+FPfMDrVecJAM+Iw9NRoSl4BBAQeqj4cApHHUXOVvIPgLVDsCFoz/hGD+5QQD1GQ==",
+      "license": "MIT",
       "resolved": "https://registry.npmjs.org/@emotion/memoize/-/memoize-0.9.0.tgz",
-      "integrity": "sha512-30FAj7/EoJ5mwVPOWhAyCX+FPfMDrVecJAM+Iw9NRoSl4BBAQeqj4cApHHUXOVvIPgLVDsCFoz/hGD+5QQD1GQ==",
-      "license": "MIT"
+      "version": "0.9.0"
     },
     "node_modules/@emotion/react": {
-      "version": "11.14.0",
-      "resolved": "https://registry.npmjs.org/@emotion/react/-/react-11.14.0.tgz",
-      "integrity": "sha512-O000MLDBDdk/EohJPFUqvnp4qnHeYkVP5B0xEG0D/L7cOKP9kefu2DXn8dj74cQfsEzUqh+sr1RzFqiL1o+PpA==",
-      "license": "MIT",
       "dependencies": {
         "@babel/runtime": "^7.18.3",
         "@emotion/babel-plugin": "^11.13.5",
@@ -407,6 +401,8 @@
         "@emotion/weak-memoize": "^0.4.0",
         "hoist-non-react-statics": "^3.3.1"
       },
+      "integrity": "sha512-O000MLDBDdk/EohJPFUqvnp4qnHeYkVP5B0xEG0D/L7cOKP9kefu2DXn8dj74cQfsEzUqh+sr1RzFqiL1o+PpA==",
+      "license": "MIT",
       "peerDependencies": {
         "react": ">=16.8.0"
       },
@@ -414,591 +410,591 @@
         "@types/react": {
           "optional": true
         }
-      }
+      },
+      "resolved": "https://registry.npmjs.org/@emotion/react/-/react-11.14.0.tgz",
+      "version": "11.14.0"
     },
     "node_modules/@emotion/serialize": {
-      "version": "1.3.3",
-      "resolved": "https://registry.npmjs.org/@emotion/serialize/-/serialize-1.3.3.tgz",
-      "integrity": "sha512-EISGqt7sSNWHGI76hC7x1CksiXPahbxEOrC5RjmFRJTqLyEK9/9hZvBbiYn70dw4wuwMKiEMCUlR6ZXTSWQqxA==",
-      "license": "MIT",
       "dependencies": {
         "@emotion/hash": "^0.9.2",
         "@emotion/memoize": "^0.9.0",
         "@emotion/unitless": "^0.10.0",
         "@emotion/utils": "^1.4.2",
         "csstype": "^3.0.2"
-      }
+      },
+      "integrity": "sha512-EISGqt7sSNWHGI76hC7x1CksiXPahbxEOrC5RjmFRJTqLyEK9/9hZvBbiYn70dw4wuwMKiEMCUlR6ZXTSWQqxA==",
+      "license": "MIT",
+      "resolved": "https://registry.npmjs.org/@emotion/serialize/-/serialize-1.3.3.tgz",
+      "version": "1.3.3"
     },
     "node_modules/@emotion/sheet": {
-      "version": "1.4.0",
+      "integrity": "sha512-fTBW9/8r2w3dXWYM4HCB1Rdp8NLibOw2+XELH5m5+AkWiL/KqYX6dc0kKYlaYyKjrQ6ds33MCdMPEwgs2z1rqg==",
+      "license": "MIT",
       "resolved": "https://registry.npmjs.org/@emotion/sheet/-/sheet-1.4.0.tgz",
-      "integrity": "sha512-fTBW9/8r2w3dXWYM4HCB1Rdp8NLibOw2+XELH5m5+AkWiL/KqYX6dc0kKYlaYyKjrQ6ds33MCdMPEwgs2z1rqg==",
-      "license": "MIT"
+      "version": "1.4.0"
     },
     "node_modules/@emotion/unitless": {
-      "version": "0.10.0",
+      "integrity": "sha512-dFoMUuQA20zvtVTuxZww6OHoJYgrzfKM1t52mVySDJnMSEa08ruEvdYQbhvyu6soU+NeLVd3yKfTfT0NeV6qGg==",
+      "license": "MIT",
       "resolved": "https://registry.npmjs.org/@emotion/unitless/-/unitless-0.10.0.tgz",
-      "integrity": "sha512-dFoMUuQA20zvtVTuxZww6OHoJYgrzfKM1t52mVySDJnMSEa08ruEvdYQbhvyu6soU+NeLVd3yKfTfT0NeV6qGg==",
-      "license": "MIT"
+      "version": "0.10.0"
     },
     "node_modules/@emotion/use-insertion-effect-with-fallbacks": {
-      "version": "1.2.0",
-      "resolved": "https://registry.npmjs.org/@emotion/use-insertion-effect-with-fallbacks/-/use-insertion-effect-with-fallbacks-1.2.0.tgz",
       "integrity": "sha512-yJMtVdH59sxi/aVJBpk9FQq+OR8ll5GT8oWd57UpeaKEVGab41JWaCFA7FRLoMLloOZF/c/wsPoe+bfGmRKgDg==",
       "license": "MIT",
       "peerDependencies": {
         "react": ">=16.8.0"
-      }
+      },
+      "resolved": "https://registry.npmjs.org/@emotion/use-insertion-effect-with-fallbacks/-/use-insertion-effect-with-fallbacks-1.2.0.tgz",
+      "version": "1.2.0"
     },
     "node_modules/@emotion/utils": {
-      "version": "1.4.2",
+      "integrity": "sha512-3vLclRofFziIa3J2wDh9jjbkUz9qk5Vi3IZ/FSTKViB0k+ef0fPV7dYrUIugbgupYDx7v9ud/SjrtEP8Y4xLoA==",
+      "license": "MIT",
       "resolved": "https://registry.npmjs.org/@emotion/utils/-/utils-1.4.2.tgz",
-      "integrity": "sha512-3vLclRofFziIa3J2wDh9jjbkUz9qk5Vi3IZ/FSTKViB0k+ef0fPV7dYrUIugbgupYDx7v9ud/SjrtEP8Y4xLoA==",
-      "license": "MIT"
+      "version": "1.4.2"
     },
     "node_modules/@emotion/weak-memoize": {
-      "version": "0.4.0",
+      "integrity": "sha512-snKqtPW01tN0ui7yu9rGv69aJXr/a/Ywvl11sUjNtEcRc+ng/mQriFL0wLXMef74iHa/EkftbDzU9F8iFbH+zg==",
+      "license": "MIT",
       "resolved": "https://registry.npmjs.org/@emotion/weak-memoize/-/weak-memoize-0.4.0.tgz",
-      "integrity": "sha512-snKqtPW01tN0ui7yu9rGv69aJXr/a/Ywvl11sUjNtEcRc+ng/mQriFL0wLXMef74iHa/EkftbDzU9F8iFbH+zg==",
-      "license": "MIT"
+      "version": "0.4.0"
     },
     "node_modules/@esbuild/aix-ppc64": {
-      "version": "0.27.3",
-      "resolved": "https://registry.npmjs.org/@esbuild/aix-ppc64/-/aix-ppc64-0.27.3.tgz",
-      "integrity": "sha512-9fJMTNFTWZMh5qwrBItuziu834eOCUcEqymSH7pY+zoMVEZg3gcPuBNxH1EvfVYe9h0x/Ptw8KBzv7qxb7l8dg==",
       "cpu": [
         "ppc64"
       ],
       "dev": true,
+      "engines": {
+        "node": ">=18"
+      },
+      "integrity": "sha512-9fJMTNFTWZMh5qwrBItuziu834eOCUcEqymSH7pY+zoMVEZg3gcPuBNxH1EvfVYe9h0x/Ptw8KBzv7qxb7l8dg==",
... diff truncated: showing 800 of 8133 lines

Cursor bot (@cursor) left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

Autofix Details

Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: Empty dict passed where empty set expected
    • Replaced the {} argument with set() in _get_torrent_tracker_priority so _get_most_important_tracker_and_tags receives the expected set type.
Preview (7f87d40dec)
diff --git a/SECURITY.md b/SECURITY.md
new file mode 100644
--- /dev/null
+++ b/SECURITY.md
@@ -1,0 +1,23 @@
+# Security Policy
+
+## Supported versions
+
+Security fixes are provided **only for the latest released version** of qBitrr. Older releases are not maintained for security patches. [Upgrade to the latest release](https://github.com/Feramance/qBitrr/releases/latest) to receive security updates.
+
+## Reporting a vulnerability
+
+Report security issues **privately** so they can be fixed before public disclosure.
+
+**Preferred:** Use GitHub's private reporting flow: open the [Security tab](https://github.com/Feramance/qBitrr/security), then use **Report a vulnerability**.
+
+Please include:
+
+- A clear description of the issue and its potential impact
+- Steps to reproduce (if possible)
+- The qBitrr version and environment you tested (OS, install method, relevant config if safe to share)
+
+We will acknowledge receipt when we can and coordinate on a fix and disclosure timeline where appropriate.
+
+## Coordinated disclosure
+
+Please do not publish details of an unfixed vulnerability until a fix is available, unless we agree otherwise.

diff --git a/docs/configuration/seeding.md b/docs/configuration/seeding.md
--- a/docs/configuration/seeding.md
+++ b/docs/configuration/seeding.md
@@ -51,6 +51,32 @@
 
 ---
 
+### SortTorrents
+
+**Type:** Boolean (per-tracker)
+**Default:** `false`
+
+Set on individual tracker entries in `[[qBit.Trackers]]` or `[[<Arr>.Torrent.Trackers]]`, **right under [Priority](#priority)**.
+
+When `true` on **any** configured tracker, qBitrr reorders the qBittorrent queue each processing cycle so that torrents are queued in order of their **tracker priority** (highest first). Torrents whose trackers are not in your configured trackers list are assigned the lowest priority and sink to the bottom of the queue.
+
+**Requirements:**
+
+- **qBittorrent Torrent Queuing** must be enabled (Options → BitTorrent → Torrent Queuing).
+
+**Example:**
+
+```toml
+[[Radarr-Movies.Torrent.Trackers]]
+Name = "BeyondHD"
+URI = "https://tracker.beyond-hd.me/announce"
+Priority = 10
+SortTorrents = true
+MaxUploadRatio = 1.0
+```
+
+---
+
 ## Global Seeding Settings
 
 ### Complete Example

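The reordering the docs change above describes can be sketched in isolation. This is a hedged illustration, not qBitrr code: `sort_plan` and the `(hash, priority)` pairs are hypothetical, and `-100` mirrors the fallback priority the PR assigns to torrents with no monitored tracker.

```python
# Hedged sketch of the SortTorrents queue ordering, using hypothetical
# (hash, priority) pairs; -100 matches the PR's fallback for torrents
# whose trackers are not in the configured trackers list.

def sort_plan(torrents: list[tuple[str, int]]) -> list[str]:
    """Return the order in which hashes should be moved to top priority.

    qBittorrent may ignore hash ordering in batch topPrio calls, so the
    PR moves torrents one at a time, lowest priority first: each later
    move pushes the earlier ones down, leaving the highest-priority
    torrent on top.
    """
    ordered = sorted(torrents, key=lambda t: t[1], reverse=True)
    return [torrent_hash for torrent_hash, _ in reversed(ordered)]


moves = sort_plan([("aaa", 10), ("bbb", -100), ("ccc", 5)])
# "bbb" is moved first, then "ccc", and "aaa" ends up on top
```

The one-by-one moves trade extra API calls for a deterministic final order, which is why the PR adds caching and no-op detection around this path.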
diff --git a/qBitrr/arss.py b/qBitrr/arss.py
--- a/qBitrr/arss.py
+++ b/qBitrr/arss.py
@@ -24,7 +24,6 @@
 from jaraco.docker import is_docker
 from packaging import version as version_parser
 from peewee import DatabaseError, Model, OperationalError, SqliteDatabase
-from pyarr import LidarrAPI, RadarrAPI, SonarrAPI
 from pyarr.exceptions import PyarrResourceNotFound, PyarrServerError
 from pyarr.types import JsonObject
 from qbittorrentapi import TorrentDictionary, TorrentStates
@@ -56,6 +55,7 @@
     UnhandledError,
 )
 from qBitrr.logger import run_logs
+from qBitrr.pyarr_compat import LidarrAPI, RadarrAPI, SonarrAPI
 from qBitrr.search_activity_store import (
     clear_search_activity,
     fetch_search_activities,
@@ -329,6 +329,7 @@
         self._normalized_bad_tracker_msgs: set[str] = {
             msg.lower() for msg in self.seeding_mode_global_bad_tracker_msg if isinstance(msg, str)
         }
+        self.sort_torrents = any(i.get("SortTorrents", False) for i in self.monitored_trackers)
 
         if (
             self.auto_delete is True
@@ -594,6 +595,7 @@
         self.downloads_with_bad_error_message_blocklist = set()
         self.needs_cleanup = False
         self._warned_no_seeding_limits = False
+        self._torrent_important_trackers_cache: dict[str, tuple[set[str], set[str]]] = {}
 
         self.last_search_description: str | None = None
         self.last_search_timestamp: str | None = None
@@ -4870,6 +4872,44 @@
         )
         return all_torrents
 
+    def _sort_torrents_by_tracker_priority(
+        self,
+        torrents_with_instances: list[tuple[str, qbittorrentapi.TorrentDictionary]],
+    ) -> None:
+        """
+        Reorder torrents in each qBittorrent instance by tracker priority (highest first).
+        Requires qBittorrent Torrent Queuing to be enabled.
+        """
+        by_instance: dict[str, list[qbittorrentapi.TorrentDictionary]] = defaultdict(list)
+        for instance_name, torrent in torrents_with_instances:
+            by_instance[instance_name].append(torrent)
+
+        qbit_manager = self.manager.qbit_manager
+        for instance_name, torrent_list in by_instance.items():
+            client = qbit_manager.get_client(instance_name)
+            if client is None:
+                continue
+            try:
+                sorted_torrents = sorted(
+                    torrent_list,
+                    key=self._get_torrent_tracker_priority,
+                    reverse=True,
+                )
+                if len(sorted_torrents) > 1:
+                    # qBittorrent may ignore hash input ordering in batch topPrio calls.
+                    # Move torrents one-by-one (lowest first) to enforce tracker-priority order.
+                    for torrent in reversed(sorted_torrents):
+                        client.torrents_top_priority(torrent_hashes=[torrent.hash])
+            except (
+                qbittorrentapi.exceptions.APIError,
+                qbittorrentapi.exceptions.APIConnectionError,
+            ) as e:
+                self.logger.warning(
+                    "Failed to sort torrents by tracker priority on instance '%s': %s",
+                    instance_name,
+                    e,
+                )
+
     def process_torrents(self):
         try:
             try:
@@ -4889,6 +4929,7 @@
                 ]
                 self._warned_no_seeding_limits = False
                 self.category_torrent_count = len(torrents_with_instances)
+                self._torrent_important_trackers_cache.clear()
                 if not len(torrents_with_instances):
                     raise DelayLoopException(length=LOOP_SLEEP_TIMER, error_type="no_downloads")
 
@@ -4931,6 +4972,8 @@
 
                 self.api_calls()
                 self.refresh_download_queue()
+                if self.sort_torrents:
+                    self._sort_torrents_by_tracker_priority(torrents_with_instances)
                 # Multi-instance: Process torrents from all instances
                 for instance_name, torrent in torrents_with_instances:
                     with contextlib.suppress(qbittorrentapi.NotFound404Error):
@@ -5728,8 +5771,12 @@
         )
 
     def _get_torrent_important_trackers(
-        self, torrent: qbittorrentapi.TorrentDictionary
+        self, torrent: qbittorrentapi.TorrentDictionary, *, use_cache: bool = True
     ) -> tuple[set[str], set[str]]:
+        torrent_hash = getattr(torrent, "hash", "")
+        if use_cache and torrent_hash:
+            if cached := self._torrent_important_trackers_cache.get(torrent_hash):
+                return cached
         try:
             current_tracker_urls = {
                 i.url.rstrip("/") for i in torrent.trackers if hasattr(i, "url")
@@ -5759,7 +5806,10 @@
             if _extract_tracker_host(uri) not in current_hosts
         }
         monitored_trackers = monitored_trackers.union(need_to_be_added)
-        return need_to_be_added, monitored_trackers
+        result = (need_to_be_added, monitored_trackers)
+        if use_cache and torrent_hash:
+            self._torrent_important_trackers_cache[torrent_hash] = result
+        return result
 
     @staticmethod
     def __return_max(x: dict):
@@ -5782,6 +5832,14 @@
         max_item = max(new_list, key=self.__return_max) if new_list else {}
         return max_item, set(itertools.chain.from_iterable(_list_of_tags))
 
+    def _get_torrent_tracker_priority(self, torrent: qbittorrentapi.TorrentDictionary) -> int:
+        """Return the tracker Priority for this torrent's most important monitored tracker."""
+        _, monitored_trackers = self._get_torrent_important_trackers(torrent)
+        most_important_tracker, _ = self._get_most_important_tracker_and_tags(
+            monitored_trackers, set()
+        )
+        return most_important_tracker.get("Priority", -100)
+
     def _resolve_hnr_clear_mode(self, tracker_or_config: dict) -> str:
         """Resolve HnR mode from single HitAndRunMode key: 'and' | 'or' | 'disabled'."""
         raw = tracker_or_config.get("HitAndRunMode")
@@ -5958,8 +6016,10 @@
         self.tracker_delay.add(torrent.hash)
         _remove_urls = set()
         need_to_be_added, monitored_trackers = self._get_torrent_important_trackers(torrent)
+        tracker_set_changed = False
         if need_to_be_added:
             torrent.add_trackers(need_to_be_added)
+            tracker_set_changed = True
         with contextlib.suppress(BaseException):
             for tracker in torrent.trackers:
                 tracker_url = getattr(tracker, "url", None)
@@ -5987,6 +6047,9 @@
             )
             with contextlib.suppress(qbittorrentapi.Conflict409Error):
                 torrent.remove_trackers(_remove_urls)
+            tracker_set_changed = True
+        if tracker_set_changed:
+            self._torrent_important_trackers_cache.pop(torrent.hash, None)
         most_important_tracker, unique_tags = self._get_most_important_tracker_and_tags(
             monitored_trackers, _remove_urls
         )
@@ -7441,6 +7504,7 @@
         self.downloads_with_bad_error_message_blocklist = set()
         self.needs_cleanup = False
         self._warned_no_seeding_limits = False
+        self._torrent_important_trackers_cache: dict[str, tuple[set[str], set[str]]] = {}
         self.custom_format_unmet_search = False
         self.do_not_remove_slow = False
         self.maximum_eta = CONFIG.get_duration("Settings.Torrent.MaximumETA", fallback=86400)
@@ -7594,6 +7658,7 @@
         self._remove_tracker_hosts = {
             h for u in self._remove_trackers_if_exists if (h := _extract_tracker_host(u))
         }
+        self.sort_torrents = any(i.get("SortTorrents", False) for i in self.monitored_trackers)
         self.logger.debug(
             "Applied qBit seeding config from section '%s' for category '%s': "
             "RemoveTorrent=%s, StalledDelay=%s",
@@ -7829,6 +7894,7 @@
                 ]
                 self._warned_no_seeding_limits = False
                 self.category_torrent_count = len(torrents_with_instances)
+                self._torrent_important_trackers_cache.clear()
                 if not torrents_with_instances:
                     raise DelayLoopException(length=LOOP_SLEEP_TIMER, error_type="no_downloads")
 
@@ -7838,6 +7904,8 @@
                 if self.manager.qbit_manager.should_delay_torrent_scan:
                     raise DelayLoopException(length=NO_INTERNET_SLEEP_TIMER, error_type="delay")
 
+                if self.sort_torrents:
+                    self._sort_torrents_by_tracker_priority(torrents_with_instances)
                 for instance_name, torrent in torrents_with_instances:
                     with contextlib.suppress(qbittorrentapi.NotFound404Error):
                         self._process_single_torrent(torrent, instance_name=instance_name)

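The `_torrent_important_trackers_cache` added in arss.py above follows a memoize / invalidate / clear lifecycle: lookups are memoized per torrent hash, a single entry is dropped when that torrent's tracker set changes, and the whole cache is cleared at the start of each processing cycle. A standalone sketch of that pattern (`TrackerCache` and the stand-in lookup are hypothetical, not qBitrr APIs):

```python
# Standalone sketch of the per-cycle caching pattern used for
# _torrent_important_trackers_cache in this PR.
class TrackerCache:
    def __init__(self):
        self._cache: dict[str, set[str]] = {}
        self.computations = 0  # counts expensive tracker lookups

    def get(self, torrent_hash, compute):
        # Serve from cache when possible; otherwise compute and store.
        if torrent_hash in self._cache:
            return self._cache[torrent_hash]
        self.computations += 1
        result = compute(torrent_hash)
        self._cache[torrent_hash] = result
        return result

    def invalidate(self, torrent_hash):
        # Called when trackers were added to / removed from this torrent.
        self._cache.pop(torrent_hash, None)

    def clear(self):
        # Called once at the start of each processing cycle.
        self._cache.clear()


cache = TrackerCache()
lookup = lambda h: {f"https://tracker.example/{h}/announce"}  # stand-in
cache.get("aaa", lookup)
cache.get("aaa", lookup)  # second call is served from the cache
```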
diff --git a/qBitrr/pyarr_compat.py b/qBitrr/pyarr_compat.py
new file mode 100644
--- /dev/null
+++ b/qBitrr/pyarr_compat.py
@@ -1,0 +1,20 @@
+"""Compatibility imports for pyarr client class naming changes.
+
+Recent pyarr versions expose client classes as ``Radarr``, ``Sonarr``, and
+``Lidarr`` instead of ``RadarrAPI``, ``SonarrAPI``, and ``LidarrAPI``.
+qBitrr historically used the ``*API`` names. This module normalizes imports so
+the rest of the code can keep using ``RadarrAPI``/``SonarrAPI``/``LidarrAPI``.
+"""
+
+from __future__ import annotations
+
+try:
+    # Legacy pyarr naming (<= v5.x style)
+    from pyarr import LidarrAPI, RadarrAPI, SonarrAPI
+except ImportError:
+    # Newer pyarr naming (v6+ style)
+    from pyarr import Lidarr as LidarrAPI
+    from pyarr import Radarr as RadarrAPI
+    from pyarr import Sonarr as SonarrAPI
+
+__all__ = ["RadarrAPI", "SonarrAPI", "LidarrAPI"]

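The compat shim above boils down to "try the legacy name, alias the new one". The sketch below demonstrates that pattern with a stand-in namespace (`resolve_client` and `new_style` are hypothetical) so it runs without pyarr installed:

```python
# Demonstration of the import-aliasing pattern in qBitrr/pyarr_compat.py,
# using a stand-in namespace instead of the real pyarr module.
import types


def resolve_client(module):
    """Return the Radarr client class, whichever name the module exports."""
    try:
        return module.RadarrAPI  # legacy pyarr naming (<= v5.x style)
    except AttributeError:
        return module.Radarr     # newer pyarr naming (v6+ style)


# Stand-in "new-style" module exposing only `Radarr`.
new_style = types.SimpleNamespace(Radarr=type("Radarr", (), {}))
RadarrAPI = resolve_client(new_style)
```

The real shim catches `ImportError` at module import time instead of `AttributeError`, but the fallback logic is the same.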
diff --git a/qBitrr/webui.py b/qBitrr/webui.py
--- a/qBitrr/webui.py
+++ b/qBitrr/webui.py
@@ -193,6 +193,14 @@
                 "WebUI configured to listen on %s. Expose this only behind a trusted reverse proxy.",
                 self.host,
             )
+            if _auth_disabled():
+                self.logger.warning(
+                    "WebUI authentication is disabled: all API and WebUI actions are available "
+                    "without credentials to any client that can reach this port. If that is not "
+                    "intentional, enable authentication (see WebUI.AuthDisabled and login/token in "
+                    "the docs), bind WebUI.Host to 127.0.0.1, or place the service behind a "
+                    "trusted reverse proxy with its own access controls."
+                )
         self.app.logger.handlers.clear()
         self.app.logger.propagate = True
         self.app.logger.setLevel(self.logger.level)
@@ -3195,15 +3203,15 @@
                     # Create temporary Arr API client
                     self.logger.info("Creating temporary %s client for %s", arr_type, uri)
                     if arr_type == "radarr":
-                        from pyarr import RadarrAPI
+                        from qBitrr.pyarr_compat import RadarrAPI
 
                         client = RadarrAPI(uri, api_key)
                     elif arr_type == "sonarr":
-                        from pyarr import SonarrAPI
+                        from qBitrr.pyarr_compat import SonarrAPI
 
                         client = SonarrAPI(uri, api_key)
                     elif arr_type == "lidarr":
-                        from pyarr import LidarrAPI
+                        from qBitrr.pyarr_compat import LidarrAPI
 
                         client = LidarrAPI(uri, api_key)
                     else:
@@ -3514,15 +3522,15 @@
         # Determine client class based on name
         client_cls = None
         if re.match(r"^(Rad|rad)arr", instance_name):
-            from pyarr import RadarrAPI
+            from qBitrr.pyarr_compat import RadarrAPI
 
             client_cls = RadarrAPI
         elif re.match(r"^(Son|son|Anim|anim)arr", instance_name):
-            from pyarr import SonarrAPI
+            from qBitrr.pyarr_compat import SonarrAPI
 
             client_cls = SonarrAPI
         elif re.match(r"^(Lid|lid)arr", instance_name):
-            from pyarr import LidarrAPI
+            from qBitrr.pyarr_compat import LidarrAPI
 
             client_cls = LidarrAPI
         else:

diff --git a/webui/package-lock.json b/webui/package-lock.json
--- a/webui/package-lock.json
+++ b/webui/package-lock.json
@@ -54,9 +54,7 @@
       "funding": {
         "url": "https://github.com/sponsors/sindresorhus"
       },
-      "integrity": "sha512-UrcABB+4bUrFABwbluTIBErXwvbsU/V7TZWfmbgJfbkwiBuziS9gxdODUyuiecfdGQ85jglMW6juS3+z5TsKLw==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@alloc/quick-lru/-/quick-lru-5.2.0.tgz",
       "version": "5.2.0"
     },
     "node_modules/@babel/code-frame": {
@@ -68,9 +66,7 @@
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-9NhCeYjq9+3uxgdtp20LSiJXJvN0FeCtNGpJxuMFZ1Kv3cWUNb6DOhJwUvcVCzKGR66cw4njwM6hrJLqgOwbcw==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.29.0.tgz",
       "version": "7.29.0"
     },
     "node_modules/@babel/compat-data": {
@@ -78,9 +74,7 @@
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-T1NCJqT/j9+cn8fvkt7jtwbLBfLC/1y1c7NtCeXFRgzGTsafi68MRv8yzkYSapBnFA6L3U2VSc02ciDzoAJhJg==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/compat-data/-/compat-data-7.29.0.tgz",
       "version": "7.29.0"
     },
     "node_modules/@babel/core": {
@@ -109,9 +103,7 @@
         "type": "opencollective",
         "url": "https://opencollective.com/babel"
       },
-      "integrity": "sha512-CGOfOJqWjg2qW/Mb6zNsDm+u5vFQ8DxXfbM09z69p5Z6+mE1ikP2jUXw+j42Pf1XTYED2Rni5f95npYeuwMDQA==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.29.0.tgz",
       "version": "7.29.0"
     },
     "node_modules/@babel/generator": {
@@ -125,9 +117,7 @@
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-qsaF+9Qcm2Qv8SRIMMscAvG4O3lJ0F1GuMo5HR/Bp02LopNgnZBC/EkbevHFeGs4ls/oPz9v+Bsmzbkbe+0dUw==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.29.1.tgz",
       "version": "7.29.1"
     },
     "node_modules/@babel/helper-compilation-targets": {
@@ -142,18 +132,14 @@
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-JYtls3hqi15fcx5GaSNL7SCTJ2MNmjrkHXg4FSpOA/grxK8KwyZ5bubHsCq8FXCkua6xhuaaBit+3b7+VZRfcA==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/helper-compilation-targets/-/helper-compilation-targets-7.28.6.tgz",
       "version": "7.28.6"
     },
     "node_modules/@babel/helper-globals": {
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-+W6cISkXFa1jXsDEdYA8HeevQT/FULhxzR99pxphltZcVaugps53THCeiWA8SguxxpSp3gKPiuYfSWopkLQ4hw==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/helper-globals/-/helper-globals-7.28.0.tgz",
       "version": "7.28.0"
     },
     "node_modules/@babel/helper-module-imports": {
@@ -164,9 +150,7 @@
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-l5XkZK7r7wa9LucGw9LwZyyCUscb4x37JWTPz7swwFE/0FMQAGpiWUZn8u9DzkSBWEcK25jmvubfpw2dnAMdbw==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.28.6.tgz",
       "version": "7.28.6"
     },
     "node_modules/@babel/helper-module-transforms": {
@@ -179,30 +163,24 @@
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-67oXFAYr2cDLDVGLXTEABjdBJZ6drElUSI7WKp70NrpyISso3plG9SAGEF6y7zbha/wOzUByWWTJvEDVNIUGcA==",
       "license": "MIT",
       "peerDependencies": {
         "@babel/core": "^7.0.0"
       },
-      "resolved": "https://registry.npmjs.org/@babel/helper-module-transforms/-/helper-module-transforms-7.28.6.tgz",
       "version": "7.28.6"
     },
     "node_modules/@babel/helper-string-parser": {
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.27.1.tgz",
       "version": "7.27.1"
     },
     "node_modules/@babel/helper-validator-identifier": {
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-qSs4ifwzKJSV39ucNjsvc6WVHs6b7S03sOh2OcHF9UHfVPqWWALUsNUVzhSBiItjRZoLHx7nIarVjqKVusUZ1Q==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.28.5.tgz",
       "version": "7.28.5"
     },
     "node_modules/@babel/helper-validator-option": {
@@ -210,9 +188,7 @@
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-YvjJow9FxbhFFKDSuFnVCe2WxXk1zWc22fFePVNEaWJEu8IrZVlda6N0uHwzZrUM1il7NC9Mlp4MaJYbYd9JSg==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/helper-validator-option/-/helper-validator-option-7.27.1.tgz",
       "version": "7.27.1"
     },
     "node_modules/@babel/helpers": {
@@ -224,9 +200,7 @@
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-xOBvwq86HHdB7WUDTfKfT/Vuxh7gElQ+Sfti2Cy6yIWNW05P8iUslOVcZ4/sKbE+/jQaukQAdz/gf3724kYdqw==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.28.6.tgz",
       "version": "7.28.6"
     },
     "node_modules/@babel/parser": {
@@ -239,18 +213,14 @@
       "engines": {
         "node": ">=6.0.0"
       },
-      "integrity": "sha512-IyDgFV5GeDUVX4YdF/3CPULtVGSXXMLh1xVIgdCgxApktqnQV0r7/8Nqthg+8YLGaAtdyIlo2qIdZrbCv4+7ww==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.29.0.tgz",
       "version": "7.29.0"
     },
     "node_modules/@babel/runtime": {
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-05WQkdpL9COIMz4LjTxGpPNCdlpyimKppYNoJ5Di5EUObifl8t4tuLuUBBZEpoLYOmfvIWrsp9fCl0HoPRVTdA==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.28.6.tgz",
       "version": "7.28.6"
     },
     "node_modules/@babel/template": {
@@ -262,9 +232,7 @@
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-YA6Ma2KsCdGb+WC6UpBVFJGXL58MDA6oyONbjyF/+5sBgxY/dwkhLogbMT2GXXyU84/IhRw/2D1Os1B/giz+BQ==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.28.6.tgz",
       "version": "7.28.6"
     },
     "node_modules/@babel/traverse": {
@@ -280,9 +248,7 @@
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-4HPiQr0X7+waHfyXPZpWPfWL/J7dcN1mx9gL6WdQVMbPnF3+ZhSMs8tCxN7oHddJE9fhNE7+lxdnlyemKfJRuA==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.29.0.tgz",
       "version": "7.29.0"
     },
     "node_modules/@babel/types": {
@@ -293,9 +259,7 @@
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-LwdZHpScM4Qz8Xw2iKSzS+cfglZzJGvofQICy7W7v4caru4EaAmyUuO6BGrbyQ2mYV11W0U8j5mBhd14dd3B0A==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.29.0.tgz",
       "version": "7.29.0"
     },
     "node_modules/@emnapi/core": {
@@ -346,15 +310,11 @@
         "source-map": "^0.5.7",
         "stylis": "4.2.0"
       },
-      "integrity": "sha512-pxHCpT2ex+0q+HH91/zsdHkw/lXd468DIN2zvfvLtPKLLMo6gQj7oLObq8PhkrxOZb/gGCq03S3Z7PDhS8pduQ==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@emotion/babel-plugin/-/babel-plugin-11.13.5.tgz",
       "version": "11.13.5"
     },
     "node_modules/@emotion/babel-plugin/node_modules/convert-source-map": {
-      "integrity": "sha512-ASFBup0Mz1uyiIjANan1jzLQami9z1PoYSZCiiYW2FczPbenXc45FZdBZLzOT+r6+iciuEModtmCti+hjaAk0A==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/convert-source-map/-/convert-source-map-1.9.0.tgz",
       "version": "1.9.0"
     },
     "node_modules/@emotion/cache": {
@@ -365,21 +325,15 @@
         "@emotion/weak-memoize": "^0.4.0",
         "stylis": "4.2.0"
       },
-      "integrity": "sha512-L/B1lc/TViYk4DcpGxtAVbx0ZyiKM5ktoIyafGkH6zg/tj+mA+NE//aPYKG0k8kCHSHVJrpLpcAlOBEXQ3SavA==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@emotion/cache/-/cache-11.14.0.tgz",
       "version": "11.14.0"
     },
     "node_modules/@emotion/hash": {
-      "integrity": "sha512-MyqliTZGuOm3+5ZRSaaBGP3USLw6+EGykkwZns2EPC5g8jJ4z9OrdZY9apkl3+UP9+sdz76YYkwCKP5gh8iY3g==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@emotion/hash/-/hash-0.9.2.tgz",
       "version": "0.9.2"
     },
     "node_modules/@emotion/memoize": {
-      "integrity": "sha512-30FAj7/EoJ5mwVPOWhAyCX+FPfMDrVecJAM+Iw9NRoSl4BBAQeqj4cApHHUXOVvIPgLVDsCFoz/hGD+5QQD1GQ==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@emotion/memoize/-/memoize-0.9.0.tgz",
       "version": "0.9.0"
     },
     "node_modules/@emotion/react": {
@@ -393,7 +347,6 @@
         "@emotion/weak-memoize": "^0.4.0",
         "hoist-non-react-statics": "^3.3.1"
       },
-      "integrity": "sha512-O000MLDBDdk/EohJPFUqvnp4qnHeYkVP5B0xEG0D/L7cOKP9kefu2DXn8dj74cQfsEzUqh+sr1RzFqiL1o+PpA==",
       "license": "MIT",
       "peerDependencies": {
         "react": ">=16.8.0"
@@ -403,7 +356,6 @@
           "optional": true
         }
       },
-      "resolved": "https://registry.npmjs.org/@emotion/react/-/react-11.14.0.tgz",
       "version": "11.14.0"
     },
     "node_modules/@emotion/serialize": {
@@ -414,42 +366,30 @@
         "@emotion/utils": "^1.4.2",
         "csstype": "^3.0.2"
       },
-      "integrity": "sha512-EISGqt7sSNWHGI76hC7x1CksiXPahbxEOrC5RjmFRJTqLyEK9/9hZvBbiYn70dw4wuwMKiEMCUlR6ZXTSWQqxA==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@emotion/serialize/-/serialize-1.3.3.tgz",
       "version": "1.3.3"
     },
     "node_modules/@emotion/sheet": {
-      "integrity": "sha512-fTBW9/8r2w3dXWYM4HCB1Rdp8NLibOw2+XELH5m5+AkWiL/KqYX6dc0kKYlaYyKjrQ6ds33MCdMPEwgs2z1rqg==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@emotion/sheet/-/sheet-1.4.0.tgz",
       "version": "1.4.0"
     },
     "node_modules/@emotion/unitless": {
-      "integrity": "sha512-dFoMUuQA20zvtVTuxZww6OHoJYgrzfKM1t52mVySDJnMSEa08ruEvdYQbhvyu6soU+NeLVd3yKfTfT0NeV6qGg==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@emotion/unitless/-/unitless-0.10.0.tgz",
       "version": "0.10.0"
     },
     "node_modules/@emotion/use-insertion-effect-with-fallbacks": {
-      "integrity": "sha512-yJMtVdH59sxi/aVJBpk9FQq+OR8ll5GT8oWd57UpeaKEVGab41JWaCFA7FRLoMLloOZF/c/wsPoe+bfGmRKgDg==",
       "license": "MIT",
       "peerDependencies": {
         "react": ">=16.8.0"
       },
-      "resolved": "https://registry.npmjs.org/@emotion/use-insertion-effect-with-fallbacks/-/use-insertion-effect-with-fallbacks-1.2.0.tgz",
       "version": "1.2.0"
     },
     "node_modules/@emotion/utils": {
-      "integrity": "sha512-3vLclRofFziIa3J2wDh9jjbkUz9qk5Vi3IZ/FSTKViB0k+ef0fPV7dYrUIugbgupYDx7v9ud/SjrtEP8Y4xLoA==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@emotion/utils/-/utils-1.4.2.tgz",
       "version": "1.4.2"
     },
     "node_modules/@emotion/weak-memoize": {
-      "integrity": "sha512-snKqtPW01tN0ui7yu9rGv69aJXr/a/Ywvl11sUjNtEcRc+ng/mQriFL0wLXMef74iHa/EkftbDzU9F8iFbH+zg==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@emotion/weak-memoize/-/weak-memoize-0.4.0.tgz",
       "version": "0.4.0"
     },
     "node_modules/@eslint-community/eslint-utils": {
@@ -463,12 +403,10 @@
       "funding": {
         "url": "https://opencollective.com/eslint"
       },
-      "integrity": "sha512-phrYmNiYppR7znFEdqgfWHXR6NCkZEK7hwWDHZUjit/2/U0r6XvkDl0SYnoM51Hq7FhCGdLDT6zxCCOY1hexsQ==",
       "license": "MIT",
       "peerDependencies": {
         "eslint": "^6.0.0 || ^7.0.0 || >=8.0.0"
       },
-      "resolved": "https://registry.npmjs.org/@eslint-community/eslint-utils/-/eslint-utils-4.9.1.tgz",
       "version": "4.9.1"
     },
     "node_modules/@eslint-community/eslint-utils/node_modules/eslint-visitor-keys": {
@@ -479,9 +417,7 @@
       "funding": {
         "url": "https://opencollective.com/eslint"
       },
-      "integrity": "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag==",
       "license": "Apache-2.0",
-      "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-3.4.3.tgz",
       "version": "3.4.3"
     },
     "node_modules/@eslint-community/regexpp": {
@@ -489,9 +425,7 @@
       "engines": {
         "node": "^12.0.0 || ^14.0.0 || >=16.0.0"
       },
-      "integrity": "sha512-EriSTlt5OC9/7SXkRSCAhfSxxoSUgBm33OH+IkwbdpgoqsSsUg7y3uh+IICI/Qg4BBWr3U2i39RpmycbxMq4ew==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@eslint-community/regexpp/-/regexpp-4.12.2.tgz",
       "version": "4.12.2"
     },
     "node_modules/@eslint/config-array": {
@@ -504,9 +438,7 @@
       "engines": {
         "node": "^20.19.0 || ^22.13.0 || >=24"
       },
-      "integrity": "sha512-j+eEWmB6YYLwcNOdlwQ6L2OsptI/LO6lNBuLIqe5R7RetD658HLoF+Mn7LzYmAWWNNzdC6cqP+L6r8ujeYXWLw==",
       "license": "Apache-2.0",
-      "resolved": "https://registry.npmjs.org/@eslint/config-array/-/config-array-0.23.3.tgz",
       "version": "0.23.3"
     },
     "node_modules/@eslint/config-helpers": {
@@ -517,9 +449,7 @@
       "engines": {
         "node": "^20.19.0 || ^22.13.0 || >=24"
       },
-      "integrity": "sha512-a5MxrdDXEvqnIq+LisyCX6tQMPF/dSJpCfBgBauY+pNZ28yCtSsTvyTYrMhaI+LK26bVyCJfJkT0u8KIj2i1dQ==",
       "license": "Apache-2.0",
-      "resolved": "https://registry.npmjs.org/@eslint/config-helpers/-/config-helpers-0.5.2.tgz",
       "version": "0.5.2"
     },
     "node_modules/@eslint/core": {
@@ -530,9 +460,7 @@
       "engines": {
         "node": "^20.19.0 || ^22.13.0 || >=24"
       },
-      "integrity": "sha512-QUPblTtE51/7/Zhfv8BDwO0qkkzQL7P/aWWbqcf4xWLEYn1oKjdO0gglQBB4GAsu7u6wjijbCmzsUTy6mnk6oQ==",
       "license": "Apache-2.0",
-      "resolved": "https://registry.npmjs.org/@eslint/core/-/core-1.1.1.tgz",
       "version": "1.1.1"
     },
     "node_modules/@eslint/js": {
@@ -543,7 +471,6 @@
       "funding": {
         "url": "https://eslint.org/donate"
       },
-      "integrity": "sha512-zeR9k5pd4gxjZ0abRoIaxdc7I3nDktoXZk2qOv9gCNWx3mVwEn32VRhyLaRsDiJjTs0xq/T8mfPtyuXu7GWBcA==",
       "license": "MIT",
       "peerDependencies": {
         "eslint": "^10.0.0"
@@ -553,7 +480,6 @@
           "optional": true
         }
       },
-      "resolved": "https://registry.npmjs.org/@eslint/js/-/js-10.0.1.tgz",
       "version": "10.0.1"
     },
     "node_modules/@eslint/object-schema": {
@@ -561,9 +487,7 @@
       "engines": {
         "node": "^20.19.0 || ^22.13.0 || >=24"
       },
-      "integrity": "sha512-iM869Pugn9Nsxbh/YHRqYiqd23AmIbxJOcpUMOuWCVNdoQJ5ZtwL6h3t0bcZzJUlC3Dq9jCFCESBZnX0GTv7iQ==",
       "license": "Apache-2.0",
-      "resolved": "https://registry.npmjs.org/@eslint/object-schema/-/object-schema-3.0.3.tgz",
       "version": "3.0.3"
     },
     "node_modules/@eslint/plugin-kit": {
@@ -575,18 +499,14 @@
       "engines": {
         "node": "^20.19.0 || ^22.13.0 || >=24"
       },
-      "integrity": "sha512-iH1B076HoAshH1mLpHMgwdGeTs0CYwL0SPMkGuSebZrwBp16v415e9NZXg2jtrqPVQjf6IANe2Vtlr5KswtcZQ==",
       "license": "Apache-2.0",
-      "resolved": "https://registry.npmjs.org/@eslint/plugin-kit/-/plugin-kit-0.6.1.tgz",
       "version": "0.6.1"
     },
     "node_modules/@floating-ui/core": {
       "dependencies": {
         "@floating-ui/utils": "^0.2.10"
       },
-      "integrity": "sha512-C3HlIdsBxszvm5McXlB8PeOEWfBhcGBTZGkGlWc2U0KFY5IwG5OQEuQ8rq52DZmcHDlPLd+YFBK+cZcytwIFWg==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@floating-ui/core/-/core-1.7.4.tgz",
       "version": "1.7.4"
     },
     "node_modules/@floating-ui/dom": {
@@ -594,9 +514,7 @@
         "@floating-ui/core": "^1.7.4",
         "@floating-ui/utils": "^0.2.10"
       },
-      "integrity": "sha512-N0bD2kIPInNHUHehXhMke1rBGs1dwqvC9O9KYMyyjK7iXt7GAhnro7UlcuYcGdS/yYOlq0MAVgrow8IbWJwyqg==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@floating-ui/dom/-/dom-1.7.5.tgz",
       "version": "1.7.5"
     },
     "node_modules/@floating-ui/react": {
@@ -605,32 +523,26 @@
         "@floating-ui/utils": "^0.2.10",
         "tabbable": "^6.0.0"
       },
-      "integrity": "sha512-LGVZKHwmWGg6MRHjLLgsfyaX2y2aCNgnD1zT/E6B+/h+vxg+nIJUqHPAlTzsHDyqdgEpJ1Np5kxWuFEErXzoGg==",
       "license": "MIT",
       "peerDependencies": {
         "react": ">=17.0.0",
         "react-dom": ">=17.0.0"
       },
-      "resolved": "https://registry.npmjs.org/@floating-ui/react/-/react-0.27.17.tgz",
       "version": "0.27.17"
     },
     "node_modules/@floating-ui/react-dom": {
       "dependencies": {
         "@floating-ui/dom": "^1.7.5"
       },
-      "integrity": "sha512-0tLRojf/1Go2JgEVm+3Frg9A3IW8bJgKgdO0BN5RkF//ufuz2joZM63Npau2ff3J6lUVYgDSNzNkR+aH3IVfjg==",
       "license": "MIT",
       "peerDependencies": {
         "react": ">=16.8.0",
         "react-dom": ">=16.8.0"
       },
-      "resolved": "https://registry.npmjs.org/@floating-ui/react-dom/-/react-dom-2.1.7.tgz",
       "version": "2.1.7"
     },
     "node_modules/@floating-ui/utils": {
-      "integrity": "sha512-aGTxbpbg8/b5JfU1HXSrbH3wXZuLPJcNEcZQFMxLs3oSzgtVu6nFPkbbGGUvBcUjKV2YyB9Wxxabo+HEH9tcRQ==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@floating-ui/utils/-/utils-0.2.10.tgz",
       "version": "0.2.10"
     },
     "node_modules/@humanfs/core": {
@@ -638,9 +550,7 @@
       "engines": {
         "node": ">=18.18.0"
       },
-      "integrity": "sha512-5DyQ4+1JEUzejeK1JGICcideyfUbGixgS9jNgex5nqkW+cY7WZhxBigmieN5Qnw9ZosSNVC9KQKyb+GUaGyKUA==",
       "license": "Apache-2.0",
-      "resolved": "https://registry.npmjs.org/@humanfs/core/-/core-0.19.1.tgz",
       "version": "0.19.1"
     },
     "node_modules/@humanfs/node": {
@@ -652,9 +562,7 @@
       "engines": {
         "node": ">=18.18.0"
       },
-      "integrity": "sha512-/zUx+yOsIrG4Y43Eh2peDeKCxlRt/gET6aHfaKpuq267qXdYDFViVHfMaLyygZOnl0kGWxFIgsBy8QFuTLUXEQ==",
       "license": "Apache-2.0",
-      "resolved": "https://registry.npmjs.org/@humanfs/node/-/node-0.16.7.tgz",
       "version": "0.16.7"
     },
     "node_modules/@humanwhocodes/module-importer": {
@@ -666,9 +574,7 @@
         "type": "github",
         "url": "https://github.com/sponsors/nzakas"
       },
-      "integrity": "sha512-bxveV4V8v5Yb4ncFTT3rPSgZBOpCkjfK0y4oVVVJwIuDVBRMDXrPyXRL988i5ap9m9bnyEEjWfm5WkBmtffLfA==",
       "license": "Apache-2.0",
-      "resolved": "https://registry.npmjs.org/@humanwhocodes/module-importer/-/module-importer-1.0.1.tgz",
       "version": "1.0.1"
... diff truncated: showing 800 of 3756 lines

cursoragent and others added 2 commits March 24, 2026 10:09
Centralize pyarr compatibility in a shared adapter that preserves qBitrr's legacy client calls while mapping to pyarr v6 composition APIs. This reduces runtime breakage from pyarr upgrades and standardizes pyarr exception/type imports across Arr and WebUI paths.
Feramance added 2 commits March 24, 2026 11:54
Translate legacy qBitrr pyarr client init arguments (including host_url and URL-style positional host values) into pyarr v6 host/port/tls/base_path arguments to prevent runtime constructor failures.
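The translation described above can be illustrated with stdlib URL parsing. This is a hedged sketch: the keyword names (`host`, `port`, `tls`, `base_path`) are taken from the commit description, not from pyarr's actual signatures.

```python
from urllib.parse import urlparse


def split_host_url(host_url: str) -> dict:
    """Split a legacy host_url (e.g. 'https://example.com:7878/radarr')
    into host/port/tls/base_path keyword arguments."""
    parsed = urlparse(host_url)
    tls = parsed.scheme == "https"
    # Fall back to the scheme's default port when none is given.
    port = parsed.port or (443 if tls else 80)
    return {
        "host": parsed.hostname,
        "port": port,
        "tls": tls,
        "base_path": parsed.path.rstrip("/") or None,
    }


print(split_host_url("https://example.com:7878/radarr"))
# → {'host': 'example.com', 'port': 7878, 'tls': True, 'base_path': '/radarr'}
```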
Catch DelayLoopException in tracker-priority sorting so optimization failures do not abort the main torrent processing cycle.
Contributor

@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 1 potential issue.

There are 2 total unresolved issues (including 1 from previous review).

Autofix Details

Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: Missing sort_torrents attribute breaks PlaceHolderArr processing
    • Added a default self.sort_torrents = False in PlaceHolderArr.__init__ so non-qBit-managed placeholder categories no longer raise AttributeError in process_torrents.
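The autofix above is an instance of the defensive-default pattern: initialise every flag a processing path may consult, so subclasses that never load the relevant config cannot raise `AttributeError`. A minimal sketch (class and method names are illustrative, not the PR's actual code):

```python
class PlaceholderCategory:
    """Minimal sketch: always initialise flags consulted later."""

    def __init__(self) -> None:
        # Safe default; real Arr classes derive this from tracker config.
        self.sort_torrents = False

    def process(self) -> str:
        # Without the default above, this attribute access would raise
        # AttributeError for categories that skip config loading.
        return "sorted" if self.sort_torrents else "skipped"


print(PlaceholderCategory().process())  # prints "skipped"
```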

cursoragent and others added 4 commits March 24, 2026 11:12
…er for v6

- Re-export PyarrConnectionError from pyarr_compat; set api_ver for Radarr/Sonarr (v3) and Lidarr (v12025-03-24) when using pyarr v6 to skip version probe.
- Treat PyarrConnectionError like other Arr outages in run_search_loop (delay + continue).
- Retry PyarrConnectionError in _ARR_RETRY_EXCEPTIONS_EXTENDED for get_queue.
Contributor

@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 1 potential issue.

Autofix Details

Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: Sorting makes N API calls every cycle unconditionally
    • Added a queue-order short-circuit in _sort_torrents_by_tracker_priority so torrents_top_priority calls are skipped when current and desired order already match.
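The short-circuit plus move strategy can be sketched in isolation. Assumptions: each "move" models a qBittorrent top-priority call on a single hash, and moves are issued lowest priority first so the highest-priority hash ends up on top.

```python
def apply_top_priority_moves(queue: list[str], desired: list[str]) -> list[str]:
    """Simulate moving each hash to the top of the queue, lowest
    priority first: the final (highest-priority) move lands on top."""
    order = list(queue)
    for torrent_hash in reversed(desired):
        order.remove(torrent_hash)
        order.insert(0, torrent_hash)
    return order


queue = ["c", "a", "b"]
desired = ["a", "b", "c"]  # tracker priority, highest first
if queue != desired:  # the no-op short-circuit: skip API calls when already ordered
    queue = apply_top_priority_moves(queue, desired)
print(queue)  # → ['a', 'b', 'c']
```

The equality check is what saves the N per-torrent API calls on cycles where the queue is already in tracker-priority order.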
Preview (48a7eff0c5)
diff --git a/SECURITY.md b/SECURITY.md
new file mode 100644
--- /dev/null
+++ b/SECURITY.md
@@ -1,0 +1,23 @@
+# Security Policy
+
+## Supported versions
+
+Security fixes are provided **only for the latest released version** of qBitrr. Older releases are not maintained for security patches. [Upgrade to the latest release](https://github.com/Feramance/qBitrr/releases/latest) to receive security updates.
+
+## Reporting a vulnerability
+
+Report security issues **privately** so they can be fixed before public disclosure.
+
+**Preferred:** Use GitHub's private reporting flow: open the [Security tab](https://github.com/Feramance/qBitrr/security), then use **Report a vulnerability**.
+
+Please include:
+
+- A clear description of the issue and its potential impact
+- Steps to reproduce (if possible)
+- The qBitrr version and environment you tested (OS, install method, relevant config if safe to share)
+
+We will acknowledge receipt when we can and coordinate on a fix and disclosure timeline where appropriate.
+
+## Coordinated disclosure
+
+Please do not publish details of an unfixed vulnerability until a fix is available, unless we agree otherwise.

diff --git a/docs/configuration/seeding.md b/docs/configuration/seeding.md
--- a/docs/configuration/seeding.md
+++ b/docs/configuration/seeding.md
@@ -51,6 +51,32 @@
 
 ---
 
+### SortTorrents
+
+**Type:** Boolean (per-tracker)
+**Default:** `false`
+
+Set on individual tracker entries in `[[qBit.Trackers]]` or `[[<Arr>.Torrent.Trackers]]`, **right under [Priority](#priority)**.
+
+When `true` on **any** configured tracker, qBitrr reorders the qBittorrent queue each processing cycle so torrents appear in order of their **tracker priority** (highest first). Torrents whose trackers are not in your configured trackers list receive the lowest priority and sink to the bottom of the queue.
+
+**Requirements:**
+
+- **qBittorrent Torrent Queuing** must be enabled (Options → BitTorrent → Torrent Queuing).
+
+**Example:**
+
+```toml
+[[Radarr-Movies.Torrent.Trackers]]
+Name = "BeyondHD"
+URI = "https://tracker.beyond-hd.me/announce"
+Priority = 10
+SortTorrents = true
+MaxUploadRatio = 1.0
+```
+
+---
+
 ## Global Seeding Settings
 
 ### Complete Example

diff --git a/docs/development/contributing.md b/docs/development/contributing.md
--- a/docs/development/contributing.md
+++ b/docs/development/contributing.md
@@ -31,6 +31,7 @@
 - [ ] Code follows [style guidelines](code-style.md)
 - [ ] Pre-commit hooks pass (`pre-commit run --all-files`)
 - [ ] Changes tested locally with live qBittorrent + Arr instances
+- [ ] If touching Arr integrations, validate against supported pyarr versions (v5 and v6)
 - [ ] Documentation updated (if adding features)
 - [ ] Commit messages follow conventional commits format
 

diff --git a/docs/development/index.md b/docs/development/index.md
--- a/docs/development/index.md
+++ b/docs/development/index.md
@@ -28,6 +28,7 @@
 - **Node.js 18+** - For WebUI development
 - **Git** - Version control
 - **Make** - Build automation (optional but recommended)
+- **pyarr compatibility** - qBitrr currently supports pyarr v5 and v6 (`pyarr>=5.2,<7`)
 
 ### Repository Structure
 

diff --git a/qBitrr/arss.py b/qBitrr/arss.py
--- a/qBitrr/arss.py
+++ b/qBitrr/arss.py
@@ -24,9 +24,6 @@
 from jaraco.docker import is_docker
 from packaging import version as version_parser
 from peewee import DatabaseError, Model, OperationalError, SqliteDatabase
-from pyarr import LidarrAPI, RadarrAPI, SonarrAPI
-from pyarr.exceptions import PyarrResourceNotFound, PyarrServerError
-from pyarr.types import JsonObject
 from qbittorrentapi import TorrentDictionary, TorrentStates
 from ujson import JSONDecodeError
 
@@ -56,6 +53,15 @@
     UnhandledError,
 )
 from qBitrr.logger import run_logs
+from qBitrr.pyarr_compat import (
+    JsonObject,
+    LidarrAPI,
+    PyarrConnectionError,
+    PyarrResourceNotFound,
+    PyarrServerError,
+    RadarrAPI,
+    SonarrAPI,
+)
 from qBitrr.search_activity_store import (
     clear_search_activity,
     fetch_search_activities,
@@ -97,6 +103,7 @@
     requests.exceptions.ConnectionError,
     JSONDecodeError,
     requests.exceptions.RequestException,
+    PyarrConnectionError,
 )
 
 
@@ -329,6 +336,7 @@
         self._normalized_bad_tracker_msgs: set[str] = {
             msg.lower() for msg in self.seeding_mode_global_bad_tracker_msg if isinstance(msg, str)
         }
+        self.sort_torrents = any(i.get("SortTorrents", False) for i in self.monitored_trackers)
 
         if (
             self.auto_delete is True
@@ -594,6 +602,7 @@
         self.downloads_with_bad_error_message_blocklist = set()
         self.needs_cleanup = False
         self._warned_no_seeding_limits = False
+        self._torrent_important_trackers_cache: dict[str, tuple[set[str], set[str]]] = {}
 
         self.last_search_description: str | None = None
         self.last_search_timestamp: str | None = None
@@ -4870,6 +4879,63 @@
         )
         return all_torrents
 
+    def _sort_torrents_by_tracker_priority(
+        self,
+        torrents_with_instances: list[tuple[str, qbittorrentapi.TorrentDictionary]],
+    ) -> None:
+        """
+        Reorder torrents in each qBittorrent instance by tracker priority (highest first).
+        Requires qBittorrent Torrent Queuing to be enabled.
+        """
+        by_instance: dict[str, list[qbittorrentapi.TorrentDictionary]] = defaultdict(list)
+        for instance_name, torrent in torrents_with_instances:
+            by_instance[instance_name].append(torrent)
+
+        qbit_manager = self.manager.qbit_manager
+        for instance_name, torrent_list in by_instance.items():
+            client = qbit_manager.get_client(instance_name)
+            if client is None:
+                continue
+            try:
+                sorted_torrents = sorted(
+                    torrent_list,
+                    key=self._get_torrent_tracker_priority,
+                    reverse=True,
+                )
+                if len(sorted_torrents) > 1:
+                    # Skip queue updates when the current queue order already matches
+                    # desired tracker-priority ordering for this instance.
+                    current_queue_order = [
+                        torrent.hash
+                        for torrent in sorted(
+                            torrent_list,
+                            key=lambda torrent: (
+                                not (
+                                    isinstance(getattr(torrent, "priority", -1), int)
+                                    and getattr(torrent, "priority", -1) > 0
+                                ),
+                                getattr(torrent, "priority", -1),
+                            ),
+                        )
+                    ]
+                    desired_queue_order = [torrent.hash for torrent in sorted_torrents]
+                    if current_queue_order == desired_queue_order:
+                        continue
+                    # qBittorrent may ignore hash input ordering in batch topPrio calls.
+                    # Move torrents one-by-one (lowest first) to enforce tracker-priority order.
+                    for torrent in reversed(sorted_torrents):
+                        client.torrents_top_priority(torrent_hashes=[torrent.hash])
+            except (
+                DelayLoopException,
+                qbittorrentapi.exceptions.APIError,
+                qbittorrentapi.exceptions.APIConnectionError,
+            ) as e:
+                self.logger.warning(
+                    "Failed to sort torrents by tracker priority on instance '%s': %s",
+                    instance_name,
+                    e,
+                )
+
     def process_torrents(self):
         try:
             try:
@@ -4889,6 +4955,7 @@
                 ]
                 self._warned_no_seeding_limits = False
                 self.category_torrent_count = len(torrents_with_instances)
+                self._torrent_important_trackers_cache.clear()
                 if not len(torrents_with_instances):
                     raise DelayLoopException(length=LOOP_SLEEP_TIMER, error_type="no_downloads")
 
@@ -4931,6 +4998,8 @@
 
                 self.api_calls()
                 self.refresh_download_queue()
+                if self.sort_torrents:
+                    self._sort_torrents_by_tracker_priority(torrents_with_instances)
                 # Multi-instance: Process torrents from all instances
                 for instance_name, torrent in torrents_with_instances:
                     with contextlib.suppress(qbittorrentapi.NotFound404Error):
@@ -5728,8 +5797,12 @@
         )
 
     def _get_torrent_important_trackers(
-        self, torrent: qbittorrentapi.TorrentDictionary
+        self, torrent: qbittorrentapi.TorrentDictionary, *, use_cache: bool = True
     ) -> tuple[set[str], set[str]]:
+        torrent_hash = getattr(torrent, "hash", "")
+        if use_cache and torrent_hash:
+            if cached := self._torrent_important_trackers_cache.get(torrent_hash):
+                return cached
         try:
             current_tracker_urls = {
                 i.url.rstrip("/") for i in torrent.trackers if hasattr(i, "url")
@@ -5759,7 +5832,10 @@
             if _extract_tracker_host(uri) not in current_hosts
         }
         monitored_trackers = monitored_trackers.union(need_to_be_added)
-        return need_to_be_added, monitored_trackers
+        result = (need_to_be_added, monitored_trackers)
+        if use_cache and torrent_hash:
+            self._torrent_important_trackers_cache[torrent_hash] = result
+        return result
 
     @staticmethod
     def __return_max(x: dict):
@@ -5782,6 +5858,14 @@
         max_item = max(new_list, key=self.__return_max) if new_list else {}
         return max_item, set(itertools.chain.from_iterable(_list_of_tags))
 
+    def _get_torrent_tracker_priority(self, torrent: qbittorrentapi.TorrentDictionary) -> int:
+        """Return the tracker Priority for this torrent's most important monitored tracker."""
+        _, monitored_trackers = self._get_torrent_important_trackers(torrent)
+        most_important_tracker, _ = self._get_most_important_tracker_and_tags(
+            monitored_trackers, set()
+        )
+        return most_important_tracker.get("Priority", -100)
+
     def _resolve_hnr_clear_mode(self, tracker_or_config: dict) -> str:
         """Resolve HnR mode from single HitAndRunMode key: 'and' | 'or' | 'disabled'."""
         raw = tracker_or_config.get("HitAndRunMode")
@@ -5958,8 +6042,10 @@
         self.tracker_delay.add(torrent.hash)
         _remove_urls = set()
         need_to_be_added, monitored_trackers = self._get_torrent_important_trackers(torrent)
+        tracker_set_changed = False
         if need_to_be_added:
             torrent.add_trackers(need_to_be_added)
+            tracker_set_changed = True
         with contextlib.suppress(BaseException):
             for tracker in torrent.trackers:
                 tracker_url = getattr(tracker, "url", None)
@@ -5987,6 +6073,9 @@
             )
             with contextlib.suppress(qbittorrentapi.Conflict409Error):
                 torrent.remove_trackers(_remove_urls)
+            tracker_set_changed = True
+        if tracker_set_changed:
+            self._torrent_important_trackers_cache.pop(torrent.hash, None)
         most_important_tracker, unique_tags = self._get_most_important_tracker_and_tags(
             monitored_trackers, _remove_urls
         )
@@ -7269,29 +7358,38 @@
                     except Exception as e:
                         self.logger.exception(e, exc_info=sys.exc_info())
                     event.wait(LOOP_SLEEP_TIMER)
-                except DelayLoopException as e:
-                    if e.error_type == "qbit":
+                except (PyarrConnectionError, DelayLoopException) as e:
+                    if isinstance(e, PyarrConnectionError):
+                        self.logger.warning(
+                            "Could not reach %s Arr API during search loop: %s",
+                            self._name,
+                            e,
+                        )
+                        delay_exc = DelayLoopException(length=300, error_type="arr")
+                    else:
+                        delay_exc = e
+                    if delay_exc.error_type == "qbit":
                         self.logger.critical(
                             "Failed to connected to qBit client, sleeping for %s",
-                            timedelta(seconds=e.length),
+                            timedelta(seconds=delay_exc.length),
                         )
-                    elif e.error_type == "internet":
+                    elif delay_exc.error_type == "internet":
                         self.logger.critical(
                             "Failed to connected to the internet, sleeping for %s",
-                            timedelta(seconds=e.length),
+                            timedelta(seconds=delay_exc.length),
                         )
-                    elif e.error_type == "arr":
+                    elif delay_exc.error_type == "arr":
                         self.logger.critical(
                             "Failed to connected to the Arr instance, sleeping for %s",
-                            timedelta(seconds=e.length),
+                            timedelta(seconds=delay_exc.length),
                         )
-                    elif e.error_type == "delay":
+                    elif delay_exc.error_type == "delay":
                         self.logger.critical(
                             "Forced delay due to temporary issue with environment, "
                             "sleeping for %s",
-                            timedelta(seconds=e.length),
+                            timedelta(seconds=delay_exc.length),
                         )
-                    event.wait(e.length)
+                    event.wait(delay_exc.length)
                     self.manager.qbit_manager.should_delay_torrent_scan = False
                 except KeyboardInterrupt:
                     self.logger.hnotice("Detected Ctrl+C - Terminating process")
@@ -7441,6 +7539,7 @@
         self.downloads_with_bad_error_message_blocklist = set()
         self.needs_cleanup = False
         self._warned_no_seeding_limits = False
+        self._torrent_important_trackers_cache: dict[str, tuple[set[str], set[str]]] = {}
         self.custom_format_unmet_search = False
         self.do_not_remove_slow = False
         self.maximum_eta = CONFIG.get_duration("Settings.Torrent.MaximumETA", fallback=86400)
@@ -7459,6 +7558,7 @@
         self._add_trackers_if_missing = set()
         self._remove_trackers_if_exists = set()
         self._monitored_tracker_urls = set()
+        self.sort_torrents = False
         self.remove_dead_trackers = False
         self._remove_tracker_hosts = set()
         self._normalized_bad_tracker_msgs = set()
@@ -7594,6 +7694,7 @@
         self._remove_tracker_hosts = {
             h for u in self._remove_trackers_if_exists if (h := _extract_tracker_host(u))
         }
+        self.sort_torrents = any(i.get("SortTorrents", False) for i in self.monitored_trackers)
         self.logger.debug(
             "Applied qBit seeding config from section '%s' for category '%s': "
             "RemoveTorrent=%s, StalledDelay=%s",
@@ -7829,6 +7930,7 @@
                 ]
                 self._warned_no_seeding_limits = False
                 self.category_torrent_count = len(torrents_with_instances)
+                self._torrent_important_trackers_cache.clear()
                 if not torrents_with_instances:
                     raise DelayLoopException(length=LOOP_SLEEP_TIMER, error_type="no_downloads")
 
@@ -7838,6 +7940,8 @@
                 if self.manager.qbit_manager.should_delay_torrent_scan:
                     raise DelayLoopException(length=NO_INTERNET_SLEEP_TIMER, error_type="delay")
 
+                if self.sort_torrents:
+                    self._sort_torrents_by_tracker_priority(torrents_with_instances)
                 for instance_name, torrent in torrents_with_instances:
                     with contextlib.suppress(qbittorrentapi.NotFound404Error):
                         self._process_single_torrent(torrent, instance_name=instance_name)

diff --git a/qBitrr/pyarr_compat.py b/qBitrr/pyarr_compat.py
new file mode 100644
--- /dev/null
+++ b/qBitrr/pyarr_compat.py
@@ -1,0 +1,313 @@
+"""Compatibility layer for pyarr v5/v6 API differences."""
+
+from __future__ import annotations
+
+from typing import Any
+from urllib.parse import urlparse
+
+try:
+    # pyarr <= v5
+    from pyarr import LidarrAPI as _LegacyLidarrAPI
+    from pyarr import RadarrAPI as _LegacyRadarrAPI
+    from pyarr import SonarrAPI as _LegacySonarrAPI
+except ImportError:  # pragma: no cover - import path only differs by installed pyarr version
+    _LegacyLidarrAPI = None
+    _LegacyRadarrAPI = None
+    _LegacySonarrAPI = None
+
+try:
+    # pyarr >= v6
+    from pyarr import Lidarr as _Lidarr
+    from pyarr import Radarr as _Radarr
+    from pyarr import Sonarr as _Sonarr
+except ImportError:  # pragma: no cover - import path only differs by installed pyarr version
+    _Lidarr = None
+    _Radarr = None
+    _Sonarr = None
+
+try:
+    from pyarr.exceptions import PyarrResourceNotFound, PyarrServerError
+except ImportError:  # pragma: no cover
+    # Last-resort fallback keeps importers working even if pyarr reshuffles modules.
+    PyarrResourceNotFound = Exception
+    PyarrServerError = Exception
+
+try:
+    from pyarr.exceptions import PyarrConnectionError
+except ImportError:  # pragma: no cover
+
+    class PyarrConnectionError(ConnectionError):
+        """Placeholder when pyarr does not expose connection errors."""
+
+
+try:
+    from pyarr.types import JsonObject
+except ImportError:  # pragma: no cover
+    JsonObject = dict[str, Any]
+
+
+class _CompatArrClient:
+    """Adapter that preserves qBitrr's legacy pyarr call surface."""
+
+    def __init__(self, client: Any):
+        self._client = client
+
+    def __getattr__(self, name: str) -> Any:
+        return getattr(self._client, name)
+
+    def _legacy_call(self, method: str, *args: Any, **kwargs: Any) -> Any:
+        return getattr(self._client, method)(*args, **kwargs)
+
+    def _has_legacy(self, method: str) -> bool:
+        return hasattr(self._client, method)
+
+    def get_update(self) -> Any:
+        if self._has_legacy("get_update"):
+            return self._legacy_call("get_update")
+        return self._client.update.get()
+
+    def get_command(self, item_id: int | None = None) -> Any:
+        if self._has_legacy("get_command"):
+            if item_id is None:
+                return self._legacy_call("get_command")
+            return self._legacy_call("get_command", item_id)
+        return self._client.command.get(item_id=item_id)
+
+    def post_command(self, command: str, **kwargs: Any) -> Any:
+        if self._has_legacy("post_command"):
+            return self._legacy_call("post_command", command, **kwargs)
+        return self._client.command.execute(command, **kwargs)
+
+    def get_queue(self, **kwargs: Any) -> JsonObject:
+        if self._has_legacy("get_queue"):
+            return self._legacy_call("get_queue", **kwargs)
+        return self._client.queue.get(**kwargs)
+
+    def del_queue(
+        self,
+        item_id: int,
+        remove_from_client: bool | None = None,
+        blacklist: bool | None = None,
+        **kwargs: Any,
+    ) -> Any:
+        if self._has_legacy("del_queue"):
+            return self._legacy_call("del_queue", item_id, remove_from_client, blacklist, **kwargs)
+        blocklist = kwargs.pop("blocklist", blacklist)
+        return self._client.queue.delete(
+            item_id=item_id, remove_from_client=remove_from_client, blocklist=blocklist, **kwargs
+        )
+
+    def get_system_status(self) -> JsonObject:
+        if self._has_legacy("get_system_status"):
+            return self._legacy_call("get_system_status")
+        return self._client.system.get_status()
+
+    def get_quality_profile(self, item_id: int | None = None) -> Any:
+        if self._has_legacy("get_quality_profile"):
+            if item_id is None:
+                return self._legacy_call("get_quality_profile")
+            return self._legacy_call("get_quality_profile", item_id)
+        return self._client.quality_profile.get(item_id=item_id)
+
+    def get_series(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_series"):
+            if item_id is None and "id_" in kwargs:
+                item_id = kwargs.pop("id_")
+            if item_id is None:
+                return self._legacy_call("get_series", **kwargs)
+            return self._legacy_call("get_series", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.series.get(item_id=item_id, **kwargs)
+
+    def get_episode(self, item_id: int | None = None, series: bool = False, **kwargs: Any) -> Any:
+        if self._has_legacy("get_episode"):
+            if item_id is None:
+                item_id = kwargs.pop("id_", None)
+            if item_id is None:
+                return self._legacy_call("get_episode", **kwargs)
+            return self._legacy_call("get_episode", item_id, series, **kwargs)
+        if series:
+            return self._client.episode.get(series_id=item_id)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.episode.get(item_id=item_id, **kwargs)
+
+    def get_episode_file(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_episode_file"):
+            if item_id is None:
+                return self._legacy_call("get_episode_file", **kwargs)
+            return self._legacy_call("get_episode_file", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.episode_file.get(item_id=item_id, **kwargs)
+
+    def upd_episode(self, item_id: int, data: JsonObject) -> JsonObject:
+        if self._has_legacy("upd_episode"):
+            return self._legacy_call("upd_episode", item_id, data)
+        return self._client.episode.update(item_id=item_id, data=data)
+
+    def upd_series(self, data: JsonObject) -> JsonObject:
+        if self._has_legacy("upd_series"):
+            return self._legacy_call("upd_series", data)
+        return self._client.series.update(data=data)
+
+    def get_movie(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_movie"):
+            if item_id is None:
+                return self._legacy_call("get_movie", **kwargs)
+            return self._legacy_call("get_movie", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.movie.get(item_id=item_id, **kwargs)
+
+    def get_movie_file(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_movie_file"):
+            if item_id is None:
+                return self._legacy_call("get_movie_file", **kwargs)
+            return self._legacy_call("get_movie_file", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.movie_file.get(item_id=item_id, **kwargs)
+
+    def upd_movie(self, data: JsonObject, move_files: bool | None = None) -> JsonObject:
+        if self._has_legacy("upd_movie"):
+            if move_files is None:
+                return self._legacy_call("upd_movie", data)
+            return self._legacy_call("upd_movie", data, move_files)
+        return self._client.movie.update(data=data, move_files=move_files)
+
+    def get_artist(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_artist"):
+            if item_id is None and "id_" in kwargs:
+                item_id = kwargs.pop("id_")
+            if item_id is None:
+                return self._legacy_call("get_artist", **kwargs)
+            return self._legacy_call("get_artist", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.artist.get(item_id=item_id, **kwargs)
+
+    def get_album(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_album"):
+            if item_id is None:
+                return self._legacy_call("get_album", **kwargs)
+            return self._legacy_call("get_album", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        artist_id = kwargs.pop("artistId", kwargs.pop("artist_id", None))
+        return self._client.album.get(item_id=item_id, artist_id=artist_id, **kwargs)
+
+    def get_tracks(self, **kwargs: Any) -> Any:
+        if self._has_legacy("get_tracks"):
+            return self._legacy_call("get_tracks", **kwargs)
+        album_id = kwargs.pop("albumId", kwargs.pop("album_id", None))
+        artist_id = kwargs.pop("artistId", kwargs.pop("artist_id", None))
+        return self._client.track.get(album_id=album_id, artist_id=artist_id, **kwargs)
+
+    def get_track_file(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_track_file"):
+            if item_id is None:
+                return self._legacy_call("get_track_file", **kwargs)
+            return self._legacy_call("get_track_file", item_id, **kwargs)
+        if item_id is not None:
+            kwargs["track_file_ids"] = [item_id]
+        return self._client.track_file.get(**kwargs)
+
+    def upd_artist(self, data: JsonObject) -> JsonObject:
+        if self._has_legacy("upd_artist"):
+            return self._legacy_call("upd_artist", data)
+        return self._client.artist.update(data=data)
+
+
+def _normalize_v6_client_args(
+    args: tuple[Any, ...],
+    kwargs: dict[str, Any],
+    default_port: int,
+    *,
+    default_api_ver: str | None = None,
+) -> tuple[tuple[Any, ...], dict[str, Any]]:
+    """Map legacy qBitrr constructor args into pyarr v6 constructor shape."""
+    new_args = list(args)
+    new_kwargs = dict(kwargs)
+
+    host_url = new_kwargs.pop("host_url", None)
+    if host_url and "host" not in new_kwargs:
+        new_kwargs["host"] = host_url
+
+    # qBitrr frequently passes a full URL as first positional argument.
+    if new_args and isinstance(new_args[0], str) and "host" not in new_kwargs:
+        new_kwargs["host"] = new_args.pop(0)
+        if new_args and "api_key" not in new_kwargs:
+            new_kwargs["api_key"] = new_args.pop(0)
+
+    host_value = new_kwargs.get("host")
+    if isinstance(host_value, str):
+        parsed = urlparse(host_value)
+        if parsed.scheme and parsed.netloc:
+            if parsed.hostname:
+                new_kwargs["host"] = parsed.hostname
+            if "port" not in new_kwargs:
+                new_kwargs["port"] = parsed.port or default_port
+            if "tls" not in new_kwargs:
+                new_kwargs["tls"] = parsed.scheme.lower() == "https"
+            if "base_path" not in new_kwargs and parsed.path not in ("", "/"):
+                new_kwargs["base_path"] = parsed.path.rstrip("/")
+
+    if "port" not in new_kwargs:
+        new_kwargs["port"] = default_port
+
+    if default_api_ver is not None and "api_ver" not in new_kwargs:
+        new_kwargs["api_ver"] = default_api_ver
+
+    return tuple(new_args), new_kwargs
+
+
+class RadarrAPI(_CompatArrClient):
+    def __init__(self, *args: Any, **kwargs: Any):
+        if _LegacyRadarrAPI is not None:
+            super().__init__(_LegacyRadarrAPI(*args, **kwargs))
+            return
+        if _Radarr is None:
+            raise ImportError("pyarr Radarr client not found")
+        call_args, call_kwargs = _normalize_v6_client_args(
+            args, kwargs, default_port=7878, default_api_ver="v3"
+        )
+        super().__init__(_Radarr(*call_args, **call_kwargs))
+
+
+class SonarrAPI(_CompatArrClient):
+    def __init__(self, *args: Any, **kwargs: Any):
+        if _LegacySonarrAPI is not None:
+            super().__init__(_LegacySonarrAPI(*args, **kwargs))
+            return
+        if _Sonarr is None:
+            raise ImportError("pyarr Sonarr client not found")
+        call_args, call_kwargs = _normalize_v6_client_args(
+            args, kwargs, default_port=8989, default_api_ver="v3"
+        )
+        super().__init__(_Sonarr(*call_args, **call_kwargs))
+
+
+class LidarrAPI(_CompatArrClient):
+    def __init__(self, *args: Any, **kwargs: Any):
+        if _LegacyLidarrAPI is not None:
+            super().__init__(_LegacyLidarrAPI(*args, **kwargs))
+            return
+        if _Lidarr is None:
+            raise ImportError("pyarr Lidarr client not found")
+        call_args, call_kwargs = _normalize_v6_client_args(
+            args, kwargs, default_port=8686, default_api_ver="v1"
+        )
+        super().__init__(_Lidarr(*call_args, **call_kwargs))
+
+
+__all__ = [
+    "JsonObject",
+    "LidarrAPI",
+    "PyarrConnectionError",
+    "PyarrResourceNotFound",
+    "PyarrServerError",
+    "RadarrAPI",
+    "SonarrAPI",
+]

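For reference, the v5/v6 dispatch in `_CompatArrClient` boils down to the pattern below — a standalone sketch, where `LegacyClient` and `ModernClient` are hypothetical stand-ins for pyarr v5 (flat methods) and v6 (namespaced resources), not real pyarr classes:

```python
# Minimal sketch of the _CompatArrClient dispatch: prefer the flat v5 method
# when the wrapped client has it, else fall back to the v6 resource namespace.
from typing import Any


class CompatClient:
    def __init__(self, client: Any):
        self._client = client

    def get_system_status(self) -> dict:
        if hasattr(self._client, "get_system_status"):  # pyarr v5 surface
            return self._client.get_system_status()
        return self._client.system.get_status()  # pyarr v6 surface


class LegacyClient:  # v5-style: methods live directly on the client
    def get_system_status(self) -> dict:
        return {"version": "v5"}


class _System:
    def get_status(self) -> dict:
        return {"version": "v6"}


class ModernClient:  # v6-style: resources are namespaced attributes
    system = _System()


print(CompatClient(LegacyClient()).get_system_status())  # {'version': 'v5'}
print(CompatClient(ModernClient()).get_system_status())  # {'version': 'v6'}
```

Because the check is `hasattr` on the wrapped client rather than a version probe, the same adapter keeps working if a future pyarr restores a legacy method name.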
diff --git a/qBitrr/webui.py b/qBitrr/webui.py
--- a/qBitrr/webui.py
+++ b/qBitrr/webui.py
@@ -193,6 +193,14 @@
                 "WebUI configured to listen on %s. Expose this only behind a trusted reverse proxy.",
                 self.host,
             )
+            if _auth_disabled():
+                self.logger.warning(
+                    "WebUI authentication is disabled: all API and WebUI actions are available "
+                    "without credentials to any client that can reach this port. If that is not "
+                    "intentional, enable authentication (see WebUI.AuthDisabled and login/token in "
+                    "the docs), bind WebUI.Host to 127.0.0.1, or place the service behind a "
+                    "trusted reverse proxy with its own access controls."
+                )
         self.app.logger.handlers.clear()
         self.app.logger.propagate = True
         self.app.logger.setLevel(self.logger.level)
@@ -3195,15 +3203,15 @@
                     # Create temporary Arr API client
                     self.logger.info("Creating temporary %s client for %s", arr_type, uri)
                     if arr_type == "radarr":
-                        from pyarr import RadarrAPI
+                        from qBitrr.pyarr_compat import RadarrAPI
 
                         client = RadarrAPI(uri, api_key)
                     elif arr_type == "sonarr":
-                        from pyarr import SonarrAPI
+                        from qBitrr.pyarr_compat import SonarrAPI
 
                         client = SonarrAPI(uri, api_key)
                     elif arr_type == "lidarr":
-                        from pyarr import LidarrAPI
+                        from qBitrr.pyarr_compat import LidarrAPI
 
                         client = LidarrAPI(uri, api_key)
                     else:
@@ -3226,8 +3234,9 @@
                     from json import JSONDecodeError
 
                     import requests
-                    from pyarr.exceptions import PyarrServerError
 
+                    from qBitrr.pyarr_compat import PyarrServerError
+
                     max_retries = 3
                     retry_count = 0
                     quality_profiles = []
@@ -3514,15 +3523,15 @@
         # Determine client class based on name
         client_cls = None
         if re.match(r"^(Rad|rad)arr", instance_name):
-            from pyarr import RadarrAPI
+            from qBitrr.pyarr_compat import RadarrAPI
 
             client_cls = RadarrAPI
         elif re.match(r"^(Son|son|Anim|anim)arr", instance_name):
-            from pyarr import SonarrAPI
+            from qBitrr.pyarr_compat import SonarrAPI
 
             client_cls = SonarrAPI
         elif re.match(r"^(Lid|lid)arr", instance_name):
-            from pyarr import LidarrAPI
+            from qBitrr.pyarr_compat import LidarrAPI
 
             client_cls = LidarrAPI
         else:

diff --git a/setup.cfg b/setup.cfg
--- a/setup.cfg
+++ b/setup.cfg
@@ -70,7 +70,7 @@
     pathos>=0.3
     peewee>=3.17
     ping3>=4.0
-    pyarr>=5.2
+    pyarr>=5.2,<7
     qbittorrent-api>=2024.2
     requests>=2.31
     tomlkit>=0.12

diff --git a/webui/package-lock.json b/webui/package-lock.json
--- a/webui/package-lock.json
+++ b/webui/package-lock.json
@@ -54,9 +54,7 @@
       "funding": {
         "url": "https://github.com/sponsors/sindresorhus"
       },
-      "integrity": "sha512-UrcABB+4bUrFABwbluTIBErXwvbsU/V7TZWfmbgJfbkwiBuziS9gxdODUyuiecfdGQ85jglMW6juS3+z5TsKLw==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@alloc/quick-lru/-/quick-lru-5.2.0.tgz",
       "version": "5.2.0"
     },
     "node_modules/@babel/code-frame": {
@@ -68,9 +66,7 @@
       "engines": {
         "node": ">=6.9.0"
       },
-      "integrity": "sha512-9NhCeYjq9+3uxgdtp20LSiJXJvN0FeCtNGpJxuMFZ1Kv3cWUNb6DOhJwUvcVCzKGR66cw4njwM6hrJLqgOwbcw==",
       "license": "MIT",
-      "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.29.0.tgz",
       "version": "7.29.0"
     },
     "node_modules/@babel/compat-data": {
... diff truncated: showing 800 of 4191 lines

@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 1 potential issue.

Autofix Details

Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: Sort comparison mixes separate qBit queues, never stabilizes
    • I updated the queue-stability check (and reordering loop) to compare and apply tracker-priority ordering separately for downloading and seeding queues, matching qBittorrent’s independent queue semantics.
Preview (84c7cf4087)
diff --git a/SECURITY.md b/SECURITY.md
new file mode 100644
--- /dev/null
+++ b/SECURITY.md
@@ -1,0 +1,23 @@
+# Security Policy
+
+## Supported versions
+
+Security fixes are provided **only for the latest released version** of qBitrr. Older releases are not maintained for security patches. [Upgrade to the latest release](https://github.com/Feramance/qBitrr/releases/latest) to receive security updates.
+
+## Reporting a vulnerability
+
+Report security issues **privately** so they can be fixed before public disclosure.
+
+**Preferred:** Use GitHub's private reporting flow: open the [Security tab](https://github.com/Feramance/qBitrr/security), then use **Report a vulnerability**.
+
+Please include:
+
+- A clear description of the issue and its potential impact
+- Steps to reproduce (if possible)
+- The qBitrr version and environment you tested (OS, install method, relevant config if safe to share)
+
+We will acknowledge receipt as soon as we can and coordinate a fix and disclosure timeline where appropriate.
+
+## Coordinated disclosure
+
+Please do not publish details of an unfixed vulnerability until a fix is available, unless we agree otherwise.

diff --git a/docs/configuration/seeding.md b/docs/configuration/seeding.md
--- a/docs/configuration/seeding.md
+++ b/docs/configuration/seeding.md
@@ -51,6 +51,32 @@
 
 ---
 
+### SortTorrents
+
+**Type:** Boolean (per-tracker)
+**Default:** `false`
+
+Set on individual tracker entries in `[[qBit.Trackers]]` or `[[<Arr>.Torrent.Trackers]]`, **right under [Priority](#priority)**.
+
+When `true` on **any** configured tracker, qBitrr reorders the qBittorrent queue each processing cycle so torrents are ranked by their **tracker priority** (highest first). Torrents whose trackers are not in your configured trackers list are assigned the lowest priority and sink to the bottom of the queue.
+
+**Requirements:**
+
+- **qBittorrent Torrent Queuing** must be enabled (Options → BitTorrent → Torrent Queuing).
+
+**Example:**
+
+```toml
+[[Radarr-Movies.Torrent.Trackers]]
+Name = "BeyondHD"
+URI = "https://tracker.beyond-hd.me/announce"
+Priority = 10
+SortTorrents = true
+MaxUploadRatio = 1.0
+```
+
+---
+
 ## Global Seeding Settings
 
 ### Complete Example

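The ordering rule described in the seeding docs can be sketched in isolation — the URIs, priorities, and torrent names below are illustrative, not a required configuration:

```python
# Sketch of the SortTorrents ordering rule: each torrent takes the highest
# Priority among its configured trackers; unmatched trackers sink to the bottom.
TRACKERS = {
    "https://tracker.beyond-hd.me/announce": 10,
    "https://tracker.example.org/announce": 5,
}
UNKNOWN_PRIORITY = -100  # torrents with no configured tracker go last


def tracker_priority(announce_urls: set[str]) -> int:
    return max(
        (TRACKERS.get(url, UNKNOWN_PRIORITY) for url in announce_urls),
        default=UNKNOWN_PRIORITY,
    )


torrents = {
    "movie-a": {"https://tracker.example.org/announce"},
    "movie-b": {"https://unknown.example/announce"},
    "movie-c": {"https://tracker.beyond-hd.me/announce"},
}
ordered = sorted(
    torrents, key=lambda name: tracker_priority(torrents[name]), reverse=True
)
print(ordered)  # ['movie-c', 'movie-a', 'movie-b']
```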
diff --git a/docs/development/contributing.md b/docs/development/contributing.md
--- a/docs/development/contributing.md
+++ b/docs/development/contributing.md
@@ -31,6 +31,7 @@
 - [ ] Code follows [style guidelines](code-style.md)
 - [ ] Pre-commit hooks pass (`pre-commit run --all-files`)
 - [ ] Changes tested locally with live qBittorrent + Arr instances
+- [ ] If touching Arr integrations, validate against supported pyarr versions (v5 and v6)
 - [ ] Documentation updated (if adding features)
 - [ ] Commit messages follow conventional commits format
 

diff --git a/docs/development/index.md b/docs/development/index.md
--- a/docs/development/index.md
+++ b/docs/development/index.md
@@ -28,6 +28,7 @@
 - **Node.js 18+** - For WebUI development
 - **Git** - Version control
 - **Make** - Build automation (optional but recommended)
+- **pyarr compatibility** - qBitrr currently supports pyarr v5 and v6 (`pyarr>=5.2,<7`)
 
 ### Repository Structure
 

diff --git a/qBitrr/arss.py b/qBitrr/arss.py
--- a/qBitrr/arss.py
+++ b/qBitrr/arss.py
@@ -24,9 +24,6 @@
 from jaraco.docker import is_docker
 from packaging import version as version_parser
 from peewee import DatabaseError, Model, OperationalError, SqliteDatabase
-from pyarr import LidarrAPI, RadarrAPI, SonarrAPI
-from pyarr.exceptions import PyarrResourceNotFound, PyarrServerError
-from pyarr.types import JsonObject
 from qbittorrentapi import TorrentDictionary, TorrentStates
 from ujson import JSONDecodeError
 
@@ -56,6 +53,15 @@
     UnhandledError,
 )
 from qBitrr.logger import run_logs
+from qBitrr.pyarr_compat import (
+    JsonObject,
+    LidarrAPI,
+    PyarrConnectionError,
+    PyarrResourceNotFound,
+    PyarrServerError,
+    RadarrAPI,
+    SonarrAPI,
+)
 from qBitrr.search_activity_store import (
     clear_search_activity,
     fetch_search_activities,
@@ -97,6 +103,7 @@
     requests.exceptions.ConnectionError,
     JSONDecodeError,
     requests.exceptions.RequestException,
+    PyarrConnectionError,
 )
 
 
@@ -329,6 +336,7 @@
         self._normalized_bad_tracker_msgs: set[str] = {
             msg.lower() for msg in self.seeding_mode_global_bad_tracker_msg if isinstance(msg, str)
         }
+        self.sort_torrents = any(i.get("SortTorrents", False) for i in self.monitored_trackers)
 
         if (
             self.auto_delete is True
@@ -594,6 +602,7 @@
         self.downloads_with_bad_error_message_blocklist = set()
         self.needs_cleanup = False
         self._warned_no_seeding_limits = False
+        self._torrent_important_trackers_cache: dict[str, tuple[set[str], set[str]]] = {}
 
         self.last_search_description: str | None = None
         self.last_search_timestamp: str | None = None
@@ -4870,6 +4879,98 @@
         )
         return all_torrents
 
+    def _sort_torrents_by_tracker_priority(
+        self,
+        torrents_with_instances: list[tuple[str, qbittorrentapi.TorrentDictionary]],
+    ) -> None:
+        """
+        Reorder torrents in each qBittorrent instance by tracker priority (highest first).
+        Requires qBittorrent Torrent Queuing to be enabled.
+        """
+        by_instance: dict[str, list[qbittorrentapi.TorrentDictionary]] = defaultdict(list)
+        for instance_name, torrent in torrents_with_instances:
+            by_instance[instance_name].append(torrent)
+
+        qbit_manager = self.manager.qbit_manager
+        for instance_name, torrent_list in by_instance.items():
+            client = qbit_manager.get_client(instance_name)
+            if client is None:
+                continue
+            try:
+                sorted_torrents = sorted(
+                    torrent_list,
+                    key=self._get_torrent_tracker_priority,
+                    reverse=True,
+                )
+                if len(sorted_torrents) > 1:
+                    # Skip queue updates when the current queue order already matches
+                    # desired tracker-priority ordering for this instance.
+                    queue_membership = {
+                        torrent.hash: self.is_complete_state(torrent) for torrent in torrent_list
+                    }
+                    current_order_by_qbit_priority = sorted(
+                        torrent_list,
+                        key=lambda torrent: (
+                            not (
+                                isinstance(getattr(torrent, "priority", -1), int)
+                                and getattr(torrent, "priority", -1) > 0
+                            ),
+                            getattr(torrent, "priority", -1),
+                        ),
+                    )
+                    current_downloading_order = [
+                        torrent.hash
+                        for torrent in current_order_by_qbit_priority
+                        if not queue_membership.get(torrent.hash, False)
+                    ]
+                    current_seeding_order = [
+                        torrent.hash
+                        for torrent in current_order_by_qbit_priority
+                        if queue_membership.get(torrent.hash, False)
+                    ]
+                    desired_downloading_order = [
+                        torrent.hash
+                        for torrent in sorted_torrents
+                        if not queue_membership.get(torrent.hash, False)
+                    ]
+                    desired_seeding_order = [
+                        torrent.hash
+                        for torrent in sorted_torrents
+                        if queue_membership.get(torrent.hash, False)
+                    ]
+                    if (
+                        current_downloading_order == desired_downloading_order
+                        and current_seeding_order == desired_seeding_order
+                    ):
+                        continue
+                    # qBittorrent may ignore hash input ordering in batch topPrio calls.
+                    # Move torrents one-by-one (lowest first) to enforce tracker-priority
+                    # order within each queue, since qBittorrent keeps download/upload
+                    # queues separate.
+                    for queue_is_seeding in (False, True):
+                        queue_torrents = [
+                            torrent
+                            for torrent in sorted_torrents
+                            if queue_membership.get(torrent.hash, False) == queue_is_seeding
+                        ]
+                        for torrent in reversed(queue_torrents):
+                            client.torrents_top_priority(torrent_hashes=[torrent.hash])
+            except DelayLoopException as e:
+                self.logger.warning(
+                    "Failed to sort torrents by tracker priority on instance '%s': %s",
+                    instance_name,
+                    e,
+                )
+            except (
+                qbittorrentapi.exceptions.APIError,
+                qbittorrentapi.exceptions.APIConnectionError,
+            ) as e:
+                self.logger.warning(
+                    "Failed to sort torrents by tracker priority on instance '%s': %s",
+                    instance_name,
+                    e,
+                )
+
     def process_torrents(self):
         try:
             try:
@@ -4889,6 +4990,7 @@
                 ]
                 self._warned_no_seeding_limits = False
                 self.category_torrent_count = len(torrents_with_instances)
+                self._torrent_important_trackers_cache.clear()
                 if not len(torrents_with_instances):
                     raise DelayLoopException(length=LOOP_SLEEP_TIMER, error_type="no_downloads")
 
@@ -4931,6 +5033,8 @@
 
                 self.api_calls()
                 self.refresh_download_queue()
+                if self.sort_torrents:
+                    self._sort_torrents_by_tracker_priority(torrents_with_instances)
                 # Multi-instance: Process torrents from all instances
                 for instance_name, torrent in torrents_with_instances:
                     with contextlib.suppress(qbittorrentapi.NotFound404Error):
@@ -5728,8 +5832,12 @@
         )
 
     def _get_torrent_important_trackers(
-        self, torrent: qbittorrentapi.TorrentDictionary
+        self, torrent: qbittorrentapi.TorrentDictionary, *, use_cache: bool = True
     ) -> tuple[set[str], set[str]]:
+        torrent_hash = getattr(torrent, "hash", "")
+        if use_cache and torrent_hash:
+            if cached := self._torrent_important_trackers_cache.get(torrent_hash):
+                return cached
         try:
             current_tracker_urls = {
                 i.url.rstrip("/") for i in torrent.trackers if hasattr(i, "url")
@@ -5759,7 +5867,10 @@
             if _extract_tracker_host(uri) not in current_hosts
         }
         monitored_trackers = monitored_trackers.union(need_to_be_added)
-        return need_to_be_added, monitored_trackers
+        result = (need_to_be_added, monitored_trackers)
+        if use_cache and torrent_hash:
+            self._torrent_important_trackers_cache[torrent_hash] = result
+        return result
 
     @staticmethod
     def __return_max(x: dict):
@@ -5782,6 +5893,14 @@
         max_item = max(new_list, key=self.__return_max) if new_list else {}
         return max_item, set(itertools.chain.from_iterable(_list_of_tags))
 
+    def _get_torrent_tracker_priority(self, torrent: qbittorrentapi.TorrentDictionary) -> int:
+        """Return the tracker Priority for this torrent's most important monitored tracker."""
+        _, monitored_trackers = self._get_torrent_important_trackers(torrent)
+        most_important_tracker, _ = self._get_most_important_tracker_and_tags(
+            monitored_trackers, set()
+        )
+        return most_important_tracker.get("Priority", -100)
+
     def _resolve_hnr_clear_mode(self, tracker_or_config: dict) -> str:
         """Resolve HnR mode from single HitAndRunMode key: 'and' | 'or' | 'disabled'."""
         raw = tracker_or_config.get("HitAndRunMode")
@@ -5958,8 +6077,10 @@
         self.tracker_delay.add(torrent.hash)
         _remove_urls = set()
         need_to_be_added, monitored_trackers = self._get_torrent_important_trackers(torrent)
+        tracker_set_changed = False
         if need_to_be_added:
             torrent.add_trackers(need_to_be_added)
+            tracker_set_changed = True
         with contextlib.suppress(BaseException):
             for tracker in torrent.trackers:
                 tracker_url = getattr(tracker, "url", None)
@@ -5987,6 +6108,9 @@
             )
             with contextlib.suppress(qbittorrentapi.Conflict409Error):
                 torrent.remove_trackers(_remove_urls)
+            tracker_set_changed = True
+        if tracker_set_changed:
+            self._torrent_important_trackers_cache.pop(torrent.hash, None)
         most_important_tracker, unique_tags = self._get_most_important_tracker_and_tags(
             monitored_trackers, _remove_urls
         )
@@ -7269,29 +7393,38 @@
                     except Exception as e:
                         self.logger.exception(e, exc_info=sys.exc_info())
                     event.wait(LOOP_SLEEP_TIMER)
-                except DelayLoopException as e:
-                    if e.error_type == "qbit":
+                except (PyarrConnectionError, DelayLoopException) as e:
+                    if isinstance(e, PyarrConnectionError):
+                        self.logger.warning(
+                            "Could not reach %s Arr API during search loop: %s",
+                            self._name,
+                            e,
+                        )
+                        delay_exc = DelayLoopException(length=300, error_type="arr")
+                    else:
+                        delay_exc = e
+                    if delay_exc.error_type == "qbit":
                         self.logger.critical(
                             "Failed to connected to qBit client, sleeping for %s",
-                            timedelta(seconds=e.length),
+                            timedelta(seconds=delay_exc.length),
                         )
-                    elif e.error_type == "internet":
+                    elif delay_exc.error_type == "internet":
                         self.logger.critical(
                             "Failed to connected to the internet, sleeping for %s",
-                            timedelta(seconds=e.length),
+                            timedelta(seconds=delay_exc.length),
                         )
-                    elif e.error_type == "arr":
+                    elif delay_exc.error_type == "arr":
                         self.logger.critical(
                             "Failed to connected to the Arr instance, sleeping for %s",
-                            timedelta(seconds=e.length),
+                            timedelta(seconds=delay_exc.length),
                         )
-                    elif e.error_type == "delay":
+                    elif delay_exc.error_type == "delay":
                         self.logger.critical(
                             "Forced delay due to temporary issue with environment, "
                             "sleeping for %s",
-                            timedelta(seconds=e.length),
+                            timedelta(seconds=delay_exc.length),
                         )
-                    event.wait(e.length)
+                    event.wait(delay_exc.length)
                     self.manager.qbit_manager.should_delay_torrent_scan = False
                 except KeyboardInterrupt:
                     self.logger.hnotice("Detected Ctrl+C - Terminating process")
@@ -7441,6 +7574,7 @@
         self.downloads_with_bad_error_message_blocklist = set()
         self.needs_cleanup = False
         self._warned_no_seeding_limits = False
+        self._torrent_important_trackers_cache: dict[str, tuple[set[str], set[str]]] = {}
         self.custom_format_unmet_search = False
         self.do_not_remove_slow = False
         self.maximum_eta = CONFIG.get_duration("Settings.Torrent.MaximumETA", fallback=86400)
@@ -7459,6 +7593,7 @@
         self._add_trackers_if_missing = set()
         self._remove_trackers_if_exists = set()
         self._monitored_tracker_urls = set()
+        self.sort_torrents = False
         self.remove_dead_trackers = False
         self._remove_tracker_hosts = set()
         self._normalized_bad_tracker_msgs = set()
@@ -7594,6 +7729,7 @@
         self._remove_tracker_hosts = {
             h for u in self._remove_trackers_if_exists if (h := _extract_tracker_host(u))
         }
+        self.sort_torrents = any(i.get("SortTorrents", False) for i in self.monitored_trackers)
         self.logger.debug(
             "Applied qBit seeding config from section '%s' for category '%s': "
             "RemoveTorrent=%s, StalledDelay=%s",
@@ -7829,6 +7965,7 @@
                 ]
                 self._warned_no_seeding_limits = False
                 self.category_torrent_count = len(torrents_with_instances)
+                self._torrent_important_trackers_cache.clear()
                 if not torrents_with_instances:
                     raise DelayLoopException(length=LOOP_SLEEP_TIMER, error_type="no_downloads")
 
@@ -7838,6 +7975,8 @@
                 if self.manager.qbit_manager.should_delay_torrent_scan:
                     raise DelayLoopException(length=NO_INTERNET_SLEEP_TIMER, error_type="delay")
 
+                if self.sort_torrents:
+                    self._sort_torrents_by_tracker_priority(torrents_with_instances)
                 for instance_name, torrent in torrents_with_instances:
                     with contextlib.suppress(qbittorrentapi.NotFound404Error):
                         self._process_single_torrent(torrent, instance_name=instance_name)

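The Bugbot fix above hinges on one detail: qBittorrent maintains separate downloading and seeding queues, so the stability check must compare order per queue rather than across the combined list. A standalone sketch with hypothetical hashes:

```python
# Sketch of the per-queue no-op detection: only reorder when either the
# downloading or the seeding queue differs from the desired tracker order.
def split_queues(hashes: list[str], is_seeding: dict[str, bool]):
    downloading = [h for h in hashes if not is_seeding[h]]
    seeding = [h for h in hashes if is_seeding[h]]
    return downloading, seeding


is_seeding = {"a": False, "b": True, "c": False}
current = ["a", "c", "b"]   # current queue positions, lowest qBit priority first
desired = ["c", "a", "b"]   # tracker-priority order, highest Priority first

needs_reorder = split_queues(current, is_seeding) != split_queues(desired, is_seeding)
print(needs_reorder)  # True: downloading order ['a', 'c'] != ['c', 'a']
```

Comparing the flattened lists instead would report a mismatch even when both per-queue orders already match, which is why the original comparison never stabilized.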
diff --git a/qBitrr/pyarr_compat.py b/qBitrr/pyarr_compat.py
new file mode 100644
--- /dev/null
+++ b/qBitrr/pyarr_compat.py
@@ -1,0 +1,326 @@
+"""Compatibility layer for pyarr v5/v6 API differences."""
+
+from __future__ import annotations
+
+from typing import Any
+from urllib.parse import urlparse
+
+try:
+    # pyarr <= v5
+    from pyarr import LidarrAPI as _LegacyLidarrAPI
+    from pyarr import RadarrAPI as _LegacyRadarrAPI
+    from pyarr import SonarrAPI as _LegacySonarrAPI
+except ImportError:  # pragma: no cover - import path only differs by installed pyarr version
+    _LegacyLidarrAPI = None
+    _LegacyRadarrAPI = None
+    _LegacySonarrAPI = None
+
+try:
+    # pyarr >= v6
+    from pyarr import Lidarr as _Lidarr
+    from pyarr import Radarr as _Radarr
+    from pyarr import Sonarr as _Sonarr
+except ImportError:  # pragma: no cover - import path only differs by installed pyarr version
+    _Lidarr = None
+    _Radarr = None
+    _Sonarr = None
+
+try:
+    from pyarr.exceptions import PyarrResourceNotFound, PyarrServerError
+except ImportError:  # pragma: no cover
+    # Last-resort fallback keeps importers working even if pyarr reshuffles modules.
+    class PyarrResourceNotFound(Exception):
+        """Fallback pyarr resource-not-found exception type."""
+
+
+    class PyarrServerError(Exception):
+        """Fallback pyarr server-error exception type."""
+
+try:
+    from pyarr.exceptions import PyarrConnectionError
+except ImportError:  # pragma: no cover
+
+    class PyarrConnectionError(ConnectionError):
+        """Placeholder when pyarr does not expose connection errors."""
+
+
+try:
+    from pyarr.types import JsonObject
+except ImportError:  # pragma: no cover
+    JsonObject = dict[str, Any]
+
+
+class _CompatArrClient:
+    """Adapter that preserves qBitrr's legacy pyarr call surface."""
+
+    def __init__(self, client: Any):
+        self._client = client
+
+    def __getattr__(self, name: str) -> Any:
+        return getattr(self._client, name)
+
+    def _legacy_call(self, method: str, *args: Any, **kwargs: Any) -> Any:
+        return getattr(self._client, method)(*args, **kwargs)
+
+    def _has_legacy(self, method: str) -> bool:
+        return hasattr(self._client, method)
+
+    def get_update(self) -> Any:
+        if self._has_legacy("get_update"):
+            return self._legacy_call("get_update")
+        return self._client.update.get()
+
+    def get_command(self, item_id: int | None = None) -> Any:
+        if self._has_legacy("get_command"):
+            if item_id is None:
+                return self._legacy_call("get_command")
+            return self._legacy_call("get_command", item_id)
+        return self._client.command.get(item_id=item_id)
+
+    def post_command(self, command: str, **kwargs: Any) -> Any:
+        if self._has_legacy("post_command"):
+            return self._legacy_call("post_command", command, **kwargs)
+        return self._client.command.execute(command, **kwargs)
+
+    def get_queue(self, **kwargs: Any) -> JsonObject:
+        if self._has_legacy("get_queue"):
+            return self._legacy_call("get_queue", **kwargs)
+        return self._client.queue.get(**kwargs)
+
+    def del_queue(
+        self,
+        item_id: int,
+        remove_from_client: bool | None = None,
+        blacklist: bool | None = None,
+        **kwargs: Any,
+    ) -> Any:
+        if self._has_legacy("del_queue"):
+            return self._legacy_call("del_queue", item_id, remove_from_client, blacklist, **kwargs)
+        blocklist = kwargs.pop("blocklist", blacklist)
+        return self._client.queue.delete(
+            item_id=item_id, remove_from_client=remove_from_client, blocklist=blocklist, **kwargs
+        )
+
+    def get_system_status(self) -> JsonObject:
+        if self._has_legacy("get_system_status"):
+            return self._legacy_call("get_system_status")
+        return self._client.system.get_status()
+
+    def get_quality_profile(self, item_id: int | None = None) -> Any:
+        if self._has_legacy("get_quality_profile"):
+            if item_id is None:
+                return self._legacy_call("get_quality_profile")
+            return self._legacy_call("get_quality_profile", item_id)
+        return self._client.quality_profile.get(item_id=item_id)
+
+    def get_series(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_series"):
+            if item_id is None and "id_" in kwargs:
+                item_id = kwargs.pop("id_")
+            if item_id is None:
+                return self._legacy_call("get_series", **kwargs)
+            return self._legacy_call("get_series", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.series.get(item_id=item_id, **kwargs)
+
+    def get_episode(self, item_id: int | None = None, series: bool = False, **kwargs: Any) -> Any:
+        # Resolve the legacy `id_` kwarg before any branching so both the
+        # legacy path and the v6 series path see the same item id.
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        if self._has_legacy("get_episode"):
+            if item_id is None:
+                return self._legacy_call("get_episode", **kwargs)
+            return self._legacy_call("get_episode", item_id, series, **kwargs)
+        if series:
+            return self._client.episode.get(series_id=item_id)
+        return self._client.episode.get(item_id=item_id, **kwargs)
+
+    def get_episode_file(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_episode_file"):
+            if item_id is None:
+                return self._legacy_call("get_episode_file", **kwargs)
+            return self._legacy_call("get_episode_file", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.episode_file.get(item_id=item_id, **kwargs)
+
+    def upd_episode(self, item_id: int, data: JsonObject) -> JsonObject:
+        if self._has_legacy("upd_episode"):
+            return self._legacy_call("upd_episode", item_id, data)
+        return self._client.episode.update(item_id=item_id, data=data)
+
+    def upd_series(self, data: JsonObject) -> JsonObject:
+        if self._has_legacy("upd_series"):
+            return self._legacy_call("upd_series", data)
+        return self._client.series.update(data=data)
+
+    def get_movie(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_movie"):
+            if item_id is None:
+                return self._legacy_call("get_movie", **kwargs)
+            return self._legacy_call("get_movie", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.movie.get(item_id=item_id, **kwargs)
+
+    def get_movie_file(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_movie_file"):
+            if item_id is None:
+                return self._legacy_call("get_movie_file", **kwargs)
+            return self._legacy_call("get_movie_file", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.movie_file.get(item_id=item_id, **kwargs)
+
+    def upd_movie(self, data: JsonObject, move_files: bool | None = None) -> JsonObject:
+        if self._has_legacy("upd_movie"):
+            if move_files is None:
+                return self._legacy_call("upd_movie", data)
+            return self._legacy_call("upd_movie", data, move_files)
+        return self._client.movie.update(data=data, move_files=move_files)
+
+    def get_artist(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_artist"):
+            if item_id is None and "id_" in kwargs:
+                item_id = kwargs.pop("id_")
+            if item_id is None:
+                return self._legacy_call("get_artist", **kwargs)
+            return self._legacy_call("get_artist", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.artist.get(item_id=item_id, **kwargs)
+
+    def get_album(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_album"):
+            if item_id is None:
+                return self._legacy_call("get_album", **kwargs)
+            return self._legacy_call("get_album", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        artist_id = kwargs.pop("artistId", kwargs.pop("artist_id", None))
+        return self._client.album.get(item_id=item_id, artist_id=artist_id, **kwargs)
+
+    def get_tracks(self, **kwargs: Any) -> Any:
+        if self._has_legacy("get_tracks"):
+            return self._legacy_call("get_tracks", **kwargs)
+        album_id = kwargs.pop("albumId", kwargs.pop("album_id", None))
+        artist_id = kwargs.pop("artistId", kwargs.pop("artist_id", None))
+        return self._client.track.get(album_id=album_id, artist_id=artist_id, **kwargs)
+
+    def get_track_file(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_track_file"):
+            if item_id is None:
+                return self._legacy_call("get_track_file", **kwargs)
+            return self._legacy_call("get_track_file", item_id, **kwargs)
+        if item_id is not None:
+            kwargs["track_file_ids"] = [item_id]
+        return self._client.track_file.get(**kwargs)
+
+    def upd_artist(self, data: JsonObject) -> JsonObject:
+        if self._has_legacy("upd_artist"):
+            return self._legacy_call("upd_artist", data)
+        return self._client.artist.update(data=data)
+
+
+def _normalize_v6_client_args(
+    args: tuple[Any, ...],
+    kwargs: dict[str, Any],
+    default_port: int,
+    *,
+    default_api_ver: str | None = None,
+) -> tuple[tuple[Any, ...], dict[str, Any]]:
+    """Map legacy qBitrr constructor args into pyarr v6 constructor shape."""
+    new_args = list(args)
+    new_kwargs = dict(kwargs)
+
+    host_url = new_kwargs.pop("host_url", None)
+    if host_url and "host" not in new_kwargs:
+        new_kwargs["host"] = host_url
+
+    # qBitrr frequently passes a full URL as first positional argument.
+    if new_args and isinstance(new_args[0], str) and "host" not in new_kwargs:
+        new_kwargs["host"] = new_args.pop(0)
+        if new_args and "api_key" not in new_kwargs:
+            new_kwargs["api_key"] = new_args.pop(0)
+
+    host_value = new_kwargs.get("host")
+    if isinstance(host_value, str):
+        parsed = urlparse(host_value)
+        if parsed.scheme and parsed.netloc:
+            if parsed.hostname:
+                new_kwargs["host"] = parsed.hostname
+            if "port" not in new_kwargs:
+                if parsed.port is not None:
+                    new_kwargs["port"] = parsed.port
+                else:
+                    scheme = parsed.scheme.lower()
+                    if scheme == "https":
+                        new_kwargs["port"] = 443
+                    elif scheme == "http":
+                        new_kwargs["port"] = 80
+                    else:
+                        new_kwargs["port"] = default_port
+            if "tls" not in new_kwargs:
+                new_kwargs["tls"] = parsed.scheme.lower() == "https"
+            if "base_path" not in new_kwargs and parsed.path not in ("", "/"):
+                new_kwargs["base_path"] = parsed.path.rstrip("/")
+
+    if "port" not in new_kwargs:
+        new_kwargs["port"] = default_port
+
+    if default_api_ver is not None and "api_ver" not in new_kwargs:
+        new_kwargs["api_ver"] = default_api_ver
+
+    return tuple(new_args), new_kwargs
+
+
+class RadarrAPI(_CompatArrClient):
+    def __init__(self, *args: Any, **kwargs: Any):
+        if _LegacyRadarrAPI is not None:
+            super().__init__(_LegacyRadarrAPI(*args, **kwargs))
+            return
+        if _Radarr is None:
+            raise ImportError("pyarr Radarr client not found")
+        call_args, call_kwargs = _normalize_v6_client_args(
+            args, kwargs, default_port=7878, default_api_ver="v3"
+        )
+        super().__init__(_Radarr(*call_args, **call_kwargs))
+
+
+class SonarrAPI(_CompatArrClient):
+    def __init__(self, *args: Any, **kwargs: Any):
+        if _LegacySonarrAPI is not None:
+            super().__init__(_LegacySonarrAPI(*args, **kwargs))
+            return
+        if _Sonarr is None:
+            raise ImportError("pyarr Sonarr client not found")
+        call_args, call_kwargs = _normalize_v6_client_args(
+            args, kwargs, default_port=8989, default_api_ver="v3"
+        )
+        super().__init__(_Sonarr(*call_args, **call_kwargs))
+
+
+class LidarrAPI(_CompatArrClient):
+    def __init__(self, *args: Any, **kwargs: Any):
+        if _LegacyLidarrAPI is not None:
+            super().__init__(_LegacyLidarrAPI(*args, **kwargs))
+            return
+        if _Lidarr is None:
+            raise ImportError("pyarr Lidarr client not found")
+        call_args, call_kwargs = _normalize_v6_client_args(
+            args, kwargs, default_port=8686, default_api_ver="v1"
+        )
+        super().__init__(_Lidarr(*call_args, **call_kwargs))
+
+
+__all__ = [
+    "JsonObject",
+    "LidarrAPI",
+    "PyarrConnectionError",
+    "PyarrResourceNotFound",
+    "PyarrServerError",
+    "RadarrAPI",
+    "SonarrAPI",
+]
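
For reviewers, a standalone sketch of the host handling performed by `_normalize_v6_client_args` above (this reimplements the URL-splitting logic for illustration; it is not the actual qBitrr code):

```python
from urllib.parse import urlparse


def normalize_host(host_url: str, default_port: int) -> dict:
    """Split a full URL into the host/port/tls/base_path kwargs pyarr v6 expects."""
    kwargs = {"host": host_url, "port": default_port, "tls": False}
    parsed = urlparse(host_url)
    if parsed.scheme and parsed.netloc:  # only rewrite genuine URLs
        scheme = parsed.scheme.lower()
        kwargs["host"] = parsed.hostname or host_url
        if parsed.port is not None:
            kwargs["port"] = parsed.port
        elif scheme == "https":
            kwargs["port"] = 443
        elif scheme == "http":
            kwargs["port"] = 80
        kwargs["tls"] = scheme == "https"
        if parsed.path not in ("", "/"):
            kwargs["base_path"] = parsed.path.rstrip("/")
    return kwargs


print(normalize_host("https://radarr.example.com/radarr", 7878))
# → {'host': 'radarr.example.com', 'port': 443, 'tls': True, 'base_path': '/radarr'}
```

A bare hostname with no scheme falls through untouched and keeps the caller-supplied default port, matching the `parsed.scheme and parsed.netloc` guard in the diff.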

diff --git a/qBitrr/webui.py b/qBitrr/webui.py
--- a/qBitrr/webui.py
+++ b/qBitrr/webui.py
@@ -193,6 +193,14 @@
                 "WebUI configured to listen on %s. Expose this only behind a trusted reverse proxy.",
                 self.host,
             )
+            if _auth_disabled():
+                self.logger.warning(
+                    "WebUI authentication is disabled: all API and WebUI actions are available "
+                    "without credentials to any client that can reach this port. If that is not "
+                    "intentional, enable authentication (see WebUI.AuthDisabled and login/token in "
+                    "the docs), bind WebUI.Host to 127.0.0.1, or place the service behind a "
+                    "trusted reverse proxy with its own access controls."
+                )
         self.app.logger.handlers.clear()
         self.app.logger.propagate = True
         self.app.logger.setLevel(self.logger.level)
@@ -3195,15 +3203,15 @@
                     # Create temporary Arr API client
                     self.logger.info("Creating temporary %s client for %s", arr_type, uri)
                     if arr_type == "radarr":
-                        from pyarr import RadarrAPI
+                        from qBitrr.pyarr_compat import RadarrAPI
 
                         client = RadarrAPI(uri, api_key)
                     elif arr_type == "sonarr":
-                        from pyarr import SonarrAPI
+                        from qBitrr.pyarr_compat import SonarrAPI
 
                         client = SonarrAPI(uri, api_key)
                     elif arr_type == "lidarr":
-                        from pyarr import LidarrAPI
+                        from qBitrr.pyarr_compat import LidarrAPI
 
                         client = LidarrAPI(uri, api_key)
                     else:
@@ -3226,8 +3234,9 @@
                     from json import JSONDecodeError
 
                     import requests
-                    from pyarr.exceptions import PyarrServerError
 
+                    from qBitrr.pyarr_compat import PyarrServerError
+
                     max_retries = 3
                     retry_count = 0
                     quality_profiles = []
@@ -3514,15 +3523,15 @@
         # Determine client class based on name
         client_cls = None
         if re.match(r"^(Rad|rad)arr", instance_name):
-            from pyarr import RadarrAPI
+            from qBitrr.pyarr_compat import RadarrAPI
 
             client_cls = RadarrAPI
... diff truncated: showing 800 of 4239 lines

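The reordering this PR introduces amounts to sorting the queue by each torrent's best monitored-tracker priority, with unconfigured trackers pushed to the bottom. A minimal sketch (tracker names, priorities, and the dict-based torrent shape are invented for illustration):

```python
# Hypothetical tracker config; in qBitrr this comes from [[...Trackers]] entries.
TRACKER_PRIORITY = {"BeyondHD": 10, "TorrentLeech": 5}


def tracker_priority(torrent: dict) -> int:
    # Unconfigured trackers get the sentinel lowest priority (-100 in the diff),
    # so they sink to the bottom of the queue.
    return TRACKER_PRIORITY.get(torrent["tracker"], -100)


queue = [
    {"name": "a", "tracker": "TorrentLeech"},
    {"name": "b", "tracker": "Unknown"},
    {"name": "c", "tracker": "BeyondHD"},
]
# Highest tracker priority first, mirroring the per-cycle reorder.
ordered = sorted(queue, key=tracker_priority, reverse=True)
print([t["name"] for t in ordered])  # → ['c', 'a', 'b']
```

The real implementation additionally splits the downloading and seeding queues and skips the qBittorrent API calls entirely when the current order already matches.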
cursoragent and others added 4 commits March 24, 2026 12:16
Treat PyarrConnectionError from api_calls as an Arr delay condition in process_torrents so transient network outages back off cleanly instead of bubbling as generic errors. Also tighten pyarr compat fallback formatting consistency.

cursor bot commented Mar 24, 2026

Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: Missing id_ kwarg extraction in v6 series=True path
    • I moved the v6 id_ kwarg fallback before the series=True branch so get_episode(id_=...) now correctly passes series_id.
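
The fixed dispatch order can be sketched as follows (a simplified stand-in; the real adapter also handles legacy pyarr v5 clients, and `EpisodeEndpoint`/`Client` here are stubs):

```python
class EpisodeEndpoint:
    """Stub for the pyarr v6 episode endpoint; records which kwargs it received."""

    def get(self, item_id=None, series_id=None):
        return {"item_id": item_id, "series_id": series_id}


class Client:
    episode = EpisodeEndpoint()


def get_episode(client, item_id=None, series=False, **kwargs):
    # Pull the legacy `id_` kwarg *before* branching on `series`, so that
    # get_episode(id_=..., series=True) reaches the series_id path intact.
    if item_id is None:
        item_id = kwargs.pop("id_", None)
    if series:
        return client.episode.get(series_id=item_id)
    return client.episode.get(item_id=item_id)


print(get_episode(Client(), id_=42, series=True))
# → {'item_id': None, 'series_id': 42}
```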
Preview (2c6b2f00cc)
diff --git a/SECURITY.md b/SECURITY.md
new file mode 100644
--- /dev/null
+++ b/SECURITY.md
@@ -1,0 +1,23 @@
+# Security Policy
+
+## Supported versions
+
+Security fixes are provided **only for the latest released version** of qBitrr. Older releases are not maintained for security patches. [Upgrade to the latest release](https://github.com/Feramance/qBitrr/releases/latest) to receive security updates.
+
+## Reporting a vulnerability
+
+Report security issues **privately** so they can be fixed before public disclosure.
+
+**Preferred:** Use GitHub's private reporting flow: open the [Security tab](https://github.com/Feramance/qBitrr/security), then use **Report a vulnerability**.
+
+Please include:
+
+- A clear description of the issue and its potential impact
+- Steps to reproduce (if possible)
+- The qBitrr version and environment you tested (OS, install method, relevant config if safe to share)
+
+We will acknowledge receipt as soon as we can, and coordinate a fix and disclosure timeline with you where appropriate.
+
+## Coordinated disclosure
+
+Please do not publish details of an unfixed vulnerability until a fix is available, unless we agree otherwise.

diff --git a/docs/configuration/seeding.md b/docs/configuration/seeding.md
--- a/docs/configuration/seeding.md
+++ b/docs/configuration/seeding.md
@@ -51,6 +51,32 @@
 
 ---
 
+### SortTorrents
+
+**Type:** Boolean (per-tracker)
+**Default:** `false`
+
+Set on individual tracker entries in `[[qBit.Trackers]]` or `[[<Arr>.Torrent.Trackers]]`, **right under [Priority](#priority)**.
+
+When `true` on **any** configured tracker, qBitrr reorders the qBittorrent queue on each processing cycle so torrents are queued in order of their **tracker priority** (highest first). Torrents whose trackers are not in the configured trackers list are assigned the lowest priority and sink to the bottom of the queue.
+
+**Requirements:**
+
+- **qBittorrent Torrent Queuing** must be enabled (Options → BitTorrent → Torrent Queuing).
+
+**Example:**
+
+```toml
+[[Radarr-Movies.Torrent.Trackers]]
+Name = "BeyondHD"
+URI = "https://tracker.beyond-hd.me/announce"
+Priority = 10
+SortTorrents = true
+MaxUploadRatio = 1.0
+```
+
+---
+
 ## Global Seeding Settings
 
 ### Complete Example

diff --git a/docs/development/contributing.md b/docs/development/contributing.md
--- a/docs/development/contributing.md
+++ b/docs/development/contributing.md
@@ -31,6 +31,7 @@
 - [ ] Code follows [style guidelines](code-style.md)
 - [ ] Pre-commit hooks pass (`pre-commit run --all-files`)
 - [ ] Changes tested locally with live qBittorrent + Arr instances
+- [ ] If touching Arr integrations, validate against supported pyarr versions (v5 and v6)
 - [ ] Documentation updated (if adding features)
 - [ ] Commit messages follow conventional commits format
 

diff --git a/docs/development/index.md b/docs/development/index.md
--- a/docs/development/index.md
+++ b/docs/development/index.md
@@ -28,6 +28,7 @@
 - **Node.js 18+** - For WebUI development
 - **Git** - Version control
 - **Make** - Build automation (optional but recommended)
+- **pyarr compatibility** - qBitrr currently supports pyarr v5 and v6 (`pyarr>=5.2,<7`)
 
 ### Repository Structure
 

diff --git a/qBitrr/arss.py b/qBitrr/arss.py
--- a/qBitrr/arss.py
+++ b/qBitrr/arss.py
@@ -24,9 +24,6 @@
 from jaraco.docker import is_docker
 from packaging import version as version_parser
 from peewee import DatabaseError, Model, OperationalError, SqliteDatabase
-from pyarr import LidarrAPI, RadarrAPI, SonarrAPI
-from pyarr.exceptions import PyarrResourceNotFound, PyarrServerError
-from pyarr.types import JsonObject
 from qbittorrentapi import TorrentDictionary, TorrentStates
 from ujson import JSONDecodeError
 
@@ -56,6 +53,15 @@
     UnhandledError,
 )
 from qBitrr.logger import run_logs
+from qBitrr.pyarr_compat import (
+    JsonObject,
+    LidarrAPI,
+    PyarrConnectionError,
+    PyarrResourceNotFound,
+    PyarrServerError,
+    RadarrAPI,
+    SonarrAPI,
+)
 from qBitrr.search_activity_store import (
     clear_search_activity,
     fetch_search_activities,
@@ -97,6 +103,7 @@
     requests.exceptions.ConnectionError,
     JSONDecodeError,
     requests.exceptions.RequestException,
+    PyarrConnectionError,
 )
 
 
@@ -329,6 +336,7 @@
         self._normalized_bad_tracker_msgs: set[str] = {
             msg.lower() for msg in self.seeding_mode_global_bad_tracker_msg if isinstance(msg, str)
         }
+        self.sort_torrents = any(i.get("SortTorrents", False) for i in self.monitored_trackers)
 
         if (
             self.auto_delete is True
@@ -594,6 +602,7 @@
         self.downloads_with_bad_error_message_blocklist = set()
         self.needs_cleanup = False
         self._warned_no_seeding_limits = False
+        self._torrent_important_trackers_cache: dict[str, tuple[set[str], set[str]]] = {}
 
         self.last_search_description: str | None = None
         self.last_search_timestamp: str | None = None
@@ -4870,6 +4879,98 @@
         )
         return all_torrents
 
+    def _sort_torrents_by_tracker_priority(
+        self,
+        torrents_with_instances: list[tuple[str, qbittorrentapi.TorrentDictionary]],
+    ) -> None:
+        """
+        Reorder torrents in each qBittorrent instance by tracker priority (highest first).
+        Requires qBittorrent Torrent Queuing to be enabled.
+        """
+        by_instance: dict[str, list[qbittorrentapi.TorrentDictionary]] = defaultdict(list)
+        for instance_name, torrent in torrents_with_instances:
+            by_instance[instance_name].append(torrent)
+
+        qbit_manager = self.manager.qbit_manager
+        for instance_name, torrent_list in by_instance.items():
+            client = qbit_manager.get_client(instance_name)
+            if client is None:
+                continue
+            try:
+                sorted_torrents = sorted(
+                    torrent_list,
+                    key=self._get_torrent_tracker_priority,
+                    reverse=True,
+                )
+                if len(sorted_torrents) > 1:
+                    # Skip queue updates when the current queue order already matches
+                    # desired tracker-priority ordering for this instance.
+                    queue_membership = {
+                        torrent.hash: self.is_complete_state(torrent) for torrent in torrent_list
+                    }
+                    current_order_by_qbit_priority = sorted(
+                        torrent_list,
+                        key=lambda torrent: (
+                            not (
+                                isinstance(getattr(torrent, "priority", -1), int)
+                                and getattr(torrent, "priority", -1) > 0
+                            ),
+                            getattr(torrent, "priority", -1),
+                        ),
+                    )
+                    current_downloading_order = [
+                        torrent.hash
+                        for torrent in current_order_by_qbit_priority
+                        if not queue_membership.get(torrent.hash, False)
+                    ]
+                    current_seeding_order = [
+                        torrent.hash
+                        for torrent in current_order_by_qbit_priority
+                        if queue_membership.get(torrent.hash, False)
+                    ]
+                    desired_downloading_order = [
+                        torrent.hash
+                        for torrent in sorted_torrents
+                        if not queue_membership.get(torrent.hash, False)
+                    ]
+                    desired_seeding_order = [
+                        torrent.hash
+                        for torrent in sorted_torrents
+                        if queue_membership.get(torrent.hash, False)
+                    ]
+                    if (
+                        current_downloading_order == desired_downloading_order
+                        and current_seeding_order == desired_seeding_order
+                    ):
+                        continue
+                    # qBittorrent may ignore hash input ordering in batch topPrio calls.
+                    # Move torrents one-by-one (lowest first) to enforce tracker-priority
+                    # order within each queue, since qBittorrent keeps download/upload
+                    # queues separate.
+                    for queue_is_seeding in (False, True):
+                        queue_torrents = [
+                            torrent
+                            for torrent in sorted_torrents
+                            if queue_membership.get(torrent.hash, False) == queue_is_seeding
+                        ]
+                        for torrent in reversed(queue_torrents):
+                            client.torrents_top_priority(torrent_hashes=[torrent.hash])
+            except DelayLoopException as e:
+                self.logger.warning(
+                    "Failed to sort torrents by tracker priority on instance '%s': %s",
+                    instance_name,
+                    e,
+                )
+            except (
+                qbittorrentapi.exceptions.APIError,
+                qbittorrentapi.exceptions.APIConnectionError,
+            ) as e:
+                self.logger.warning(
+                    "Failed to sort torrents by tracker priority on instance '%s': %s",
+                    instance_name,
+                    e,
+                )
+
     def process_torrents(self):
         try:
             try:
@@ -4889,6 +4990,7 @@
                 ]
                 self._warned_no_seeding_limits = False
                 self.category_torrent_count = len(torrents_with_instances)
+                self._torrent_important_trackers_cache.clear()
                 if not len(torrents_with_instances):
                     raise DelayLoopException(length=LOOP_SLEEP_TIMER, error_type="no_downloads")
 
@@ -4931,6 +5033,8 @@
 
                 self.api_calls()
                 self.refresh_download_queue()
+                if self.sort_torrents:
+                    self._sort_torrents_by_tracker_priority(torrents_with_instances)
                 # Multi-instance: Process torrents from all instances
                 for instance_name, torrent in torrents_with_instances:
                     with contextlib.suppress(qbittorrentapi.NotFound404Error):
@@ -4938,6 +5042,9 @@
                 self.process()
             except NoConnectionrException as e:
                 self.logger.error(e.message)
+            except PyarrConnectionError as e:
+                self.logger.warning("Couldn't connect to %s: %s", self.type, e)
+                raise DelayLoopException(length=300, error_type="arr") from e
             except requests.exceptions.ConnectionError:
                 self.logger.warning("Couldn't connect to %s", self.type)
                 self._temp_overseer_request_cache = defaultdict(set)
@@ -5728,8 +5835,12 @@
         )
 
     def _get_torrent_important_trackers(
-        self, torrent: qbittorrentapi.TorrentDictionary
+        self, torrent: qbittorrentapi.TorrentDictionary, *, use_cache: bool = True
     ) -> tuple[set[str], set[str]]:
+        torrent_hash = getattr(torrent, "hash", "")
+        if use_cache and torrent_hash:
+            if cached := self._torrent_important_trackers_cache.get(torrent_hash):
+                return cached
         try:
             current_tracker_urls = {
                 i.url.rstrip("/") for i in torrent.trackers if hasattr(i, "url")
@@ -5759,7 +5870,10 @@
             if _extract_tracker_host(uri) not in current_hosts
         }
         monitored_trackers = monitored_trackers.union(need_to_be_added)
-        return need_to_be_added, monitored_trackers
+        result = (need_to_be_added, monitored_trackers)
+        if use_cache and torrent_hash:
+            self._torrent_important_trackers_cache[torrent_hash] = result
+        return result
 
     @staticmethod
     def __return_max(x: dict):
@@ -5782,6 +5896,14 @@
         max_item = max(new_list, key=self.__return_max) if new_list else {}
         return max_item, set(itertools.chain.from_iterable(_list_of_tags))
 
+    def _get_torrent_tracker_priority(self, torrent: qbittorrentapi.TorrentDictionary) -> int:
+        """Return the tracker Priority for this torrent's most important monitored tracker."""
+        _, monitored_trackers = self._get_torrent_important_trackers(torrent)
+        most_important_tracker, _ = self._get_most_important_tracker_and_tags(
+            monitored_trackers, set()
+        )
+        return most_important_tracker.get("Priority", -100)
+
     def _resolve_hnr_clear_mode(self, tracker_or_config: dict) -> str:
         """Resolve HnR mode from single HitAndRunMode key: 'and' | 'or' | 'disabled'."""
         raw = tracker_or_config.get("HitAndRunMode")
@@ -5958,8 +6080,10 @@
         self.tracker_delay.add(torrent.hash)
         _remove_urls = set()
         need_to_be_added, monitored_trackers = self._get_torrent_important_trackers(torrent)
+        tracker_set_changed = False
         if need_to_be_added:
             torrent.add_trackers(need_to_be_added)
+            tracker_set_changed = True
         with contextlib.suppress(BaseException):
             for tracker in torrent.trackers:
                 tracker_url = getattr(tracker, "url", None)
@@ -5987,6 +6111,9 @@
             )
             with contextlib.suppress(qbittorrentapi.Conflict409Error):
                 torrent.remove_trackers(_remove_urls)
+            tracker_set_changed = True
+        if tracker_set_changed:
+            self._torrent_important_trackers_cache.pop(torrent.hash, None)
         most_important_tracker, unique_tags = self._get_most_important_tracker_and_tags(
             monitored_trackers, _remove_urls
         )
@@ -7269,29 +7396,38 @@
                     except Exception as e:
                         self.logger.exception(e, exc_info=sys.exc_info())
                     event.wait(LOOP_SLEEP_TIMER)
-                except DelayLoopException as e:
-                    if e.error_type == "qbit":
+                except (PyarrConnectionError, DelayLoopException) as e:
+                    if isinstance(e, PyarrConnectionError):
+                        self.logger.warning(
+                            "Could not reach %s Arr API during search loop: %s",
+                            self._name,
+                            e,
+                        )
+                        delay_exc = DelayLoopException(length=300, error_type="arr")
+                    else:
+                        delay_exc = e
+                    if delay_exc.error_type == "qbit":
                         self.logger.critical(
                             "Failed to connected to qBit client, sleeping for %s",
-                            timedelta(seconds=e.length),
+                            timedelta(seconds=delay_exc.length),
                         )
-                    elif e.error_type == "internet":
+                    elif delay_exc.error_type == "internet":
                         self.logger.critical(
                             "Failed to connected to the internet, sleeping for %s",
-                            timedelta(seconds=e.length),
+                            timedelta(seconds=delay_exc.length),
                         )
-                    elif e.error_type == "arr":
+                    elif delay_exc.error_type == "arr":
                         self.logger.critical(
                             "Failed to connected to the Arr instance, sleeping for %s",
-                            timedelta(seconds=e.length),
+                            timedelta(seconds=delay_exc.length),
                         )
-                    elif e.error_type == "delay":
+                    elif delay_exc.error_type == "delay":
                         self.logger.critical(
                             "Forced delay due to temporary issue with environment, "
                             "sleeping for %s",
-                            timedelta(seconds=e.length),
+                            timedelta(seconds=delay_exc.length),
                         )
-                    event.wait(e.length)
+                    event.wait(delay_exc.length)
                     self.manager.qbit_manager.should_delay_torrent_scan = False
                 except KeyboardInterrupt:
                     self.logger.hnotice("Detected Ctrl+C - Terminating process")
@@ -7441,6 +7577,7 @@
         self.downloads_with_bad_error_message_blocklist = set()
         self.needs_cleanup = False
         self._warned_no_seeding_limits = False
+        self._torrent_important_trackers_cache: dict[str, tuple[set[str], set[str]]] = {}
         self.custom_format_unmet_search = False
         self.do_not_remove_slow = False
         self.maximum_eta = CONFIG.get_duration("Settings.Torrent.MaximumETA", fallback=86400)
@@ -7459,6 +7596,7 @@
         self._add_trackers_if_missing = set()
         self._remove_trackers_if_exists = set()
         self._monitored_tracker_urls = set()
+        self.sort_torrents = False
         self.remove_dead_trackers = False
         self._remove_tracker_hosts = set()
         self._normalized_bad_tracker_msgs = set()
@@ -7594,6 +7732,7 @@
         self._remove_tracker_hosts = {
             h for u in self._remove_trackers_if_exists if (h := _extract_tracker_host(u))
         }
+        self.sort_torrents = any(i.get("SortTorrents", False) for i in self.monitored_trackers)
         self.logger.debug(
             "Applied qBit seeding config from section '%s' for category '%s': "
             "RemoveTorrent=%s, StalledDelay=%s",
@@ -7829,6 +7968,7 @@
                 ]
                 self._warned_no_seeding_limits = False
                 self.category_torrent_count = len(torrents_with_instances)
+                self._torrent_important_trackers_cache.clear()
                 if not torrents_with_instances:
                     raise DelayLoopException(length=LOOP_SLEEP_TIMER, error_type="no_downloads")
 
@@ -7838,6 +7978,8 @@
                 if self.manager.qbit_manager.should_delay_torrent_scan:
                     raise DelayLoopException(length=NO_INTERNET_SLEEP_TIMER, error_type="delay")
 
+                if self.sort_torrents:
+                    self._sort_torrents_by_tracker_priority(torrents_with_instances)
                 for instance_name, torrent in torrents_with_instances:
                     with contextlib.suppress(qbittorrentapi.NotFound404Error):
                         self._process_single_torrent(torrent, instance_name=instance_name)

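The hunk above only shows the call site for `_sort_torrents_by_tracker_priority`; the implementation is outside the visible diff. As a rough sketch of the behaviour the PR describes (higher-priority trackers float to the top, stable order elsewhere), tracker-priority sorting could look like the following — all names here (`Torrent`, `TRACKER_PRIORITY`, `torrent_priority`) are illustrative, not qBitrr's actual API:

```python
# Hypothetical sketch of tracker-priority sorting; names are illustrative,
# not qBitrr's real classes or config keys.
from dataclasses import dataclass, field


@dataclass
class Torrent:
    name: str
    trackers: list[str] = field(default_factory=list)


# Lower number = higher priority, mirroring "higher-priority trackers
# float to the top" from the PR description.
TRACKER_PRIORITY = {"tracker-a.example": 0, "tracker-b.example": 1}


def torrent_priority(torrent: Torrent) -> int:
    # A torrent inherits the best (lowest) priority of any of its trackers;
    # torrents with no configured tracker sink to the bottom.
    return min(
        (TRACKER_PRIORITY.get(t, 99) for t in torrent.trackers),
        default=99,
    )


def sort_by_tracker_priority(torrents: list[Torrent]) -> list[Torrent]:
    # sorted() is stable: ties keep their existing queue order, which is what
    # lets no-op detection (as described in the PR) skip redundant reorders.
    return sorted(torrents, key=torrent_priority)


queue = [
    Torrent("movie", ["tracker-b.example"]),
    Torrent("show", ["tracker-a.example"]),
    Torrent("album", ["unknown.example"]),
]
print([t.name for t in sort_by_tracker_priority(queue)])  # → ['show', 'movie', 'album']
```

A stable sort keyed on the best tracker priority also makes the cached no-op check cheap: if the sorted order equals the current order, no qBittorrent queue calls are needed that cycle.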
diff --git a/qBitrr/pyarr_compat.py b/qBitrr/pyarr_compat.py
new file mode 100644
--- /dev/null
+++ b/qBitrr/pyarr_compat.py
@@ -0,0 +1,327 @@
+"""Compatibility layer for pyarr v5/v6 API differences."""
+
+from __future__ import annotations
+
+from typing import Any
+from urllib.parse import urlparse
+
+try:
+    # pyarr <= v5
+    from pyarr import LidarrAPI as _LegacyLidarrAPI
+    from pyarr import RadarrAPI as _LegacyRadarrAPI
+    from pyarr import SonarrAPI as _LegacySonarrAPI
+except ImportError:  # pragma: no cover - import path only differs by installed pyarr version
+    _LegacyLidarrAPI = None
+    _LegacyRadarrAPI = None
+    _LegacySonarrAPI = None
+
+try:
+    # pyarr >= v6
+    from pyarr import Lidarr as _Lidarr
+    from pyarr import Radarr as _Radarr
+    from pyarr import Sonarr as _Sonarr
+except ImportError:  # pragma: no cover - import path only differs by installed pyarr version
+    _Lidarr = None
+    _Radarr = None
+    _Sonarr = None
+
+try:
+    from pyarr.exceptions import PyarrResourceNotFound, PyarrServerError
+except ImportError:  # pragma: no cover
+    # Last-resort fallback keeps importers working even if pyarr reshuffles modules.
+    class PyarrResourceNotFound(Exception):
+        """Fallback pyarr resource-not-found exception type."""
+
+    class PyarrServerError(Exception):
+        """Fallback pyarr server-error exception type."""
+
+
+try:
+    from pyarr.exceptions import PyarrConnectionError
+except ImportError:  # pragma: no cover
+
+    class PyarrConnectionError(ConnectionError):
+        """Placeholder when pyarr does not expose connection errors."""
+
+
+try:
+    from pyarr.types import JsonObject
+except ImportError:  # pragma: no cover
+    JsonObject = dict[str, Any]
+
+
+class _CompatArrClient:
+    """Adapter that preserves qBitrr's legacy pyarr call surface."""
+
+    def __init__(self, client: Any):
+        self._client = client
+
+    def __getattr__(self, name: str) -> Any:
+        return getattr(self._client, name)
+
+    def _legacy_call(self, method: str, *args: Any, **kwargs: Any) -> Any:
+        return getattr(self._client, method)(*args, **kwargs)
+
+    def _has_legacy(self, method: str) -> bool:
+        return hasattr(self._client, method)
+
+    def get_update(self) -> Any:
+        if self._has_legacy("get_update"):
+            return self._legacy_call("get_update")
+        return self._client.update.get()
+
+    def get_command(self, item_id: int | None = None) -> Any:
+        if self._has_legacy("get_command"):
+            if item_id is None:
+                return self._legacy_call("get_command")
+            return self._legacy_call("get_command", item_id)
+        return self._client.command.get(item_id=item_id)
+
+    def post_command(self, command: str, **kwargs: Any) -> Any:
+        if self._has_legacy("post_command"):
+            return self._legacy_call("post_command", command, **kwargs)
+        return self._client.command.execute(command, **kwargs)
+
+    def get_queue(self, **kwargs: Any) -> JsonObject:
+        if self._has_legacy("get_queue"):
+            return self._legacy_call("get_queue", **kwargs)
+        return self._client.queue.get(**kwargs)
+
+    def del_queue(
+        self,
+        item_id: int,
+        remove_from_client: bool | None = None,
+        blacklist: bool | None = None,
+        **kwargs: Any,
+    ) -> Any:
+        if self._has_legacy("del_queue"):
+            blocklist = kwargs.pop("blocklist", blacklist)
+            return self._legacy_call("del_queue", item_id, remove_from_client, blocklist, **kwargs)
+        blocklist = kwargs.pop("blocklist", blacklist)
+        return self._client.queue.delete(
+            item_id=item_id, remove_from_client=remove_from_client, blocklist=blocklist, **kwargs
+        )
+
+    def get_system_status(self) -> JsonObject:
+        if self._has_legacy("get_system_status"):
+            return self._legacy_call("get_system_status")
+        return self._client.system.get_status()
+
+    def get_quality_profile(self, item_id: int | None = None) -> Any:
+        if self._has_legacy("get_quality_profile"):
+            if item_id is None:
+                return self._legacy_call("get_quality_profile")
+            return self._legacy_call("get_quality_profile", item_id)
+        return self._client.quality_profile.get(item_id=item_id)
+
+    def get_series(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_series"):
+            if item_id is None and "id_" in kwargs:
+                item_id = kwargs.pop("id_")
+            if item_id is None:
+                return self._legacy_call("get_series", **kwargs)
+            return self._legacy_call("get_series", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.series.get(item_id=item_id, **kwargs)
+
+    def get_episode(self, item_id: int | None = None, series: bool = False, **kwargs: Any) -> Any:
+        if self._has_legacy("get_episode"):
+            if item_id is None:
+                item_id = kwargs.pop("id_", None)
+            if item_id is None:
+                return self._legacy_call("get_episode", **kwargs)
+            return self._legacy_call("get_episode", item_id, series, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        if series:
+            return self._client.episode.get(series_id=item_id)
+        return self._client.episode.get(item_id=item_id, **kwargs)
+
+    def get_episode_file(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_episode_file"):
+            if item_id is None:
+                return self._legacy_call("get_episode_file", **kwargs)
+            return self._legacy_call("get_episode_file", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.episode_file.get(item_id=item_id, **kwargs)
+
+    def upd_episode(self, item_id: int, data: JsonObject) -> JsonObject:
+        if self._has_legacy("upd_episode"):
+            return self._legacy_call("upd_episode", item_id, data)
+        return self._client.episode.update(item_id=item_id, data=data)
+
+    def upd_series(self, data: JsonObject) -> JsonObject:
+        if self._has_legacy("upd_series"):
+            return self._legacy_call("upd_series", data)
+        return self._client.series.update(data=data)
+
+    def get_movie(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_movie"):
+            if item_id is None:
+                return self._legacy_call("get_movie", **kwargs)
+            return self._legacy_call("get_movie", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.movie.get(item_id=item_id, **kwargs)
+
+    def get_movie_file(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_movie_file"):
+            if item_id is None:
+                return self._legacy_call("get_movie_file", **kwargs)
+            return self._legacy_call("get_movie_file", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.movie_file.get(item_id=item_id, **kwargs)
+
+    def upd_movie(self, data: JsonObject, move_files: bool | None = None) -> JsonObject:
+        if self._has_legacy("upd_movie"):
+            if move_files is None:
+                return self._legacy_call("upd_movie", data)
+            return self._legacy_call("upd_movie", data, move_files)
+        return self._client.movie.update(data=data, move_files=move_files)
+
+    def get_artist(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_artist"):
+            if item_id is None and "id_" in kwargs:
+                item_id = kwargs.pop("id_")
+            if item_id is None:
+                return self._legacy_call("get_artist", **kwargs)
+            return self._legacy_call("get_artist", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        return self._client.artist.get(item_id=item_id, **kwargs)
+
+    def get_album(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_album"):
+            if item_id is None:
+                return self._legacy_call("get_album", **kwargs)
+            return self._legacy_call("get_album", item_id, **kwargs)
+        if item_id is None:
+            item_id = kwargs.pop("id_", None)
+        artist_id = kwargs.pop("artistId", kwargs.pop("artist_id", None))
+        return self._client.album.get(item_id=item_id, artist_id=artist_id, **kwargs)
+
+    def get_tracks(self, **kwargs: Any) -> Any:
+        if self._has_legacy("get_tracks"):
+            return self._legacy_call("get_tracks", **kwargs)
+        album_id = kwargs.pop("albumId", kwargs.pop("album_id", None))
+        artist_id = kwargs.pop("artistId", kwargs.pop("artist_id", None))
+        return self._client.track.get(album_id=album_id, artist_id=artist_id, **kwargs)
+
+    def get_track_file(self, item_id: int | None = None, **kwargs: Any) -> Any:
+        if self._has_legacy("get_track_file"):
+            if item_id is None:
+                return self._legacy_call("get_track_file", **kwargs)
+            return self._legacy_call("get_track_file", item_id, **kwargs)
+        if item_id is not None:
+            kwargs["track_file_ids"] = [item_id]
+        return self._client.track_file.get(**kwargs)
+
+    def upd_artist(self, data: JsonObject) -> JsonObject:
+        if self._has_legacy("upd_artist"):
+            return self._legacy_call("upd_artist", data)
+        return self._client.artist.update(data=data)
+
+
+def _normalize_v6_client_args(
+    args: tuple[Any, ...],
+    kwargs: dict[str, Any],
+    default_port: int,
+    *,
+    default_api_ver: str | None = None,
+) -> tuple[tuple[Any, ...], dict[str, Any]]:
+    """Map legacy qBitrr constructor args into pyarr v6 constructor shape."""
+    new_args = list(args)
+    new_kwargs = dict(kwargs)
+
+    host_url = new_kwargs.pop("host_url", None)
+    if host_url and "host" not in new_kwargs:
+        new_kwargs["host"] = host_url
+
+    # qBitrr frequently passes a full URL as first positional argument.
+    if new_args and isinstance(new_args[0], str) and "host" not in new_kwargs:
+        new_kwargs["host"] = new_args.pop(0)
+        if new_args and "api_key" not in new_kwargs:
+            new_kwargs["api_key"] = new_args.pop(0)
+
+    host_value = new_kwargs.get("host")
+    if isinstance(host_value, str):
+        parsed = urlparse(host_value)
+        if parsed.scheme and parsed.netloc:
+            if parsed.hostname:
+                new_kwargs["host"] = parsed.hostname
+            if "port" not in new_kwargs:
+                if parsed.port is not None:
+                    new_kwargs["port"] = parsed.port
+                else:
+                    scheme = parsed.scheme.lower()
+                    if scheme == "https":
+                        new_kwargs["port"] = 443
+                    elif scheme == "http":
+                        new_kwargs["port"] = 80
+                    else:
+                        new_kwargs["port"] = default_port
+            if "tls" not in new_kwargs:
+                new_kwargs["tls"] = parsed.scheme.lower() == "https"
+            if "base_path" not in new_kwargs and parsed.path not in ("", "/"):
+                new_kwargs["base_path"] = parsed.path.rstrip("/")
+
+    if "port" not in new_kwargs:
+        new_kwargs["port"] = default_port
+
+    if default_api_ver is not None and "api_ver" not in new_kwargs:
+        new_kwargs["api_ver"] = default_api_ver
+
+    return tuple(new_args), new_kwargs
+
+
+class RadarrAPI(_CompatArrClient):
+    def __init__(self, *args: Any, **kwargs: Any):
+        if _LegacyRadarrAPI is not None:
+            super().__init__(_LegacyRadarrAPI(*args, **kwargs))
+            return
+        if _Radarr is None:
+            raise ImportError("pyarr Radarr client not found")
+        call_args, call_kwargs = _normalize_v6_client_args(
+            args, kwargs, default_port=7878, default_api_ver="v3"
+        )
+        super().__init__(_Radarr(*call_args, **call_kwargs))
+
+
+class SonarrAPI(_CompatArrClient):
+    def __init__(self, *args: Any, **kwargs: Any):
+        if _LegacySonarrAPI is not None:
+            super().__init__(_LegacySonarrAPI(*args, **kwargs))
+            return
+        if _Sonarr is None:
+            raise ImportError("pyarr Sonarr client not found")
+        call_args, call_kwargs = _normalize_v6_client_args(
+            args, kwargs, default_port=8989, default_api_ver="v3"
+        )
+        super().__init__(_Sonarr(*call_args, **call_kwargs))
+
+
+class LidarrAPI(_CompatArrClient):
+    def __init__(self, *args: Any, **kwargs: Any):
+        if _LegacyLidarrAPI is not None:
+            super().__init__(_LegacyLidarrAPI(*args, **kwargs))
+            return
+        if _Lidarr is None:
+            raise ImportError("pyarr Lidarr client not found")
+        call_args, call_kwargs = _normalize_v6_client_args(
+            args, kwargs, default_port=8686, default_api_ver="v1"
+        )
+        super().__init__(_Lidarr(*call_args, **call_kwargs))
+
+
+__all__ = [
+    "JsonObject",
+    "LidarrAPI",
+    "PyarrConnectionError",
+    "PyarrResourceNotFound",
+    "PyarrServerError",
+    "RadarrAPI",
+    "SonarrAPI",
+]

diff --git a/qBitrr/webui.py b/qBitrr/webui.py
--- a/qBitrr/webui.py
+++ b/qBitrr/webui.py
@@ -193,6 +193,14 @@
                 "WebUI configured to listen on %s. Expose this only behind a trusted reverse proxy.",
                 self.host,
             )
+            if _auth_disabled():
+                self.logger.warning(
+                    "WebUI authentication is disabled: all API and WebUI actions are available "
+                    "without credentials to any client that can reach this port. If that is not "
+                    "intentional, enable authentication (see WebUI.AuthDisabled and login/token in "
+                    "the docs), bind WebUI.Host to 127.0.0.1, or place the service behind a "
+                    "trusted reverse proxy with its own access controls."
+                )
         self.app.logger.handlers.clear()
         self.app.logger.propagate = True
         self.app.logger.setLevel(self.logger.level)
@@ -3195,15 +3203,15 @@
                     # Create temporary Arr API client
                     self.logger.info("Creating temporary %s client for %s", arr_type, uri)
                     if arr_type == "radarr":
-                        from pyarr import RadarrAPI
+                        from qBitrr.pyarr_compat import RadarrAPI
 
                         client = RadarrAPI(uri, api_key)
                     elif arr_type == "sonarr":
-                        from pyarr import SonarrAPI
+                        from qBitrr.pyarr_compat import SonarrAPI
 
                         client = SonarrAPI(uri, api_key)
                     elif arr_type == "lidarr":
-                        from pyarr import LidarrAPI
+                        from qBitrr.pyarr_compat import LidarrAPI
 
                         client = LidarrAPI(uri, api_key)
                     else:
@@ -3226,8 +3234,9 @@
                     from json import JSONDecodeError
 
                     import requests
-                    from pyarr.exceptions import PyarrServerError
 
+                    from qBitrr.pyarr_compat import PyarrServerError
+
... diff truncated: showing 800 of 4250 lines


@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


qBitrr/arss.py Outdated
)
delay_exc = DelayLoopException(length=300, error_type="arr")
else:
delay_exc = e

PyarrConnectionError handler in search loop is unreachable

Medium Severity

The isinstance(e, PyarrConnectionError) branch in the middle except handler is unreachable. The inner try-except block has a generic except Exception as e: handler that catches PyarrConnectionError first (since it's a subclass of Exception). Unlike DelayLoopException, which has an explicit except DelayLoopException: raise in the inner block to propagate it outward, there is no corresponding re-raise for PyarrConnectionError. As a result, pyarr connection errors during the search loop are silently logged as generic exceptions and the intended 300-second delay is never applied.
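The shadowing pattern Bugbot describes is easy to reproduce in isolation: an inner generic `except Exception` catches the subclass before it can reach the outer handler, unless the inner block re-raises it explicitly (the same pattern the code already uses for `DelayLoopException`). A minimal sketch, with illustrative return values standing in for the real delay logic:

```python
# Minimal reproduction of the unreachable-handler bug; the exception class
# mirrors pyarr's, the return strings are illustrative only.
class PyarrConnectionError(ConnectionError):
    pass


def buggy() -> str:
    try:
        try:
            raise PyarrConnectionError("arr down")
        except Exception:  # catches PyarrConnectionError too — outer handler never runs
            return "swallowed by generic handler"
    except PyarrConnectionError:
        return "outer handler applied 300s delay"  # unreachable


def fixed() -> str:
    try:
        try:
            raise PyarrConnectionError("arr down")
        except PyarrConnectionError:
            raise  # explicit re-raise, like the existing DelayLoopException pattern
        except Exception:
            return "swallowed by generic handler"
    except PyarrConnectionError:
        return "outer handler applied 300s delay"


print(buggy())  # → swallowed by generic handler
print(fixed())  # → outer handler applied 300s delay
```

Adding an `except PyarrConnectionError: raise` clause before the generic handler in the inner block would let the outer branch apply the intended 300-second delay.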

Additional Locations (1)


Development

Successfully merging this pull request may close these issues.

feat: H&R Prioritization

2 participants