Feat: Implemented spawning and closing of backend & sync microservice from tauri application itself #1009
base: main
Conversation
📝 Walkthrough

Adds environment-aware DATABASE_PATH selection (CI vs user data dir), new POST /shutdown endpoints for the backend and sync services with delayed termination, frontend Tauri logic to spawn/monitor/terminate the backend and sync processes, and CI workflow flags to build with a `ci` feature.

Changes
Sequence Diagram(s)

sequenceDiagram
participant User
participant Frontend as Tauri Frontend
participant Backend as Backend Service
participant Sync as Sync Service
participant OS as Operating System
User->>Frontend: Close Window
activate Frontend
Frontend->>Frontend: on_window_event handler
alt Unix
Frontend->>OS: Send SIGTERM to backend & sync (by PID)
OS-->>Backend: SIGTERM delivered
OS-->>Sync: SIGTERM delivered
else Windows
Frontend->>Backend: POST /shutdown
activate Backend
Backend->>Backend: Schedule delayed shutdown
Backend-->>Frontend: ShutdownResponse (200)
deactivate Backend
Frontend->>Sync: POST /shutdown
activate Sync
Sync->>Sync: Stop folder watcher
Sync->>Sync: Schedule delayed shutdown
Sync-->>Frontend: ShutdownResponse (200)
deactivate Sync
end
Frontend->>Frontend: Exit once child termination initiated
deactivate Frontend
sequenceDiagram
participant Main as App Main
participant Config as Settings
participant FS as File System
Main->>Config: Load settings
Config->>Config: Check GITHUB_ACTIONS env
alt CI env
Config->>FS: Use test DB path (cwd)
else Prod
Config->>FS: Use platformdirs.user_data_dir("PictoPy")
end
Config-->>Main: DATABASE_PATH
Main->>FS: os.makedirs(DB dir) if missing
Main->>Main: include shutdown router
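To make the second diagram concrete, here is a minimal sketch of the environment-aware DATABASE_PATH selection it describes. Only the `user_data_dir("PictoPy")` call and the `GITHUB_ACTIONS` check come from this PR's review; the CI path and the exact layout of settings.py are assumptions.

```python
# Illustrative sketch only - not the PR's settings.py; the CI path is assumed.
import os

from platformdirs import user_data_dir

if os.environ.get("GITHUB_ACTIONS") == "true":
    # CI (GitHub Actions): keep the test database under the working directory
    DATABASE_PATH = os.path.join(os.getcwd(), "database", "PictoPy.db")
else:
    # Production: store the database in the per-user data directory
    DATABASE_PATH = os.path.join(user_data_dir("PictoPy"), "database", "PictoPy.db")

# Ensure the parent directory exists before any DB access
os.makedirs(os.path.dirname(DATABASE_PATH), exist_ok=True)
```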
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In @backend/app/config/settings.py:
- Around line 1-2: The import user_data_dir('PictoPy') in settings.py requires
the platformdirs package but backend/requirements.txt is missing it; add
platformdirs (pin to the version used elsewhere, e.g., platformdirs==4.5.1) to
backend/requirements.txt, update any lock/installer files or CI dependency
caches, and reinstall/verify dependencies so the import in settings.py resolves
at runtime.
In @docs/backend/backend_python/openapi.json:
- Around line 1303-1325: OpenAPI shows the "Shutdown" tag twice for the
"/shutdown" POST because the Shutdown router is tagged in both its router
definition (e.g., the Shutdown router where tags=["Shutdown"]) and again when
mounted/included in main.py (the include_router call around line 138 that also
sets tags=["Shutdown"]); remove the duplicate by keeping the tag in only one
place—either delete the tags parameter from the router definition or remove the
tags argument from the include_router call in main.py—so the shutdown route
emits a single "Shutdown" tag in the generated openapi.json.
In @frontend/src-tauri/Cargo.toml:
- Line 15: The reqwest dependency is not pinned to the latest 0.12 patch; change
the reqwest entry (reqwest = { version = "0.12", features = ["blocking"] }) to
require version 0.12.28, then run cargo update -p reqwest --precise 0.12.28 to
lock the patch; after that inspect transitive dependencies (cargo tree -e
features) for any openssl usage and either update the openssl crate to >=0.10.70
or remove/replace the "openssl" feature (e.g., switch reqwest TLS to rustls) to
mitigate RUSTSEC-2025-0004. Ensure the updated resolution is committed
(Cargo.lock) and re-run tests/build.
In @frontend/src-tauri/src/main.rs:
- Around line 122-152: The spawn logic can leave the backend running untracked
if backend_child spawns but sync_child fails; after spawning backend_child,
acquire the state lock and set pids.backend = backend_child.pid() immediately
(using the existing state, pids and state.lock().unwrap()), then attempt to
spawn sync_child; if the sync spawn fails, perform cleanup by killing
backend_child (or calling backend_child.kill()) and resetting pids.backend = 0
under the same lock before returning the error; alternatively, spawn both and on
any partial failure ensure you update pids for the successful child and
terminate the other to avoid orphaned processes.
- Around line 41-72: The issue is mismatched signatures and incorrect Windows
behavior: make both kill_process functions share the same signature (e.g., pub
fn kill_process(process: &sysinfo::Process) -> Result<(), String>) so the call
site can handle the Result consistently; update the Unix implementation (fn
kill_process) to return Ok(())/Err(String) instead of (), converting any Signal
send failures into an Err; modify the Windows implementation to use the
passed-in process (e.g., process.name() or another identifying property) to pick
the correct endpoint instead of shutting down both services, and propagate
request/build errors by returning Err(String) rather than silently swallowing
them.
In @sync-microservice/app/config/settings.py:
- Line 24: DATABASE_PATH was changed to use user_data_dir('PictoPy'), which
breaks existing installs; implement a migration on startup that checks the old
relative path (e.g., "./database/PictoPy.db") and, if present and the new
DATABASE_PATH file is missing, atomically moves or copies the old DB to
DATABASE_PATH, creating parent dirs and preserving permissions, and leave a
timestamped backup of the old file if move fails; perform verification by
opening both DBs (or checking file size/hash) after copy, log successes/failures
via the app logger, and ensure this logic is invoked once during app
initialization before any DB access (place it near where DATABASE_PATH is
referenced or inside the app startup/init function).
In @sync-microservice/main.py:
- Around line 22-23: Wrap the module-level directory creation that uses
DATABASE_PATH and os.makedirs(path, exist_ok=True) in a try-except block to
catch OSError (and optionally Exception), log the error with context using your
logger (or print if none) including the path and exception details, and then
handle failure appropriately (raise the exception or call sys.exit(1)) so the
app fails fast with a clear message instead of crashing unpredictably. Ensure
the try-except surrounds the computation of path =
os.path.dirname(DATABASE_PATH) and the os.makedirs call so
permission/invalid-path errors are captured.
🧹 Nitpick comments (4)
sync-microservice/app/database/folders.py (1)
76-122: Address error handling inconsistency in db_get_tagging_progress().

This function lacks an `except` block, so exceptions propagate to the caller, unlike `db_get_all_folders_with_ids()` and `db_check_database_connection()`, which catch exceptions and return safe defaults. For consistency and robustness, either:

- Add exception handling to return an empty list on error (consistent with `db_get_all_folders_with_ids()`), or
- Document why exceptions are intentionally propagated here
Suggested fix: Add exception handling for consistency
 def db_get_tagging_progress() -> List[FolderTaggingInfo]:
     """
     Calculate tagging percentage for all folders.

     Tagging percentage = (tagged images / total images) * 100

     Returns:
         List of FolderTaggingInfo containing folder_id, folder_path, and tagging_percentage
     """
     conn = sqlite3.connect(DATABASE_PATH)
     cursor = conn.cursor()
     try:
         cursor.execute(
             """
             SELECT f.folder_id, f.folder_path,
                    COUNT(i.id) as total_images,
                    COUNT(CASE WHEN i.isTagged = 1 THEN 1 END) as tagged_images
             FROM folders f
             LEFT JOIN images i ON f.folder_id = i.folder_id
             GROUP BY f.folder_id, f.folder_path
             """
         )
         results = cursor.fetchall()

         folder_info_list = []
         for folder_id, folder_path, total_images, tagged_images in results:
             # Calculate percentage, handle division by zero
             if total_images > 0:
                 tagging_percentage = (tagged_images / total_images) * 100
             else:
                 tagging_percentage = 0.0

             folder_info_list.append(
                 FolderTaggingInfo(
                     folder_id=folder_id,
                     folder_path=folder_path,
                     tagging_percentage=round(tagging_percentage, 2),
                 )
             )

         return folder_info_list
+    except Exception as e:
+        logger.error(f"Error calculating tagging progress: {e}")
+        return []
     finally:
         conn.close()

sync-microservice/app/routes/shutdown.py (1)
22-37: Consider increasing the shutdown delay for reliability.

The delay of 0.1 seconds may be too short to guarantee the HTTP response is fully sent, especially under load or on slower networks. The backend uses 0.5 seconds, which provides a safer margin.

♻️ Increase delay to match backend

-async def _delayed_shutdown(delay: float = 0.1):
+async def _delayed_shutdown(delay: float = 0.5):
     """
     Shutdown the server after a short delay to allow the response to be sent.

     Args:
-        delay: Seconds to wait before signaling shutdown (kept minimal)
+        delay: Seconds to wait before signaling shutdown
     """
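For context, a self-contained sketch of the respond-then-terminate pattern these endpoints follow, based on the behavior described in this review (background task, SIGTERM on Unix, `os._exit(0)` on Windows). It is illustrative, not the PR's exact code.

```python
# Hedged sketch of a /shutdown endpoint; field names and the 0.5 s delay follow the review text.
import asyncio
import os
import signal
import sys

from fastapi import APIRouter, BackgroundTasks
from pydantic import BaseModel

router = APIRouter()


class ShutdownResponse(BaseModel):
    status: str
    message: str


async def _delayed_shutdown(delay: float = 0.5) -> None:
    # Wait briefly so uvicorn can flush the HTTP response before the process exits.
    await asyncio.sleep(delay)
    if sys.platform == "win32":
        os._exit(0)  # immediate exit; skips atexit handlers and finally blocks
    else:
        os.kill(os.getpid(), signal.SIGTERM)  # let uvicorn handle graceful shutdown


@router.post("/shutdown", response_model=ShutdownResponse)
async def shutdown(background_tasks: BackgroundTasks) -> ShutdownResponse:
    background_tasks.add_task(_delayed_shutdown)
    return ShutdownResponse(status="shutting_down", message="Shutdown scheduled")
```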
backend/main.py (1)

43-45: Module-level directory creation is acceptable but consider moving to lifespan.

The directory creation happens at import time, which works but could be cleaner inside the `lifespan` context manager where other initialization occurs (lines 47-65). This would centralize all startup logic and make error handling more explicit.

That said, the current approach is functional since `os.makedirs` with `exist_ok=True` is idempotent and will raise on permission errors.

frontend/src-tauri/src/main.rs (1)
27-27: Consider handling a poisoned mutex instead of unwrap.

`state.lock().unwrap()` will panic if the mutex is poisoned (i.e., a thread panicked while holding the lock). While unlikely in this code, using `unwrap_or_else` or matching on the error provides more resilience.

♻️ Example using unwrap_or_else

-    let state = state.lock().unwrap();
+    let state = state.lock().unwrap_or_else(|e| e.into_inner());
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
- backend/dist/README.md is excluded by !**/dist/**
- frontend/src-tauri/Cargo.lock is excluded by !**/*.lock
- sync-microservice/dist/README.md is excluded by !**/dist/**
📒 Files selected for processing (11)
- backend/app/config/settings.py
- backend/app/routes/shutdown.py
- backend/main.py
- docs/backend/backend_python/openapi.json
- frontend/src-tauri/Cargo.toml
- frontend/src-tauri/src/main.rs
- sync-microservice/app/config/settings.py
- sync-microservice/app/database/folders.py
- sync-microservice/app/routes/shutdown.py
- sync-microservice/main.py
- sync-microservice/requirements.txt
🧰 Additional context used
🧬 Code graph analysis (5)
frontend/src-tauri/src/main.rs (1)
  frontend/src-tauri/src/services/mod.rs (1)
    get_resources_folder_path (5-11)

backend/app/routes/shutdown.py (1)
  sync-microservice/app/routes/shutdown.py (4)
    ShutdownResponse (15-19)
    _delayed_shutdown (22-37)
    shutdown (41-75)
    _handle_shutdown_exception (62-66)

sync-microservice/app/routes/shutdown.py (3)
  sync-microservice/app/utils/watcher.py (1)
    watcher_util_stop_folder_watcher (300-328)
  sync-microservice/app/logging/setup_logging.py (1)
    get_sync_logger (205-217)
  backend/app/routes/shutdown.py (4)
    ShutdownResponse (14-18)
    _delayed_shutdown (21-36)
    shutdown (40-66)
    _handle_shutdown_exception (53-57)

backend/main.py (2)
  backend/app/routes/shutdown.py (1)
    shutdown (40-66)
  sync-microservice/app/routes/shutdown.py (1)
    shutdown (41-75)

sync-microservice/main.py (1)
  sync-microservice/app/routes/shutdown.py (1)
    shutdown (41-75)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Tauri Build Check (windows-latest)
- GitHub Check: Tauri Build Check (ubuntu-22.04)
- GitHub Check: Tauri Build Check (macos-latest, --target aarch64-apple-darwin)
- GitHub Check: Backend Tests
🔇 Additional comments (18)
sync-microservice/app/routes/shutdown.py (3)
1-12: LGTM! The imports, logger setup, and router initialization are appropriate for the shutdown endpoint implementation.

15-19: LGTM! The `ShutdownResponse` model is correctly defined and matches the backend implementation.

40-75: Proper error handling and shutdown sequence. The implementation correctly:

- Stops the folder watcher before initiating shutdown
- Logs errors without blocking the shutdown process
- Schedules shutdown as a background task after the response
- Handles potential exceptions in the shutdown task
sync-microservice/main.py (4)
2-3: LGTM! The imports of `os` and `DATABASE_PATH` are necessary for creating the database directory.

7-7: LGTM! The import of the shutdown router is correct and properly integrated with the FastAPI application.

47-47: LGTM! The shutdown router is correctly registered with appropriate tags.

52-58: Port configuration is correctly consistent. The hardcoded port 52124 in sync-microservice/main.py matches `SYNC_MICROSERVICE_URL = "http://localhost:52124"` defined in both settings files. Configuration is consistent across the codebase.

sync-microservice/app/config/settings.py (1)

1-2: Database paths are already consistent across both services. Both the backend and sync microservice use the identical `user_data_dir('PictoPy')/database/PictoPy.db` path, ensuring data consistency is maintained.

sync-microservice/requirements.txt (1)
40-40: platformdirs==4.5.1 is the latest version with no known security vulnerabilities. The pinned version is current and free from known CVEs, security advisories, and vulnerabilities according to Snyk, PyPI, and Safety DB.
frontend/src-tauri/Cargo.toml (1)
17-17: No action needed. Sysinfo 0.37.2 is current and has no known security vulnerabilities.backend/main.py (1)
138-138: LGTM! The shutdown router is correctly wired without a prefix, exposing the endpoint at `/shutdown` as expected.

backend/app/routes/shutdown.py (3)

31-36: `os._exit(0)` on Windows skips cleanup handlers.

Using `os._exit(0)` bypasses atexit handlers, finally blocks in the main thread, and object destructors. The comment correctly notes this is necessary for Windows/uvicorn compatibility, but be aware this could leave resources (like database connections) unclosed.

Since the lifespan shutdown (line 65 in main.py) won't run with `os._exit`, ensure any critical cleanup doesn't rely on it for Windows.
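A generic illustration of that difference (plain Python, not PictoPy code): `atexit` handlers run with `sys.exit()` but are skipped entirely by `os._exit()`.

```python
import atexit
import os
import sys


def cleanup() -> None:
    print("closing database connections...")


atexit.register(cleanup)

if "--hard" in sys.argv:
    os._exit(0)  # exits immediately; cleanup() never runs
else:
    sys.exit(0)  # runs atexit handlers, so cleanup() prints on the way out
```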
14-18: LGTM! Clean response model that matches the sync-microservice pattern for consistency.

39-66: Document the shutdown endpoint design decision regarding authentication.

The `/shutdown` endpoint has no authentication, but it is only accessible on localhost and is part of a desktop application where the backend runs as a trusted local service. Since the other endpoints also lack authentication and the server binds only to localhost (not exposed to the network), this appears to be by design; if so, document that decision. If exposing this API to a network context is planned, authentication/authorization should be added.
154-198: LGTM! The async listeners for process output are correctly structured and will properly log stdout/stderr/termination events for both services.

73-99: Good process tree termination logic. The depth-first collection ensures child processes are terminated before their parent, preventing orphaned processes. The use of `sysinfo` for cross-platform process enumeration is appropriate.
1640-1663: LGTM! The new `ErrorResponse` and `ShutdownResponse` schemas are correctly defined and match their Python counterparts.

Also applies to: 2471-2489

1119-1123: LGTM! The schema refinements (simplified `input_type` reference and explicit `additionalProperties: true` for metadata) improve clarity and flexibility.

Also applies to: 2249-2250
| "/shutdown": { | ||
| "post": { | ||
| "tags": [ | ||
| "Shutdown", | ||
| "Shutdown" | ||
| ], | ||
| "summary": "Shutdown", | ||
| "description": "Gracefully shutdown the PictoPy backend.\n\nThis endpoint schedules backend server termination after response is sent.\nThe frontend is responsible for shutting down the sync service separately.\n\nReturns:\n ShutdownResponse with status and message", | ||
| "operationId": "shutdown_shutdown_post", | ||
| "responses": { | ||
| "200": { | ||
| "description": "Successful Response", | ||
| "content": { | ||
| "application/json": { | ||
| "schema": { | ||
| "$ref": "#/components/schemas/ShutdownResponse" | ||
| } | ||
| } | ||
| } | ||
| } | ||
| } | ||
| } | ||
| } |
Duplicate tag in the shutdown endpoint.
The tags array contains "Shutdown" twice (lines 1306-1308). This appears to be generated from the router definition. While not breaking, it's a minor inconsistency in the OpenAPI spec.
This likely stems from the router being defined with tags=["Shutdown"] and then included with tags=["Shutdown"] again in main.py line 138. Consider removing the tag from one location.
  [dependencies]
  tauri = { version = "2.9.1", features = ["protocol-asset", "devtools"] }
  reqwest = { version = "0.12", features = ["blocking"] }
🧩 Analysis chain
🌐 Web query:
reqwest rust crate latest version 2025 security vulnerabilities
💡 Result:
Short answer — as of Jan 10, 2026:
- Latest published reqwest (stable) in distributions: 0.12.28. [1]
- Main security issue to watch: a use-after-free in the rust-openssl crate (RUSTSEC-2025-0004 / CVE-2025-24898 / GHSA-rpmj-rpgj-qmpm) which was fixed in openssl >= 0.10.70; reqwest builds that enable the "openssl" feature (i.e. use rust-openssl) can be affected until their dependency is updated. Upgrade openssl to >= 0.10.70 and update reqwest to the latest 0.12.28 (or your distro’s rebuilt package). [2][3]
Sources:
- Fedora package update announcing rust-reqwest 0.12.28 (Jan 8, 2026). [1]
- RustSec advisory RUSTSEC-2025-0004 (openssl ssl::select_next_proto use-after-free). [2]
- GitHub / NVD references for CVE-2025-24898 (rust-openssl). [3]
🌐 Web query:
reqwest 0.12 security advisories CVE vulnerabilities
💡 Result:
Short answer: there’s no widely‑published CVE that names reqwest 0.12 itself, but multiple security advisories and CVEs affect projects that use reqwest 0.12 (or its dependencies). Notable items to be aware of:
- Fedora / distro advisories updated rust-reqwest to 0.12.8 in Oct 2024 to address an HTTP client DoS issue triggered via dependent TLS/hyper crates (advisory / update notes). [1][2]
- CVE-2024-41178 — Apache Arrow Rust object_store (which uses reqwest) could log temporary AWS WebIdentity tokens (exposing credentials) on certain errors; guidance is to upgrade object_store — this issue arises because reqwest errors can include full URLs. [3][4]
- Upstream crate versions: reqwest 0.12.x is still maintained in the 0.12 series (examples: 0.12.8 → 0.12.18 shown on deps.rs); you should upgrade to the latest patched 0.12.x or to 0.11/0.13+ if appropriate for your toolchain. Check your distro’s rust-reqwest package advisories and the crate changelog for exact fixes. [5]
If you want, I can:
- look up the exact changelog entries for each 0.12.x release and list fixed CVEs/bugs with dates, or
- check your current Cargo.lock / version and recommend the minimal safe upgrade.
Sources:
[1] Fedora update notice / rust-reqwest (0.12.8 update).
[2] Fedora security advisory summary (FEDORA-2024-347164df1c / FEDORA-2024-bf524bf5c0).
[3] NVD / CVE-2024-41178 (object_store exposure of WebIdentity tokens via logged reqwest error).
[4] SUSE / Wiz writeups for CVE-2024-41178.
[5] deps.rs listing for reqwest 0.12.x.
Update reqwest to the latest 0.12.28 patch version and verify transitive dependencies.
The "0.12" version specifier is not fully pinned to the latest patch (0.12.28 as of Jan 2026). Additionally, while no direct CVE exists for reqwest 0.12 itself, known security issues affect the broader ecosystem—including RUSTSEC-2025-0004 (use-after-free in rust-openssl for builds enabling the "openssl" feature). Ensure your dependency resolution uses the latest patch and that transitive dependencies, particularly openssl, are at secure versions (openssl >= 0.10.70 if present).
#[cfg(unix)]
fn kill_process(process: &sysinfo::Process) {
    use sysinfo::Signal;
    let _ = process.kill_with(Signal::Term);
}

#[cfg(windows)]
pub fn kill_process(_process: &sysinfo::Process) -> Result<(), String> {
    use reqwest::blocking::Client;

    let client = Client::builder().build().map_err(|e| e.to_string())?;

    let endpoints = [
        ("BACKEND", "http://localhost:52123/shutdown"),
        ("SYNC", "http://localhost:52124/shutdown"),
    ];

    for (name, url) in endpoints {
        match client.post(url).send() {
            Ok(resp) => {
                let status = resp.status();

                if status.is_success() {
                    println!("[{}] Shutdown OK ({})", name, status);
                }
            }
            Err(_err) => {}
        }
    }

    Ok(())
}
Platform function signature mismatch and unused parameter on Windows.

The Unix kill_process returns () (line 44) while Windows returns Result<(), String> (line 48). Additionally, the Windows version ignores the _process parameter entirely and instead makes HTTP calls to both services unconditionally.

This creates two issues:

- The call site at line 94 (`let _ = kill_process(process)`) discards the Result on Windows but expects no return on Unix
- On Windows, every call to `kill_process` shuts down both services regardless of which process was passed

🔧 Suggested fix: Unify signatures and fix Windows behavior
#[cfg(unix)]
-fn kill_process(process: &sysinfo::Process) {
+fn kill_process(process: &sysinfo::Process) -> Result<(), String> {
use sysinfo::Signal;
let _ = process.kill_with(Signal::Term);
+ Ok(())
}
#[cfg(windows)]
pub fn kill_process(_process: &sysinfo::Process) -> Result<(), String> {
use reqwest::blocking::Client;
let client = Client::builder().build().map_err(|e| e.to_string())?;
+ // Get process name to determine which endpoint to call
+ let process_name = _process.name().to_string_lossy().to_lowercase();
+
+ let url = if process_name.contains("server") || process_name.contains("backend") {
+ "http://localhost:52123/shutdown"
+ } else if process_name.contains("sync") {
+ "http://localhost:52124/shutdown"
+ } else {
+ return Ok(()); // Unknown process, skip
+ };
+
- let endpoints = [
- ("BACKEND", "http://localhost:52123/shutdown"),
- ("SYNC", "http://localhost:52124/shutdown"),
- ];
-
- for (name, url) in endpoints {
- match client.post(url).send() {
- Ok(resp) => {
- let status = resp.status();
-
- if status.is_success() {
- println!("[{}] Shutdown OK ({})", name, status);
- }
+ match client.post(url).send() {
+ Ok(resp) => {
+ if resp.status().is_success() {
+ println!("Shutdown OK for {}", process_name);
}
- Err(_err) => {}
}
+ Err(e) => return Err(e.to_string()),
}
Ok(())
}
frontend/src-tauri/src/main.rs (Outdated)
    {
        let pids = state.lock().unwrap();
        if pids.backend != 0 || pids.sync != 0 {
            println!("Backend or sync already running, skipping spawn");
            return Ok(());
        }
    }

    let (mut backend_rx, backend_child) = app
        .shell()
        .command(&backend_executable)
        .current_dir(&backend_path)
        .spawn()
        .map_err(|e| format!("Failed to spawn backend: {:?}", e))?;

    println!("Backend spawned with PID {}", backend_child.pid());

    let (mut sync_rx, sync_child) = app
        .shell()
        .command(&sync_executable)
        .current_dir(&sync_path)
        .spawn()
        .map_err(|e| format!("Failed to spawn sync: {:?}", e))?;

    println!("Sync spawned with PID {}", sync_child.pid());

    {
        let mut pids = state.lock().unwrap();
        pids.backend = backend_child.pid();
        pids.sync = sync_child.pid();
    }
Partial spawn failure leaves inconsistent state.

If backend_child spawns successfully but sync_child fails (line 144), the error is returned but pids.backend is never set. This means:

- Backend process is orphaned (no PID tracked)
- On next app restart, `pids.backend == 0` passes the check, but the port may still be in use

Consider storing the backend PID before attempting sync spawn, or implementing cleanup on partial failure.
🔧 Suggested approach
let (mut backend_rx, backend_child) = app
.shell()
.command(&backend_executable)
.current_dir(&backend_path)
.spawn()
.map_err(|e| format!("Failed to spawn backend: {:?}", e))?;
println!("Backend spawned with PID {}", backend_child.pid());
+
+ // Store backend PID immediately to ensure cleanup on partial failure
+ {
+ let mut pids = state.lock().unwrap();
+ pids.backend = backend_child.pid();
+ }
let (mut sync_rx, sync_child) = app
.shell()
.command(&sync_executable)
.current_dir(&sync_path)
.spawn()
- .map_err(|e| format!("Failed to spawn sync: {:?}", e))?;
+ .map_err(|e| {
+ // Backend was spawned but sync failed - backend PID is already tracked
+ // so it will be cleaned up on close
+ format!("Failed to spawn sync: {:?}", e)
+ })?;
println!("Sync spawned with PID {}", sync_child.pid());
{
let mut pids = state.lock().unwrap();
- pids.backend = backend_child.pid();
     pids.sync = sync_child.pid();
 }
  TEST_OUTPUT_PATH = "tests/outputs"
  # Point to the main PictoPy database
- DATABASE_PATH = "../backend/app/database/PictoPy.db"
+ DATABASE_PATH = f"{user_data_dir('PictoPy')}/database/PictoPy.db"
Consider data migration strategy for existing installations.
This change relocates the database from a relative path to a user data directory, which is a breaking change for existing users. Consider implementing a migration strategy to:
- Check if a database exists at the old location
- Automatically migrate it to the new location on first startup
- Document the change in release notes
Would you like me to generate a migration script that checks for and moves the database from the old location to the new location?
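For reference, a rough sketch of such a migration helper, assuming the old relative path shown in the diff above; the backup naming, logger, and function name are illustrative, not part of the PR.

```python
# Hypothetical startup migration; verify paths against the real settings module.
import logging
import os
import shutil
import time

logger = logging.getLogger("pictopy.migration")

OLD_DB_PATH = "../backend/app/database/PictoPy.db"  # previous relative location


def migrate_database(new_db_path: str) -> None:
    """Move an existing database to the new user-data location, once, at startup."""
    if not os.path.exists(OLD_DB_PATH) or os.path.exists(new_db_path):
        return  # nothing to migrate, or already migrated

    os.makedirs(os.path.dirname(new_db_path), exist_ok=True)
    try:
        shutil.move(OLD_DB_PATH, new_db_path)
        logger.info("Migrated database to %s", new_db_path)
    except OSError as exc:
        # Fall back to copying, and keep a timestamped backup of the original.
        backup = f"{OLD_DB_PATH}.{int(time.time())}.bak"
        shutil.copy2(OLD_DB_PATH, new_db_path)
        shutil.copy2(OLD_DB_PATH, backup)
        logger.error("Move failed (%s); copied instead, backup kept at %s", exc, backup)
```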
  path = os.path.dirname(DATABASE_PATH)
  os.makedirs(path, exist_ok=True)
🛠️ Refactor suggestion | 🟠 Major
Add error handling for database directory creation.
Creating the database directory at module level without error handling could cause the application to fail to start if there are permission issues or if the path is invalid. Consider wrapping this in a try-except block with appropriate error logging.
♻️ Add error handling
-path = os.path.dirname(DATABASE_PATH)
-os.makedirs(path, exist_ok=True)
+try:
+ path = os.path.dirname(DATABASE_PATH)
+ os.makedirs(path, exist_ok=True)
+ logger.info(f"Database directory ensured at: {path}")
+except Exception as e:
+ logger.error(f"Failed to create database directory: {e}")
+ raise🤖 Prompt for AI Agents
In @sync-microservice/main.py around lines 22 - 23, Wrap the module-level
directory creation that uses DATABASE_PATH and os.makedirs(path, exist_ok=True)
in a try-except block to catch OSError (and optionally Exception), log the error
with context using your logger (or print if none) including the path and
exception details, and then handle failure appropriately (raise the exception or
call sys.exit(1)) so the app fails fast with a clear message instead of crashing
unpredictably. Ensure the try-except surrounds the computation of path =
os.path.dirname(DATABASE_PATH) and the os.makedirs call so
permission/invalid-path errors are captured.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @frontend/src-tauri/tauri.conf.json:
- Around line 24-27: The backend resource path in tauri.conf.json under the
"resources" object is wrong — update the entry that currently points to
"../../backend/dist/backend/" to the actual build output
"../../backend/dist/PictoPy_Server/" so Tauri can find the backend executable
during bundling; leave the sync-microservice entry unchanged.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- frontend/src-tauri/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (1)
frontend/src-tauri/tauri.conf.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Tauri Build Check (windows-latest)
- GitHub Check: Tauri Build Check (ubuntu-22.04)
- GitHub Check: Tauri Build Check (macos-latest, --target aarch64-apple-darwin)
- GitHub Check: Backend Tests
| "resources": { | ||
| "../../backend/dist/": "resources/backend", | ||
| "../../sync-microservice/dist/": "resources/sync-microservice" | ||
| "../../backend/dist/backend/": "resources/backend", | ||
| "../../sync-microservice/dist/PictoPy_Sync/": "resources/sync-microservice" | ||
| }, |
🧩 Analysis chain
🏁 Script executed:
# First, let's understand the overall repository structure
find . -maxdepth 2 -type d -name "backend" -o -name "sync-microservice" | head -20Repository: AOSSIE-Org/PictoPy
Length of output: 106
🏁 Script executed:
# List files in the frontend directory to confirm Tauri setup
ls -la frontend/ 2>/dev/null | head -20

Repository: AOSSIE-Org/PictoPy
Length of output: 1274
🏁 Script executed:
# Check if the dist directories exist currently
ls -la backend/dist 2>/dev/null | head -20 || echo "backend/dist not found or empty"

Repository: AOSSIE-Org/PictoPy
Length of output: 44
🏁 Script executed:
# Check sync-microservice dist structure
ls -la sync-microservice/dist 2>/dev/null | head -20 || echo "sync-microservice/dist not found or empty"

Repository: AOSSIE-Org/PictoPy
Length of output: 44
🏁 Script executed:
# Look for build configuration files to understand the build process
find . -maxdepth 3 -type f \( -name "setup.py" -o -name "pyproject.toml" -o -name "build.py" -o -name "Makefile" \) 2>/dev/null

Repository: AOSSIE-Org/PictoPy
Length of output: 121
🏁 Script executed:
# Check the Tauri configuration file itself to see the exact content
cat -n frontend/src-tauri/tauri.conf.json | sed -n '20,35p'

Repository: AOSSIE-Org/PictoPy
Length of output: 606
🏁 Script executed:
# Check backend build configuration
cat backend/pyproject.toml | head -50

Repository: AOSSIE-Org/PictoPy
Length of output: 91
🏁 Script executed:
# Check sync-microservice build configuration
cat sync-microservice/pyproject.toml | head -50

Repository: AOSSIE-Org/PictoPy
Length of output: 91
🏁 Script executed:
# Look for any build scripts in backend
ls -la backend/ | grep -E "(build|script|setup)"

Repository: AOSSIE-Org/PictoPy
Length of output: 44
🏁 Script executed:
# Look for any build scripts in sync-microservice
ls -la sync-microservice/ | grep -E "(build|script|setup)"

Repository: AOSSIE-Org/PictoPy
Length of output: 44
🏁 Script executed:
# Check if there's a Makefile or build documentation at root level
ls -la | grep -i make
cat README.md 2>/dev/null | head -50

Repository: AOSSIE-Org/PictoPy
Length of output: 2055
🏁 Script executed:
# Search for PyInstaller references or configuration
rg -i "pyinstaller|--name|dist/" --type py --type toml --type yaml --type json 2>/dev/null | head -30Repository: AOSSIE-Org/PictoPy
Length of output: 1561
🏁 Script executed:
# Check for CI/CD or build workflow files
find . -type f \( -name "*.yml" -o -name "*.yaml" \) | head -20Repository: AOSSIE-Org/PictoPy
Length of output: 533
🏁 Script executed:
# Check for any build documentation or scripts
find . -type f -name "CONTRIBUTING.md" -o -name "BUILD.md" -o -name "setup.sh" 2>/dev/nullRepository: AOSSIE-Org/PictoPy
Length of output: 98
🏁 Script executed:
# Look at the complete pyproject.toml files to see all content
wc -l backend/pyproject.toml sync-microservice/pyproject.toml

Repository: AOSSIE-Org/PictoPy
Length of output: 132
🏁 Script executed:
# Read the full pyproject.toml files
echo "=== Backend pyproject.toml ===" && cat backend/pyproject.toml && echo -e "\n=== Sync-microservice pyproject.toml ===" && cat sync-microservice/pyproject.tomlRepository: AOSSIE-Org/PictoPy
Length of output: 194
🏁 Script executed:
# Check the build workflow
cat .github/workflows/build-and-release.yml

Repository: AOSSIE-Org/PictoPy
Length of output: 10279
🏁 Script executed:
# Check the PR check build workflow
cat .github/workflows/pr-check-build.yml

Repository: AOSSIE-Org/PictoPy
Length of output: 2351
🏁 Script executed:
# Check setup script
cat scripts/setup.sh

Repository: AOSSIE-Org/PictoPy
Length of output: 3501
🏁 Script executed:
# Check CONTRIBUTING.md for build information
cat CONTRIBUTING.md

Repository: AOSSIE-Org/PictoPy
Length of output: 3041
Backend resource path is incorrect and will cause Tauri bundling to fail.
The tauri.conf.json specifies ../../backend/dist/backend/ for the backend resource, but the build workflow (build-and-release.yml) creates the backend executable as backend/dist/PictoPy_Server/ using pyinstaller main.py --name PictoPy_Server --onedir --distpath dist. This path mismatch will cause the Tauri resource bundling to fail.
The sync-microservice path is correct: it builds to sync-microservice/dist/PictoPy_Sync/ which matches the configured path.
Fix: Change line 25 from "../../backend/dist/backend/" to "../../backend/dist/PictoPy_Server/".
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@frontend/src-tauri/src/main.rs`:
- Around line 48-66: The blocking reqwest Client used in the shutdown loop has
no explicit timeout which can hang CloseRequested; update the client
construction (Client::builder() in main.rs) to set a short timeout (e.g.,
timeout(Duration::from_secs(2))) and import std::time::Duration, then reuse that
client when posting to the endpoints array (the loop that calls
client.post(url).send()) so each shutdown request will fail fast instead of
blocking window closure.
♻️ Duplicate comments (2)
frontend/src-tauri/src/main.rs (2)
40-71: Still inconsistent `kill_process` signatures + Windows ignores the process parameter. Unix returns `()`, Windows returns `Result`, and Windows posts to both shutdown endpoints regardless of which process is being terminated. Please align the signatures and use the passed process to choose the correct endpoint.

119-141: Partial spawn failure can still orphan the backend. If backend spawns but sync fails, PIDs are stored only after both succeed, leaving the backend untracked. Store the backend PID immediately and/or clean it up on sync spawn failure.
🧹 Nitpick comments (2)
.github/workflows/build-and-release.yml (1)
274-285: Consider renaming the `ci` feature for clarity.

The feature works correctly, but naming it `ci` is potentially misleading. Per the comment, this feature enables `prod()` to spawn backend + sync services in release builds—a production behavior, not a CI-specific one.

A more descriptive name like `bundled`, `spawn_services`, or `sidecar` would better convey the feature's purpose and reduce confusion about why a "CI" feature is enabled in production releases.

frontend/src-tauri/src/main.rs (1)

18-38: Reduce mutex hold time during shutdown.

The lock is held while `kill_process_tree` runs; copy PIDs first to avoid holding the mutex during blocking work.

♻️ Proposed refactor

-    let state = state.lock().unwrap();
-
-    if state.backend != 0 {
-        let _ = kill_process_tree(state.backend);
-    }
-
-    if state.sync != 0 {
-        let _ = kill_process_tree(state.sync);
-    }
-
-    drop(state);
+    let (backend_pid, sync_pid) = {
+        let state = state.lock().unwrap();
+        (state.backend, state.sync)
+    };
+
+    if backend_pid != 0 {
+        let _ = kill_process_tree(backend_pid);
+    }
+
+    if sync_pid != 0 {
+        let _ = kill_process_tree(sync_pid);
+    }
069348a to aa9212f
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@frontend/src-tauri/src/main.rs`:
- Around line 100-116: The prod spawning is behind the "ci" feature but
on_window_event always calls the HTTP shutdown functions (kill_process) which
can affect manually-run dev services; update on_window_event to only perform
shutdown/HTTP calls when the app actually spawned the services (i.e., check the
Pids state: only call kill_process_tree and kill_process for backend/sync if
pids.backend != 0 or pids.sync != 0), or alternatively gate the entire shutdown
sequence with the same cfg(feature = "ci") used by prod; adjust checks around
the Pids state in on_window_event (and any call sites of kill_process_tree /
kill_process) so HTTP shutdown is only attempted for processes the app started.
♻️ Duplicate comments (6)
frontend/src-tauri/src/main.rs (4)
18-38: Process termination runs unconditionally in both dev and release builds.

The `on_window_event` handler kills processes regardless of build mode. In development, this could terminate manually-started dev services. Consider guarding with `#[cfg(not(debug_assertions))]` or checking the `ci` feature.

40-71: Platform function signature mismatch persists.

The Unix `kill_process` returns `()` (line 43) while Windows returns `Result<(), String>` (line 47). This causes the call site at line 93 (`let _ = kill_process(process)`) to have inconsistent behavior - on Unix it discards nothing, on Windows it silently discards errors.

Additionally, the Windows implementation ignores the `_process` parameter and unconditionally shuts down both services for every call, which is incorrect when killing individual processes in the tree.

46-68: Missing timeout on blocking HTTP shutdown requests.

The `reqwest` blocking client defaults to a 30-second timeout. During `CloseRequested` events, this causes the window to hang if services are unreachable. Add explicit shorter timeouts for responsive window closure.

🔧 Suggested fix

 #[cfg(windows)]
 pub fn kill_process(_process: &sysinfo::Process) -> Result<(), String> {
     use reqwest::blocking::Client;
+    use std::time::Duration;

-    let client = Client::builder().build().map_err(|e| e.to_string())?;
+    let client = Client::builder()
+        .timeout(Duration::from_secs(3))
+        .connect_timeout(Duration::from_secs(1))
+        .build()
+        .map_err(|e| e.to_string())?;
119-141: Partial spawn failure leaves backend process orphaned.

If `backend_child` spawns successfully but `sync_child` fails (line 133), the error is returned but `pids.backend` is never set. This means:

- Backend process is running but untracked (orphaned)
- On next app restart, `pids.backend == 0` passes the check, but port 52123 may still be in use

Store the backend PID immediately after spawning to ensure cleanup on partial failure.

🔧 Suggested fix

 let (mut backend_rx, backend_child) = app
     .shell()
     .command(&backend_executable)
     .current_dir(&backend_path)
     .spawn()
     .map_err(|e| format!("Failed to spawn backend: {:?}", e))?;

 println!("Backend spawned with PID {}", backend_child.pid());

+// Store backend PID immediately to ensure cleanup on partial failure
+{
+    let mut pids = state.lock().unwrap();
+    pids.backend = backend_child.pid();
+}

 let (mut sync_rx, sync_child) = app
     .shell()
     .command(&sync_executable)
     .current_dir(&sync_path)
     .spawn()
     .map_err(|e| format!("Failed to spawn sync: {:?}", e))?;

 println!("Sync spawned with PID {}", sync_child.pid());

 {
     let mut pids = state.lock().unwrap();
-    pids.backend = backend_child.pid();
     pids.sync = sync_child.pid();
 }

backend/main.py (1)
138-138: Duplicate "Shutdown" tag likely present.

The router is included with `tags=["Shutdown"]`, but the router definition in `backend/app/routes/shutdown.py` likely also specifies `tags=["Shutdown"]` in the `APIRouter` constructor. This results in the OpenAPI spec showing the tag twice (confirmed in openapi.json lines 1306-1308). Remove the tag from one location.

♻️ Suggested fix - remove tag from include_router

-app.include_router(shutdown_router, tags=["Shutdown"])
+app.include_router(shutdown_router)

docs/backend/backend_python/openapi.json (1)

1303-1325: Duplicate "Shutdown" tag in the generated OpenAPI spec.

The tags array contains "Shutdown" twice (lines 1306-1308). This is generated from the router being defined with `tags=["Shutdown"]` and also included with `tags=["Shutdown"]` in `backend/main.py` line 138. Fix this at the source by removing the tag from one location.
🧹 Nitpick comments (2)
sync-microservice/app/routes/shutdown.py (2)
32-37: Platform-specific shutdown behavior has different cleanup guarantees.

On Windows, `os._exit(0)` performs an immediate exit without running cleanup handlers, `atexit` functions, or `finally` blocks. On Unix, `SIGTERM` allows uvicorn's signal handlers to perform graceful cleanup. This asymmetry could cause resource leaks or incomplete cleanup on Windows.

Consider using a more graceful shutdown approach on Windows as well, such as setting a shutdown flag that uvicorn can detect, or using `sys.exit(0)`, which runs cleanup handlers.

55-59: Watcher stop failure is silently swallowed.

If `watcher_util_stop_folder_watcher()` fails, the error is logged but the shutdown continues. This is reasonable for shutdown scenarios, but the response message at line 74 always says "Watcher stopped" even if it failed. Consider reflecting the actual watcher status in the response.

♻️ Suggested improvement

+    watcher_stopped = True
     try:
         # Stop the folder watcher first
         watcher_util_stop_folder_watcher()
     except Exception as e:
         logger.error(f"Error stopping folder watcher: {e}")
+        watcher_stopped = False

     # ... rest of the function ...

     return ShutdownResponse(
         status="shutting_down",
-        message="Sync microservice shutdown initiated. Watcher stopped.",
+        message=f"Sync microservice shutdown initiated. Watcher {'stopped' if watcher_stopped else 'stop failed'}.",
     )
6577fdd to dd2f62c
…from tauri application
dd2f62c to e3a5449
This PR builds on earlier discussions regarding the backend and sync shutdown by @rohan-pandeyy and @tushar1977
Discord thread - https://discord.com/channels/1022871757289422898/1458148504156373043
Issue #870
Summary by CodeRabbit
New Features
Chores
Documentation