Implement quick win features for stack tools #13
Implemented four high-value features for homelab management:
1. Git-based Stack Deployment
- Complete implementation of deploy_stack() in stack_tools.py
- Support for git clone/pull, branch checkout, and docker-compose deployment
- Automatic inventory tracking of deployed commits
- Enhanced get_repo_status() with real git information
- Improved rollback_stack() functionality
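The clone/pull/checkout flow above can be sketched as a command planner. This is a hypothetical `plan_git_commands` helper, not the PR's actual code: the real `deploy_stack()` runs these commands via `subprocess` in a worker thread, and the compose file path is an assumption.

```python
import os

def plan_git_commands(repo_url, dest, branch=None, target_commit=None):
    """Return the git/compose commands a deploy would run, as argument lists."""
    cmds = []
    if os.path.isdir(os.path.join(dest, ".git")):
        # Existing checkout: fetch updates, move to the configured branch.
        cmds.append(["git", "-C", dest, "fetch", "--all", "--prune"])
        if branch:
            cmds.append(["git", "-C", dest, "checkout", branch])
        cmds.append(["git", "-C", dest, "pull", "--ff-only"])
    else:
        # Fresh clone, then switch to the configured branch if any.
        cmds.append(["git", "clone", repo_url, dest])
        if branch:
            cmds.append(["git", "-C", dest, "checkout", branch])
    if target_commit:
        # Pin the deployment to an exact commit for reproducibility.
        cmds.append(["git", "-C", dest, "checkout", target_commit])
    cmds.append(["docker", "compose", "-f",
                 os.path.join(dest, "docker-compose.yml"), "up", "-d"])
    return cmds
```

Recording the final checked-out commit after these steps is what feeds the inventory tracking mentioned above.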
2. PostgreSQL/MySQL Backup Tools (database_tools.py)
- Automated backup for PostgreSQL (pg_dump/pg_dumpall)
- Automated backup for MySQL/MariaDB (mysqldump)
- Restore functionality for both database types
- Backup listing and metadata tracking
- Retention-based cleanup with configurable policies
- Support for compression and custom backup paths
- 4 new MCP tools: backup_database, restore_database, list_database_backups, cleanup_database_backups
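A minimal sketch of how the dump command might be assembled, assuming a hypothetical `build_backup_command` helper (the real tools add credential handling, compression, and audit logging on top):

```python
def build_backup_command(engine, host, port, username, database=None):
    """Build the dump invocation as an argument list."""
    if engine == "postgresql":
        if database:
            # Single database: custom-format archive, restorable via pg_restore.
            return ["pg_dump", "-h", host, "-p", str(port), "-U", username,
                    "-F", "c", database]
        # Whole cluster: plain SQL via pg_dumpall.
        return ["pg_dumpall", "-h", host, "-p", str(port), "-U", username]
    if engine in ("mysql", "mariadb"):
        cmd = ["mysqldump", "-h", host, "-P", str(port), "-u", username,
               "--single-transaction"]
        cmd.append(database if database else "--all-databases")
        return cmd
    raise ValueError(f"unsupported engine: {engine}")
```

Keeping the command as an argument list (rather than a shell string) avoids quoting pitfalls when the pieces are later handed to `subprocess`.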
3. Let's Encrypt Automation (certificate_tools.py)
- Certificate status checking and expiration monitoring
- Automatic certificate acquisition via certbot
- Support for HTTP-01 and DNS-01 challenges
- Certificate renewal (manual and automatic)
- Certificate listing and deletion
- Auto-renewal setup via cron/systemd timer
- 6 new MCP tools for complete certificate lifecycle management
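The acquisition step might assemble a certbot invocation roughly like this. The flags are real certbot CLI options, but `build_certbot_command` is a hypothetical helper; the module's actual structure may differ.

```python
def build_certbot_command(domain, email, challenge="http-01", webroot=None):
    """Assemble a `certbot certonly` invocation for one domain."""
    cmd = ["certbot", "certonly", "--non-interactive", "--agree-tos",
           "-m", email, "-d", domain]
    if challenge == "http-01":
        # Webroot if a serving directory is available, else standalone.
        cmd += ["--webroot", "-w", webroot] if webroot else ["--standalone"]
    elif challenge == "dns-01":
        # DNS-01 normally wants a DNS plugin (e.g. --dns-cloudflare);
        # --manual is the plugin-free fallback shown here.
        cmd += ["--manual", "--preferred-challenges", "dns"]
    else:
        raise ValueError(f"unsupported challenge: {challenge}")
    return cmd
```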
4. Prometheus Metrics Export (metrics_tools.py)
- System metrics export in Prometheus format
- Docker container metrics export
- CPU, memory, disk, network I/O metrics
- Node exporter textfile collector support
- Metrics aggregation and formatting helpers
- 4 new MCP tools for metrics collection and export
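The Prometheus text exposition format the exporters emit can be illustrated with a small formatter (a hypothetical helper for illustration, not the module's actual API):

```python
def format_metric(name, value, labels=None, help_text=None, metric_type="gauge"):
    """Render one sample in the Prometheus text exposition format."""
    lines = []
    if help_text:
        lines.append(f"# HELP {name} {help_text}")
    lines.append(f"# TYPE {name} {metric_type}")
    if labels:
        # Labels are rendered as key="value", sorted for stable output.
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    else:
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"
```

Output in this shape can be scraped directly or dropped into a node_exporter textfile-collector directory.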
Additional Changes:
- Updated src/tools/__init__.py to register new tool modules
- Added security scopes in src/auth/scopes.py:
  - database:read, database:backup, database:restore, database:admin
  - certificate:read, certificate:admin
  - metrics:read, metrics:admin
- Added tool scope requirements for all 14 new tools
- Integrated new scopes into READ_ONLY meta-scope
All tools include:
- Comprehensive error handling
- Audit logging
- Security scope enforcement
- Dry-run support where applicable
- Detailed docstrings
This addresses the Quick Wins roadmap items for homelab deployments.
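The dry-run support mentioned above could follow a pattern like this. This is a sketch with hypothetical names, not the repository's actual implementation:

```python
import functools

def supports_dry_run(func):
    """Decorator: when dry_run=True, report what would happen
    instead of executing the side effect."""
    @functools.wraps(func)
    def wrapper(*args, dry_run=False, **kwargs):
        if dry_run:
            return {"dry_run": True, "would_call": func.__name__,
                    "args": args, "kwargs": kwargs}
        return func(*args, **kwargs)
    return wrapper

@supports_dry_run
def delete_certificate(name):
    # The real tool would shell out to `certbot delete` here.
    return {"deleted": name}
```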
@claude review and resolve conflicts

@codex please review and resolve conflicts
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
if req.target_commit:
    git_checkout = await to_thread(
        subprocess.run,
        ["git", "checkout", req.target_commit],
```
First-time clone ignores configured branch
In the fresh-clone path, only req.target_commit triggers a checkout (lines 273-276). If the stack metadata specifies a non-default branch (e.g., branch="production") but target_commit is omitted, the clone stays on the repository's default branch (main). Only subsequent deployments (when .git already exists) honor the configured branch, so every initial deploy of a stack checks out the wrong code until the operator fixes it manually. The clone flow should check out the stack's configured branch when target_commit is not provided.
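One way to address this is to factor the "which ref should a fresh clone check out" decision into a small, testable function. The name and signature here are hypothetical; this is a sketch, not the PR's actual fix:

```python
def resolve_checkout_ref(target_commit, branch, default_branch="main"):
    """Pick the ref a fresh clone should check out.

    An explicit commit wins, then the stack's configured branch;
    return None when the default branch is already correct.
    """
    if target_commit:
        return target_commit
    if branch and branch != default_branch:
        return branch
    return None  # no checkout needed
```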
```python
# Add compression if needed
if compress and not database:  # pg_dumpall doesn't support -F c
    cmd_str = " ".join(cmd) + f" | gzip > {backup_file}"
    result = await to_thread(
        subprocess.run,
```
Compressed pg_dumpall backups are empty/corrupt
For full-cluster PostgreSQL backups with compression enabled (the default), the code builds cmd = ["pg_dumpall", ..., "-f", backup_file] and then runs cmd_str = " ".join(cmd) + f" | gzip > {backup_file}" (lines 124-128). Because pg_dumpall is already writing directly to backup_file via -f, nothing is sent through stdout to gzip, and gzip simultaneously truncates the same file before writing. The resulting .sql.gz file is empty or corrupted, so the backup cannot be restored. The command must either drop the -f flag and pipe stdout into gzip, or write the dump to a separate temp file before compressing.
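A corrected version could drop -f and stream pg_dumpall's stdout through Python's gzip module, which also avoids building a shell string. The helper names here are hypothetical; this is a sketch of the fix, not the PR's actual change:

```python
import gzip
import subprocess

def stream_to_gzip(cmd, backup_file):
    """Run cmd and gzip its stdout into backup_file (no shell involved)."""
    with gzip.open(backup_file, "wb") as out:
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        # Copy the dump through in chunks so large clusters don't
        # have to fit in memory.
        for chunk in iter(lambda: proc.stdout.read(64 * 1024), b""):
            out.write(chunk)
        if proc.wait() != 0:
            raise RuntimeError(f"{cmd[0]} exited with {proc.returncode}")

def dump_cluster_compressed(backup_file, host, port, username):
    # Note: no -f flag, so the dump actually flows through stdout into gzip.
    stream_to_gzip(
        ["pg_dumpall", "-h", host, "-p", str(port), "-U", username],
        backup_file,
    )
```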
```python
elif backup_file.endswith(".gz"):
    # Restore gzipped SQL dump
    if database:
        cmd_str = f"gunzip -c {backup_file} | psql -h {host} -p {port} -U {username} -d {database}"
    else:
```
Single-DB PostgreSQL backups mislabelled as gzip
When backing up a specific PostgreSQL database with compress=True, backup_postgresql appends .sql.gz to the filename but writes the dump via pg_dump -F c -f <file> without ever gzipping it. Later, restore_postgresql assumes every .gz file is a gzip stream (lines 392-396) and blindly runs gunzip -c backup_file | psql, which fails with "not in gzip format" for every single-database backup produced by the default settings. Consequently, those backups are unrecoverable. Either the backup routine must actually gzip the file, or the restore logic must treat .gz files generated by custom-format dumps differently.
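One minimal fix is to make the extension reflect what the backup routine actually writes: pg_dump -F c produces a custom-format archive (already compressed internally, restored with pg_restore), while pg_dumpall output is plain SQL that gets gzipped separately. The helper below is hypothetical, a sketch of the naming rule rather than the PR's actual change:

```python
def backup_filename(base, database, compress):
    """Pick an extension matching the on-disk format.

    .dump   -> pg_dump custom-format archive (restore with pg_restore)
    .sql.gz -> gzipped plain SQL from pg_dumpall (restore with psql)
    .sql    -> uncompressed plain SQL
    """
    if database:
        return f"{base}.dump"
    return f"{base}.sql.gz" if compress else f"{base}.sql"
```

With distinct extensions, the restore path can dispatch on the suffix instead of guessing whether a .gz file is really a gzip stream.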
@claude please resolve.

@codex please fix the pull request