A terminal UI (TUI) SSH troubleshooting assistant: chat on the left, command output on the right.
Supports Gemini and OpenAI as the “agent brain”, and runs commands on a remote host via SSH.
- Connects to a remote host over SSH (Paramiko)
- Lets you type natural language like:
  "check why this pi is rebooting"
- The AI suggests and runs commands (or asks for confirmation)
- Shows stdout/stderr in the right pane (“terminal output”)
- Keeps a running chat log in the left pane
- macOS / Linux terminal
- Python 3.10+
- SSH access to the target machine
Create a virtualenv and install dependencies:
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install textual paramiko openai google-genai

Set your Gemini key:
export GEMINI_API_KEY="YOUR_KEY_HERE"

Optional model override:
export GEMINI_MODEL="gemini-2.0-flash"

If using OpenAI:
export OPENAI_API_KEY="YOUR_KEY_HERE"

Optional model override:
export OPENAI_MODEL="gpt-5-mini"

Run with Gemini:

python ssh_agent.py --host 192.168.1.178 --username yourname --ask-password --provider gemini

Run with OpenAI:

python ssh_agent.py --host 192.168.1.178 --username yourname --ask-password --provider openai

You can also pass a password directly (not recommended):
python ssh_agent.py --host 192.168.1.178 --username yourname --password "your_password"

Key bindings:

- F2: Toggle AUTO-RUN vs CONFIRM
- AUTO-RUN: agent runs safe commands automatically
- CONFIRM: agent proposes commands and waits for y/n
- Ctrl+C: Quit
- F3/F4: resize chat/terminal window
- F5: Reset size
- F6: Toggle tool JSON output
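The panes and bindings are built with Textual. Below is a minimal sketch of how bindings like these can be wired to actions; the class name, action names, and attributes are illustrative assumptions, not the app's actual code:

```python
from textual.app import App


class SSHAgentApp(App):
    """Hypothetical sketch of the key bindings; action names are assumed."""

    BINDINGS = [
        ("f2", "toggle_autorun", "AUTO-RUN / CONFIRM"),
        ("f3", "grow_chat", "Grow chat pane"),
        ("f4", "grow_terminal", "Grow terminal pane"),
        ("f5", "reset_layout", "Reset pane sizes"),
        ("f6", "toggle_tool_json", "Show tool JSON"),
        ("ctrl+c", "quit", "Quit"),
    ]

    def action_toggle_autorun(self) -> None:
        # Flip between running safe commands automatically and asking y/n first.
        self.auto_run = not getattr(self, "auto_run", False)

    def action_toggle_tool_json(self) -> None:
        # When enabled, the right pane also prints the raw [TOOL RESULT JSON] payload.
        self.show_tool_json = not getattr(self, "show_tool_json", False)
```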
The agent blocks or asks for confirmation if it detects commands matching dangerous patterns like:
- rm -rf
- dd if=...
- mkfs.*
- shutdown, reboot, poweroff
- apt remove, apt purge
You can expand/modify these rules in DANGEROUS_PATTERNS.
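For reference, a minimal sketch of what a pattern list and check along these lines could look like; only the DANGEROUS_PATTERNS name comes from the app, the exact regexes and the helper function are assumptions:

```python
import re

# Illustrative pattern list; the app's actual DANGEROUS_PATTERNS may differ.
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\b",
    r"\bdd\s+if=",
    r"\bmkfs(\.\w+)?\b",
    r"\b(shutdown|reboot|poweroff)\b",
    r"\bapt(-get)?\s+(remove|purge)\b",
]


def is_dangerous(command: str) -> bool:
    """Return True if the command matches any blocked/confirm-required pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)


# Example: the first would be blocked or require confirmation, the second would not.
assert is_dangerous("sudo rm -rf /var/log")
assert not is_dangerous("journalctl -k -b --no-pager | tail -n 200")
```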
- This is a prototype: it runs discrete SSH commands, not an interactive shell session (see the sketch after this list).
- Some tools/services may require sudo depending on the host configuration.
- If your host prompts for MFA or interactive login steps, Paramiko won't handle that.
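To illustrate the "discrete commands" limitation, here is a small Paramiko sketch (reusing the example host and credentials from above): each exec_command call gets a fresh channel, so shell state such as the working directory does not carry over between calls.

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.1.178", username="yourname", password="your_password")  # example values

# Each exec_command runs in its own channel; state is not shared between calls.
client.exec_command("cd /tmp")
_, stdout, _ = client.exec_command("pwd")
print(stdout.read().decode().strip())  # prints the login home dir, not /tmp

client.close()
```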
This app uses an LLM “tool” called run_ssh to execute commands on the remote host.
When the AI wants to run something, it issues a structured tool call like:
{"command":"journalctl -k -b --no-pager | tail -n 200"}The app runs the command over SSH, captures the results, and sends the structured output back to the AI as JSON:
{
"command": "journalctl -k -b --no-pager | tail -n 200",
"exit_status": 0,
"stdout": "…command output…",
"stderr": "",
"duration_sec": 0.42
}

JSON makes the agent more reliable than "screen scraping" because the AI can read exact fields:

- exit_status → whether the command succeeded (0 = success)
- stdout → normal output
- stderr → error output
- duration_sec → how long the command took (useful for hangs / slow commands)
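As a rough sketch of what such a tool can look like on the Python side (only the run_ssh name and the result fields come from the app; the signature and implementation details here are assumptions):

```python
import time

import paramiko


def run_ssh(client: paramiko.SSHClient, command: str, timeout: float = 60.0) -> dict:
    """Run one discrete command over SSH and return the structured result the model reads."""
    start = time.monotonic()
    _, stdout, stderr = client.exec_command(command, timeout=timeout)
    out = stdout.read().decode(errors="replace")   # drain output first to avoid blocking
    err = stderr.read().decode(errors="replace")
    exit_status = stdout.channel.recv_exit_status()
    return {
        "command": command,
        "exit_status": exit_status,
        "stdout": out,
        "stderr": err,
        "duration_sec": round(time.monotonic() - start, 2),
    }


# Example (client is a connected paramiko.SSHClient):
# result = run_ssh(client, "journalctl -k -b --no-pager | tail -n 200")
# The dict is serialized to JSON and handed back to the model as the tool result.
```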
No. By default you only see terminal-style output (stdout/stderr).
If you want to debug what the AI is receiving, press F6 to toggle Tool JSON. When enabled, the right pane will also print:
[TOOL RESULT JSON]: the raw JSON payload that gets sent back to the AI
- Resizable split panes (chat vs output)
- Toggle showing tool JSON / raw payloads
- Command history + replay
- Session recording/export to Markdown
- Multi-host support / saved profiles
- PTY support for interactive commands
MIT (or choose your own)
