
feat(brain): remote HTTP MCP server on Charlie :8501#138

Open
Mikecranesync wants to merge 9 commits into main from feat/brain-mcp-http

Conversation

@Mikecranesync
Owner

Summary

  • brain_server.py: Added env-var-driven MCP_HOST/MCP_PORT/MCP_TRANSPORT config, /health endpoint, streamable-http mode with optional bearer auth via BRAIN_ACCESS_KEY
  • deploy-brain-mcp-mac.sh: One-shot deploy script for Charlie Mac Mini — installs deps, creates launchd plist, loads service on port 8501, disables sleep, prints claude mcp add command
  • Stdio mode unchanged (default) — existing .mcp.json usage unaffected
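The env-var handling described in the first bullet might look roughly like this (a minimal sketch; the variable names come from the summary above, but the defaults and function name are assumptions, not the actual brain_server.py):

```python
import os

# Sketch of env-var-driven config. MCP_HOST/MCP_PORT/MCP_TRANSPORT are the
# names from the PR summary; the defaults shown here are assumptions.
def load_config():
    return {
        "host": os.environ.get("MCP_HOST", "127.0.0.1"),
        "port": int(os.environ.get("MCP_PORT", "8501")),
        # "stdio" stays the default so existing .mcp.json usage is unaffected
        "transport": os.environ.get("MCP_TRANSPORT", "stdio"),
    }
```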

Why

Every cluster device needs brain tools (brain_search, brain_capture, etc.). Previously, each machine needed a local Python install, a venv, and a repo checkout. Now a single persistent HTTP server runs on Charlie, and each device registers with one claude mcp add.

claude mcp add --transport http open-brain \
  http://100.82.246.52:8501/mcp

Test plan

  • Deploy script runs clean on Charlie
  • Health check passes: curl -sf http://localhost:8501/health
  • MCP endpoint responds (streamable-http handshake)
  • Register from another device via claude mcp add --transport http
  • Call brain_search("test") from remote Claude Code session
  • Reboot Charlie → service auto-starts

🤖 Generated with Claude Code

CharlieNode and others added 9 commits March 9, 2026 06:59
- brain-feed.yml: continue-on-error, fast timeouts, jq payload, dead letter queue
- ci-watchdog.yml: 30-min health check, severity classification, auto-issue management
- replay-brain-dlq.sh: replay failed payloads once endpoint recovers
- INC-2026-03-09-001: incident report for 14 consecutive failures
- Ops trace documenting the change

Fixes brain-feed blocking all pushes when brain-ingest endpoint is down.
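The failure-isolation pattern these workflow changes describe can be sketched as a workflow excerpt (step names, directory layout, and the BRAIN_INGEST_URL variable are assumptions, not the actual brain-feed.yml):

```yaml
jobs:
  brain-feed:
    runs-on: ubuntu-latest
    continue-on-error: true        # never block pushes on ingest failures
    steps:
      - name: Post commit payload to brain-ingest
        run: |
          mkdir -p dlq
          jq -n --arg sha "$GITHUB_SHA" '{commit: $sha}' > payload.json
          curl -sf --max-time 10 -X POST --data @payload.json \
            "$BRAIN_INGEST_URL" \
            || mv payload.json "dlq/$GITHUB_SHA.json"   # dead letter queue
```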

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- brain_server.py: env-var-driven host/port/transport (MCP_HOST, MCP_PORT, MCP_TRANSPORT)
- brain_server.py: /health endpoint via custom_route (public, no auth)
- brain_server.py: streamable-http mode with optional bearer auth (BRAIN_ACCESS_KEY)
- deploy-brain-mcp-mac.sh: one-shot deploy for Charlie Mac Mini (port 8501, launchd)
- requirements.txt: add mcp[cli] and httpx

Enables multi-device access: claude mcp add --transport http open-brain http://<tailscale-ip>:8501/mcp
Brain-ingest on :8500 untouched.
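The optional bearer auth might be checked roughly like this (BRAIN_ACCESS_KEY is from the commit message; the function name and exact behavior are assumptions):

```python
import os
from typing import Optional

# Sketch of an optional bearer-auth check: when BRAIN_ACCESS_KEY is unset,
# auth is disabled; when set, the Authorization header must match exactly.
def is_authorized(auth_header: Optional[str]) -> bool:
    key = os.environ.get("BRAIN_ACCESS_KEY")
    if not key:
        return True  # no key configured -> open access (stdio/local use)
    return auth_header == f"Bearer {key}"
```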

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Telegram fault alerts now fire immediately without waiting for Cosmos
analysis (which could block up to 30s). Cosmos enrichment runs as a
background task and sends a follow-up message when ready.

- Add send_raw_incident_alert() and send_enrichment_followup() to TelegramAlerter
- Restructure _check_incidents() and _compare_twins() to alert-first
- Add circuit breaker to CosmosClient (3-fail threshold, 300s cooldown)
- Fix Python 3.9 syntax (str | None -> Optional[str]) in cosmos/client.py
- Add 6 isolation tests verifying decoupling and circuit breaker
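A minimal circuit-breaker sketch matching the thresholds in the commit message (3 consecutive failures, 300 s cooldown); the class and method names are assumptions, not the actual CosmosClient API:

```python
import time

class CircuitBreaker:
    """Open after `threshold` consecutive failures; retry after `cooldown` s."""

    def __init__(self, threshold: int = 3, cooldown: float = 300.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown:
            # half-open: let one call through after the cooldown elapses
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record_success(self) -> None:
        self.failures = 0
        self.opened_at = None

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic()
```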

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace hardcoded GEMINI_API_KEY/GROQ_API_KEY/NEON_DATABASE_URL in
LaunchAgent plists with doppler run -p factorylm -c dev -- wrapper.
Fixes API key expired errors after key rotation in Doppler.

- Update .mcp.json open-brain entry to use doppler run with full paths
  (doppler at ~/.local/bin/doppler, python3 at brain-venv — no bash
  escaping or --format flag compatibility issues)
- Fix com.factorylm.brain-mcp.plist and brain-ingest.plist to use
  doppler run instead of hardcoded EnvironmentVariables block
- Fix brain-ingest WorkingDirectory (was stale worktree path)
- Add config/network.env — node identity and IP config for CHARLIE
- Add scripts/health-check.sh — cluster health check script
- Add telegram_bot.py timeout config (connect/read/write/pool)
- Add work_orders JSON files from troubleshoot service
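The doppler run wrapper in a LaunchAgent plist would look something like this excerpt (the paths are illustrative assumptions; the commit notes doppler lives at ~/.local/bin/doppler and python3 in brain-venv):

```xml
<!-- Hypothetical ProgramArguments excerpt; all paths are assumptions -->
<key>ProgramArguments</key>
<array>
  <string>/Users/charlie/.local/bin/doppler</string>
  <string>run</string>
  <string>-p</string><string>factorylm</string>
  <string>-c</string><string>dev</string>
  <string>--</string>
  <string>/Users/charlie/brain-venv/bin/python3</string>
  <string>brain_server.py</string>
</array>
```

Secrets are resolved by Doppler at launch time, so rotating a key no longer requires editing or reloading the plist.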

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
POST /api/query endpoint that accepts {question, machine?} and returns
a unified response from Qdrant KB, Nautobot device records, and the
LLM router (with Groq direct fallback).

- Searches Qdrant :8000/factorylm_brain for top-5 relevant KB chunks
  using keyword-scored scroll (no embedding dependency for V1)
- Fetches Nautobot device context via HTTP when machine is provided
- Assembles context + calls LLM router :8100 (Groq fallback chain)
- Returns {answer, sources, machine_context, latency_ms}
- GET /health reports Qdrant + service connectivity
- 18 integration tests hitting real Qdrant, Nautobot, and query-api
  (no mocks — per project policy)

Services: query-api port 8300
Tests: 18/18 passing
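The request/response contract described above, as a hedged sketch (the field names come from the commit message; the values are illustrative):

```python
import json

# Build a request body for POST /api/query; "machine" is optional.
request = json.dumps({
    "question": "Why is the Micro820 dropping Modbus connections?",
    "machine": "micro820-01",   # illustrative machine name
})

# Keys the response is described as returning:
RESPONSE_KEYS = {"answer", "sources", "machine_context", "latency_ms"}
```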

Co-Authored-By: Paperclip <noreply@paperclip.ing>
- tools/ingest_factorylm_kb.py: indexes FactoryLM equipment data into Qdrant factorylm_brain
- kb/factorylm_modbus_map.md: Micro820 PLC Modbus TCP register reference

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
All nodes inherit this via git pull on session start:
- Board: https://github.com/users/Mikecranesync/projects/4
- Session start: show In Progress + Todo items
- On commit: add issues, move to Done
- Field IDs hardcoded (Status, Todo, In Progress, Done)
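Moving a board item to Done on a Projects v2 board goes through the updateProjectV2ItemFieldValue GraphQL mutation; a sketch with placeholder variables (the real field and option IDs are hardcoded in the repo, per the commit message):

```python
# GraphQL mutation to set a Projects v2 single-select field (e.g. Status
# -> Done). All IDs are placeholders supplied as variables at call time.
MOVE_TO_DONE = """
mutation($project: ID!, $item: ID!, $field: ID!, $option: String!) {
  updateProjectV2ItemFieldValue(input: {
    projectId: $project,
    itemId: $item,
    fieldId: $field,
    value: { singleSelectOptionId: $option }
  }) { projectV2Item { id } }
}
"""
```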

Closes Mikecranesync/MIRA#90

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- docs/gtm/YOUTUBE_SHORTS_GTM.md — full strategy doc (FAC-18)
- docs/gtm/CONTENT_CALENDAR_4W.md — 4-week launch calendar (FAC-18)
- docs/gtm/REVIEW_CHECKLIST.md — pre-publish quality gate (FAC-19)
- tools/shorts_pipeline.py — LinkedIn-first vertical video producer
  (crop 9:16, Whisper captions, hook card, progress bar, end card, validation)
- tools/thumbnail_generator.py — Pillow 1280×720 thumbnails, 6 series color map
- tools/youtube_uploader.py — YouTube Data API v3 wrapper with quota guard
- tools/cross_post.py — one LinkedIn master → 5 platform derivatives
- tools/analytics_reporter.py — weekly analytics + self-improving calendar
- workers/content_capture_tasks.py — auto-trigger Shorts pipeline on score ≥ 8

Platform priority: LinkedIn native video first (B2B ICP), Shorts as derivative.
Closes #82, #83, #84, #85, #86, #87, #88.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
API_URL was hardcoded to http://165.245.138.91:8082.
It now uses ${PUBLIC_API_URL:-http://localhost:8082}, so it works
on any host without file edits.
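The fallback expansion behaves like this (a quick shell demonstration):

```shell
# ${PUBLIC_API_URL:-default} expands to the default only when the
# variable is unset or empty.
unset PUBLIC_API_URL
API_URL="${PUBLIC_API_URL:-http://localhost:8082}"
echo "$API_URL"    # http://localhost:8082

PUBLIC_API_URL="http://165.245.138.91:8082"
API_URL="${PUBLIC_API_URL:-http://localhost:8082}"
echo "$API_URL"    # http://165.245.138.91:8082
```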

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>