From 15de8e4f7e9d2dde1d3ec7d82657735c9ae3019b Mon Sep 17 00:00:00 2001 From: Claude Date: Fri, 23 Jan 2026 05:18:36 +0000 Subject: [PATCH 1/2] feat(research): initialize jade-ide VSCode fork research framework - Add unmodified_prompt.md with original research requirements - Create init.sh for multi-agent research orchestration - Add RESEARCH.md as central research index document - Initialize 2604-research-append-progress.txt for agent outputs Research covers: VSCode fork patterns (Cursor/Kiro/Windsurf), ACP protocol, Ubuntu 26.04 WSL2 setup, chezmoi dotfiles, Ollama local, and enterprise configs. https://claude.ai/code/session_01KaaGCSkyCSyBvFGLH7RgAR --- 2604-research-append-progress.txt | 66 ++++++ RESEARCH.md | 324 ++++++++++++++++++++++++++++++ init.sh | 236 ++++++++++++++++++++++ unmodified_prompt.md | 45 +++++ 4 files changed, 671 insertions(+) create mode 100644 2604-research-append-progress.txt create mode 100644 RESEARCH.md create mode 100755 init.sh create mode 100644 unmodified_prompt.md diff --git a/2604-research-append-progress.txt b/2604-research-append-progress.txt new file mode 100644 index 0000000..3eaf5ba --- /dev/null +++ b/2604-research-append-progress.txt @@ -0,0 +1,66 @@ +# JADE-IDE Research Progress Log +================================================================================ +Session: claude/research-vscode-fork-1hrvL +Started: 2026-01-23T00:00:00Z +================================================================================ + +## Research Objectives + +### Primary Goals +1. Evaluate VSCode fork viability (Cursor, Kiro, Gemini Code patterns) +2. Design jade-cli architecture (Claude Code compatible, standalone capable) +3. Define Ubuntu 26.04 development environment specifications +4. Establish .claude dotfiles hierarchy for enterprise + individual use + +### Hardware Context +- **VRAM:** 11GB (GPU passthrough for Ollama/CUDA) +- **RAM:** 128GB (sufficient for large context models) +- **CPU:** 24 threads (parallel build/test capability) +- **Host:** Windows 10 + WSL2 Ubuntu 26.04 + +### Dotfiles Hierarchy Model +``` +Layer 1: ~/.claude/ # Engineer personal +Layer 2: ~///.claude/ # Project-specific +Layer 3: org-config (IT managed) # Enterprise policies +Layer 4: shared-dotfiles (GitHub org) # Reusable templates +``` + +================================================================================ +## Research Progress +================================================================================ + +--- +## [00:00:01] Agent: research-init +**Status:** COMPLETED +**Timestamp:** 2026-01-23T00:00:00Z + +### Initial Web Research Findings + +#### Agent Client Protocol (ACP) +- **Source:** Google + Zed Industries collaboration +- **Purpose:** Open standard for AI agents to integrate with any IDE +- **Architecture:** JSON-RPC over stdio, reuses MCP specs +- **Supported:** Zed, JetBrains, Neovim, Emacs, Claude (via adapter) +- **Key Insight:** Designed for portable agents - not locked to one IDE + +#### Cursor IDE (VSCode Fork) +- **Developer:** Anysphere Inc. 
+- **Architecture:** Full VS Code fork with AI-first principles +- **Key Tech:** RAG-based codebase indexing, 272k token context +- **Challenge:** Microsoft extensions stopped working (2025 licensing) +- **Why Fork:** Extension API too constrained for deep AI integration + +#### Kiro IDE (AWS VSCode Fork) +- **Developer:** Amazon Web Services +- **Architecture:** Code OSS fork, powered by Anthropic Claude +- **Key Innovation:** Spec-driven development (requirements → stories → tasks) +- **Agent Hooks:** Event-driven agents triggered by file/commit changes +- **Pricing:** $20/month (preview) + +### Implications for jade-ide +1. **ACP Support Essential:** Must implement ACP for agent portability +2. **Fork vs Extension:** Deep AI integration requires fork, not extension +3. **Spec-Driven Potential:** Consider Kiro's spec approach for enterprise +4. **RAG Architecture:** Cursor's codebase indexing pattern is proven + diff --git a/RESEARCH.md b/RESEARCH.md new file mode 100644 index 0000000..72021cf --- /dev/null +++ b/RESEARCH.md @@ -0,0 +1,324 @@ +--- +title: "jade-ide Research Framework" +project: jadecli +session: claude/research-vscode-fork-1hrvL +date: 2026-01-23 +status: active +hardware: + vram: 11GB + ram: 128GB + cpu_threads: 24 + host_os: Windows 10 + guest_os: Ubuntu 26.04 (WSL2) +--- + +# JADE-IDE Research Framework + +## Executive Summary + +This document tracks research for **jade-ide** (VSCode fork) and **jade-cli** (Claude Code compatible CLI), targeting commercial-grade deployment with enterprise dotfiles management. + +## Quick Links + +| Document | Purpose | +|----------|---------| +| [unmodified_prompt.md](./unmodified_prompt.md) | Original research prompt | +| [init.sh](./init.sh) | Multi-agent research orchestration | +| [2604-research-append-progress.txt](./2604-research-append-progress.txt) | Live research progress log | + +--- + +## 1. VSCode Fork Landscape (2025-2026) + +### 1.1 Major Players + +| IDE | Developer | Architecture | Key Innovation | +|-----|-----------|--------------|----------------| +| **Cursor** | Anysphere | VSCode fork | RAG codebase indexing, 272k context | +| **Kiro** | AWS | Code OSS fork | Spec-driven development | +| **Windsurf** | Codeium | VSCode fork | Cascade multi-file edits | +| **Zed** | Zed Industries | Native Rust | GPU-accelerated, ACP pioneer | +| **Gemini Code** | Google | VSCode extension | Deep Google Cloud integration | + +### 1.2 Why Fork (Not Extend) + +VSCode's extension API is deliberately constrained: +- Cannot modify core editor UI/UX +- Limited access to full codebase context +- No control over distribution/branding +- Extension sandboxing limits AI integration depth + +**Cursor's approach:** Fork entire VSCode, modify Electron layer, add proprietary AI backbone. + +### 1.3 Implications for jade-ide + +1. **Must fork Code OSS** (not VSCode) for licensing compliance +2. **ACP support essential** for agent portability +3. **Consider Kiro's spec-driven model** for enterprise customers +4. **Address Electron overhead** with performance optimizations + +--- + +## 2. Agent Client Protocol (ACP) + +### 2.1 Overview + +ACP is an open standard (Google + Zed) enabling any AI agent to integrate with any IDE. 
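+
+Because the transport is plain JSON-RPC over stdio (see 2.2), an editor-to-agent exchange can be sketched in a handful of messages. The snippet below is only an illustrative sketch; the method and field names are placeholders, not the official ACP schema:
+
+```json
+// Editor -> agent: handshake (illustrative names, not the published schema)
+{"jsonrpc": "2.0", "id": 1, "method": "initialize",
+ "params": {"protocolVersion": 1, "clientInfo": {"name": "jade-ide"}}}
+
+// Editor -> agent: forward a user prompt within an established session
+{"jsonrpc": "2.0", "id": 2, "method": "session/prompt",
+ "params": {"sessionId": "s-1", "prompt": [{"type": "text", "text": "Explain init.sh"}]}}
+
+// Agent -> editor: streamed progress notification for that session
+{"jsonrpc": "2.0", "method": "session/update",
+ "params": {"sessionId": "s-1", "update": {"type": "agent_message_chunk", "text": "..."}}}
+```
+
+In this model, jade-ide plays the editor role and jade-cli can sit on the agent side of the same stdio channel, as the diagram below illustrates.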
+ +``` +┌─────────────────────────────────────────────────────────────┐ +│ IDE / Editor │ +│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ +│ │ Zed │ │ JetBrains│ │ Neovim │ │ Emacs │ │ +│ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ │ +│ │ │ │ │ │ +│ └─────────────┴─────────────┴─────────────┘ │ +│ │ │ +│ ACP Protocol │ +│ (JSON-RPC/stdio) │ +│ │ │ +│ ┌─────────────┬────┴────┬─────────────┐ │ +│ │ │ │ │ │ +│ ┌────┴────┐ ┌────┴────┐ ┌─┴───────┐ ┌─┴───────┐ │ +│ │ Goose │ │ Claude │ │ Gemini │ │ jade │ │ +│ │ Agent │ │ Agent │ │ Agent │ │ Agent │ │ +│ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │ +└─────────────────────────────────────────────────────────────┘ +``` + +### 2.2 Technical Details + +- **Transport:** JSON-RPC over stdio +- **Reuses:** MCP (Model Context Protocol) specifications +- **Text Format:** Markdown-based +- **Libraries:** TypeScript and Rust implementations available + +### 2.3 Supported Environments + +- Zed (native) +- JetBrains IDEs (coming) +- Neovim (via CodeCompanion, avante.nvim) +- Emacs (via agent-shell.el) +- Claude (via Zed SDK adapter) + +--- + +## 3. Dotfiles Hierarchy (4-Layer Model) + +### 3.1 Architecture + +``` +Priority (highest to lowest): +┌───────────────────────────────────────────────────────────┐ +│ Layer 4: GitHub Org Shared Templates │ +│ github.com//claude-dotfiles/ │ +│ └── templates/ (reusable rules, commands, hooks) │ +├───────────────────────────────────────────────────────────┤ +│ Layer 3: Enterprise IT Configs │ +│ Managed via: MDM, Vault, 1Password Business │ +│ └── Policies: SSO, audit logging, data residency │ +├───────────────────────────────────────────────────────────┤ +│ Layer 2: Project-Specific │ +│ ~///.claude/ │ +│ └── CLAUDE.md, rules/, commands/, hooks/ │ +├───────────────────────────────────────────────────────────┤ +│ Layer 1: Engineer Personal │ +│ ~/.claude/ │ +│ └── settings.json, personal rules, custom commands │ +└───────────────────────────────────────────────────────────┘ +``` + +### 3.2 Configuration Resolution + +1. Load Layer 1 (personal defaults) +2. Overlay Layer 2 (project overrides) +3. Apply Layer 3 (enterprise policies - mandatory) +4. Inject Layer 4 (shared templates as needed) + +### 3.3 Chezmoi Integration + +```toml +# ~/.config/chezmoi/chezmoi.toml +[data] + org = "jade-corp" + team = "platform" + env = "dev" + +[data.claude] + api_tier = "enterprise" + mcp_enabled = true +``` + +``` +# ~/.local/share/chezmoi/dot_claude/ +├── settings.json.tmpl # Templated with {{ .org }} +├── rules/ +│ ├── security.md # Static +│ └── team-{{ .team }}.md.tmpl # Per-team rules +└── hooks/ + └── pre-commit.sh.tmpl +``` + +--- + +## 4. Ubuntu 26.04 WSL2 Environment + +### 4.1 Hardware Utilization + +| Resource | Allocation | Configuration | +|----------|------------|---------------| +| VRAM | 11GB | Full GPU passthrough for Ollama | +| RAM | 128GB | WSL memory limit: 64GB recommended | +| CPU | 24 threads | All cores available to WSL | + +### 4.2 WSL2 Configuration + +```ini +# %USERPROFILE%\.wslconfig +[wsl2] +memory=64GB +processors=24 +swap=8GB +localhostForwarding=true + +[experimental] +autoMemoryReclaim=gradual +sparseVhd=true +``` + +### 4.3 GPU Passthrough + +```bash +# Verify CUDA in WSL +nvidia-smi +# Install CUDA toolkit (WSL-specific) +sudo apt install nvidia-cuda-toolkit +``` + +--- + +## 5. 
Package Management Strategy + +### 5.1 Recommended Stack + +| Tool | Purpose | Install | +|------|---------|---------| +| **mise** | Polyglot runtime manager | `curl https://mise.run \| sh` | +| **uv** | Python package manager | `curl -LsSf https://astral.sh/uv/install.sh \| sh` | +| **pnpm** | Node package manager | `mise install pnpm` | +| **chezmoi** | Dotfiles manager | `mise install chezmoi` | + +### 5.2 Anti-Patterns to Avoid + +- ❌ Using apt for development tools (stale versions) +- ❌ Global pip install (conflicts) +- ❌ nvm for Node (slow, mise is faster) +- ❌ Manual PATH management +- ❌ Storing secrets in dotfiles + +### 5.3 Recommended Patterns + +- ✅ mise for all runtimes (node, python, go, rust) +- ✅ uv for Python (10-100x faster than pip) +- ✅ chezmoi for dotfiles (templating, encryption) +- ✅ 1Password CLI / Vault for secrets +- ✅ direnv for project-specific env vars + +--- + +## 6. Ollama Local Setup (11GB VRAM) + +### 6.1 Recommended Models + +| Model | Size | VRAM | Use Case | +|-------|------|------|----------| +| deepseek-coder:6.7b-instruct-q4_K_M | 4GB | ~5GB | Code completion | +| codellama:13b-instruct-q4_K_M | 7GB | ~9GB | Code generation | +| qwen2.5-coder:7b-instruct-q4_K_M | 4GB | ~5GB | Multi-language | +| mistral:7b-instruct-q4_K_M | 4GB | ~5GB | General assistant | + +### 6.2 Modelfile Example + +```dockerfile +# ~/.ollama/models/jade-coder +FROM deepseek-coder:6.7b-instruct-q4_K_M + +PARAMETER temperature 0.7 +PARAMETER top_p 0.9 +PARAMETER num_ctx 8192 + +SYSTEM """ +You are jade-coder, an AI assistant specialized in: +- TypeScript/JavaScript (VSCode extension development) +- Rust (performance-critical components) +- Python (tooling and automation) +Follow jade-ide coding standards and patterns. +""" +``` + +### 6.3 MCP Bridge for Claude Code + +```json +// ~/.claude/settings.json +{ + "mcpServers": { + "ollama": { + "command": "npx", + "args": ["@anthropic/mcp-ollama-bridge"], + "env": { + "OLLAMA_HOST": "http://localhost:11434" + } + } + } +} +``` + +--- + +## 7. Research Agents (In Progress) + +| Agent | Status | Output | +|-------|--------|--------| +| ubuntu-2604 | 🔄 Running | WSL2 optimization, CUDA setup | +| chezmoi-org | 🔄 Running | Organizational dotfiles patterns | +| ollama-local | 🔄 Running | GPU optimization, model selection | +| claude-dotfiles | 🔄 Running | 4-layer hierarchy implementation | +| enterprise-config | 🔄 Running | SSO, audit, compliance | +| git-culture | 🔄 Running | AI team workflows | + +Check progress: `tail -f 2604-research-append-progress.txt` + +--- + +## 8. Next Steps + +1. [ ] Review parallel agent findings +2. [ ] Synthesize into jade-ide architecture doc +3. [ ] Create jade-cli specification +4. [ ] Design dotfiles hierarchy implementation +5. [ ] Set up Ubuntu 26.04 development environment +6. 
[ ] Initialize jade-ide repository structure + +--- + +## Sources + +### VSCode Fork Architecture +- [Forked by Cursor: Hidden Cost of VS Code Fragmentation](https://dev.to/pullflow/forked-by-cursor-the-hidden-cost-of-vs-code-fragmentation-4p1) +- [Why Cursor, Windsurf fork VS Code](https://blogs.eclipse.org/post/thomas-froment/why-cursor-windsurf-and-co-fork-vs-code-shouldnt) +- [Cursor Alternatives 2026](https://www.builder.io/blog/cursor-alternatives-2026) + +### Agent Client Protocol +- [Intro to ACP (Goose)](https://block.github.io/goose/blog/2025/10/24/intro-to-agent-client-protocol-acp/) +- [Zed ACP Documentation](https://zed.dev/acp) +- [JetBrains AI ACP Support](https://www.jetbrains.com/help/ai-assistant/acp.html) +- [Google/Zed ACP Announcement](https://www.theregister.com/2025/08/28/google_zed_acp/) + +### Kiro IDE +- [AWS Kiro: Spec-Driven AI IDE](https://thenewstack.io/aws-kiro-testing-an-ai-ide-with-a-spec-driven-approach/) +- [Kiro Official](https://kiro.dev/) +- [Kiro InfoQ Coverage](https://www.infoq.com/news/2025/08/aws-kiro-spec-driven-agent/) + +### AI Agent Protocols +- [AI Agent Protocols 2026 Guide](https://www.ruh.ai/blogs/ai-agent-protocols-2026-complete-guide) +- [Top AI Agent Protocols](https://getstream.io/blog/ai-agent-protocols/) diff --git a/init.sh b/init.sh new file mode 100755 index 0000000..9cbbe04 --- /dev/null +++ b/init.sh @@ -0,0 +1,236 @@ +#!/usr/bin/env bash +# ============================================================================ +# JADE-IDE RESEARCH INITIALIZATION SCRIPT +# ============================================================================ +# Project: jade-ide (VSCode fork) + jade-cli (Claude Code compatible) +# Date: 2026-01-23 +# Session: claude/research-vscode-fork-1hrvL +# ============================================================================ + +set -euo pipefail + +# ---------------------------------------------------------------------------- +# RESEARCH CONFIGURATION +# ---------------------------------------------------------------------------- +RESEARCH_DIR="${RESEARCH_DIR:-$(pwd)}" +PROGRESS_FILE="${RESEARCH_DIR}/2604-research-append-progress.txt" +TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ") + +# Hardware Context +export JADE_VRAM="11GB" +export JADE_RAM="128GB" +export JADE_CPU_THREADS="24" +export JADE_HOST_OS="Windows 10" +export JADE_GUEST_OS="Ubuntu 26.04 (WSL)" + +# ---------------------------------------------------------------------------- +# RESEARCH DOMAINS (Parallel Agent Topics) +# ---------------------------------------------------------------------------- +declare -A RESEARCH_AGENTS=( + ["vscode-fork"]="VSCode fork architecture, Electron, extension API, cursor/kiro patterns" + ["ubuntu-2604"]="Ubuntu 26.04 LTS setup, WSL2 optimization, CUDA/GPU passthrough" + ["claude-dotfiles"]="Claude Code configuration hierarchy, CLAUDE.md best practices" + ["chezmoi-org"]="Chezmoi for organizations, dotfiles templating, multi-machine sync" + ["ollama-local"]="Ollama setup, model management, vLLM alternatives, GPU optimization" + ["acp-protocol"]="Agent Client Protocol, MCP servers, tool integration patterns" + ["git-culture"]="Git workflows for AI teams, monorepo vs polyrepo, trunk-based dev" + ["enterprise-config"]="Enterprise Claude configs, IT policies, secrets management" +) + +# ---------------------------------------------------------------------------- +# PROGRESS TRACKING FUNCTIONS +# ---------------------------------------------------------------------------- +append_progress() { + local agent="$1" + local 
status="$2" + local content="$3" + + # Append without opening file (atomic write) + printf "\n---\n## [%s] Agent: %s\n**Status:** %s\n**Timestamp:** %s\n\n%s\n" \ + "$(date -u +"%H:%M:%S")" "$agent" "$status" "$TIMESTAMP" "$content" \ + >> "$PROGRESS_FILE" +} + +init_progress_file() { + cat >> "$PROGRESS_FILE" << 'HEADER' +# JADE-IDE Research Progress Log +================================================================================ +Session: claude/research-vscode-fork-1hrvL +Started: $(date -u +"%Y-%m-%dT%H:%M:%SZ") +================================================================================ + +## Research Objectives + +### Primary Goals +1. Evaluate VSCode fork viability (Cursor, Kiro, Gemini Code patterns) +2. Design jade-cli architecture (Claude Code compatible, standalone capable) +3. Define Ubuntu 26.04 development environment specifications +4. Establish .claude dotfiles hierarchy for enterprise + individual use + +### Hardware Context +- **VRAM:** 11GB (GPU passthrough for Ollama/CUDA) +- **RAM:** 128GB (sufficient for large context models) +- **CPU:** 24 threads (parallel build/test capability) +- **Host:** Windows 10 + WSL2 Ubuntu 26.04 + +### Dotfiles Hierarchy Model +``` +Layer 1: ~/.claude/ # Engineer personal +Layer 2: ~///.claude/ # Project-specific +Layer 3: org-config (IT managed) # Enterprise policies +Layer 4: shared-dotfiles (GitHub org) # Reusable templates +``` + +================================================================================ +## Research Progress +================================================================================ +HEADER +} + +# ---------------------------------------------------------------------------- +# RESEARCH TOPIC DEFINITIONS +# ---------------------------------------------------------------------------- + +research_vscode_fork() { + append_progress "vscode-fork" "STARTED" "Researching VSCode fork patterns..." + + # Key investigation areas: + # - Cursor: How they extended VSCode for AI-first development + # - Kiro: AWS-backed, spec-driven development approach + # - Gemini Code: Google's approach to IDE integration + # - Windsurf/Codeium: Alternative fork strategies + # - Extension API stability across VSCode versions + # - Electron optimization for AI workloads + + echo "vscode-fork" +} + +research_ubuntu_2604() { + append_progress "ubuntu-2604" "STARTED" "Researching Ubuntu 26.04 LTS setup..." + + # Key investigation areas: + # - WSL2 GPU passthrough (NVIDIA CUDA) + # - systemd integration in WSL + # - Memory/CPU allocation optimization + # - Docker Desktop vs native containers + # - mise/asdf for polyglot tooling + # - Snap vs apt vs Nix for reproducibility + + echo "ubuntu-2604" +} + +research_claude_dotfiles() { + append_progress "claude-dotfiles" "STARTED" "Researching Claude Code configuration..." + + # Key investigation areas: + # - ~/.claude/ structure (settings.json, rules/, commands/, hooks/) + # - CLAUDE.md per-project context + # - MCP server configuration + # - Plugin architecture + # - Permission models (hooks, tools) + # - Multi-project isolation + + echo "claude-dotfiles" +} + +research_chezmoi_org() { + append_progress "chezmoi-org" "STARTED" "Researching chezmoi for organizations..." 
+ + # Key investigation areas: + # - Templating with {{ .chezmoi.* }} variables + # - External data sources (1Password, Vault, AWS SM) + # - Machine-specific configs with .chezmoi.toml + # - Scripts: run_once_, run_onchange_ + # - Multi-source dotfiles (private + org repos) + # - Encryption for sensitive configs + + echo "chezmoi-org" +} + +research_ollama_local() { + append_progress "ollama-local" "STARTED" "Researching Ollama local setup..." + + # Key investigation areas: + # - Model quantization for 11GB VRAM + # - Modelfiles for custom assistants + # - ollama serve daemon configuration + # - Integration with Claude Code (MCP bridge) + # - vLLM as alternative for higher throughput + # - GPU memory management strategies + + echo "ollama-local" +} + +research_acp_protocol() { + append_progress "acp-protocol" "STARTED" "Researching Agent Client Protocol..." + + # Key investigation areas: + # - ACP specification (from llms.txt) + # - MCP (Model Context Protocol) relationship + # - Tool definition patterns + # - IDE integration hooks + # - Multi-agent coordination + # - State management across agents + + echo "acp-protocol" +} + +research_git_culture() { + append_progress "git-culture" "STARTED" "Researching git team culture..." + + # Key investigation areas: + # - Trunk-based development for AI-assisted coding + # - PR review workflows with AI + # - Branch naming conventions (claude/*, feature/*) + # - Commit message standards + # - Monorepo vs polyrepo for IDE+CLI + # - GitHub Actions for CI/CD + + echo "git-culture" +} + +research_enterprise_config() { + append_progress "enterprise-config" "STARTED" "Researching enterprise configurations..." + + # Key investigation areas: + # - Claude for Enterprise features + # - SSO/SAML integration + # - Audit logging requirements + # - Data residency constraints + # - API key rotation policies + # - Secrets management (Vault, 1Password) + + echo "enterprise-config" +} + +# ---------------------------------------------------------------------------- +# MAIN EXECUTION +# ---------------------------------------------------------------------------- +main() { + echo "============================================================" + echo "JADE-IDE Research Initialization" + echo "============================================================" + echo "Progress file: $PROGRESS_FILE" + echo "Research agents: ${#RESEARCH_AGENTS[@]}" + echo "" + + # Initialize progress file + init_progress_file + + echo "Research domains to investigate:" + for agent in "${!RESEARCH_AGENTS[@]}"; do + echo " - [$agent]: ${RESEARCH_AGENTS[$agent]}" + done + + echo "" + echo "To run parallel research, use Claude Code with:" + echo " claude --research jade-ide" + echo "" + echo "Or trigger individual agents via Task tool with subagent_type=Explore" + echo "============================================================" +} + +# Run if executed directly +if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then + main "$@" +fi diff --git a/unmodified_prompt.md b/unmodified_prompt.md new file mode 100644 index 0000000..de77d56 --- /dev/null +++ b/unmodified_prompt.md @@ -0,0 +1,45 @@ +# Original Research Prompt - Jade IDE Project + +**Date:** 2026-01-23 +**Session:** claude/research-vscode-fork-1hrvL + +--- + +## Raw Prompt + +Create a structured research used Claude code optimized front matter and modern research techniques to organize knowledge as new insights are gained. 
+ +Focus on researching, if i am staring a business to fork vscode (like what cursor, Gemini and kiro did) that will be called jade-ide and a CLI extension that works with Claude code that could be used standalone or in jade-ide , set up the research context to describe my Linux os 26.04 , vram 11gb , ram 128 gb, and 24 thread cpu running windows 10 with wsl for Ubuntu 26.04. research and provide guidance for how to properly set up a 26.04 Linux environment that is tuned for Claude code and local ollama assisted development on that local machine. All installations , package managers , configurations , and dotfiles need to use modern GitHub organizational techniques for launching commercial grade consumer technology that an engineer can maintain by learning from others mistakes and anti patterns. One of the core benefits jade will offer is improving the team oriented culture of managing git projects and that starts by implementing it for ourselves. On this new 26.04 environment, consider the complexity of 4 scenarios) +1. An engineer will have their own local dotfiles e.g. "~/.claude", +2. A project may or may not have "~///.claude", +3. IT will need to set up enterprise / organizational configs for Claude for the repo +4. There may be usefulness in having shared .claude files as parts of a dotfiles utility at a GitHub organization level and per project. + +The above scenario is only the dotfiles for .claude for an engineer can easily have 20+ on their machine at an organization over time. Think about we can use chezmoi and other relevant modern packages / techniques / processes / automation used by modern ai companies like anthropic , Google Gemini , OpenAI , cursor and GitHub with copilot. And think about how the agentclientprotcol can help us (process Llms.txt https://agentclientprotocol.com/llms.txt) + +Save this prompt into "unmodified_prompt.md", then cp and optimize the formatting as "init.sh" to start a multi agent research process where they all contribute researching using parallel.ai api's for better web search and extract type abilities. They should append write structured output to <2604-research>-append-progress.txt without opening the file as they progress. + +--- + +## Research Metadata + +### Hardware Specifications +| Component | Specification | +|-----------|---------------| +| Host OS | Windows 10 | +| Guest OS | Ubuntu 26.04 (WSL) | +| VRAM | 11GB | +| RAM | 128GB | +| CPU | 24 threads | + +### Project Scope +- **Product:** jade-ide (VSCode fork) +- **CLI:** jade-cli (Claude Code compatible) +- **Target:** Commercial-grade consumer technology +- **Focus:** Team-oriented git project culture + +### Dotfiles Hierarchy (4 Layers) +1. Engineer local: `~/.claude/` +2. Project-level: `~///.claude/` +3. Enterprise/IT: Organization-wide configs +4. 
Shared dotfiles: GitHub org-level utilities From f85985b7d3d164117b18a14d1756bd8257d510d6 Mon Sep 17 00:00:00 2001 From: Claude Date: Fri, 23 Jan 2026 06:45:19 +0000 Subject: [PATCH 2/2] feat(arch): add GitHub organization architecture with ASCII diagrams - Add ARCHITECTURE.md with comprehensive organization structure - Include package strategy matrix (fork vs build from scratch) - Document 4-layer dotfiles configuration flow - Add technology stack matrix by repository - Include data flow architecture diagrams - Document monorepo vs polyrepo hybrid approach - Add development phase roadmap (Q1-Q4) - Update research progress with completed agent findings All 6 research agents completed: - enterprise-config: SSO, audit, compliance patterns - ollama-local: 11GB VRAM optimization, MCP integration - git-culture: trunk-based dev, AI PR review workflows - ubuntu-2604: WSL2 GPU passthrough, memory allocation - claude-dotfiles: 4-layer hierarchy, chezmoi templates - chezmoi-org: organizational dotfiles at scale https://claude.ai/code/session_01KaaGCSkyCSyBvFGLH7RgAR --- 2604-research-append-progress.txt | 1962 +++++++++++++++++++++++++++++ ARCHITECTURE.md | 539 ++++++++ 2 files changed, 2501 insertions(+) create mode 100644 ARCHITECTURE.md diff --git a/2604-research-append-progress.txt b/2604-research-append-progress.txt index 3eaf5ba..fc7d3dd 100644 --- a/2604-research-append-progress.txt +++ b/2604-research-append-progress.txt @@ -64,3 +64,1965 @@ Layer 4: shared-dotfiles (GitHub org) # Reusable templates 3. **Spec-Driven Potential:** Consider Kiro's spec approach for enterprise 4. **RAG Architecture:** Cursor's codebase indexing pattern is proven + +--- +## [05:19:53] Agent: enterprise-config +**Status:** COMPLETED +**Timestamp:** 2026-01-23T05:19:53Z + +### Research: Enterprise Configuration Patterns for AI IDE Deployments + +--- + +#### 1. Claude for Enterprise Features and Requirements + +**Core Capabilities:** +- 500K expanded context window (Team/Enterprise plans) +- Native GitHub integration for full codebase analysis +- Claude Code bundled with Team and Enterprise plans +- Custom data retention controls + +**Security & Compliance:** +- SSO (Single Sign-On) and domain capture +- SCIM (System for Cross-domain Identity Management) for automated user provisioning +- Comprehensive audit logs for activity tracing +- Role-based permissioning +- Compliance API for programmatic access to usage data + +**Pricing Model:** +- Enterprise: ~$60/seat (minimum 70 users, 12-month contract, ~$50K minimum) +- Two billing options: Usage-based (token consumption) or Seat-based (flat monthly) +- Available via direct sales or AWS Marketplace + +**Data Privacy:** +- Anthropic does not train on enterprise conversations +- Custom retention period configuration +- Filtering capabilities by user and time range + +--- + +#### 2. SSO/SAML Integration for IDE Tools + +**Supported Identity Providers:** +1. Okta Workforce Identity Cloud (SAML 2.0, outbound SCIM 2.0, adaptive MFA) +2. Microsoft Entra ID (Azure AD) +3. Auth0 +4. AWS IAM Identity Center +5. Google Workspace Identity +6. 
PingOne + +**Integration Requirements:** +- SAML 2.0 or OIDC for corporate identity provider integration +- Repository-level RBAC via custom claims +- Service account management for AI agents +- Persistent access across distributed development environments + +**AI Agent Challenges:** +- Traditional OAuth 2.0/SAML assume human-in-the-loop consent flows +- AI agents run headless (no UI) requiring specialized auth patterns +- Consider AWS IAM federation for full user attribution and MFA enforcement +- 12-hour session duration typical for enterprise deployments + +**Best Practices:** +- Policy-first deployment (define allow/deny/ask rules before rollout) +- Broad app connector catalogs for simplified deployment +- Risk-based authentication and adaptive MFA + +--- + +#### 3. Audit Logging Requirements for AI Code Assistants + +**Compliance Certifications Required:** +- SOC 2 Type II (validates controls operate effectively over time) +- ISO/IEC 42001 (AI-specific governance) +- HIPAA (healthcare) +- GDPR (EU data protection) + +**What to Log:** +- Algorithm version control (which model version per decision) +- Training data lineage (bias source identification) +- Feature importance tracking +- Human oversight actions (accept/reject/modify decisions) +- Code generation events and AI interactions +- Usage patterns and analytics + +**Minimal Audit Log Schema:** +| Field | Description | +|-------|-------------| +| Timestamp | UTC in ISO 8601 format | +| Request ID | Unique identifier per interaction | +| Actor Details | Role, hashed ID, origin | +| Endpoint | Feature/API route triggered | +| Input Parameters | Anonymized/hashed as required | +| Output Data | Model response + confidence levels | +| Model Version | For decision reproducibility | + +**Retention Policies:** +- Decision Logs: 1-2 years or model version lifetime +- System/Error Logs: 90-180 days +- Model Metadata: Full lifecycle + one release cycle +- PII Data: 6-12 months max (GDPR) + +**Encryption Requirements:** +- At rest: AES-256 or stronger +- In transit: TLS 1.2+ + +--- + +#### 4. Data Residency and Compliance Constraints + +**Key Compliance Frameworks:** +- GDPR: Up to €20M or 4% global turnover penalties +- SOC 2 (adding AI-specific criteria for model governance) +- HIPAA for healthcare +- FedRAMP for US government + +**Data Residency Controls:** +- Customer-managed encryption keys (CMEK) +- Regional data residency guarantees +- Air-gapped deployment for regulated environments +- Self-hosting within VPC for strict residency requirements + +**GDPR Considerations:** +- Standard Contractual Clauses for non-adequate countries +- Data Processing Agreements required for third-party AI endpoints +- Cross-border transfer controls for distributed GPU processing + +**Enterprise Evaluation Criteria:** +1. Certifications and attestations +2. Data protection architecture +3. Deployment model flexibility (cloud/hybrid/on-prem) +4. Identity and access management integration +5. Model privacy guarantees +6. Audit monitoring capabilities + +--- + +#### 5. 
API Key Rotation Policies and Secrets Management + +**HashiCorp Vault:** +- Industry standard for enterprise secrets management +- Dynamic secrets (auto-generated, auto-revoked) +- OpenAI plugin for AI-specific workloads +- Database password auto-rotation +- Encryption key rotation without code changes +- AI-powered "Intelligent Secret Rotation" based on usage patterns + +**1Password Business:** +- SDK for programmatic vault item management +- AI workflow integration with runtime secret read/write/rotate +- Human-in-the-loop approval for AI agent credentials +- End-to-end encrypted credential provisioning +- CI/CD pipeline integrations +- 1Password Service Accounts or Connect servers for automation + +**Key Rotation Best Practices:** +- Dynamic secrets preferred (short-lived, auto-expire) +- Integrate with existing tech stack +- Ensure scalability for projected growth +- Verify compliance with industry regulations +- AES-256 encryption standards minimum + +--- + +#### 6. Network Proxy and Firewall Configurations + +**Proxy Configuration (Claude Code Example):** +- Environment variables: HTTPS_PROXY, HTTP_PROXY, NO_PROXY +- Custom Certificate Authority (CA) support +- mTLS authentication for enhanced security +- SOCKS proxies NOT supported +- For NTLM/Kerberos: Use LLM Gateway service + +**Firewall Requirements:** +- Allowlist specific API endpoints (e.g., api.anthropic.com, api.openai.com) +- HTTPS port 443 for secure communication +- Sentry.io for error reporting (some tools) +- Block outbound access to unauthorized AI services + +**Air-Gapped Deployments:** +- Internal container registry (avoid Docker Hub runtime pulls) +- Host model weights on internal storage +- Firewall AI servers to communicate only with dev workstations/IDEs +- Least privilege network policies + +**AI Firewall Solutions:** +- Cloudflare Firewall for AI (Model DoS protection) +- Akamai Firewall for AI (edge/API/proxy deployment) +- Meta LlamaFirewall (open-source, 90%+ attack mitigation) +- Custom policy enforcement layers for suggestion filtering + +--- + +#### 7. GitHub Copilot Enterprise Deployment Patterns + +**Architecture:** +- Client-server model (IDE extension captures context) +- GitHub Actions for secure, customizable compute environments +- Multi-model support: Claude 3.5/3.7, Gemini 2.0, OpenAI models + +**Enterprise Access Management:** +Two approaches: +1. Direct license assignment to users/teams (simplified at scale) +2. Enable per-organization with owner control + +**Enterprise-Specific Features:** +- Codebase indexing for tailored suggestions +- Private fine-tuned models for code completion +- GitHub.com chat interface integration +- API access for internal developer portal integration +- 1,000 premium requests/month (vs 300 for Business) + +**Network Configuration:** +- Custom SSL certificate installation on user machines +- Firewall allowlisting for Copilot Business +- Optional blocking of Copilot Pro/Free tiers + +**2025-2026 Enhancements:** +- Coding Agent (autonomous task execution via GitHub Actions) +- BYOK (Bring Your Own Key) for enterprise control +- Granular governance policies +- Multi-model flexibility for task-specific selection + +--- + +### Key Recommendations for Enterprise AI IDE Deployment + +1. **Identity:** Implement SAML 2.0 SSO with SCIM provisioning +2. **Secrets:** Use Vault or 1Password for dynamic API key management +3. **Logging:** Capture model version, inputs, outputs, and human decisions +4. 
**Residency:** Evaluate air-gapped or self-hosted options for regulated industries +5. **Network:** Establish explicit allowlists and consider AI-specific firewalls +6. **Compliance:** Obtain SOC 2 Type II + ISO 42001 certifications +7. **Access:** Adopt policy-first deployment with role-based permissions + +--- + +### Sources Consulted +- Anthropic Claude Enterprise documentation +- Augment Code enterprise security guides +- HashiCorp Vault and 1Password developer documentation +- GitHub Copilot Enterprise documentation +- Cloudflare and Akamai AI firewall solutions +- IntuitionLabs air-gapped deployment guides + + +--- +## [05:20:00] Agent: ollama-local +**Status:** COMPLETED +**Timestamp:** 2026-01-23T05:20:00Z + +### Ollama Local Setup Optimized for 11GB VRAM GPU on Ubuntu WSL2 + +--- + +#### 1. Model Quantization Strategies for 11GB VRAM + +| Quantization | Bytes/Weight | VRAM for 7-8B | VRAM for 13B | Notes | +|--------------|--------------|---------------|--------------|-------| +| Q4_K_M | 0.57 | ~6-7 GB | ~10-11 GB | **Best balance** - recommended for most users | +| Q5_K_M | 0.68 | ~7-8 GB | ~12-13 GB | 15-20% more VRAM, marginally better quality | +| Q3_K_S | 0.44 | ~5-6 GB | ~8-9 GB | Lower quality, use if memory constrained | + +**Key Insight:** With 11GB VRAM, you can comfortably run: +- 7-8B models with Q4_K_M or Q5_K_M quantization +- 13B models with Q4_K_M quantization (tight fit) + +**KV Cache Impact:** +- Default FP16 KV cache for 8B model @ 32K context = ~4.5 GB additional +- Enable `OLLAMA_KV_CACHE_TYPE=q8_0` to reduce by ~50% +- Each 1,000 tokens adds ~0.11GB for 7B models + +--- + +#### 2. Best Coding Models for 11GB VRAM + +| Model | Size (Q4_K_M) | Specialization | Performance | +|-------|---------------|----------------|-------------| +| `deepseek-coder:6.7b` | ~5 GB | Python, algorithms | Excellent HumanEval scores | +| `qwen2.5-coder:7b` | ~5 GB | General coding, refactoring | Rivals GitHub Copilot | +| `codellama:13b` (Q4) | ~8 GB | Multi-language (Python, C++, Java, TypeScript) | Production-quality | +| `yi-coder:9b` | ~6 GB | Web development | Niche specialist | +| `codellama:7b-python` | ~5 GB | Python data science | Specialized variant | + +**Recommended Stack for 11GB:** +```bash +# Primary (general purpose coding) +ollama pull qwen2.5-coder:7b + +# Secondary (Python/algorithm specialist) +ollama pull deepseek-coder:6.7b + +# Optional (larger context, tighter fit) +ollama pull codellama:13b-instruct-q4_K_M +``` + +--- + +#### 3. Modelfile Customization for Coding Assistants + +**Coding Assistant Modelfile Template:** +```dockerfile +# ~/ollama/Modelfile.coder +FROM qwen2.5-coder:7b + +# Reduce temperature for deterministic code output +PARAMETER temperature 0.3 + +# Increase context for larger files +PARAMETER num_ctx 8192 + +# Control output length +PARAMETER num_predict 2048 + +# Reduce repetition in code +PARAMETER repeat_penalty 1.1 + +SYSTEM """You are an expert coding assistant specialized in: +- Python, TypeScript, Go, Rust development +- Clean code principles and SOLID design +- Test-driven development (TDD) +- Performance optimization +- Security best practices + +Provide concise, well-documented code with type hints. +Include error handling and edge cases. +When reviewing code, suggest specific improvements. 
+""" +``` + +**Build and Run:** +```bash +ollama create coder -f ~/ollama/Modelfile.coder +ollama run coder +``` + +**Key Parameters for Coding:** +| Parameter | Coding Value | Purpose | +|-----------|--------------|---------| +| temperature | 0.2-0.4 | Lower = more deterministic code | +| num_ctx | 8192-16384 | Context window for large files | +| repeat_penalty | 1.1-1.2 | Avoid repetitive patterns | +| top_k | 40 | Reduce sampling diversity | +| top_p | 0.9 | Nucleus sampling threshold | + +--- + +#### 4. Ollama Serve Daemon Configuration & Memory Management + +**Systemd Service Configuration:** +```bash +# Edit the service +sudo systemctl edit ollama.service +``` + +**Add these environment variables:** +```ini +[Service] +# Listen on all interfaces +Environment="OLLAMA_HOST=0.0.0.0:11434" + +# GPU Optimization (for 11GB VRAM) +Environment="OLLAMA_FLASH_ATTENTION=1" +Environment="OLLAMA_KV_CACHE_TYPE=q8_0" +Environment="OLLAMA_GPU_OVERHEAD=1073741824" + +# Memory management +Environment="OLLAMA_MAX_LOADED_MODELS=1" +Environment="OLLAMA_NUM_PARALLEL=2" + +# Context (adjust based on model) +Environment="OLLAMA_CONTEXT_LENGTH=8192" + +# Specific GPU selection +Environment="CUDA_VISIBLE_DEVICES=0" +``` + +**Key Environment Variables:** +| Variable | Value for 11GB | Purpose | +|----------|----------------|---------| +| OLLAMA_FLASH_ATTENTION | 1 | Faster token generation, less VRAM | +| OLLAMA_KV_CACHE_TYPE | q8_0 | 50% less KV cache memory | +| OLLAMA_GPU_OVERHEAD | 1073741824 (1GB) | Reserve for system ops | +| OLLAMA_MAX_LOADED_MODELS | 1-2 | Prevent OOM with multiple models | +| OLLAMA_NUM_PARALLEL | 2-4 | Concurrent requests | +| CUDA_VISIBLE_DEVICES | 0 | Target specific GPU | + +**WSL2-Specific Fixes:** +```bash +# Fix suspend/resume GPU detection issue +sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm + +# Set TMPDIR for cross-filesystem issues +export TMPDIR="$HOME/.cache/tmp" +``` + +--- + +#### 5. Integration with Claude Code via MCP Bridge + +**Option A: Ollama v0.14.0+ Native Integration (Recommended)** +```bash +# Upgrade to latest Ollama with Anthropic API compatibility +curl -fsSL https://ollama.com/install.sh | sh + +# Verify version >= 0.14.0 +ollama --version + +# Configure Claude Code to use Ollama +export ANTHROPIC_BASE_URL="http://localhost:11434/v1" +export ANTHROPIC_API_KEY="ollama" # Any value works + +# Run Claude Code with local model +claude --model qwen2.5-coder:7b +``` + +**Option B: Ollama-MCP Server (Full SDK Access)** +```json +// ~/.claude/mcp_servers.json +{ + "ollama": { + "command": "npx", + "args": ["-y", "@rawveg/ollama-mcp"], + "env": { + "OLLAMA_HOST": "http://localhost:11434" + } + } +} +``` + +**Option C: Ollama-MCP-Bridge (Tool Calling)** +```bash +git clone https://github.com/patruff/ollama-mcp-bridge +cd ollama-mcp-bridge +npm install +npm start +``` + +**Capabilities via MCP:** +- Multi-turn conversations +- Tool calling support +- Vision inputs (for multimodal models) +- Local/private execution + +--- + +#### 6. 
vLLM as Alternative for Higher Throughput + +| Aspect | Ollama | vLLM | +|--------|--------|------| +| **Throughput** | ~41 TPS | ~793 TPS (19x faster) | +| **P99 Latency** | 673 ms | 80 ms | +| **Setup Complexity** | Easy | Complex | +| **VRAM Efficiency** | Good | Aggressive (90%+) | +| **Use Case** | Single user, development | Multi-user, production | + +**vLLM Challenges on 11GB VRAM:** +- Pre-allocated KV caches consume most VRAM +- WSL2 adds virtualization overhead +- High temperatures (85C+), throttling likely +- Format mismatches require troubleshooting + +**vLLM Setup (Docker):** +```bash +docker run --gpus all -p 8000:8000 \ + -e CUDA_VISIBLE_DEVICES=0 \ + -e VLLM_ATTENTION_BACKEND=FLASH_ATTN \ + vllm/vllm-openai:latest \ + --model Qwen/Qwen2.5-Coder-7B-Instruct \ + --quantization awq \ + --max-model-len 8192 \ + --gpu-memory-utilization 0.85 +``` + +**Recommendation:** For 11GB VRAM single-user development, **Ollama is more practical**. Use vLLM only if you need concurrent request handling and can manage the complexity. + +--- + +#### 7. GPU Memory Management Summary + +**Optimal Configuration for 11GB VRAM:** +```bash +# ~/.bashrc or ~/.zshrc +export CUDA_VISIBLE_DEVICES=0 +export OLLAMA_FLASH_ATTENTION=1 +export OLLAMA_KV_CACHE_TYPE=q8_0 +export OLLAMA_GPU_OVERHEAD=1073741824 +export OLLAMA_MAX_LOADED_MODELS=1 +export OLLAMA_CONTEXT_LENGTH=8192 +``` + +**Memory Budget (11GB):** +| Component | Allocation | +|-----------|------------| +| Model Weights (7B Q4_K_M) | ~5.5 GB | +| KV Cache (8K ctx, q8_0) | ~1.5 GB | +| GPU Overhead | ~1 GB | +| CUDA/System | ~0.5-1 GB | +| **Remaining Buffer** | ~2-2.5 GB | + +**Monitoring Commands:** +```bash +# Watch GPU memory in real-time +watch -n 1 nvidia-smi + +# Check Ollama loaded models +curl http://localhost:11434/api/tags | jq + +# Unload models to free VRAM +ollama rm +``` + +--- + +### Quick Start Commands + +```bash +# Install/upgrade Ollama +curl -fsSL https://ollama.com/install.sh | sh + +# Pull recommended models for 11GB +ollama pull qwen2.5-coder:7b +ollama pull deepseek-coder:6.7b + +# Create custom coding assistant +cat << 'EOF' > ~/Modelfile.coder +FROM qwen2.5-coder:7b +PARAMETER temperature 0.3 +PARAMETER num_ctx 8192 +SYSTEM "Expert coding assistant for Python, TypeScript, Go." +EOF + +ollama create coder -f ~/Modelfile.coder + +# Configure systemd +sudo systemctl edit ollama.service +# Add: Environment="OLLAMA_FLASH_ATTENTION=1" + +# Restart service +sudo systemctl daemon-reload +sudo systemctl restart ollama + +# Test +ollama run coder "Write a Python function to parse JSON with error handling" +``` + +--- + +### Sources + +- [Ollama VRAM Requirements 2026 Guide](https://localllm.in/blog/ollama-vram-requirements-for-local-llms) +- [Best Ollama Models for Coding 2025](https://www.codegpt.co/blog/best-ollama-model-for-coding) +- [Ollama Modelfile Reference](https://docs.ollama.com/modelfile) +- [Ollama GPU Hardware Support](https://docs.ollama.com/gpu) +- [Claude Code + Ollama Integration](https://docs.ollama.com/integrations/claude-code) +- [Ollama-MCP Bridge](https://github.com/patruff/ollama-mcp-bridge) +- [vLLM vs Ollama Comparison](https://northflank.com/blog/vllm-vs-ollama-and-how-to-run-them) +- [Ollama Performance Tuning](https://collabnix.com/ollama-performance-tuning-gpu-optimization-techniques-for-production/) + + +--- +## [05:20:02] Agent: git-culture +**Status:** COMPLETED +**Timestamp:** 2026-01-23T05:20:02Z + +### Research: Git Workflows and Team Culture for AI-Assisted Development Teams + +--- + +#### 1. 
Trunk-Based Development with AI Coding Assistants + +**Key Findings:** +- **Short-lived branches recommended**: Keep branches under one day or commit directly to main +- **AI resolves conflicts**: AI assistants can resolve merge conflicts and suggest integration strategies +- **Feature flags essential**: Decouple deployment from release; toggle features on/off without redeployment +- **Small increments**: Generate code in small increments, run unit tests after each integration +- **Commit frequently**: Use version control aggressively for easy rollback if AI suggestions go wrong + +**Pipeline Considerations (DORA 2025 Report):** +- 90% of developers now use AI tools to write code faster +- Increased output can overwhelm delivery pipelines built for lower volumes +- AI can write automated tests that make frequent integration safe +- Continuous delivery keeps codebase ready to deploy at any moment + +**Sources:** +- https://aws.amazon.com/blogs/enterprise-strategy/your-ai-coding-assistants-will-overwhelm-your-delivery-pipeline-heres-how-to-prepare/ +- https://getdx.com/blog/ai-code-enterprise-adoption/ +- https://zencoder.ai/blog/how-to-use-ai-in-coding + +--- + +#### 2. PR Review Workflows with AI Review Tools + +**Top Tools (2025-2026):** +| Tool | Key Feature | Pricing | +|------|-------------|---------| +| **Graphite** | Stacked PRs, rethinks entire review workflow | - | +| **CodeRabbit** | Auto-generates walkthrough summaries, 40+ code analyzers | - | +| **Qodo PR-Agent** | 15+ automated workflows, single LLM call (~30s) | Fast & affordable | +| **GitHub Copilot** | Native integration, PR summarization | $10-19/user/month | + +**Best Practices:** +- Aim for 200-400 lines per PR +- Break large features into stacked pull requests +- Use `.github/workflows/ai-pr-reviewer.yml` for GitHub Actions integration +- Support for OpenAI and Anthropic (Claude) as AI providers + +**Enterprise Considerations:** +- Integration with GitHub, GitLab, Bitbucket, Azure DevOps +- Data privacy and deployment flexibility for regulatory constraints +- VPC/on-prem/zero-retention options (SOC2/GDPR compliance) + +**Sources:** +- https://www.codeant.ai/blogs/best-github-ai-code-review-tools-2025 +- https://github.com/qodo-ai/pr-agent +- https://graphite.com/ + +--- + +#### 3. Branch Naming Conventions for AI-Assisted Development + +**Recommended Patterns:** +``` +# AI Agent-Specific Prefixes +claude/* # Claude Code generated branches +cursor/* # Cursor AI generated branches +copilot/* # GitHub Copilot assisted branches +ai-generated/* # Generic AI-generated branches + +# Traditional Prefixes (still valid) +feature/* # New features +fix/* # Bug fixes +hotfix/* # Production hotfixes +``` + +**Attribution Cascade Strategy:** +1. Branch prefixes (e.g., `cursor/`, `claude/`) +2. PR author logins +3. First-commit author names (e.g., `google-labs-jules[bot]`, `claude[bot]`) + +**Best Practices:** +- Use lowercase with hyphens (e.g., `feature/add-user-authentication`) +- Keep short but descriptive (3-5 words max) +- Include issue/ticket ID (e.g., `fix/PROJ-123-bug-name`) +- Configure in CLAUDE.md for AI tool consistency + +**Sources:** +- https://medium.com/@jaychu259/git-branch-naming-conventions-2025-the-ultimate-guide-for-developers-5f8e0b3bb9f7 +- https://www.eesel.ai/blog/git-workflows-claude-code +- https://github.com/LIDR-academy/ai-specs + +--- + +#### 4. 
Commit Message Standards for AI-Assisted Commits + +**Attribution Methods:** +``` +# Trailers for AI attribution +Assisted-by: GitHub Copilot +Generated-by: ChatGPT +Co-authored-by: Claude + +# Tool-specific approaches +Aider: adds "(aider)" to author name + model as co-author +Claude Code: adds Co-authored-by trailer +GitHub Copilot: no automatic attribution +``` + +**When to Attribute (Decision Framework):** +| Scenario | Attribution Level | +|----------|-------------------| +| AI wrote complete functions/features | Co-author attribution | +| AI generated >50% of commit | AI-assisted tag | +| Tab-completion/syntax only | No attribution needed | +| AI solved core logic/algorithm | AI tag minimum | + +**Conventional Commits Format:** +``` +(scope): description + +feat(auth): add two-factor authentication + +Assisted-by: Claude Code +``` + +**Enforcement Tools:** +- Commitizen for structured prompts +- Husky + commitlint for Git hooks validation + +**Sources:** +- https://www.deployhq.com/blog/how-to-use-git-with-claude-code-understanding-the-co-authored-by-attribution +- https://www.ranger.net/post/version-control-best-practices-ai-code +- https://devblogs.microsoft.com/visualstudio/customize-your-ai-generated-git-commit-messages/ + +--- + +#### 5. Monorepo vs Polyrepo for IDE+CLI Products + +**AI Changes the Calculus:** +- GitHub Copilot: 64,000 token context window +- Cursor: Handles even larger contexts +- AI agents need to see API endpoints, schemas, types, configs together +- **Tips scales toward monorepos** for AI-assisted development + +**Comparison:** +| Factor | Monorepo | Polyrepo | +|--------|----------|----------| +| AI Context | Full visibility across services | Limited to single repo | +| Atomic Changes | One commit, one review, one deploy | Coordinate across repos | +| Ownership | Shared, complex governance | Clear team boundaries | +| Build Speed | Slower (needs optimization) | Faster (focused scope) | +| IDE Performance | RAM-heavy indexing | Lighter weight | + +**Hybrid Approach (Enterprise Trend):** +- Monorepos for frontend + shared libraries +- Polyrepos for core backend microservices + +**Tool Considerations:** +- Zed: Rust-based, GPU-accelerated, blazing fast +- Cursor/Claude Code: Editor-centric, local workflow +- Performance matters for large monorepos on lower-powered laptops + +**Sources:** +- https://www.augmentcode.com/learn/monorepo-vs-polyrepo-ai-s-new-rules-for-repo-architecture +- https://medium.com/@Nexumo_/monorepo-vs-polyrepo-code-at-scale-in-2025-9b0743b68b99 +- https://www.builder.io/blog/cursor-alternatives-2026 + +--- + +#### 6. 
GitHub Actions CI/CD Patterns for AI-Generated Code + +**August 2025 GitHub Integration:** +- GitHub Models directly in GitHub Actions workflows +- Automate triage, generate summaries, AI-powered tasks in CI/CD +- Transforms Actions into intelligent workflow orchestrator + +**AI Code Review Workflow:** +```yaml +# .github/workflows/ai-pr-reviewer.yml +name: AI Code Review +on: [pull_request] +jobs: + review: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - uses: coderabbitai/ai-pr-reviewer@latest + with: + OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }} +``` + +**Performance Optimizations:** +- GPU-powered inference reduces latency to low single digits +- 200k-token context analysis +- 40% reduction in false positives +- Incremental reviews (only modified files) + +**LLM Testing in CI:** +- Evidently GitHub Action for LLM output quality checks +- Auto-test LLM systems on every code update + +**Security Considerations:** +- Every commit flows through external inference endpoints +- Risk of leaking proprietary algorithms, API keys, business logic +- Implement network isolation and policy documents + +**Sources:** +- https://www.augmentcode.com/tools/integrate-ai-code-checker-with-github-actions-7-key-wins +- https://github.com/PaulDuvall/ai-development-patterns +- https://www.evidentlyai.com/blog/llm-unit-testing-ci-cd-github-actions + +--- + +#### 7. Code Ownership Models When AI Contributes Significantly + +**Three Ownership Models:** +1. **Strong ownership**: Each module owned by exactly one developer/team +2. **Weak ownership**: Primary owners but others can contribute +3. **Collective ownership**: Any developer can modify any code + +**CODEOWNERS with AI:** +- Future development: AI/ML to automatically suggest reviewers +- Based on code complexity, historical patterns, team dynamics + +**Tracking AI Contributions:** +``` +# Recommended tagging approach +- Automatically tag code changes as AI-assisted +- Log specific LLM and version used +- Track which parts influenced by AI tools +``` + +**Legal Considerations:** +- AI-generated code ownership is unclear legally +- Copyright may attach when human provides sufficient creative input +- Treat AI as development tool, not author +- Human developers maintain creative control + +**Best Practices:** +- Formal processes for peer review of AI-generated code +- Security testing and best practice checks +- Validation through automated testing + static analysis + human review +- Create governance layer as safeguard for quality and accountability + +**Attribution Challenge:** +- Know who checked in code, but origin increasingly obscure +- Risk of blind acceptance without understanding +- Need mechanisms to track co-creative activities + +**Sources:** +- https://www.sonarsource.com/blog/trust-ai-contributions-to-code/ +- https://www.aviator.co/blog/code-ownership-using-codeowners-strategically/ +- https://www.redhat.com/en/blog/when-bots-commit-ai-generated-code-open-source-projects + +--- + +### Summary Recommendations for jadecli + +1. **Adopt trunk-based development** with feature flags for AI-generated changes +2. **Use stacked PRs** (200-400 lines each) with Graphite or CodeRabbit for AI review +3. **Branch naming**: `claude/*` prefix for Claude Code branches (already in use) +4. **Commit messages**: Conventional Commits + Co-authored-by attribution +5. **Consider monorepo** for IDE+CLI product (better AI context visibility) +6. **Implement AI-powered CI/CD** with incremental reviews and LLM testing +7. 
**Establish CODEOWNERS** with clear human accountability for AI-assisted code + + +--- +## [05:20:09] Agent: ubuntu-2604 +**Status:** COMPLETED +**Timestamp:** 2026-01-23T05:20:09Z + +### Ubuntu 26.04 LTS on WSL2 - AI/Claude Code Development Research + +--- + +## 1. WSL2 GPU Passthrough for NVIDIA (11GB VRAM) + +### How It Works +- WSL2 uses **GPU-PV (GPU partitioning)**, NOT true PCIe passthrough +- NVIDIA drivers on WSL2 are lightweight wrappers that send GPU instructions to Windows graphics driver via symlinking +- `nvidia-smi` and CUDA work through this GPU "wrapper" + +### Critical Setup Rules +1. **Install Windows NVIDIA driver only** - CUDA becomes available within WSL2 automatically +2. **DO NOT install Linux NVIDIA GPU driver** - The Windows driver is stubbed as `libcuda.so` inside WSL2 +3. **Use WSL-specific CUDA Toolkit** from NVIDIA's dedicated WSL-Ubuntu repository +4. **Avoid default CUDA Toolkit** - It comes with a driver that can overwrite WSL2 NVIDIA driver + +### Requirements +- Windows 11 (recommended) or Windows 10 21H2+ +- NVIDIA GeForce or Quadro production driver on Windows +- CUDA version >10 (older versions don't support WSL2) + +### Performance Notes +- Near-native GPU performance for AI/ML workloads +- Supports PyTorch, TensorFlow, and all major AI frameworks +- 11GB VRAM fully accessible for model inference/training + +--- + +## 2. Memory Allocation Optimization (128GB RAM) + +### Default Behavior +- WSL2 defaults to 50% of total RAM or 8GB (whichever is smaller) on builds 20175+ +- For 128GB system, default would cap at 8GB - MUST configure manually + +### Recommended .wslconfig for 128GB System +```ini +# Location: %UserProfile%\.wslconfig (e.g., C:\Users\\.wslconfig) +[wsl2] +memory=96GB # Leave ~32GB for Windows +processors=20 # Reserve 4 threads for Windows +swap=32GB # Large swap for model loading +guiApplications=false # Disable WSLg if not needed + +[experimental] +autoMemoryReclaim=gradual # Return unused memory to Windows (5min idle, 30min release) +sparseVhd=true # Auto-shrink VHD files +networkingMode=mirrored # Better network compatibility +dnsTunneling=true # Improved DNS resolution +``` + +### Important Notes +- **autoMemoryReclaim=gradual** recommended for AI workloads (slowly frees memory) +- **Avoid autoMemoryReclaim with Docker daemon** as service - can break Docker +- Create file using WSL to avoid BOM headers/CRLF issues +- Apply changes: `wsl --shutdown` then restart WSL + +--- + +## 3. CPU Pinning/Allocation (24-Thread System) + +### Current Limitations +- WSL2 **does not support true CPU pinning/affinity** natively +- `processors=N` sets maximum logical processors, not specific cores +- `pthread_setaffinity_np` and `sched_setaffinity` don't work properly + +### .wslconfig Configuration +```ini +[wsl2] +processors=20 # Leave 4 threads for Windows +``` + +### Behavior +- WSL doesn't pre-allocate cores; dynamically utilizes CPU +- Same logical processors tend to handle heavy workloads +- "processors" means logical processors (threads), not physical cores + +### Workarounds +- Third-party tool: [W10_affinity_control](https://github.com/EricPell/W10_affinity_control) +- Feature request pending for native CPU affinity support +- For NUMA-aware workloads, this remains a limitation + +--- + +## 4. 
Systemd Integration in WSL2 + +### Enabling Systemd +```bash +# Edit /etc/wsl.conf inside WSL +sudo nano /etc/wsl.conf + +# Add these lines: +[boot] +systemd=true +``` + +### Requirements +- WSL version 0.67.6 or higher +- Ubuntu 22.04+ recommended +- Windows 11 required (Windows 10 not officially supported) + +### Benefits +- Near-native Ubuntu environment +- Snap packages work properly +- Kubernetes/microk8s support +- cloud-init support +- All systemd-dependent services function + +### Potential Issues +- Slightly higher memory/resource consumption +- Some users report 3+ minute boot delays with error "Failed to get properties" +- Recommended despite overhead for Ubuntu 22.04+ + +--- + +## 5. Docker Setup: Desktop vs Native Containerd + +### Docker Desktop (Recommended for Ease) +**Pros:** +- Seamless WSL2 integration +- GUI management +- Automatic updates +- GPU support out-of-box +- Kubernetes built-in + +**Cons:** +- Commercial license required for large enterprises (250+ employees or $10M+ revenue) +- Additional resource overhead +- **Note:** Don't use `autoMemoryReclaim=gradual` with Docker daemon as service + +### Native Containerd + nerdctl (Advanced) +**Pros:** +- No licensing costs +- Lower overhead (Docker uses containerd internally) +- Running containers survive daemon restarts +- Linux-first, closer to production Kubernetes + +**Cons:** +- Rough developer experience on Windows/WSL +- Different CLI syntax (nerdctl vs docker) +- Networking/volume differences +- Manual setup required + +### Recommendation +| Use Case | Recommendation | +|----------|----------------| +| Quick experiments, internal tools | Docker Desktop | +| Production-like environment | Containerd + nerdctl | +| Enterprise with licensing concerns | Podman or Containerd | +| AI development (ease of use) | Docker Desktop | + +--- + +## 6. Package Managers Comparison + +### mise vs asdf + +| Aspect | mise | asdf | +|--------|------|------| +| **Performance** | 10-100x faster (no shims) | Slower due to shims (~120ms overhead per call) | +| **Language** | Rust (single binary) | Go (0.16+, formerly Bash) | +| **Features** | Version management + env vars + tasks | Version management only | +| **Compatibility** | Reads .tool-versions, .node-version, etc. 
| Own format | +| **Security** | GPG/Cosign verification, stricter registry | Plugin ecosystem | +| **Learning curve** | Higher (more features) | Lower (focused) | +| **Status (2025)** | Mature, feature-complete | Rewritten in Go | + +**Recommendation:** **mise** for AI/Claude Code development - faster shell startup, better security, includes direnv-like env management + +### uv for Python + +| Aspect | uv | pip | Poetry | +|--------|-----|-----|--------| +| **Speed** | 10-100x faster | Baseline | Slower than uv | +| **Language** | Rust | Python | Python | +| **Features** | pip + pip-tools + pipx + pyenv + venv | Package install only | Full project management | +| **Lockfile** | Yes (universal) | No | Yes | +| **Drop-in** | Yes (uv pip interface) | - | No | + +**Recommendation:** **uv** for Python AI development - dramatically faster, replaces entire Python toolchain + +### apt vs snap vs nix + +| Aspect | apt | snap | nix | +|--------|-----|------|-----| +| **Integration** | Tight OS integration | Sandboxed/containerized | Isolated (/nix/store) | +| **Reproducibility** | Limited | Better | Best (declarative) | +| **Updates** | Manual | Automatic/transactional | Atomic/rollback | +| **Learning curve** | Low | Low | High | +| **Cross-distro** | Debian/Ubuntu only | Any Linux | Any Linux + macOS | + +**Recommendation for AI/Claude Code:** +- **apt** for system packages (cuda-toolkit, build-essential) +- **mise + uv** for development tools and Python +- **nix** for reproducible environments (optional, steep learning curve) +- **Avoid snap** for dev tools (performance overhead, sandboxing issues) + +--- + +## Complete Optimized Setup Summary + +### .wslconfig (Windows side) +```ini +[wsl2] +memory=96GB +processors=20 +swap=32GB +guiApplications=false + +[experimental] +autoMemoryReclaim=gradual +sparseVhd=true +networkingMode=mirrored +dnsTunneling=true +firewall=true +autoProxy=true +``` + +### /etc/wsl.conf (Linux side) +```ini +[boot] +systemd=true + +[interop] +enabled=true +appendWindowsPath=false + +[network] +generateResolvConf=true +``` + +### Recommended Tool Stack +1. **GPU:** Windows NVIDIA driver + WSL-specific CUDA toolkit +2. **Container:** Docker Desktop (for convenience) or containerd (for production-like) +3. **Version Manager:** mise (replaces asdf, direnv, make) +4. **Python:** uv (replaces pip, pip-tools, pipx, pyenv, virtualenv) +5. **System packages:** apt (avoid snap for dev tools) + +### Sources +- [NVIDIA CUDA on WSL User Guide](https://docs.nvidia.com/cuda/wsl-user-guide/index.html) +- [Microsoft WSL Advanced Configuration](https://learn.microsoft.com/en-us/windows/wsl/wsl-config) +- [Microsoft Systemd in WSL](https://learn.microsoft.com/en-us/windows/wsl/systemd) +- [Ubuntu WSL GPU Guide](https://documentation.ubuntu.com/wsl/stable/howto/gpu-cuda/) +- [mise-en-place Documentation](https://mise.jdx.dev/dev-tools/comparison-to-asdf.html) +- [uv Documentation](https://docs.astral.sh/uv/) +- [Docker Desktop WSL2 Integration](https://www.docker.com/blog/wsl-2-gpu-support-is-here/) + + +--- +## [05:20:14] Agent: claude-dotfiles +**Status:** COMPLETED +**Timestamp:** 2026-01-23T05:20:14Z + +### Research: Claude Code Dotfiles Configuration Hierarchy (Enterprise + Individual Use) + +--- + +## 1. 
~/.claude/ Structure (User-Level Configuration) + +The user-level Claude Code configuration resides in `~/.claude/` with the following structure: + +``` +~/.claude/ +├── settings.json # Global user settings, hooks, permissions +├── commands/ # Personal slash commands (available in all projects) +├── skills/ # User-defined skills +├── plugins/ # Installed plugins +│ ├── known_marketplaces.json +│ └── marketplaces/ # Downloaded plugin repositories +├── projects/ # Per-project session data (auto-created) +│ └── / # Transcript history, session data +├── debug/ # Debug logs +├── plans/ # Execution plans +├── session-env/ # Environment persistence +├── shell-snapshots/ # Shell state captures +├── statsig/ # Analytics/feature flags +└── todos/ # Todo tracking state +``` + +### settings.json Schema + +```json +{ + "$schema": "https://json.schemastore.org/claude-code-settings.json", + "hooks": { + "Stop": [{ "matcher": "", "hooks": [{ "type": "command", "command": "...", "timeout": 60 }] }], + "PreToolUse": [...], + "PostToolUse": [...], + "SessionStart": [...], + "UserPromptSubmit": [...] + }, + "permissions": { + "allow": ["Skill", "Read", "Write", "Bash(*)"] + } +} +``` + +--- + +## 2. Project-Level .claude/ Structure + +Each project can have its own `.claude/` directory: + +``` +/.claude/ +├── settings.project.json # Project-specific settings +├── mcp-config.json # MCP server configuration +├── commands/ # Project commands (shared with team) +│ └── *.md # Slash command definitions +├── hooks/ # Project-specific hook scripts +│ └── *.py # Hook implementations +├── skills/ # Project-specific skills +│ └── */SKILL.md +└── agents/ # Subagent definitions +``` + +### settings.project.json Example + +```json +{ + "hooks": { + "SessionStart": [{ "command": "python", "args": [".claude/hooks/ensure-tools.py"] }], + "PreToolUse": [{ "matcher": { "tool": "Bash", "command_pattern": "git commit" }, "command": "..."}] + }, + "mcpServers": { "context7": { "command": "npx", "args": [...] } }, + "plugins": { "coderabbit": { "enabled": true } }, + "build": { "diagnostics": { "blockOnCritical": true } } +} +``` + +--- + +## 3. CLAUDE.md Per-Project Context Patterns + +### File Hierarchy (Auto-Discovery) + +| Location | Purpose | Shared | +|----------|---------|--------| +| `./CLAUDE.md` | Project root context | Git-tracked, team-shared | +| `./.claude.local.md` | Personal overrides | Gitignored, local only | +| `~/.claude/CLAUDE.md` | Global user defaults | All projects | +| `./packages/*/CLAUDE.md` | Monorepo package context | Per-package | + +### Recommended Sections + +- **Commands**: Build, test, dev, lint (copy-paste ready) +- **Architecture**: Directory structure with purposes +- **Key Files**: Entry points, config locations +- **Code Style**: Project conventions +- **Environment**: Required vars, setup steps +- **Gotchas**: Non-obvious patterns, warnings +- **Workflow**: When to do what + +### Quality Criteria + +- A (90-100): Comprehensive, current, actionable +- B (70-89): Good coverage, minor gaps +- C (50-69): Basic info, missing key sections +- D/F (<50): Sparse, outdated, or missing + +--- + +## 4. MCP Server Configuration Best Practices + +### Configuration Locations + +1. **Project-level**: `.claude/settings.project.json` or `.claude/mcp-config.json` +2. **Plugin-level**: `/.mcp.json` +3. 
**User-level**: `~/.claude/settings.json` + +### MCP Server Types + +```json +{ + "mcpServers": { + "http-server": { + "type": "http", + "url": "https://mcp.service.com/api" + }, + "stdio-server": { + "command": "npx", + "args": ["-y", "@package/mcp-server@latest"], + "description": "Server description" + } + } +} +``` + +--- + +## 5. The 4-Layer Dotfiles Model + +### Layer 1: Engineer Personal (~/.claude/) +- Personal commands, skills, and hooks +- Global user preferences +- Cross-project tooling +- Credential/API key management (via env vars) + +### Layer 2: Project-Specific (/.claude/) +- Team-shared commands and workflows +- Project-specific hooks (pre-commit, diagnostics) +- Project MCP servers +- CI/CD integration scripts + +### Layer 3: Organization-Wide IT Configs +- Enterprise policy hooks (security scanning) +- Approved MCP server allowlists +- Code review requirements +- Compliance checks + +### Layer 4: Shared Dotfiles (GitHub Org Level) +- Organization plugin marketplace +- Shared skills and commands across repos +- Standard workflow definitions +- Enterprise templates + +### Configuration Precedence (Lowest to Highest) + +1. User defaults (~/.claude/settings.json) +2. Organization policies (IT-managed) +3. Project settings (.claude/settings.project.json) +4. Local overrides (.claude.local.md) + +--- + +## 6. Permission Models + +### Hook Permissions + +Hooks are the primary enforcement mechanism: + +```json +{ + "PreToolUse": [{ + "matcher": "Write|Edit", + "hooks": [{ "type": "prompt", "prompt": "Validate file safety..." }] + }], + "Stop": [{ + "hooks": [{ "type": "prompt", "prompt": "Verify tests run..." }] + }] +} +``` + +### Tool Access Control + +- `permissions.allow`: Whitelist tools +- `allowed-tools` in commands: Scope tool access per command +- Matchers: Regex patterns for tool filtering (e.g., `Bash(git:*)`) + +### Hook Output Decisions + +- `approve`: Allow operation +- `deny`: Block operation +- `ask`: Request user confirmation + +--- + +## 7. Multi-Project Isolation Strategies + +### Session Isolation + +- Each project gets unique session in `~/.claude/projects//` +- Transcript history isolated per project +- Environment variables scoped via $CLAUDE_PROJECT_DIR + +### Configuration Isolation + +- Project `.claude/` directory takes precedence +- Use `.claude.local.md` for developer-specific settings +- Gitignore local configs: `.claude.local.md`, `.claude/*.local.json` + +### Plugin Isolation + +Plugins can be: +- Global: Installed in `~/.claude/plugins/` +- Project: Referenced in project `settings.project.json` +- Enabled/disabled per project via `plugins` config + +--- + +## 8. 
Chezmoi Templates for .claude Configs + +### Template Structure + +``` +~/.local/share/chezmoi/ +├── dot_claude/ +│ ├── settings.json.tmpl # Templated settings +│ └── commands/ +│ └── executable_my-cmd.md # Executable commands +├── private_dot_claude.local.md.tmpl +└── dot_config/ + └── claude/ + └── mcp-servers.json.tmpl +``` + +### Chezmoi Template Example + +```json +{{/* ~/.local/share/chezmoi/dot_claude/settings.json.tmpl */}} +{ + "$schema": "https://json.schemastore.org/claude-code-settings.json", + "permissions": { + "allow": ["Read", "Write", "Bash(git:*)"] + }, + {{- if eq .chezmoi.hostname "work-laptop" }} + "mcpServers": { + "internal-api": { + "type": "http", + "url": "{{ .work.mcpServerUrl }}" + } + } + {{- end }} +} +``` + +### Variable Sources + +```yaml +# ~/.config/chezmoi/chezmoi.yaml +data: + work: + mcpServerUrl: "https://mcp.company.internal" + personal: + githubOrg: "myorg" +``` + +### Chezmoi Commands + +```bash +chezmoi diff # Preview changes +chezmoi apply # Apply configs +chezmoi add ~/.claude/settings.json # Track file +chezmoi edit ~/.claude/settings.json # Edit template +``` + +--- + +## Plugin Development Structure + +### Plugin Directory Layout + +``` +my-plugin/ +├── .claude-plugin/ +│ └── plugin.json # Plugin manifest (required) +├── .mcp.json # MCP servers (optional) +├── commands/ # Slash commands +│ └── my-command.md +├── skills/ # Skills +│ └── my-skill/SKILL.md +├── hooks/ # Hook definitions +│ ├── hooks.json +│ └── my-hook.py +├── agents/ # Subagent definitions +│ └── my-agent.md +└── README.md +``` + +### plugin.json Schema + +```json +{ + "name": "plugin-name", + "description": "Plugin description", + "version": "1.0.0", + "author": { + "name": "Author", + "email": "author@example.com" + } +} +``` + +--- + +## Key Environment Variables + +| Variable | Purpose | +|----------|---------| +| $CLAUDE_PROJECT_DIR | Current project root | +| $CLAUDE_PLUGIN_ROOT | Plugin directory (for portable paths) | +| $CLAUDE_ENV_FILE | SessionStart: persist env vars | +| $CLAUDE_CODE_REMOTE | Set if running remotely | + +--- + +## Enterprise Deployment Recommendations + +1. **Central Plugin Marketplace**: Host approved plugins at org GitHub +2. **Policy Hooks**: Deploy security/compliance hooks org-wide +3. **MCP Server Registry**: Maintain allowlist of approved MCP servers +4. **Template Repository**: Provide starter .claude/ configs +5. **Chezmoi Templates**: Use for consistent developer environments +6. **Audit Logging**: Implement Notification hooks for compliance + +--- + + +--- +## [05:20:35] Agent: chezmoi-org +**Status:** COMPLETED +**Timestamp:** 2026-01-23T05:20:35Z + +### Research: Chezmoi for Organizational Dotfiles Management at Scale + +#### 1. 
Templating with {{ .chezmoi.* }} Variables for Multi-Machine Configs + +**Core Variables Available:** +- `.chezmoi.os` - Operating system (darwin, linux, windows) +- `.chezmoi.arch` - Architecture (amd64, arm64) +- `.chezmoi.hostname` - Machine hostname +- `.chezmoi.username` - Current username +- `.chezmoi.homeDir` - Home directory path +- `.chezmoi.sourceDir` - Source directory location + +**Template Syntax:** +```go +{{- if eq .chezmoi.hostname "work-laptop" }} +# Work-specific configuration +export COMPANY_PROXY="proxy.company.com" +{{- else }} +# Personal machine configuration +{{- end }} +``` + +**Best Practices:** +- Use `.tmpl` suffix for template files OR place in `.chezmoitemplates` directory +- Test templates with: `chezmoi execute-template "{{ .chezmoi.os }}/{{ .chezmoi.arch }}"` +- Create custom variables in `.chezmoidata.yaml/.json/.toml` for organization-wide settings + +#### 2. External Data Sources Integration + +**Supported Password Managers:** +- **1Password**: `onepassword` function with caching, supports Connect and Service Accounts +- **HashiCorp Vault**: `hcpVaultSecret` and `hcpVaultSecretJson` functions (requires vlt CLI) +- **AWS Secrets Manager**: Dedicated template functions available +- **Others**: Azure Key Vault, Bitwarden, Dashlane, Doppler, gopass, KeePassXC, Keeper, LastPass, pass, passage, passhole, Proton Pass, macOS Keychain, GNOME Keyring + +**1Password Example:** +```toml +# In chezmoi.toml +[onepassword] +mode = "service" # or "connect" for enterprise automation +``` + +```go +# In template +{{ (onepassword "my-item-uuid").fields.password }} +``` + +**Security Note:** Output from password managers is cached to minimize API calls. + +#### 3. Machine-Specific Configs with .chezmoi.toml + +**Configuration File Location:** +`~/.config/chezmoi/chezmoi.toml` (also supports .json, .jsonc, .yaml) + +**Key Features:** +- Machine-specific file that is NOT committed to source repo +- Create `.chezmoi.toml.tmpl` in repo for interactive setup on new machines +- Permission should be 0600 if containing secrets + +**Auto-Setup Template Example (.chezmoi.toml.tmpl):** +```toml +{{- $email := promptStringOnce . "email" "Email address" -}} +{{- $isWork := promptBoolOnce . "isWork" "Is this a work machine" -}} + +[data] +email = {{ $email | quote }} +isWork = {{ $isWork }} + +{{- if $isWork }} +[data.work] +vpn = "company-vpn.example.com" +{{- end }} +``` + +#### 4. Script Types: run_once_, run_onchange_, run_before_, run_after_ + +**Script Naming Convention:** `run_[frequency]_[timing]_[name].sh[.tmpl]` + +| Prefix | Behavior | +|--------|----------| +| `run_` | Executes every `chezmoi apply` | +| `run_once_` | Executes once per unique content hash (SHA256) | +| `run_onchange_` | Executes when file content changes | +| `run_before_` | Runs before files/dirs/symlinks are updated | +| `run_after_` | Runs after all updates complete | + +**Common Patterns:** +```bash +# run_once_before_install-password-manager.sh +# Installs 1password CLI before secrets are needed + +# run_onchange_after_configure-nvim.sh.tmpl +# Re-runs when neovim config changes (hash in comment) +# hash: {{ include "dot_config/nvim/init.lua" | sha256sum }} +``` + +**Important:** Scripts should be idempotent. Empty template output disables script execution. + +#### 5. Multi-Source Dotfiles (Private + Org Repos Combined) + +**Design Decision:** Chezmoi uses a single source of truth by design. + +**Strategies for Multi-Source:** +1. 
**Externals for secondary repos:** +```toml +# .chezmoiexternal.toml.tmpl +{{ if stat (joinPath .chezmoi.homeDir ".ssh" "id_rsa") }} +[".config/company"] +type = "git-repo" +url = "git@github.com:company/dotfiles-base.git" +{{ end }} +``` + +2. **Multiple chezmoi invocations with different sources:** +```bash +chezmoi apply --config ~/.config/chezmoi/personal.toml --source ~/personal-dotfiles +chezmoi apply --config ~/.config/chezmoi/work.toml --source ~/work-dotfiles +``` + +3. **Use .chezmoiignore for per-machine file selection:** +```go +# .chezmoiignore.tmpl +{{- if not .isWork }} +.config/company/* +{{- end }} +``` + +**External Types Supported:** `file`, `archive`, `archive-file`, `git-repo` + +#### 6. Encryption Strategies for Sensitive Configs + +**Option A: Age Encryption (Recommended)** +```toml +# chezmoi.toml +encryption = "age" +[age] +identity = "~/.config/chezmoi/key.txt" +recipient = "age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p" +``` + +**Option B: GPG Encryption** +```toml +encryption = "gpg" +[gpg] +recipient = "your-key-id" +# OR for symmetric: +symmetric = true +``` + +**Adding Encrypted Files:** +```bash +chezmoi add --encrypt ~/.ssh/id_rsa +chezmoi add --encrypt ~/.config/secrets.env +``` + +**Key Rotation Process:** +1. Apply all encrypted files to destinations +2. Update chezmoi config with new encryption settings +3. `chezmoi forget` encrypted files +4. `chezmoi add --encrypt` them back + +**Limitation:** Age builtin does not support passphrases/SSH keys (use external age for those). + +#### 7. AI Company Dotfiles at Scale (Industry Trends) + +**No Public Dotfiles Strategies Found** for Anthropic, Google, or OpenAI specifically. + +**Instead, Modern Approaches Observed:** + +**AGENTS.md Standard (OpenAI)** +- Adopted by 60,000+ open-source projects since August 2025 +- Simple markdown file providing AI coding agents with project-specific instructions +- Donated to Linux Foundation's Agentic AI Foundation (AAIF) + +**Model Context Protocol (Anthropic)** +- Standardized way to connect AI models/agents to tools and data +- Fastest adopted standard observed, following Docker-like S-curve adoption + +**Agent Skills (Anthropic)** +- Organized folders of instructions, scripts, resources for AI agents +- Dynamic discovery and loading of domain-specific expertise + +**Enterprise Developer Environment Trends:** +- **Dev Containers / Codespaces**: Preconfigured .devcontainer/devcontainer.json +- **Platform Engineering**: Self-serve developer portals with golden paths +- **Google Cloud Workstations**: Managed dev environments with VPC security +- **One-click onboarding**: Automated environment setup reducing days to minutes + +**Recommended Organization Strategy:** +``` +company-dotfiles/ +├── .chezmoi.toml.tmpl # Interactive setup prompts +├── .chezmoiexternal.toml.tmpl # Include private org repos +├── .chezmoiignore.tmpl # Role-based file selection +├── .chezmoitemplates/ # Shared template snippets +├── run_once_before_bootstrap.sh +├── run_onchange_after_tools.sh.tmpl +├── dot_config/ # Standard configs +└── private_dot_ssh/ # Encrypted with age +``` + +### Sources +- [Templating - chezmoi](https://www.chezmoi.io/user-guide/templating/) +- [Variables - chezmoi](https://www.chezmoi.io/reference/templates/variables/) +- [Manage machine-to-machine differences](https://www.chezmoi.io/user-guide/manage-machine-to-machine-differences/) +- [1Password - chezmoi](https://www.chezmoi.io/user-guide/password-managers/1password/) +- [AWS Secrets Manager - 
chezmoi](https://www.chezmoi.io/user-guide/password-managers/aws-secrets-manager/) +- [Use scripts to perform actions](https://www.chezmoi.io/user-guide/use-scripts-to-perform-actions/) +- [Include files from elsewhere](https://www.chezmoi.io/user-guide/include-files-from-elsewhere/) +- [Age encryption - chezmoi](https://www.chezmoi.io/user-guide/encryption/age/) +- [GPG encryption - chezmoi](https://www.chezmoi.io/user-guide/encryption/gpg/) +- [HCP Vault Secrets - chezmoi](https://www.chezmoi.io/reference/templates/hcp-vault-secrets-functions/) +- [How AI Is Transforming Work at Anthropic](https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic) +- [Agent Skills - Anthropic](https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills) +- [AAIF announcement - TechCrunch](https://techcrunch.com/2025/12/09/openai-anthropic-and-block-join-new-linux-foundation-effort-to-standardize-the-ai-agent-era/) + + +--- +## [05:21:37] Agent: ollama-local +**Status:** COMPLETED +**Timestamp:** 2026-01-23T05:21:37Z + +### Ollama Local Setup Optimized for 11GB VRAM GPU on Ubuntu WSL2 + +--- + +#### 1. Model Quantization Strategies for 11GB VRAM + +| Quantization | Bytes/Weight | VRAM for 7-8B | VRAM for 13B | Notes | +|--------------|--------------|---------------|--------------|-------| +| Q4_K_M | 0.57 | ~6-7 GB | ~10-11 GB | **Best balance** - recommended for most users | +| Q5_K_M | 0.68 | ~7-8 GB | ~12-13 GB | 15-20% more VRAM, marginally better quality | +| Q3_K_S | 0.44 | ~5-6 GB | ~8-9 GB | Lower quality, use if memory constrained | + +**Key Insight:** With 11GB VRAM, you can comfortably run: +- 7-8B models with Q4_K_M or Q5_K_M quantization +- 13B models with Q4_K_M quantization (tight fit) + +**KV Cache Impact:** +- Default FP16 KV cache for 8B model @ 32K context = ~4.5 GB additional +- Enable `OLLAMA_KV_CACHE_TYPE=q8_0` to reduce by ~50% +- Each 1,000 tokens adds ~0.11GB for 7B models + +--- + +#### 2. Best Coding Models for 11GB VRAM + +| Model | Size (Q4_K_M) | Specialization | Performance | +|-------|---------------|----------------|-------------| +| `deepseek-coder:6.7b` | ~5 GB | Python, algorithms | Excellent HumanEval scores | +| `qwen2.5-coder:7b` | ~5 GB | General coding, refactoring | Rivals GitHub Copilot | +| `codellama:13b` (Q4) | ~8 GB | Multi-language (Python, C++, Java, TypeScript) | Production-quality | +| `yi-coder:9b` | ~6 GB | Web development | Niche specialist | +| `codellama:7b-python` | ~5 GB | Python data science | Specialized variant | + +**Recommended Stack for 11GB:** +```bash +# Primary (general purpose coding) +ollama pull qwen2.5-coder:7b + +# Secondary (Python/algorithm specialist) +ollama pull deepseek-coder:6.7b + +# Optional (larger context, tighter fit) +ollama pull codellama:13b-instruct-q4_K_M +``` + +--- + +#### 3. 
Modelfile Customization for Coding Assistants + +**Coding Assistant Modelfile Template:** +```dockerfile +# ~/ollama/Modelfile.coder +FROM qwen2.5-coder:7b + +# Reduce temperature for deterministic code output +PARAMETER temperature 0.3 + +# Increase context for larger files +PARAMETER num_ctx 8192 + +# Control output length +PARAMETER num_predict 2048 + +# Reduce repetition in code +PARAMETER repeat_penalty 1.1 + +SYSTEM """You are an expert coding assistant specialized in: +- Python, TypeScript, Go, Rust development +- Clean code principles and SOLID design +- Test-driven development (TDD) +- Performance optimization +- Security best practices + +Provide concise, well-documented code with type hints. +Include error handling and edge cases. +When reviewing code, suggest specific improvements. +""" +``` + +**Build and Run:** +```bash +ollama create coder -f ~/ollama/Modelfile.coder +ollama run coder +``` + +**Key Parameters for Coding:** +| Parameter | Coding Value | Purpose | +|-----------|--------------|---------| +| temperature | 0.2-0.4 | Lower = more deterministic code | +| num_ctx | 8192-16384 | Context window for large files | +| repeat_penalty | 1.1-1.2 | Avoid repetitive patterns | +| top_k | 40 | Reduce sampling diversity | +| top_p | 0.9 | Nucleus sampling threshold | + +--- + +#### 4. Ollama Serve Daemon Configuration & Memory Management + +**Systemd Service Configuration:** +```bash +# Edit the service +sudo systemctl edit ollama.service +``` + +**Add these environment variables:** +```ini +[Service] +# Listen on all interfaces +Environment="OLLAMA_HOST=0.0.0.0:11434" + +# GPU Optimization (for 11GB VRAM) +Environment="OLLAMA_FLASH_ATTENTION=1" +Environment="OLLAMA_KV_CACHE_TYPE=q8_0" +Environment="OLLAMA_GPU_OVERHEAD=1073741824" + +# Memory management +Environment="OLLAMA_MAX_LOADED_MODELS=1" +Environment="OLLAMA_NUM_PARALLEL=2" + +# Context (adjust based on model) +Environment="OLLAMA_CONTEXT_LENGTH=8192" + +# Specific GPU selection +Environment="CUDA_VISIBLE_DEVICES=0" +``` + +**Key Environment Variables:** +| Variable | Value for 11GB | Purpose | +|----------|----------------|---------| +| OLLAMA_FLASH_ATTENTION | 1 | Faster token generation, less VRAM | +| OLLAMA_KV_CACHE_TYPE | q8_0 | 50% less KV cache memory | +| OLLAMA_GPU_OVERHEAD | 1073741824 (1GB) | Reserve for system ops | +| OLLAMA_MAX_LOADED_MODELS | 1-2 | Prevent OOM with multiple models | +| OLLAMA_NUM_PARALLEL | 2-4 | Concurrent requests | +| CUDA_VISIBLE_DEVICES | 0 | Target specific GPU | + +**WSL2-Specific Fixes:** +```bash +# Fix suspend/resume GPU detection issue +sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm + +# Set TMPDIR for cross-filesystem issues +export TMPDIR="$HOME/.cache/tmp" +``` + +--- + +#### 5. 
Integration with Claude Code via MCP Bridge + +**Option A: Ollama v0.14.0+ Native Integration (Recommended)** +```bash +# Upgrade to latest Ollama with Anthropic API compatibility +curl -fsSL https://ollama.com/install.sh | sh + +# Verify version >= 0.14.0 +ollama --version + +# Configure Claude Code to use Ollama +export ANTHROPIC_BASE_URL="http://localhost:11434/v1" +export ANTHROPIC_API_KEY="ollama" # Any value works + +# Run Claude Code with local model +claude --model qwen2.5-coder:7b +``` + +**Option B: Ollama-MCP Server (Full SDK Access)** +```json +// ~/.claude/mcp_servers.json +{ + "ollama": { + "command": "npx", + "args": ["-y", "@rawveg/ollama-mcp"], + "env": { + "OLLAMA_HOST": "http://localhost:11434" + } + } +} +``` + +**Option C: Ollama-MCP-Bridge (Tool Calling)** +```bash +git clone https://github.com/patruff/ollama-mcp-bridge +cd ollama-mcp-bridge +npm install +npm start +``` + +**Capabilities via MCP:** +- Multi-turn conversations +- Tool calling support +- Vision inputs (for multimodal models) +- Local/private execution + +--- + +#### 6. vLLM as Alternative for Higher Throughput + +| Aspect | Ollama | vLLM | +|--------|--------|------| +| **Throughput** | ~41 TPS | ~793 TPS (19x faster) | +| **P99 Latency** | 673 ms | 80 ms | +| **Setup Complexity** | Easy | Complex | +| **VRAM Efficiency** | Good | Aggressive (90%+) | +| **Use Case** | Single user, development | Multi-user, production | + +**vLLM Challenges on 11GB VRAM:** +- Pre-allocated KV caches consume most VRAM +- WSL2 adds virtualization overhead +- High temperatures (85C+), throttling likely +- Format mismatches require troubleshooting + +**vLLM Setup (Docker):** +```bash +docker run --gpus all -p 8000:8000 \ + -e CUDA_VISIBLE_DEVICES=0 \ + -e VLLM_ATTENTION_BACKEND=FLASH_ATTN \ + vllm/vllm-openai:latest \ + --model Qwen/Qwen2.5-Coder-7B-Instruct \ + --quantization awq \ + --max-model-len 8192 \ + --gpu-memory-utilization 0.85 +``` + +**Recommendation:** For 11GB VRAM single-user development, **Ollama is more practical**. Use vLLM only if you need concurrent request handling and can manage the complexity. + +--- + +#### 7. GPU Memory Management Summary + +**Optimal Configuration for 11GB VRAM:** +```bash +# ~/.bashrc or ~/.zshrc +export CUDA_VISIBLE_DEVICES=0 +export OLLAMA_FLASH_ATTENTION=1 +export OLLAMA_KV_CACHE_TYPE=q8_0 +export OLLAMA_GPU_OVERHEAD=1073741824 +export OLLAMA_MAX_LOADED_MODELS=1 +export OLLAMA_CONTEXT_LENGTH=8192 +``` + +**Memory Budget (11GB):** +| Component | Allocation | +|-----------|------------| +| Model Weights (7B Q4_K_M) | ~5.5 GB | +| KV Cache (8K ctx, q8_0) | ~1.5 GB | +| GPU Overhead | ~1 GB | +| CUDA/System | ~0.5-1 GB | +| **Remaining Buffer** | ~2-2.5 GB | + +**Monitoring Commands:** +```bash +# Watch GPU memory in real-time +watch -n 1 nvidia-smi + +# Check Ollama loaded models +curl http://localhost:11434/api/tags | jq + +# Unload models to free VRAM +ollama rm +``` + +--- + +### Quick Start Commands + +```bash +# Install/upgrade Ollama +curl -fsSL https://ollama.com/install.sh | sh + +# Pull recommended models for 11GB +ollama pull qwen2.5-coder:7b +ollama pull deepseek-coder:6.7b + +# Create custom coding assistant +cat << 'EOF' > ~/Modelfile.coder +FROM qwen2.5-coder:7b +PARAMETER temperature 0.3 +PARAMETER num_ctx 8192 +SYSTEM "Expert coding assistant for Python, TypeScript, Go." 
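+# Optional (assumption): carry over the remaining parameters recommended in the
+# section 3 Modelfile template for more deterministic, less repetitive output.
+PARAMETER num_predict 2048
+PARAMETER repeat_penalty 1.1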
+EOF + +ollama create coder -f ~/Modelfile.coder + +# Configure systemd +sudo systemctl edit ollama.service +# Add: Environment="OLLAMA_FLASH_ATTENTION=1" + +# Restart service +sudo systemctl daemon-reload +sudo systemctl restart ollama + +# Test +ollama run coder "Write a Python function to parse JSON with error handling" +``` + +--- + +### Sources + +- [Ollama VRAM Requirements 2026 Guide](https://localllm.in/blog/ollama-vram-requirements-for-local-llms) +- [Best Ollama Models for Coding 2025](https://www.codegpt.co/blog/best-ollama-model-for-coding) +- [Ollama Modelfile Reference](https://docs.ollama.com/modelfile) +- [Ollama GPU Hardware Support](https://docs.ollama.com/gpu) +- [Claude Code + Ollama Integration](https://docs.ollama.com/integrations/claude-code) +- [Ollama-MCP Bridge](https://github.com/patruff/ollama-mcp-bridge) +- [vLLM vs Ollama Comparison](https://northflank.com/blog/vllm-vs-ollama-and-how-to-run-them) +- [Ollama Performance Tuning](https://collabnix.com/ollama-performance-tuning-gpu-optimization-techniques-for-production/) + diff --git a/ARCHITECTURE.md b/ARCHITECTURE.md new file mode 100644 index 0000000..d1ff8f6 --- /dev/null +++ b/ARCHITECTURE.md @@ -0,0 +1,539 @@ +--- +title: "jade-ide GitHub Organization Architecture" +project: jadecli +date: 2026-01-23 +status: draft +--- + +# jade-ide GitHub Organization Architecture + +## 1. Organization Overview + +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ GitHub Organization: jade-ide │ +├─────────────────────────────────────────────────────────────────────────────┤ +│ │ +│ ┌─────────────────────────────────────────────────────────────────────┐ │ +│ │ CORE PRODUCTS │ │ +│ │ │ │ +│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │ +│ │ │ jade-ide │ │ jade-cli │ │ jade-agent │ │ │ +│ │ │ (VSCode │ │ (Standalone │ │ (AI Agent │ │ │ +│ │ │ Fork) │◄──►│ CLI) │◄──►│ Runtime) │ │ │ +│ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │ +│ │ │ │ │ │ │ +│ └──────────┼───────────────────┼───────────────────┼──────────────────┘ │ +│ │ │ │ │ +│ ┌──────────┼───────────────────┼───────────────────┼──────────────────┐ │ +│ │ ▼ ▼ ▼ │ │ +│ │ SHARED LIBRARIES │ │ +│ │ │ │ +│ │ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐ │ │ +│ │ │jade-core │ │jade-lsp │ │jade-mcp │ │jade-acp │ │ │ +│ │ │(Common │ │(Language │ │(MCP │ │(Agent │ │ │ +│ │ │ Utils) │ │ Server) │ │ Client) │ │ Client) │ │ │ +│ │ └────────────┘ └────────────┘ └────────────┘ └────────────┘ │ │ +│ │ │ │ +│ └─────────────────────────────────────────────────────────────────────┘ │ +│ │ +│ ┌─────────────────────────────────────────────────────────────────────┐ │ +│ │ INFRASTRUCTURE │ │ +│ │ │ │ +│ │ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐ │ │ +│ │ │jade- │ │jade- │ │jade- │ │jade- │ │ │ +│ │ │dotfiles │ │actions │ │infra │ │docs │ │ │ +│ │ │(Chezmoi │ │(CI/CD │ │(Terraform │ │(Docusaurus │ │ │ +│ │ │ Templates) │ │ Workflows) │ │ IaC) │ │ Site) │ │ │ +│ │ └────────────┘ └────────────┘ └────────────┘ └────────────┘ │ │ +│ │ │ │ +│ └──────────────────────────────────────────────────────────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────────────────┘ +``` + +## 2. 
Repository Structure + +``` +jade-ide (GitHub Organization) +│ +├── 🏗️ CORE PRODUCTS +│ │ +│ ├── jade-ide/ # VSCode Fork (FORK) +│ │ ├── src/vs/ # Core editor modifications +│ │ ├── extensions/jade-ai/ # Built-in AI extension +│ │ ├── build/ # Custom build scripts +│ │ └── resources/ # Branding assets +│ │ +│ ├── jade-cli/ # Standalone CLI (BUILD) +│ │ ├── src/ +│ │ │ ├── commands/ # CLI commands +│ │ │ ├── agents/ # Agent orchestration +│ │ │ ├── mcp/ # MCP client +│ │ │ └── acp/ # ACP client +│ │ ├── tools/ # Built-in tools +│ │ └── templates/ # Project templates +│ │ +│ └── jade-agent/ # AI Agent Runtime (BUILD) +│ ├── src/ +│ │ ├── runtime/ # Agent execution +│ │ ├── memory/ # Context management +│ │ ├── tools/ # Tool definitions +│ │ └── protocols/ # MCP/ACP adapters +│ └── agents/ # Pre-built agents +│ +├── 📚 SHARED LIBRARIES +│ │ +│ ├── jade-core/ # Common Utilities (BUILD) +│ │ ├── src/ +│ │ │ ├── config/ # Configuration loaders +│ │ │ ├── logging/ # Structured logging +│ │ │ ├── auth/ # Authentication +│ │ │ └── utils/ # Shared utilities +│ │ └── types/ # TypeScript types +│ │ +│ ├── jade-lsp/ # Language Server (BUILD) +│ │ ├── src/ +│ │ │ ├── server/ # LSP server +│ │ │ ├── providers/ # Completion, diagnostics +│ │ │ └── ai/ # AI-enhanced features +│ │ └── clients/ # IDE clients +│ │ +│ ├── jade-mcp/ # MCP Implementation (BUILD) +│ │ ├── src/ +│ │ │ ├── client/ # MCP client +│ │ │ ├── server/ # MCP server SDK +│ │ │ └── transports/ # stdio, http, websocket +│ │ └── servers/ # Built-in MCP servers +│ │ +│ └── jade-acp/ # ACP Implementation (BUILD) +│ ├── src/ +│ │ ├── client/ # ACP client +│ │ ├── adapters/ # IDE adapters +│ │ └── protocol/ # ACP spec implementation +│ └── agents/ # ACP-compatible agents +│ +├── 🔧 INFRASTRUCTURE +│ │ +│ ├── jade-dotfiles/ # Chezmoi Templates (BUILD) +│ │ ├── dot_claude/ # .claude/ templates +│ │ ├── dot_config/ # .config/ templates +│ │ ├── scripts/ # Setup scripts +│ │ └── templates/ # Chezmoi templates +│ │ +│ ├── jade-actions/ # GitHub Actions (BUILD) +│ │ ├── actions/ +│ │ │ ├── ai-review/ # AI code review +│ │ │ ├── jade-test/ # Test runner +│ │ │ └── jade-release/ # Release automation +│ │ └── workflows/ # Reusable workflows +│ │ +│ ├── jade-infra/ # Terraform IaC (BUILD) +│ │ ├── modules/ +│ │ │ ├── api-gateway/ # API infrastructure +│ │ │ ├── auth/ # SSO/SAML +│ │ │ └── monitoring/ # Observability +│ │ └── environments/ # dev, staging, prod +│ │ +│ └── jade-docs/ # Documentation (BUILD) +│ ├── docs/ +│ │ ├── getting-started/ +│ │ ├── guides/ +│ │ ├── api/ +│ │ └── enterprise/ +│ └── blog/ # Release notes, tutorials +│ +├── 🔌 EXTENSIONS & PLUGINS +│ │ +│ ├── jade-marketplace/ # Plugin Registry (BUILD) +│ │ ├── api/ # Registry API +│ │ ├── web/ # Marketplace frontend +│ │ └── cli/ # Plugin CLI +│ │ +│ └── jade-plugins/ # Official Plugins (BUILD) +│ ├── plugin-git/ # Git integration +│ ├── plugin-docker/ # Docker/containers +│ ├── plugin-k8s/ # Kubernetes +│ └── plugin-cloud/ # Cloud providers +│ +└── 🔒 PRIVATE REPOS (Internal) + │ + ├── jade-enterprise/ # Enterprise Features + │ ├── sso/ # SSO/SAML impl + │ ├── audit/ # Audit logging + │ └── compliance/ # Compliance tooling + │ + └── jade-models/ # AI Model Configs + ├── prompts/ # System prompts + ├── fine-tuning/ # Training data + └── eval/ # Evaluation suites +``` + +## 3. 
Package Strategy: Fork vs Build + +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ PACKAGE STRATEGY MATRIX │ +├─────────────────────────────────────────────────────────────────────────────┤ +│ │ +│ ┌───────────────────────────────────────────────────────────────────────┐ │ +│ │ 🍴 FORK (Private) │ │ +│ ├───────────────────────────────────────────────────────────────────────┤ │ +│ │ │ │ +│ │ ┌─────────────────┐ │ │ +│ │ │ microsoft/ │ Fork as: jade-ide/jade-ide │ │ +│ │ │ vscode │───► License: MIT (Code OSS base) │ │ +│ │ │ │ Modifications: AI integration, branding │ │ +│ │ └─────────────────┘ │ │ +│ │ │ │ +│ │ ┌─────────────────┐ │ │ +│ │ │ anthropics/ │ Fork as: jade-ide/jade-claude-tools │ │ +│ │ │ claude-code │───► License: Apache 2.0 │ │ +│ │ │ (if open) │ Use: Reference implementation patterns │ │ +│ │ └─────────────────┘ │ │ +│ │ │ │ +│ │ ┌─────────────────┐ │ │ +│ │ │ modelcontextpro │ Fork as: jade-ide/jade-mcp-spec │ │ +│ │ │ tocol/spec │───► License: MIT │ │ +│ │ │ │ Use: MCP specification reference │ │ +│ │ └─────────────────┘ │ │ +│ │ │ │ +│ └───────────────────────────────────────────────────────────────────────┘ │ +│ │ +│ ┌───────────────────────────────────────────────────────────────────────┐ │ +│ │ 🏗️ BUILD FROM SCRATCH │ │ +│ ├───────────────────────────────────────────────────────────────────────┤ │ +│ │ │ │ +│ │ Core Products: │ │ +│ │ ├── jade-cli TypeScript/Rust Claude Code compatible CLI │ │ +│ │ ├── jade-agent TypeScript AI agent runtime/framework │ │ +│ │ └── jade-marketplace TypeScript/Go Plugin registry & store │ │ +│ │ │ │ +│ │ Shared Libraries: │ │ +│ │ ├── jade-core TypeScript Common utilities, auth │ │ +│ │ ├── jade-lsp TypeScript Language server protocol │ │ +│ │ ├── jade-mcp TypeScript/Rust MCP client/server SDK │ │ +│ │ └── jade-acp TypeScript/Rust ACP client for portability │ │ +│ │ │ │ +│ │ Infrastructure: │ │ +│ │ ├── jade-dotfiles Shell/Chezmoi Organization dotfiles │ │ +│ │ ├── jade-actions YAML CI/CD workflows │ │ +│ │ ├── jade-infra Terraform/Pulumi Cloud infrastructure │ │ +│ │ └── jade-docs MDX/Docusaurus Documentation site │ │ +│ │ │ │ +│ └───────────────────────────────────────────────────────────────────────┘ │ +│ │ +│ ┌───────────────────────────────────────────────────────────────────────┐ │ +│ │ 📦 VENDOR (Dependencies) │ │ +│ ├───────────────────────────────────────────────────────────────────────┤ │ +│ │ │ │ +│ │ Runtime: │ │ +│ │ ├── electron MIT Desktop application framework │ │ +│ │ ├── monaco-editor MIT Code editor component │ │ +│ │ └── xterm.js MIT Terminal emulator │ │ +│ │ │ │ +│ │ AI/ML: │ │ +│ │ ├── @anthropic-ai/sdk MIT Claude API client │ │ +│ │ ├── openai MIT OpenAI API client │ │ +│ │ └── ollama-js MIT Ollama local models │ │ +│ │ │ │ +│ │ Tooling: │ │ +│ │ ├── tree-sitter MIT Code parsing │ │ +│ │ ├── ripgrep MIT Fast code search │ │ +│ │ └── tiktoken MIT Token counting │ │ +│ │ │ │ +│ └───────────────────────────────────────────────────────────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────────────────┘ +``` + +## 4. 
Data Flow Architecture + +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ jade-ide Data Flow │ +└─────────────────────────────────────────────────────────────────────────────┘ + + User Input + │ + ▼ +┌─────────────────────────────────────────────────────────────────────────────┐ +│ CLIENT LAYER │ +│ ┌──────────────────┐ ┌──────────────────┐ ┌──────────────────┐ │ +│ │ jade-ide │ │ jade-cli │ │ Third-party │ │ +│ │ (Desktop) │ │ (Terminal) │ │ IDE (via ACP) │ │ +│ └────────┬─────────┘ └────────┬─────────┘ └────────┬─────────┘ │ +│ │ │ │ │ +│ └───────────────────────┴───────────────────────┘ │ +│ │ │ +└───────────────────────────────────┼─────────────────────────────────────────┘ + │ + ┌──────────┴──────────┐ + │ Protocol Router │ + │ (ACP / MCP / LSP) │ + └──────────┬──────────┘ + │ +┌───────────────────────────────────┼─────────────────────────────────────────┐ +│ AGENT LAYER │ +│ │ │ +│ ┌────────────────────────────────┴────────────────────────────────────┐ │ +│ │ jade-agent Runtime │ │ +│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │ +│ │ │ Memory │ │ Tools │ │ Planner │ │ │ +│ │ │ Manager │ │ Registry │ │ Engine │ │ │ +│ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │ +│ └─────────────────────────────┬────────────────────────────────────────┘ │ +│ │ │ +└────────────────────────────────┼─────────────────────────────────────────────┘ + │ + ┌────────────┴────────────┐ + │ │ + ▼ ▼ +┌─────────────────────────────┐ ┌─────────────────────────────┐ +│ MCP Servers │ │ AI Providers │ +│ ┌───────┐ ┌───────┐ │ │ ┌───────┐ ┌───────┐ │ +│ │ File │ │ Git │ │ │ │Claude │ │Ollama │ │ +│ │System │ │ │ │ │ │ API │ │Local │ │ +│ └───────┘ └───────┘ │ │ └───────┘ └───────┘ │ +│ ┌───────┐ ┌───────┐ │ │ ┌───────┐ ┌───────┐ │ +│ │ Web │ │Docker │ │ │ │OpenAI │ │Gemini │ │ +│ │Search │ │ │ │ │ │ API │ │ API │ │ +│ └───────┘ └───────┘ │ │ └───────┘ └───────┘ │ +└─────────────────────────────┘ └─────────────────────────────┘ +``` + +## 5. Dotfiles Configuration Flow + +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ 4-Layer Dotfiles Configuration Flow │ +└─────────────────────────────────────────────────────────────────────────────┘ + + Layer 4: Organization Templates (jade-dotfiles repo) + ┌─────────────────────────────────────────────────────────────────────────┐ + │ github.com/jade-ide/jade-dotfiles/ │ + │ ├── templates/ # Shared templates │ + │ │ ├── claude-rules/ # Security, performance rules │ + │ │ ├── claude-commands/ # Standard commands (/commit, /test) │ + │ │ └── mcp-servers/ # Approved MCP server configs │ + │ └── .chezmoiexternal.toml # Pull into user dotfiles │ + └────────────────────────────────────────┬────────────────────────────────┘ + │ chezmoi external + ▼ + Layer 3: Enterprise IT Policies (Managed by IT/Security) + ┌─────────────────────────────────────────────────────────────────────────┐ + │ Deployed via MDM / Intune / Chef / Ansible │ + │ ├── /etc/jade/ # System-wide policies │ + │ │ ├── allowed-mcp-servers # Allowlist of MCP servers │ + │ │ ├── blocked-tools # Restricted tools/commands │ + │ │ └── audit-config # Compliance logging │ + │ └── Environment variables # JADE_ENTERPRISE_CONFIG, etc. 
│ + └────────────────────────────────────────┬────────────────────────────────┘ + │ policy merge + ▼ + Layer 2: Project-Specific (~///.claude/) + ┌─────────────────────────────────────────────────────────────────────────┐ + │ jade-ide/jade-cli/.claude/ │ + │ ├── CLAUDE.md # Project context (git-tracked) │ + │ ├── settings.project.json # Project hooks, MCP servers │ + │ ├── commands/ # Project-specific commands │ + │ └── hooks/ # Pre-commit, diagnostics │ + └────────────────────────────────────────┬────────────────────────────────┘ + │ project merge + ▼ + Layer 1: Engineer Personal (~/.claude/) + ┌─────────────────────────────────────────────────────────────────────────┐ + │ ~/.claude/ │ + │ ├── settings.json # Personal preferences │ + │ ├── commands/ # Personal commands │ + │ ├── skills/ # Personal skills │ + │ └── plugins/ # Installed plugins │ + │ │ + │ ~/.claude.local.md # Local overrides (gitignored) │ + └─────────────────────────────────────────────────────────────────────────┘ + + │ + ▼ + ┌────────────────────────┐ + │ Merged Configuration │ + │ (Runtime Resolution) │ + └────────────────────────┘ +``` + +## 6. Technology Stack by Repository + +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ Technology Stack Matrix │ +├─────────────────────────────────────────────────────────────────────────────┤ +│ │ +│ Repository Language Runtime Build Package │ +│ ─────────────────────────────────────────────────────────────────────── │ +│ │ +│ jade-ide TypeScript Electron esbuild pnpm │ +│ + HTML/CSS Node 22+ webpack │ +│ │ +│ jade-cli TypeScript Node 22+ tsup pnpm │ +│ + Rust (perf) Bun (opt) cargo │ +│ │ +│ jade-agent TypeScript Node 22+ tsup pnpm │ +│ Bun (opt) │ +│ │ +│ jade-core TypeScript Node 22+ tsup pnpm │ +│ │ +│ jade-lsp TypeScript Node 22+ tsup pnpm │ +│ │ +│ jade-mcp TypeScript Node 22+ tsup pnpm │ +│ + Rust (opt) cargo │ +│ │ +│ jade-acp TypeScript Node 22+ tsup pnpm │ +│ + Rust cargo │ +│ │ +│ jade-dotfiles Shell/TOML chezmoi - chezmoi │ +│ + Templates │ +│ │ +│ jade-actions YAML GitHub - - │ +│ + TypeScript Actions │ +│ │ +│ jade-infra HCL/TypeScript Terraform - terraform │ +│ Pulumi pulumi │ +│ │ +│ jade-docs MDX Docusaurus - pnpm │ +│ + React Node 22+ │ +│ │ +│ jade-marketplace TypeScript Node 22+ tsup pnpm │ +│ + Go (API) go build │ +│ │ +└─────────────────────────────────────────────────────────────────────────────┘ + +Development Environment: +┌─────────────────────────────────────────────────────────────────────────────┐ +│ Tool Purpose Version │ +│ ─────────────────────────────────────────────────────────────────────── │ +│ mise Runtime version manager Latest │ +│ pnpm Node package manager 9.x │ +│ uv Python package manager Latest │ +│ rustup Rust toolchain 1.92+ │ +│ docker Containerization Latest │ +│ chezmoi Dotfiles manager Latest │ +│ pre-commit Git hooks Latest │ +│ just Task runner Latest │ +└─────────────────────────────────────────────────────────────────────────────┘ +``` + +## 7. 
Monorepo vs Polyrepo Decision + +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ RECOMMENDED: HYBRID APPROACH │ +├─────────────────────────────────────────────────────────────────────────────┤ +│ │ +│ Monorepo (jade-ide/jade-platform) Polyrepo (Individual repos) │ +│ ────────────────────────────────── ────────────────────────── │ +│ │ +│ ┌─────────────────────────────┐ ┌───────────────────────────┐ │ +│ │ jade-platform/ │ │ jade-ide/jade-ide │ │ +│ │ ├── packages/ │ │ (VSCode fork - too large │ │ +│ │ │ ├── jade-cli/ │ │ for monorepo) │ │ +│ │ │ ├── jade-agent/ │ └───────────────────────────┘ │ +│ │ │ ├── jade-core/ │ │ +│ │ │ ├── jade-lsp/ │ ┌───────────────────────────┐ │ +│ │ │ ├── jade-mcp/ │ │ jade-ide/jade-infra │ │ +│ │ │ └── jade-acp/ │ │ (Separate deploy cycle) │ │ +│ │ ├── apps/ │ └───────────────────────────┘ │ +│ │ │ └── jade-marketplace/ │ │ +│ │ ├── tools/ │ ┌───────────────────────────┐ │ +│ │ │ └── jade-actions/ │ │ jade-ide/jade-docs │ │ +│ │ └── pnpm-workspace.yaml │ │ (Content contributors) │ │ +│ └─────────────────────────────┘ └───────────────────────────┘ │ +│ │ +│ Benefits: Benefits: │ +│ ✓ Single version of shared libs ✓ Independent release cycles │ +│ ✓ Atomic cross-package changes ✓ Focused CI/CD pipelines │ +│ ✓ Better AI context visibility ✓ Clear ownership boundaries │ +│ ✓ Simplified dependency management ✓ Smaller clone sizes │ +│ │ +└─────────────────────────────────────────────────────────────────────────────┘ +``` + +## 8. Development Phase Roadmap + +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ DEVELOPMENT PHASES │ +├─────────────────────────────────────────────────────────────────────────────┤ +│ │ +│ Phase 0: Foundation (Current) │ +│ ┌───────────────────────────────────────────────────────────────────────┐ │ +│ │ [■■■■■■░░░░] 60% │ │ +│ │ │ │ +│ │ ✓ Research complete (VSCode forks, ACP, MCP, dotfiles) │ │ +│ │ ✓ Architecture documentation │ │ +│ │ → jade-dotfiles: Chezmoi templates for team │ │ +│ │ → Development environment setup (Ubuntu 26.04 WSL2) │ │ +│ │ → jade-platform monorepo initialization │ │ +│ └───────────────────────────────────────────────────────────────────────┘ │ +│ │ +│ Phase 1: Core Libraries (Q1) │ +│ ┌───────────────────────────────────────────────────────────────────────┐ │ +│ │ [░░░░░░░░░░] 0% │ │ +│ │ │ │ +│ │ □ jade-core: Config loading, auth, logging │ │ +│ │ □ jade-mcp: MCP client/server SDK │ │ +│ │ □ jade-acp: ACP client for IDE portability │ │ +│ │ □ jade-actions: CI/CD workflows │ │ +│ └───────────────────────────────────────────────────────────────────────┘ │ +│ │ +│ Phase 2: jade-cli MVP (Q2) │ +│ ┌───────────────────────────────────────────────────────────────────────┐ │ +│ │ [░░░░░░░░░░] 0% │ │ +│ │ │ │ +│ │ □ CLI framework with Claude Code compatibility │ │ +│ │ □ MCP server integration │ │ +│ │ □ Local Ollama support │ │ +│ │ □ Basic agent runtime │ │ +│ └───────────────────────────────────────────────────────────────────────┘ │ +│ │ +│ Phase 3: jade-ide Alpha (Q3) │ +│ ┌───────────────────────────────────────────────────────────────────────┐ │ +│ │ [░░░░░░░░░░] 0% │ │ +│ │ │ │ +│ │ □ Fork Code OSS (microsoft/vscode) │ │ +│ │ □ Integrate jade-cli as built-in extension │ │ +│ │ □ Custom branding and UX │ │ +│ │ □ AI-first features (inline chat, codebase context) │ │ +│ └───────────────────────────────────────────────────────────────────────┘ │ +│ │ +│ Phase 4: Enterprise & Marketplace (Q4) │ +│ 
┌───────────────────────────────────────────────────────────────────────┐ │ +│ │ [░░░░░░░░░░] 0% │ │ +│ │ │ │ +│ │ □ SSO/SAML integration │ │ +│ │ □ Audit logging and compliance │ │ +│ │ □ Plugin marketplace │ │ +│ │ □ Enterprise pricing and support │ │ +│ └───────────────────────────────────────────────────────────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────────────────┘ +``` + +## 9. Key Dependencies to Audit + +| Dependency | License | Risk | Notes | +|------------|---------|------|-------| +| microsoft/vscode | MIT | Medium | Fork Code OSS, not VS Code marketplace | +| electron | MIT | Low | Stable, well-maintained | +| monaco-editor | MIT | Low | Core editor, can customize | +| @anthropic-ai/sdk | MIT | Low | Official Claude SDK | +| tree-sitter | MIT | Low | Code parsing | +| ripgrep | MIT | Low | Fast search | +| chezmoi | MIT | Low | Dotfiles management | +| ollama | MIT | Low | Local models | + +## 10. Next Steps + +1. **Initialize jade-platform monorepo** with pnpm workspaces +2. **Create jade-dotfiles** repository with Chezmoi templates +3. **Set up Ubuntu 26.04 WSL2** development environment +4. **Fork Code OSS** into private jade-ide repository +5. **Begin jade-core** library development
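+
+A minimal sketch for step 1 (initializing the jade-platform monorepo with pnpm workspaces). It assumes Node 22+ and pnpm 9.x are already installed (per the technology stack matrix), uses a hypothetical `@jade-ide/` npm scope, and mirrors the hybrid layout from section 7:
+
+```bash
+#!/usr/bin/env bash
+# Sketch: bootstrap jade-platform with pnpm workspaces.
+# Assumptions: pnpm 9.x / Node 22+ already installed (e.g. via mise); the
+# @jade-ide/ scope and directory names follow the hybrid monorepo layout above.
+set -euo pipefail
+
+mkdir -p jade-platform/{packages/{jade-cli,jade-agent,jade-core,jade-lsp,jade-mcp,jade-acp},apps/jade-marketplace,tools/jade-actions}
+cd jade-platform
+
+# Workspace definition: packages, apps, and tools are all workspace members.
+cat > pnpm-workspace.yaml << 'EOF'
+packages:
+  - "packages/*"
+  - "apps/*"
+  - "tools/*"
+EOF
+
+# Private root manifest; nothing publishable lives at the top level.
+cat > package.json << 'EOF'
+{
+  "name": "jade-platform",
+  "private": true,
+  "packageManager": "pnpm@9.0.0"
+}
+EOF
+
+# Stub manifest per member so `pnpm -r` commands can resolve the workspace.
+for pkg in packages/* apps/* tools/*; do
+  cat > "$pkg/package.json" << EOF
+{
+  "name": "@jade-ide/$(basename "$pkg")",
+  "version": "0.0.0",
+  "private": true
+}
+EOF
+done
+
+pnpm install   # writes pnpm-lock.yaml for the (still empty) workspace
+```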