Complete Cognitive Infrastructure. Five enterprise memory systems, orchestrated.
| Component | Protocol | Purpose |
|---|---|---|
| CASCADE Enterprise (Open Source - MIT) | MCP stdio | Six-layer temporal memory with decay modeling |
| PyTorch Memory Enterprise | MCP stdio | GPU-accelerated semantic vector search |
| Hebbian Mind Enterprise (Open Source) | MCP stdio | Associative learning - edges strengthen through use |
| Soul Matrix | MCP stdio | Pre-retrieval activation gating |
| CMM Enterprise | MCP stdio | Unified cognitive search across all backends |
GPU auto-detection with CPU fallback. Production-ready.
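The auto-detection with CPU fallback can be sketched as follows. This is illustrative only: `resolve_device` is a hypothetical helper, though the `PYTORCH_MEMORY_DEVICE` variable and its `auto | cuda | cpu` values come from the configuration section below.

```python
import os

def resolve_device(env_var: str = "PYTORCH_MEMORY_DEVICE") -> str:
    """Pick a compute device: honor an explicit override, else auto-detect."""
    choice = os.environ.get(env_var, "auto").lower()
    if choice in ("cuda", "cpu"):
        return choice  # explicit override wins
    try:
        import torch  # optional dependency; absent on CPU-only installs
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"  # no PyTorch available: fall back to CPU
```

Setting the variable to `cpu` skips GPU probing entirely, which is the same escape hatch the troubleshooting section recommends when GPU detection fails.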
Three paths. Pick what fits.
Run each component directly on your machine. No containers.
```powershell
.\install.ps1 -SkipDocker
```
This installs each component via `pip install -e .` or component-specific installers. You run services directly.
Requirements:
- Python 3.10+
- Git
- 16GB RAM minimum (32GB recommended)
- 20GB disk space
- NVIDIA GPU with CUDA 11.7+ (optional - falls back to CPU)
```bash
./install.sh --skip-docker
```
Same as Windows: native Python installations, no containers.
Requirements:
- Python 3.10+
- Git
- 16GB RAM minimum (32GB recommended)
- 20GB disk space
- NVIDIA GPU with CUDA 11.7+ (optional - falls back to CPU)
For teams, production deployments, or if you prefer containers.
```shell
# Windows
.\install.ps1

# Linux/macOS
./install.sh
```
Requirements:
- Docker 20.10+ with Docker Compose V2
- Python 3.10+
- Git
- 16GB RAM minimum (32GB recommended)
- 20GB disk space
- NVIDIA GPU + Container Toolkit (optional - falls back to CPU)
Docker handles inter-service networking, health checks, and restarts automatically.
The installer handles everything:
- Checks prerequisites
- Detects GPU (or configures CPU fallback)
- Initializes submodules
- Runs component installers
- Creates data directories
- Builds containers (unless `--skip-docker`)
- Starts services
- Reports endpoints
First run: 5-10 minutes. Subsequent starts: under 30 seconds.
All five components implement the Model Context Protocol (MCP) using stdio transport. They communicate via JSON-RPC over stdin/stdout, not HTTP.
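As a rough illustration of that wire format (a sketch, assuming newline-delimited JSON-RPC 2.0 framing; the `tools/list` method name follows MCP convention and the helper names are hypothetical):

```python
import json

def frame_request(method: str, params: dict, req_id: int) -> bytes:
    """Serialize a JSON-RPC 2.0 request as one newline-terminated line."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return (json.dumps(msg) + "\n").encode("utf-8")

def parse_response(line: bytes) -> dict:
    """Decode one response line read from the server's stdout."""
    return json.loads(line.decode("utf-8"))

# A client writes this to the server's stdin and reads replies line by line.
request = frame_request("tools/list", {}, 1)
```

Because everything moves over stdin/stdout, the servers need no open ports, which is why the native install works without any networking setup.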
Add to your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "cascade-enterprise": {
      "command": "python",
      "args": ["-m", "cascade_enterprise_ram"],
      "cwd": "./components/cascade-enterprise-ram"
    },
    "pytorch-memory-enterprise": {
      "command": "python",
      "args": ["-m", "pytorch_memory_enterprise"],
      "cwd": "./components/pytorch-memory-enterprise"
    },
    "hebbian-mind-enterprise": {
      "command": "python",
      "args": ["-m", "hebbian_mind_enterprise"],
      "cwd": "./components/hebbian-mind-enterprise"
    },
    "soul-matrix": {
      "command": "./components/soul-matrix-rust/target/release/soul-matrix-server",
      "args": ["--matrix", "./data/soul_matrix.bin", "--map", "./data/concept_map.json"]
    },
    "cmm-enterprise": {
      "command": "python",
      "args": ["-m", "cmm_enterprise"],
      "cwd": "./components/cmm-enterprise"
    }
  }
}
```
Each component exposes tools through MCP. See individual component READMEs for complete tool documentation.
Copy `.env.example` to `.env` and customize:
```bash
# GPU Configuration (auto-detected, override if needed)
PYTORCH_MEMORY_DEVICE=auto   # auto | cuda | cpu

# Memory Limits
PYTORCH_MEMORY_CAPACITY=8192
HEBBIAN_MAX_EDGE_WEIGHT=10.0

# Temporal Decay
CASCADE_EPISODIC_DECAY=0.95
CASCADE_SEMANTIC_DECAY=0.99

# Network (for Docker deployments)
CIPS_NETWORK_SUBNET=172.28.0.0/16
```
Native installs: each component reads from its own `.env` or command-line arguments.
Docker installs: run `docker compose restart` after changes.
```
                    Your Application
                           |
                           v
                    +-------------+
                    |     CMM     |  <-- Unified search interface
                    +-------------+
                           |
        +------------------+------------------+
        |                  |                  |
        v                  v                  v
+---------------+  +---------------+  +---------------+
|    CASCADE    |  |    PyTorch    |  |    Hebbian    |
|  (Temporal)   |  |  (Semantic)   |  | (Associative) |
+---------------+  +---------------+  +---------------+
        |                  |                  |
        +------------------+------------------+
                           |
                           v
                    +-------------+
                    | Soul Matrix |  <-- Pre-retrieval activation
                    +-------------+
```
CMM queries all backends in parallel, synthesizes results, returns unified responses. Soul Matrix shapes activation before retrieval begins.
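The fan-out pattern can be sketched like this (illustrative only: the stub backend functions and scores stand in for real MCP calls, and `unified_search` is a hypothetical name):

```python
from concurrent.futures import ThreadPoolExecutor

# Stub backends standing in for CASCADE, PyTorch Memory, and Hebbian Mind.
def query_cascade(q):  return [{"backend": "cascade",  "text": q, "score": 0.9}]
def query_semantic(q): return [{"backend": "semantic", "text": q, "score": 0.8}]
def query_hebbian(q):  return [{"backend": "hebbian",  "text": q, "score": 0.7}]

def unified_search(query: str) -> list:
    """Query every backend in parallel, then merge results by score."""
    backends = (query_cascade, query_semantic, query_hebbian)
    with ThreadPoolExecutor(max_workers=len(backends)) as pool:
        hits = [hit
                for result in pool.map(lambda b: b(query), backends)
                for hit in result]
    return sorted(hits, key=lambda h: h["score"], reverse=True)
```

Because the backends are queried concurrently, the slowest backend bounds total latency rather than the sum of all three.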
Docker deployments:
```bash
docker compose up -d             # Start
docker compose down              # Stop
docker compose logs -f           # All logs
docker compose logs -f cascade   # Single service
docker compose restart           # Restart all
docker stats                     # Resource usage
```
Native deployments: start each service manually or via a process manager (systemd, PM2, etc.). Each component has its own startup instructions in its README.
Each component has dedicated docs:
- CASCADE Enterprise RAM - Temporal memory with six layers and natural decay
- PyTorch Memory Enterprise - Semantic vector search with GPU acceleration
- Hebbian Mind Enterprise - Associative learning that strengthens through use
- Soul Matrix - Pre-retrieval activation and gating
- CMM Enterprise - Unified cognitive search API
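As a toy illustration of the Hebbian model described above (a sketch, not the shipped implementation), edge weights grow with each co-activation and are clamped by a cap like `HEBBIAN_MAX_EDGE_WEIGHT`:

```python
class HebbianGraph:
    """Toy associative graph: using an edge strengthens it, up to a cap."""

    def __init__(self, step: float = 0.5, cap: float = 10.0):
        self.edges = {}    # (a, b) -> weight, stored undirected
        self.step = step   # reinforcement added per co-activation
        self.cap = cap     # upper bound, mirrors HEBBIAN_MAX_EDGE_WEIGHT

    def activate(self, a: str, b: str) -> float:
        """Record a co-activation of a and b; return the new edge weight."""
        key = tuple(sorted((a, b)))
        self.edges[key] = min(self.edges.get(key, 0.0) + self.step, self.cap)
        return self.edges[key]
```

The cap keeps frequently co-activated pairs from drowning out everything else, which is why `HEBBIAN_MAX_EDGE_WEIGHT` is exposed as a tunable in `.env`.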
Docker:
```bash
docker compose down
# Extract new release
docker compose up -d --build
```
Data volumes persist. Memories stay intact.
Native:
```bash
git pull
pip install -e . --upgrade   # In each component directory
```
MCP server won't start: check the component's stderr output for initialization errors. Common causes: missing dependencies, incorrect paths, Python version mismatch.
GPU not detected:
```bash
nvidia-smi                                                    # Check drivers
docker run --rm --gpus all nvidia/cuda:11.7-base nvidia-smi   # Docker GPU access
```
If GPU detection fails, set `PYTORCH_MEMORY_DEVICE=cpu`. The stack runs fine on CPU.
Container won't start:
```bash
docker compose logs [service-name]
```
Common causes: insufficient memory, port conflicts, permission errors.
- Documentation: cipscorps.io/docs
- Architecture & Benchmarks: cipscorps.io/architecture
- Email Support: glass@cipscorps.io (response within 24 hours, business days)
- Updates: Included for 1 year from purchase
CIPS Stack is proprietary software. See LICENSE.md and EULA for terms.
Per-developer licensing. See EULA for full terms including 90-day money-back guarantee.
Copyright (c) 2025-2026 C.I.P.S. LLC. All rights reserved.
Portions of the technology described herein are subject to pending patent application(s) filed with the United States Patent and Trademark Office. The methods, processes, and architectures embodied in this software -- including but not limited to multi-system cognitive memory orchestration, temporal decay modeling, Hebbian associative learning, GPU-optimized sequential tensor rotation for semantic retrieval, and pre-retrieval activation gating -- may be protected under one or more issued or pending patents.
Unauthorized reproduction, reverse engineering, creation of derivative works, or commercial redistribution is strictly prohibited and may constitute infringement of intellectual property rights protected under U.S. and international law.
For licensing inquiries: glass@cipscorps.io
CIPS Stack Cognitive Infrastructure Production System