OpenClaw is an open-source personal AI assistant that runs on your own machine. Unlike cloud-based AI services, OpenClaw gives you full control over your data and infrastructure.
Key features include:
- Multi-platform chat integration: Interact via WhatsApp, Telegram, Discord, Slack, Signal, or iMessage
- Persistent memory: Remembers your preferences and context across sessions
- Full system access: Read/write files, run shell commands, and control your browser
- Extensible skills: Use community-built skills or create your own
- Model flexibility: Works with Anthropic, OpenAI, or local models
OpenClaw GitHub repository: https://github.com/openclaw/openclaw
To integrate Parallax with OpenClaw, you need to meet the prerequisites for both projects:
- Node.js: >= 22 (required by OpenClaw)
- Python: >= 3.11 (required by Parallax)
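If you want to fail fast before deploying anything, the Python floor can be checked in a few lines. This is a minimal sketch (the `meets_floor` helper is illustrative, not part of either project); the Node.js side is easiest to verify by hand with `node --version`:

```python
import sys

# Parallax requires Python >= 3.11. Tuples compare element-wise,
# so (3, 10) >= (3, 11) is False and (3, 12) >= (3, 11) is True.
def meets_floor(version_info, minimum=(3, 11)):
    """Return True if the running interpreter satisfies the floor."""
    return tuple(version_info[:2]) >= minimum

print(meets_floor(sys.version_info))
```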
Before proceeding, we assume you have already deployed Parallax on your AI cluster. For deployment instructions, please refer to:
Step 1: Start the Scheduler
On your scheduler machine, run:
parallax run --host 0.0.0.0
Step 2: Select Model
Open your browser and navigate to localhost:3001 on the scheduler machine. Select your model and click Continue.
Step 3: Start Edge Nodes
On your edge nodes, run:
parallax join --max-sequence-length 65536 --max-num-tokens-per-batch 65536 --enable-prefix-cache
Step 4: Test the Model
On the scheduler machine, open your browser and navigate to localhost:3001. Use the chat interface to test if the model is working properly.
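Beyond the web UI, you can also exercise the scheduler from a script. The OpenClaw configuration later in this guide points at an OpenAI-compatible endpoint under http://localhost:3001/v1, so a standard chat-completions request should work. The sketch below only builds the payload (the /v1/chat/completions path and the model name are assumptions based on that convention, not something Parallax documents here):

```python
import json

# Build a chat-completions request for the scheduler's OpenAI-compatible
# API (assumed path: http://localhost:3001/v1/chat/completions).
# The model id is a placeholder -- use the one you selected in the UI.
def build_chat_request(model, prompt):
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("your-model-name", "Hello, are you up?")
print(json.dumps(payload, indent=2))
# Send it with e.g.:
#   curl http://localhost:3001/v1/chat/completions \
#     -H "Content-Type: application/json" -d "$(python this_script.py)"
```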
Step 1: Install OpenClaw
Use the official install script to install OpenClaw, skipping the onboard wizard:
curl -fsSL https://openclaw.ai/install.sh | bash -s -- --no-onboard
Step 2: Create Configuration File
Create the configuration file at ~/.openclaw/openclaw.json with the following content:
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "parallax/your-model-name"
      }
    }
  },
  "models": {
    "providers": {
      "parallax": {
        "baseUrl": "http://localhost:3001/v1",
        "apiKey": "placeholder",
        "api": "openai-completions",
        "models": [
          {
            "id": "your-model-name",
            "name": "Parallax Model"
          }
        ]
      }
    }
  }
}
Step 3: Run Onboard
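Before running the onboard command, it can be worth confirming that the file parses and that the default model is actually declared under its provider (a mismatch here is an easy typo to make). A small self-contained check; in practice you would read ~/.openclaw/openclaw.json instead of the embedded string:

```python
import json

# The example config from above, embedded so this check is self-contained.
CONFIG = """
{
  "agents": {"defaults": {"model": {"primary": "parallax/your-model-name"}}},
  "models": {
    "providers": {
      "parallax": {
        "baseUrl": "http://localhost:3001/v1",
        "apiKey": "placeholder",
        "api": "openai-completions",
        "models": [{"id": "your-model-name", "name": "Parallax Model"}]
      }
    }
  }
}
"""

def check_default_model(config):
    """Return the default model if it is declared under its provider."""
    primary = config["agents"]["defaults"]["model"]["primary"]
    provider, model_id = primary.split("/", 1)
    declared = {m["id"] for m in config["models"]["providers"][provider]["models"]}
    if model_id not in declared:
        raise ValueError(f"{primary} is not declared under provider {provider!r}")
    return primary

print(check_default_model(json.loads(CONFIG)))
```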
openclaw onboard --install-daemon
During the onboard process:
- Read and accept the OpenClaw risk disclaimer
- When prompted for onboarding mode, select Quick Start
- When prompted for config handling, select Use existing values
- When prompted for Model/auth provider, select Skip for now
- When prompted for Filter models by provider, select All providers
- When prompted for Default model, select Keep current (parallax/your-model-name)
- When prompted for Select channel, configure the channel based on your needs, or select Skip for now
- When prompted for Select skills, configure the skills based on your needs, or select Skip for now
- When prompted for Enable hooks, configure the hooks based on your needs, or select Skip for now
- Wait a moment while the Gateway services are installed.
- When prompted for How do you want to hatch your bot, configure the way you hatch your bot based on your needs.
Open your browser and navigate to http://127.0.0.1:18789/. Start sending messages to OpenClaw and enjoy!
Q: OOM Error
libc++abi: terminating due to uncaught exception of type std::runtime_error: [METAL] Command buffer execution failed: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory)
A: Add the --kv-cache-memory-fraction parameter when starting Parallax on edge nodes:
parallax join --max-sequence-length 65536 --max-num-tokens-per-batch 65536 --enable-prefix-cache --kv-cache-memory-fraction 0.5
If OOM errors persist, try a smaller value for --kv-cache-memory-fraction.