Integration Bridge for EXtended systems
A Model Context Protocol server that connects AI assistants to your workplace tools — Slack, Notion, Jira, and a persistent GitHub-backed memory system.
Designed to run alongside Open WebUI and a local LLM server (LM Studio, Ollama, etc.) for a fully self-hosted AI assistant with access to your internal tools.
| Server | Port | Tools | Capability |
|---|---|---|---|
| Slack | 3001 | `slack_search_messages`, `slack_get_channel_history`, `slack_list_channels`, `slack_get_thread` | Search messages, read channels and threads |
| Notion | 3002 | `notion_search`, `notion_get_page`, `notion_get_block_children`, `notion_query_database` | Search pages, read content, query databases |
| Jira | 3003 | `jira_search_issues`, `jira_get_issue`, `jira_get_projects` | JQL search, issue details, project listing |
| Memory | 3004 | `memory_get`, `memory_update` | Read/write a persistent markdown file on GitHub |
Each server runs independently — start only the ones you need.
```
git clone https://github.com/Percona-Lab/IBEX.git
cd IBEX
npm install
```

Create `~/.ibex-mcp.env` (outside the repo for security):
```
# Slack (user token required for search)
SLACK_TOKEN=xoxp-...

# Notion
NOTION_TOKEN=ntn_...

# Jira
JIRA_DOMAIN=yourcompany.atlassian.net
JIRA_EMAIL=you@yourcompany.com
JIRA_API_TOKEN=...

# Memory (GitHub-backed)
GITHUB_TOKEN=ghp_...           # Fine-grained PAT with Contents read/write scope
GITHUB_OWNER=your-github-org   # GitHub org or username
GITHUB_REPO=ai-memory          # Private repo for memory storage
GITHUB_MEMORY_PATH=MEMORY.md   # File path (default: MEMORY.md)
```

Only include the variables for the connectors you want to use.
Create a private GitHub repo for memory storage. The first memory_update call will create the MEMORY.md file automatically.
Security notice: The memory file may accumulate sensitive context over time — meeting notes, project details, personal preferences, etc. Always create the repo as private and restrict collaborator access.
Generate a fine-grained personal access token with:
- Repository access: Only select your memory repo
- Permissions: Contents → Read and write
Start all MCP servers and Open WebUI with one command:

```
~/IBEX/start.sh
```

The script starts each server in the background and launches the Open WebUI Docker container. Press Ctrl+C to stop all MCP servers.
Each server runs on its own port:
```
cd ~/IBEX
node servers/slack.js --http    # port 3001
node servers/notion.js --http   # port 3002
node servers/jira.js --http     # port 3003
node servers/memory.js --http   # port 3004
```

Or override the port with `MCP_SSE_PORT`:

```
MCP_SSE_PORT=4000 node servers/slack.js --http
```

Verify that a server is running:

```
curl http://localhost:3001/health
```

Open WebUI connects to your LLM server. Replace `<LLM_SERVER_IP>` with the IP of your LM Studio or Ollama server:
```
docker run -d \
  --name open-webui \
  -p 8080:8080 \
  -v ~/open-webui-data:/app/backend/data \
  -e OPENAI_API_BASE_URL=http://<LLM_SERVER_IP>:1234/v1 \
  -e OPENAI_API_KEY=dummy \
  ghcr.io/open-webui/open-webui:main
```

Open http://localhost:8080 and create your admin account on first launch.
- In Open WebUI, go to Settings → Tools → MCP Servers
- Add each server you started:
| Server | Type | URL |
|---|---|---|
| Slack | Streamable HTTP | http://host.docker.internal:3001/mcp |
| Notion | Streamable HTTP | http://host.docker.internal:3002/mcp |
| Jira | Streamable HTTP | http://host.docker.internal:3003/mcp |
| Memory | Streamable HTTP | http://host.docker.internal:3004/mcp |
- Auth: None for all servers
- Toggle on/off individual servers per conversation as needed
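For debugging a server outside Open WebUI, you can speak the protocol directly: a tool call over Streamable HTTP is a JSON-RPC 2.0 POST to the `/mcp` endpoint. This sketch only builds and prints the payload; actually sending it requires a running server and an initialized MCP session.

```javascript
// Shape of the JSON-RPC 2.0 request an MCP client POSTs to an
// endpoint such as http://localhost:3004/mcp to invoke a tool.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "memory_get",   // any tool name from the table above
    arguments: {},        // tool arguments, if the tool takes any
  },
};

const body = JSON.stringify(request, null, 2);
console.log(body);
```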
The `server.js` file runs all tools in a single server if you prefer:

```
node server.js --http    # all tools on port 3001
```

All servers support three transport modes:
| Mode | Flag | Use Case |
|---|---|---|
| Streamable HTTP | `--http` | Open WebUI and modern MCP clients |
| stdio | (none) | Claude Desktop and other stdio-based MCP clients |
| Legacy SSE | `--sse-only` | Older MCP clients |
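The flag-to-transport mapping can be sketched as a small helper (a hypothetical function; the project's actual selection logic lives in `servers/shared.js` and may differ):

```javascript
// Map CLI flags to a transport, mirroring the table above.
// No flag means stdio, for Claude Desktop-style clients.
function pickTransport(argv) {
  if (argv.includes("--http")) return "streamable-http";
  if (argv.includes("--sse-only")) return "sse";
  return "stdio";
}

console.log(pickTransport(process.argv.slice(2)));
```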
The memory system stores a single markdown file in a private GitHub repo, providing persistent context across AI conversations.
- `memory_get` — Returns the current markdown content from GitHub.
- `memory_update` — Replaces the file entirely with new content. Accepts an optional `message` parameter for the git commit message.
Updates use GitHub's SHA-based optimistic concurrency — the connector fetches the current SHA before each write to prevent blind overwrites.
To get the most out of the memory tools, add a system prompt that tells the model when to read and write memory. Go to Settings → Models → (select your model) → System Prompt and paste:
```
You have access to persistent memory tools: memory_get and memory_update.

Use memory_get when:
- The user says "what do you know about me" or asks for context from previous conversations
- The user references something you should already know
- You need background on a project, preference, or decision

Use memory_update when:
- The user says "remember this", "save this", or "update memory"
- The user shares important context they'll want you to recall later

When updating memory:
1. Always call memory_get first to fetch the current content
2. Merge new information into the existing markdown — never overwrite from scratch
3. Call memory_update with the complete updated markdown
4. Use clear ## headings and bullet points to keep it organized

Do not call memory_get at the start of every conversation — only when context is needed.
```
Why not auto-load on every conversation? Local models have limited context windows and tool-calling ability. Explicit triggers ("remember this", "what do you know") work more reliably and avoid adding latency to every first message.
The Notion indexer builds a searchable JSON index of your Notion workspace by recursively crawling pages from root pages you configure.
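The crawl itself is a depth-first walk from each configured root; here is a minimal sketch over a mock page tree (hypothetical shape — the real indexer pages through the Notion API and handles blocks, not just child pages):

```javascript
// Mock workspace: each page has a title and child page ids.
const pages = {
  root: { title: "Home", children: ["a", "b"] },
  a: { title: "Projects", children: ["c"] },
  b: { title: "Notes", children: [] },
  c: { title: "Roadmap", children: [] },
};

// Depth-first crawl collecting one index entry per page. The
// `visited` set guards against cycles (pages can link back).
function crawl(id, visited = new Set(), index = []) {
  if (visited.has(id)) return index;
  visited.add(id);
  index.push({ id, title: pages[id].title });
  for (const child of pages[id].children) crawl(child, visited, index);
  return index;
}

console.log(crawl("root").map((e) => e.title).join(","));
```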
```
# 1. Create a config file
node notion_indexer.js --init

# 2. Edit notion_roots.json with your Notion page IDs
#    (see instructions printed by --init)

# 3. Build the index
node notion_indexer.js --all
```

The config file (`notion_roots.json`) and generated index (`notion_index.json`) are both gitignored — they contain your workspace-specific page IDs and content.

```
node notion_indexer.js --all                 # Index all configured root pages
node notion_indexer.js --all --incremental   # Update existing index
node notion_indexer.js abc123def456...       # Index a specific page
node notion_indexer.js --list                # List configured root pages
```

After each `memory_update`, the content can be automatically synced to Google Docs and/or Notion. This sync is one-way (GitHub → targets) and non-blocking — sync failures are logged but never break the memory update.
This makes your memory readable in a browser and accessible to other AI tools like Gemini Gems and ChatGPT.
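The non-blocking behavior can be sketched as a fire-and-forget dispatch (hypothetical function names; the real orchestration lives in `connectors/memory-sync.js`):

```javascript
// Simulated sync targets: one fails, one succeeds.
async function syncToNotion(content) {
  throw new Error("Notion unreachable");
}
async function syncToGoogleDocs(content) {
  return "ok";
}

async function memoryUpdate(content) {
  // ... the GitHub write happens here and IS awaited ...
  // Syncs are kicked off without awaiting them, so a failing
  // target logs to stderr but can never fail memory_update.
  for (const sync of [syncToNotion, syncToGoogleDocs]) {
    sync(content).catch((e) => console.error("sync failed:", e.message));
  }
  return { ok: true };   // returned before the syncs finish
}

memoryUpdate("# Memory").then((r) => console.log("update ok:", r.ok));
```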
Add to `~/.ibex-mcp.env`:

```
NOTION_SYNC_PAGE_ID=abcdef1234567890   # Page ID to overwrite with memory content
```

Requires `NOTION_TOKEN` to already be set. The target page will have its content replaced on each memory update. Create a dedicated page for this — don't use one with content you want to keep.
Step 1: Create OAuth credentials
- Go to Google Cloud Console
- Create a project (or use existing)
- Enable the Google Docs API
- Go to Credentials → Create Credentials → OAuth client ID
- Application type: Desktop app
- Copy the Client ID and Client Secret
Step 2: Get a refresh token
```
GOOGLE_CLIENT_ID=xxx GOOGLE_CLIENT_SECRET=yyy node scripts/google-auth.js
```

This opens a browser for authorization and prints the refresh token.
Step 3: Add to `~/.ibex-mcp.env`

```
GOOGLE_DOC_ID=1BxiMVs0XRA5nFMdKvBd...   # From the Google Docs URL
GOOGLE_CLIENT_ID=xxxx.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=GOCSPX-...
GOOGLE_REFRESH_TOKEN=1//0eXXXX...
```

The Google Doc ID is the long string in the URL: `https://docs.google.com/document/d/<DOC_ID>/edit`
- Both targets are optional and independent — configure one, both, or neither
- Sync runs in the background after the GitHub write succeeds
- Failures are logged to stderr but don't affect the `memory_update` response
- Google Docs receives plain markdown text (readable but not formatted)
- Notion receives structured blocks (headings, bullets, code blocks, etc.)
```
├── server.js            # All-in-one MCP server (all tools)
├── start.sh             # Launch all servers + Open WebUI
├── notion_indexer.js    # Notion workspace indexer
├── servers/
│   ├── shared.js        # Shared transport and startup logic
│   ├── slack.js         # Slack MCP server (port 3001)
│   ├── notion.js        # Notion MCP server (port 3002)
│   ├── jira.js          # Jira MCP server (port 3003)
│   └── memory.js        # Memory MCP server (port 3004)
├── connectors/
│   ├── slack.js         # Slack Web API connector
│   ├── notion.js        # Notion API connector (read + write for sync)
│   ├── jira.js          # Jira Cloud API connector
│   ├── github.js        # GitHub Contents API connector (memory backend)
│   ├── google-docs.js   # Google Docs API connector (memory sync)
│   └── memory-sync.js   # Sync orchestrator (Notion + Google Docs)
├── scripts/
│   └── google-auth.js   # One-time Google OAuth2 setup
├── package.json
├── LAUNCH.md            # Quick-start launch commands
└── README.md
```
100% vibe coded with Claude.
MIT