Merged
22 commits
6cfd909
feat: native TypeScript OpenClaw plugin — zero external dependencies
SecretSettler Mar 25, 2026
b960324
test: add comprehensive test suite for ContextPilot engine (38 tests)
SecretSettler Mar 25, 2026
08e5d3b
test: add E2E integration tests for full optimization pipeline (16 te…
SecretSettler Mar 25, 2026
64e9d90
feat: complete ContextPilot engine port to TypeScript (6145 lines)
SecretSettler Mar 26, 2026
86ab671
feat: wire full ContextPilot engine + SGLang mode into plugin
SecretSettler Mar 26, 2026
7454895
Merge branch 'cloud-cache-proxy' of https://github.com/Edinburgh-Agen…
dalongbao Mar 29, 2026
f60b8fb
fix: working plugin
dalongbao Apr 1, 2026
5b408db
openclaw plugin bench
dalongbao Apr 3, 2026
a388ae5
fix for tests
dalongbao Apr 6, 2026
4d33156
fix benchmark
dalongbao Apr 9, 2026
a164ee4
cleanup
dalongbao Apr 9, 2026
6fc4c8d
bench fix
dalongbao Apr 9, 2026
10da5cf
ci: add independent npm publish and version bump workflows for opencl…
SecretSettler Apr 10, 2026
a2edf9a
chore(plugin): bump to v0.2.1
SecretSettler Apr 10, 2026
de9082a
fix(plugin): update npm scope to @contextpilot-ai
SecretSettler Apr 10, 2026
6a56dd2
Add cross-layer block dedup scanning in Python intercept flow
SecretSettler Apr 11, 2026
6e23fdb
Add cross-layer and assistant code-block dedup to plugin engine
SecretSettler Apr 11, 2026
5a9cc96
Wire plugin index integration for updated dedup behavior
SecretSettler Apr 11, 2026
222bddc
Bump Python package version to 0.4.1
SecretSettler Apr 11, 2026
cac6e8a
Bump OpenClaw plugin package version to 0.3.0
SecretSettler Apr 11, 2026
28cf97b
ci: switch npm publish to Trusted Publishing (OIDC provenance, no tok…
SecretSettler Apr 11, 2026
3e901cf
docs: add OpenClaw native plugin as primary installation method
SecretSettler Apr 11, 2026
55 changes: 55 additions & 0 deletions .github/workflows/bump-plugin.yml
@@ -0,0 +1,55 @@
name: Bump Plugin Version

on:
  workflow_dispatch:
    inputs:
      bump_type:
        description: Version bump type
        required: true
        default: patch
        type: choice
        options:
          - patch
          - minor
          - major

permissions:
  contents: write

jobs:
  bump-plugin-version:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Configure Git user
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"

      - name: Bump plugin version
        run: cd openclaw-plugin && npm version ${{ github.event.inputs.bump_type }} --no-git-tag-version

      - name: Extract new version
        run: |
          VERSION=$(node -p "require('./openclaw-plugin/package.json').version")
          echo "VERSION=${VERSION}" >> "$GITHUB_ENV"

      - name: Commit version bump
        run: |
          git add openclaw-plugin/package.json openclaw-plugin/package-lock.json
          git commit -m "chore(plugin): bump to v${VERSION}"

      - name: Create plugin tag
        run: git tag "plugin-v${VERSION}"

      - name: Push commit and tags
        run: git push origin "HEAD:${{ github.ref_name }}" --tags
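The `npm version` step above performs a standard semver increment. A minimal sketch of that arithmetic in Python (simplified; the real `npm version` command also handles pre-release identifiers and build metadata, which this ignores):

```python
def bump(version: str, bump_type: str) -> str:
    """Minimal semver bump: patch, minor, or major (no pre-release handling)."""
    major, minor, patch = (int(part) for part in version.split("."))
    if bump_type == "major":
        return f"{major + 1}.0.0"
    if bump_type == "minor":
        return f"{major}.{minor + 1}.0"
    if bump_type == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown bump type: {bump_type}")
```

This mirrors the three `bump_type` choices the workflow exposes via `workflow_dispatch`.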
39 changes: 39 additions & 0 deletions .github/workflows/release-plugin.yml
@@ -0,0 +1,39 @@
name: Release Plugin

on:
  push:
    tags:
      - 'plugin-v*'

permissions:
  contents: read
  id-token: write

jobs:
  publish-npm:
    name: Publish OpenClaw Plugin to npm
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: openclaw-plugin
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          registry-url: 'https://registry.npmjs.org'

      - name: Extract version from tag
        id: get_version
        run: echo "VERSION=${GITHUB_REF#refs/tags/plugin-v}" >> $GITHUB_OUTPUT

      - name: Install dependencies
        run: npm ci

      - name: Sync package version
        run: npm version ${{ steps.get_version.outputs.VERSION }} --no-git-tag-version

      - name: Publish to npm with provenance
        run: npm publish --provenance --access public
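The `Extract version from tag` step strips the `refs/tags/plugin-v` prefix with shell parameter expansion. The same transformation sketched in Python for illustration (slightly stricter than `${GITHUB_REF#refs/tags/plugin-v}`, which passes non-matching refs through unchanged, whereas this raises):

```python
def version_from_ref(github_ref: str, prefix: str = "refs/tags/plugin-v") -> str:
    """Strip the release-tag prefix from a GitHub ref, e.g.
    'refs/tags/plugin-v0.3.0' -> '0.3.0'."""
    if not github_ref.startswith(prefix):
        raise ValueError(f"not a plugin release tag: {github_ref}")
    return github_ref[len(prefix):]
```

A push of tag `plugin-v0.3.0` would therefore publish package version `0.3.0`.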
1 change: 1 addition & 0 deletions .gitignore
@@ -8,3 +8,4 @@ dist/
*/.DS_Store
*.DS_Store

node_modules/
24 changes: 22 additions & 2 deletions README.md
@@ -86,17 +86,37 @@ We also evaluated on academic RAG (Qwen3-32B, 4×A6000) and production MoE infer

### OpenClaw

**Option A: Native Plugin** (recommended — zero external dependencies)

```bash
openclaw plugins install @contextpilot-ai/contextpilot
```

Then enable in `~/.openclaw/openclaw.json`:

```json
{
  "plugins": {
    "slots": { "contextEngine": "contextpilot" },
    "entries": { "contextpilot": { "enabled": true } }
  }
}
```

Restart OpenClaw. Done — ContextPilot runs in-process, no proxy needed.

**Option B: HTTP Proxy** (for self-hosted models or custom backends)

```bash
pip install contextpilot

# Start proxy (points to your LLM backend)
python -m contextpilot.server.http_server \
    --port 8765 --infer-api-url http://localhost:30000  # SGLang
# or: --infer-api-url https://api.anthropic.com # Anthropic
# or: --infer-api-url https://api.openai.com # OpenAI
```

Then set OpenClaw's base URL to `http://localhost:8765/v1`. See the [full OpenClaw integration guide](docs/guides/openclaw.md) for UI setup, config file examples, and self-hosted model instructions.
Then set OpenClaw's base URL to `http://localhost:8765/v1`. See the [full OpenClaw integration guide](docs/guides/openclaw.md) for details.

---

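The `openclaw.json` snippet in the README diff above can also be emitted programmatically, e.g. from a setup script. A small sketch (the keys mirror the README example; `contextpilot_plugin_config` is a hypothetical helper, not part of the package):

```python
import json

def contextpilot_plugin_config() -> str:
    """Render the plugin-slot config shown in the README as a JSON string."""
    config = {
        "plugins": {
            "slots": {"contextEngine": "contextpilot"},
            "entries": {"contextpilot": {"enabled": True}},
        }
    }
    return json.dumps(config, indent=2)
```

Writing the result to `~/.openclaw/openclaw.json` and restarting OpenClaw would match the manual steps above.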
2 changes: 1 addition & 1 deletion contextpilot/__init__.py
@@ -55,7 +55,7 @@
    MEM0_AVAILABLE,
)

__version__ = "0.3.5.post2"
__version__ = "0.4.1"

__all__ = [
    # High-level pipeline API
74 changes: 66 additions & 8 deletions contextpilot/dedup/block_dedup.py
@@ -18,6 +18,7 @@
class DedupResult:
    blocks_deduped: int = 0
    blocks_total: int = 0
    system_blocks_matched: int = 0
    chars_before: int = 0
    chars_after: int = 0
    chars_saved: int = 0
@@ -94,11 +95,17 @@ def _dedup_text(
    result: DedupResult,
    min_block_chars: int,
    chunk_modulus: int,
    pre_seen: Optional[Dict[str, Tuple[int, str, int]]] = None,
) -> Optional[str]:
    """Core dedup loop shared by all entry points.

    Returns the deduped text if any blocks were deduped, or None otherwise.
    """
    if pre_seen:
        for h, origin in pre_seen.items():
            if h not in seen_blocks:
                seen_blocks[h] = origin

    blocks = _content_defined_chunking(text, chunk_modulus)
    if len(blocks) < 2:
        for b in blocks:
@@ -121,9 +128,11 @@
        result.blocks_total += 1

        if h in seen_blocks and seen_blocks[h][0] != msg_idx:
            _, orig_fn, _ = seen_blocks[h]
            orig_msg_idx, orig_fn, _ = seen_blocks[h]
            first_line = block.strip().split("\n")[0][:80]
            ref = f'[... "{first_line}" — identical to earlier {orig_fn} result, see above ...]'
            if orig_msg_idx == -1:
                result.system_blocks_matched += 1
            chars_saved = len(block) - len(ref)
            if chars_saved > 0:
                new_blocks.append(ref)
@@ -148,18 +157,40 @@
    return None


def _prescan_system_blocks(
    system_content: Optional[str],
    min_block_chars: int,
    chunk_modulus: int,
) -> Dict[str, Tuple[int, str, int]]:
    """Hash and register dedup-eligible blocks from system prompt content."""
    pre_seen: Dict[str, Tuple[int, str, int]] = {}
    if not isinstance(system_content, str) or not system_content.strip():
        return pre_seen

    blocks = _content_defined_chunking(system_content, chunk_modulus)
    for block_idx, block in enumerate(blocks):
        if len(block.strip()) < min_block_chars:
            continue
        h = _hash_block(block)
        if h not in pre_seen:
            pre_seen[h] = (-1, "system prompt", block_idx)
    return pre_seen


def dedup_chat_completions(
    body: dict,
    min_block_chars: int = MIN_BLOCK_CHARS,
    min_content_chars: int = MIN_CONTENT_CHARS,
    chunk_modulus: int = CHUNK_MODULUS,
    system_content: Optional[str] = None,
) -> DedupResult:
    messages = body.get("messages")
    if not isinstance(messages, list) or not messages:
        return DedupResult()

    tool_names = _build_tool_name_map_openai(messages)
    seen_blocks: Dict[str, Tuple[int, str, int]] = {}
    pre_seen = _prescan_system_blocks(system_content, min_block_chars, chunk_modulus)
    result = DedupResult()

    for idx, msg in enumerate(messages):
@@ -174,8 +205,14 @@ def dedup_chat_completions(
            fn_name = tool_names.get(tc_id, msg.get("name", "")) or "tool"

            new_content = _dedup_text(
                content, seen_blocks, idx, fn_name, result,
                min_block_chars, chunk_modulus,
                content,
                seen_blocks,
                idx,
                fn_name,
                result,
                min_block_chars,
                chunk_modulus,
                pre_seen=pre_seen,
            )
            if new_content is not None:
                original_len = len(content)
@@ -190,7 +227,13 @@
            )

    _dedup_assistant_code_blocks(
        messages, seen_blocks, result, min_block_chars, min_content_chars, chunk_modulus
        messages,
        seen_blocks,
        result,
        min_block_chars,
        min_content_chars,
        chunk_modulus,
        pre_seen=pre_seen,
    )

    return result
@@ -206,6 +249,7 @@ def _dedup_assistant_code_blocks(
    min_block_chars: int,
    min_content_chars: int,
    chunk_modulus: int,
    pre_seen: Optional[Dict[str, Tuple[int, str, int]]] = None,
) -> None:
    for idx, msg in enumerate(messages):
        if not isinstance(msg, dict) or msg.get("role") != "assistant":
@@ -249,8 +293,14 @@
                continue

            new_code = _dedup_text(
                code, seen_blocks, idx, "assistant", result,
                min_block_chars, chunk_modulus,
                code,
                seen_blocks,
                idx,
                "assistant",
                result,
                min_block_chars,
                chunk_modulus,
                pre_seen=pre_seen,
            )
            if new_code is not None:
                start, end = match.start(2), match.end(2)
@@ -273,13 +323,15 @@ def dedup_responses_api(
    min_block_chars: int = MIN_BLOCK_CHARS,
    min_content_chars: int = MIN_CONTENT_CHARS,
    chunk_modulus: int = CHUNK_MODULUS,
    system_content: Optional[str] = None,
) -> DedupResult:
    input_items = body.get("input")
    if not isinstance(input_items, list) or not input_items:
        return DedupResult()

    fn_names = _build_tool_name_map_responses(input_items)
    seen_blocks: Dict[str, Tuple[int, str, int]] = {}
    pre_seen = _prescan_system_blocks(system_content, min_block_chars, chunk_modulus)
    result = DedupResult()

    for idx, item in enumerate(input_items):
@@ -294,8 +346,14 @@
            fn_name = fn_names.get(call_id, call_id) or "tool"

            new_output = _dedup_text(
                output, seen_blocks, idx, fn_name, result,
                min_block_chars, chunk_modulus,
                output,
                seen_blocks,
                idx,
                fn_name,
                result,
                min_block_chars,
                chunk_modulus,
                pre_seen=pre_seen,
            )
            if new_output is not None:
                original_len = len(output)
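The dedup changes in this file pre-seed the seen-block table with system-prompt hashes (origin index -1) so later tool and assistant blocks can be replaced by a short reference. A toy, self-contained sketch of that flow (paragraph splitting stands in for the real content-defined chunking; all names are illustrative, not the package API):

```python
import hashlib
from typing import Dict, List, Optional, Tuple

Origin = Tuple[int, str]  # (message index, source label); -1 marks the system prompt

def _hash_block(block: str) -> str:
    return hashlib.sha256(block.encode("utf-8")).hexdigest()

def _chunk(text: str) -> List[str]:
    # Stand-in for content-defined chunking: split on blank lines.
    return [b for b in text.split("\n\n") if b.strip()]

def prescan_system(system_content: Optional[str],
                   min_block_chars: int = 20) -> Dict[str, Origin]:
    """Register dedup-eligible system-prompt blocks at index -1."""
    pre_seen: Dict[str, Origin] = {}
    if not system_content:
        return pre_seen
    for block in _chunk(system_content):
        if len(block.strip()) >= min_block_chars:
            pre_seen.setdefault(_hash_block(block), (-1, "system prompt"))
    return pre_seen

def dedup(messages: List[str], system_content: Optional[str] = None,
          min_block_chars: int = 20) -> Tuple[List[str], int]:
    """Replace repeated blocks with a reference; return (messages, chars saved)."""
    seen = prescan_system(system_content, min_block_chars)
    deduped, saved = [], 0
    for idx, text in enumerate(messages):
        new_blocks = []
        for block in _chunk(text):
            if len(block.strip()) < min_block_chars:
                new_blocks.append(block)
                continue
            h = _hash_block(block)
            # Only cross-message repeats are collapsed, as in the real engine.
            if h in seen and seen[h][0] != idx:
                ref = f"[... identical to earlier {seen[h][1]} content, see above ...]"
                if len(block) > len(ref):
                    saved += len(block) - len(ref)
                    new_blocks.append(ref)
                    continue
            seen.setdefault(h, (idx, "tool"))
            new_blocks.append(block)
        deduped.append("\n\n".join(new_blocks))
    return deduped, saved
```

The real engine additionally tracks tool-call names, counts `system_blocks_matched`, and only rewrites a block when the replacement is actually shorter; the sketch keeps that last guard (`len(block) > len(ref)`).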