# E2E Smoke Test

Installs `context-compression-engine` as a real consumer would and exercises every public export.

Catches issues that unit tests can't: a broken `exports` map, files missing from the tarball, ESM resolution failures, async path regressions.
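A minimal sketch of the consumer-style check behind this, assuming export names implied by the coverage table below; the real `smoke.mjs` goes much further than this:

```javascript
// Hypothetical helper: verify a module object exposes the expected
// public exports, failing fast with the names that are missing.
function checkExports(mod, expected) {
  const missing = expected.filter((name) => !(name in mod));
  if (missing.length > 0) {
    throw new Error(`missing exports: ${missing.join(", ")}`);
  }
  return true;
}

// In the real suite the module would come from:
//   const mod = await import("context-compression-engine");
// A stub stands in here so the sketch runs without the tarball installed.
const stub = { compress() {}, uncompress() {}, defaultTokenCounter() {} };
checkExports(stub, ["compress", "uncompress", "defaultTokenCounter"]);
```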
| 6 | + |
| 7 | +## Pipeline |
| 8 | + |
| 9 | +``` |
| 10 | +npm run test:e2e |
| 11 | +``` |
| 12 | + |
| 13 | +Runs: **build → pack → publint + attw → smoke test → cleanup** |
| 14 | + |
| 15 | +| Step | What it does | |
| 16 | +|------|-------------| |
| 17 | +| `npm run build` | Compile TypeScript | |
| 18 | +| `npm pack` | Create tarball from `files` field | |
| 19 | +| `publint --strict` | Validate package.json exports, files, types | |
| 20 | +| `attw` | Check TypeScript type resolution across all `moduleResolution` settings | |
| 21 | +| `smoke.mjs` | 68 assertions exercising the public API | |
| 22 | +| cleanup | Remove `.tgz`, `e2e/node_modules`, `e2e/package-lock.json` | |
| 23 | + |
| 24 | +Cleanup always runs, even on failure. The exit code from the smoke test is preserved. |
| 25 | + |
| 26 | +## Other scripts |
| 27 | + |
| 28 | +```bash |
| 29 | +# Test the published npm package (post-publish validation) |
| 30 | +npm run test:e2e:published |
| 31 | +``` |
| 32 | + |
| 33 | +## What the smoke test covers |
| 34 | + |
| 35 | +| # | Area | What's tested | |
| 36 | +|---|------|---------------| |
| 37 | +| 1 | Basic compress | ratio, token_ratio, message count, verbatim store | |
| 38 | +| 2 | Uncompress round-trip | lossless content restoration | |
| 39 | +| 3 | Dedup | exact duplicate detection (>=200 char messages) | |
| 40 | +| 4 | Token budget (fit) | binary search finds a recencyWindow that fits | |
| 41 | +| 5 | Token budget (tight) | correctly reports `fits: false` when impossible | |
| 42 | +| 6 | defaultTokenCounter | returns positive number | |
| 43 | +| 7 | Preserve keywords | keywords retained in compressed output | |
| 44 | +| 8 | sourceVersion | flows into compression metadata | |
| 45 | +| 9 | embedSummaryId | summary_id embedded in compressed content | |
| 46 | +| 10 | Factory functions | createSummarizer, createEscalatingSummarizer exported | |
| 47 | +| 11 | forceConverge | best-effort truncation, no regression | |
| 48 | +| 12 | Fuzzy dedup | runs without errors, message count preserved | |
| 49 | +| 13 | Provenance metadata | _cce_original structure (ids, summary_id, version) | |
| 50 | +| 14 | Missing verbatim store | missing_ids reported correctly | |
| 51 | +| 15 | Custom tokenCounter | invoked and used for ratio calculation | |
| 52 | +| 16 | Edge cases | empty input, single message | |
| 53 | +| 17 | Async path (mock summarizer) | compress returns Promise, summarizer called, round-trip works | |
| 54 | +| 18 | Async + token budget | async binary search produces fits/tokenCount/recencyWindow | |
| 55 | +| 19 | System role | system messages auto-preserved, never compressed | |
| 56 | +| 20 | tool_calls | messages with tool_calls pass through intact | |
| 57 | +| 21 | Re-compression | compress already-compressed output, recover via chained stores | |
| 58 | +| 22 | Recursive uncompress | nested provenance fully expanded | |
| 59 | +| 23 | minRecencyWindow | floor enforced during budget binary search | |
| 60 | +| 24 | Large conversation (31 msgs) | compression + lossless round-trip at scale | |
| 61 | +| 25 | Large conversation + budget | binary search converges on 50% budget target | |
| 62 | +| 26 | Verbatim store as object | uncompress accepts plain Record, not just function | |
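To make rows 1–2 concrete, here is a hypothetical sketch of a compress/uncompress round-trip assertion in the style of `smoke.mjs`. The function names, the `recencyWindow` option, the verbatim store, and the `_cce_original` shape are assumed from the table above; toy stand-ins are used so the sketch runs without the package installed:

```javascript
// Hypothetical stand-ins for the package's compress/uncompress exports,
// shaped after the rows above (verbatim store, _cce_original provenance,
// recencyWindow option, system messages never compressed).
function compress(messages, { recencyWindow = 2 } = {}) {
  const verbatimStore = {};
  const cut = Math.max(0, messages.length - recencyWindow);
  const out = messages.map((m, i) => {
    if (i >= cut || m.role === "system") return m; // recent + system preserved
    verbatimStore[`id-${i}`] = m.content;
    return { ...m, content: `[compressed id-${i}]`, _cce_original: { ids: [`id-${i}`] } };
  });
  return { messages: out, verbatimStore };
}

function uncompress(messages, verbatimStore) {
  return messages.map((m) =>
    m._cce_original
      ? { role: m.role, content: verbatimStore[m._cce_original.ids[0]] }
      : m,
  );
}

// Round-trip assertion in the style of rows 1-2.
const input = [
  { role: "system", content: "You are helpful." },
  { role: "user", content: "an older message, long enough to be worth compressing" },
  { role: "assistant", content: "the latest answer" },
];
const { messages, verbatimStore } = compress(input, { recencyWindow: 1 });
const restored = uncompress(messages, verbatimStore);
if (JSON.stringify(restored) !== JSON.stringify(input)) {
  throw new Error("round-trip was not lossless");
}
```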