diff --git a/README.md b/README.md index 23b94e5..4b3ebe7 100644 --- a/README.md +++ b/README.md @@ -227,34 +227,37 @@ Note: `pnpm test` does not run benchmarks. ## Benchmarks -There are two benchmark suites: - -- Core benchmarks (Tinybench) -- Load benchmarks (Vitest project `benchmark`) +Benchmarks are split by value, so the default run focuses on framework features that matter for real servers. ```bash -pnpm bench:core +pnpm bench +pnpm bench:value +pnpm bench:gold +pnpm bench:startup +pnpm bench:diagnostic +pnpm bench:soak pnpm bench:load pnpm bench:all ``` +- `bench` / `bench:value`: value-focused suite. Commands, net events, RPC, lifecycle, ticks, binary path, bootstrap. +- `bench:gold`: hot-path load scenarios only. +- `bench:startup`: startup and registration cost. +- `bench:diagnostic`: internal and low-level synthetic benchmarks. +- `bench:soak`: long-running stress scenario. + ### Snapshot (latest local run) -These values are a small extract from the latest local run (`1.0.0-beta.1`, Feb 26, 2026). Results vary by machine. - -- **Core** - - BinaryService - classify response type: `~18.25M ops/sec` (mean `~0.055μs`, p95 `~0.076μs`) - - EventInterceptor - getStatistics (1000 events): `~17.78M ops/sec` (mean `~0.056μs`) - - RuntimeConfig - resolve CORE mode: `~10.49M ops/sec` (mean `~0.095μs`) - - Decorators - define metadata (Command): `~6.92M ops/sec` (mean `~0.145μs`) - - EventBus - multiple event types: `~2.57M ops/sec` (mean `~0.390μs`) - - DI - resolve simple service: `~1.78M ops/sec` (mean `~0.560μs`) -- **Load** - - Commands - 500 players (validated): `~4.78M ops/sec` (p95 `~0.008ms`) - - Pipeline - validated (500 players): `~4.79M ops/sec` (p95 `~0.024ms`) - - Pipeline - full (500 players): `~2.34M ops/sec` (p95 `~0.011ms`) - - RPC - schema generation complex (500 methods): `~705K ops/sec` (p95 `~0.335ms`) - - Commands - 500 players (concurrent): `~6.31K ops/sec` (p95 `~76.00ms`) +Use `benchmark/reports/` as the source of truth. Results vary by machine and should be compared relatively, not treated as product guarantees. + +- Primary benchmark targets: + - full command execution + - full net event handling + - RPC processing + - player lifecycle churn + - tick budget impact + - bootstrap cost + - binary transport cost Full reports and methodology are available in benchmark/README.md. diff --git a/RELEASE.md b/RELEASE.md index e6d61a1..1a1825d 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -1,20 +1,16 @@ -## OpenCore Framework v1.0.6 +## OpenCore Framework v1.0.7 ### Added - Added `RpcPublicError` and `serializeRpcError()` for safe RPC error exposure. -- Added `PUBLIC_RPC_ERROR_MESSAGE` as the default public message for unexpected RPC failures. -- Added transport exports for RPC error helpers through `src/adapters/contracts/transport`. -- Added unit and integration coverage for RPC error serialization and server RPC flow logging. +- Added structured benchmark suites and reporting. ### Changed -- Updated server RPC processing to log handler failures with event, handler, player, and account context. -- Updated RPC handling to preserve explicit public errors while masking unexpected internal errors. -- Refined the RPC path so invalid payloads and session issues are logged with clearer warnings. +- Updated server RPC logging and error handling for clearer failures. +- Updated benchmark metrics to include duration tracking and line-delimited JSON output. ### Fixed - Fixed RPC error leakage by sanitizing unexpected exceptions before they are returned to the client. -- Fixed RPC logger behavior so exposed errors can pass through with their original message and name. -- Fixed contract alignment across transport, server RPC processing, and test coverage. +- Fixed `PlayerPersistenceService` bootstrap so `PlayerPersistenceContract` implementations run on session load. ### Notes -- This release tracks the `fix/rpc-logger` merge request (#51) and keeps the release note focused on the RPC error-handling changes. +- This release tracks the current branch changes for RPC logging, benchmarks, and session persistence. diff --git a/benchmark/LATEST_REPORT.md b/benchmark/LATEST_REPORT.md new file mode 100644 index 0000000..ced9aea --- /dev/null +++ b/benchmark/LATEST_REPORT.md @@ -0,0 +1,306 @@ +# OpenCore Framework Benchmark Report + +Generated from: + +- `benchmark/reports/benchmark-2026-04-01T19-12-11-034Z.json` +- `benchmark/reports/benchmark-2026-04-01T19-12-11-030Z.txt` +- `benchmark/reports/.load-metrics.json` + +Run metadata: + +- Timestamp: `2026-04-01T19:03:36.782Z` +- Version: `1.0.6` + +## Executive Summary + +This run reflects the new benchmark strategy for OpenCore. + +The benchmark suite is now split by value: + +- `gold`: real framework feature paths +- `startup`: boot and registration cost +- `diagnostic`: low-level internals for tuning +- `soak`: longer-running stress checks + +This matters because the previous suite mixed product-facing signals with synthetic internals. The new report is much easier to interpret for real servers. + +Suite distribution in this run: + +- `gold`: 227 results +- `startup`: 30 results +- `diagnostic`: 283 results +- `soak`: 11 results + +## Diagnostic Summary + +## What is working well + +### 1. Gold benchmarks now measure actual framework value + +The most useful benchmarks in this run are the ones that exercise real framework features: + +- full command execution +- full net event handling +- RPC schema and dispatch paths +- player lifecycle churn +- tick handler cost +- binary transport paths +- bootstrap / startup registration + +This is a large improvement over microbenchmarks that only measure metadata reads or helper internals. + +### 2. Startup costs are visible and actionable + +The startup suite gives useful numbers for: + +- metadata scanning +- dependency injection setup +- schema generation +- bootstrap controller registration + +This is useful for release quality and for understanding how fast a server resource graph can initialize. + +### 3. Concurrency bottlenecks are now exposed honestly + +The most important runtime signal in the new report is not the best-case path, but the degradation under contention. + +That shows up clearly in: + +- command concurrent execution +- tick parallel execution +- large payload binary serialization + +These are meaningful server-facing signals. + +## What still needs attention + +### 1. Some diagnostic benchmarks still report zero iterations + +Examples in this run: + +- `DI - Resolve with 1 dependency` +- `DI - Resolve with 2 dependencies` +- `DI - Resolve with 3 dependencies` +- `DI - Resolve 100 times (complex)` +- several `AccessControl` success-path scenarios + +These should either be fixed or removed from the primary diagnostic output. Right now they create noise and reduce trust in that part of the suite. + +### 2. Some low-sample scenarios still have weak statistical value + +Examples: + +- `BinaryService - Buffer split + parse` scenarios with only `1` operation +- `BinaryService - Pending requests lifecycle` scenarios with only `2` operations +- connect/disconnect cycle scenarios with only `3` operations + +These can still be useful as sanity checks, but their `p95` and `p99` are not as meaningful as the larger-sample runs. + +### 3. Diagnostic still contains more data than decision-makers need + +This is acceptable because `diagnostic` is now demoted, but it confirms the design decision: + +- keep `gold` for product decisions +- keep `diagnostic` for tuning + +## Final Diagnosis + +OpenCore now has a benchmark system that is directionally correct for a framework runtime: + +- it measures feature paths instead of mostly internal trivia +- it separates startup from hot paths +- it surfaces concurrency pain points instead of hiding them in averages +- it produces a report that can support engineering decisions + +The main remaining cleanup is in the diagnostic tier, not in the gold suite. + +## Key Results + +## Gold Suite + +### Commands + +| Scenario | Throughput | p95 | Notes | +| --- | --- | --- | --- | +| `Command Full - Validated (100 players)` | `115.68K ops/sec` | `0.012ms` | Strong validated happy-path throughput | +| `Command Full - End-to-End (100 players)` | `863.50K ops/sec` | `0.0027ms` | Extremely cheap synthetic end-to-end path | +| `Command Full - Concurrent (100 players)` | `121.71 ops/sec` | `14.42ms` | Main contention signal | + +Takeaway: + +- happy-path command handling is strong +- concurrent saturation is where the runtime should be watched most closely + +### Net Events + +| Scenario | Throughput | p95 | Notes | +| --- | --- | --- | --- | +| `Net Events - Simple (10 players)` | `16.81K ops/sec` | `0.223ms` | base handler cost | +| `Net Events - Validated (10 players)` | `9.61K ops/sec` | `0.488ms` | validation overhead is visible | +| `Net Events - Full Event (small, 10 players)` | `74.42K ops/sec` | `0.029ms` | small payload path remains cheap | +| `Net Events - Full Event (medium, 10 players)` | `44.73K ops/sec` | `0.079ms` | moderate serialization cost | +| `Net Events - Full Event (large, 10 players)` | `27.68K ops/sec` | `0.113ms` | payload size starts to dominate | + +Takeaway: + +- payload size matters more than simple dispatch +- validated net events remain comfortably sub-millisecond in this run + +### RPC + +| Scenario | Throughput | p95 | Notes | +| --- | --- | --- | --- | +| `RPC - Schema generation simple (200 methods)` | `7.95K ops/sec` | `0.176ms` | strong simple-schema throughput | +| `RPC - Schema generation complex (200 methods)` | `3.06K ops/sec` | `0.400ms` | complex generation costs ~2-3x more | + +Takeaway: + +- RPC stays in a reasonable range even when schemas become more complex +- schema complexity is a real cost center in startup/registration paths + +### Player Lifecycle + +| Scenario | Throughput | p95 | Notes | +| --- | --- | --- | --- | +| `Player Lifecycle - Full Cycle (500 players)` | `200.55K ops/sec` | `0.0096ms` | strong lifecycle throughput | +| `Player Lifecycle - Concurrent Connections (500 players)` | `108.68K ops/sec` | `0.0046ms` | connection fan-out still healthy | +| `Player Lifecycle - Concurrent Disconnections (500 players)` | `1.83M ops/sec` | `0.00075ms` | disconnect path is very cheap | + +Takeaway: + +- lifecycle churn performs well +- connect cost is meaningfully higher than disconnect cost, as expected + +### Tick Budget + +| Scenario | Throughput | p95 | Notes | +| --- | --- | --- | --- | +| `Tick - Real setTick (50 handlers)` | `93.12K ops/sec` | `0.021ms` | good light-handler budget | +| `Tick - 5 Handlers (medium workload)` | `18.45K ops/sec` | `0.098ms` | still acceptable under moderate work | +| `Tick - 5 Handlers (heavy workload)` | `2.26K ops/sec` | `0.559ms` | heavy work is the danger zone | +| `Tick - Parallel Execution` | `243.24 ops/sec` | `8.00ms` | expensive and not a default win | + +Takeaway: + +- small tick handlers are cheap +- heavy per-tick work remains one of the biggest practical risks for servers +- parallel tick execution is far more expensive than sequential in this run + +### BinaryService + +| Scenario | Throughput | p95 | Notes | +| --- | --- | --- | --- | +| `BinaryService - Parse mixed responses (500 ops)` | `1.20M ops/sec` | `0.0011ms` | very strong parse path | +| `BinaryService - Full round-trip (50 calls)` | `350.12K ops/sec` | `0.0092ms` | healthy round-trip path | +| `BinaryService - Serialize large payload (500 ops)` | `2.88K ops/sec` | `0.428ms` | large payload serialization is expensive | + +Takeaway: + +- binary transport is excellent for parse and smaller payloads +- large payload serialization is the main bottleneck here + +## Startup Suite + +### MetadataScanner + +| Scenario | Throughput | Median | p99 | +| --- | --- | --- | --- | +| `1 controller, 3 methods` | `743.86K ops/sec` | `1.27μs` | `4.49μs` | +| `3 controllers, 6 methods` | `390.84K ops/sec` | `2.48μs` | `4.61μs` | +| `10 controllers` | `112.61K ops/sec` | `8.73μs` | `21.15μs` | + +### Dependency Injection + +| Scenario | Throughput | Median | p99 | +| --- | --- | --- | --- | +| `Resolve simple service` | `1.92M ops/sec` | `0.48μs` | `1.36μs` | +| `Resolve 100 times (simple)` | `69.44K ops/sec` | `14.06μs` | `32.24μs` | + +### SchemaGenerator + +| Scenario | Throughput | Median | p99 | +| --- | --- | --- | --- | +| `1 param` | `42.58K ops/sec` | `22.33μs` | `71.08μs` | +| `3 params` | `28.79K ops/sec` | `33.19μs` | `98.64μs` | +| `5 params` | `17.29K ops/sec` | `55.17μs` | `142.94μs` | +| `batch 10 methods` | `3.18K ops/sec` | `0.298ms` | `11.48ms` | +| `batch 50 methods` | `628 ops/sec` | `1.45ms` | `14.17ms` | + +### Bootstrap Load + +| Scenario | Throughput | p95 | +| --- | --- | --- | +| `Bootstrap - 1 controller` | `2.81K ops/sec` | `1.11ms` | +| `Bootstrap - 10 controllers` | `1.16K ops/sec` | `1.46ms` | +| `Bootstrap - 50 controllers` | `396.99 ops/sec` | `2.84ms` | +| `Bootstrap - 100 controllers` | `205.87 ops/sec` | `6.37ms` | + +Takeaway: + +- startup remains healthy +- schema generation is the most expensive startup subsystem visible in this run + +## Diagnostic Suite + +The diagnostic suite still has value for framework maintainers, especially for: + +- Zod validation cost +- rate limiter scaling +- event bus fan-out cost +- decorator and metadata overhead + +Notable diagnostics: + +- `Zod - Simple schema validation`: `2.72M ops/sec` +- `RateLimiter - Single key check`: `3.50M ops/sec` +- `EventBus - Emit to 1 handler`: `4.56M ops/sec` +- `EventBus - Emit to 100 handlers`: `131.12K ops/sec` + +However, this suite still contains scenarios with zero iterations and should not be treated as the primary external benchmark story. + +## Engineering Conclusions + +## What these numbers say about the framework + +1. OpenCore hot paths are fast when kept on the intended model. +2. Validation and typed dispatch are not the dominant cost in most happy paths. +3. Concurrency pressure is more important than raw single-path throughput. +4. Tick workload and large payload serialization are the practical danger areas. +5. Startup cost is acceptable and mostly dominated by schema generation scale. + +## What matters most to server developers + +For real servers, the most useful numbers in this report are: + +- command concurrent throughput and tail latency +- net event cost by payload size +- tick budget under realistic handler counts +- lifecycle churn under hundreds of players +- bootstrap time as controller count grows + +## Recommended Follow-up + +1. Fix or remove zero-iteration diagnostic benchmarks. +2. Increase sample counts for low-op scenarios like pending-request lifecycle and buffer split benchmarks. +3. Add memory and event-loop lag metrics to `gold` and `soak`. +4. Keep `gold` as the default benchmark story for docs and landing pages. + +## Final Verdict + +This benchmark run supports the new benchmark direction. + +OpenCore now has a benchmark system that is useful for: + +- framework engineering +- release validation +- communicating real runtime behavior + +The benchmark story is no longer “here are some fast internals”. + +It is now closer to: + +- here is what commands cost +- here is what net events cost +- here is what ticks cost +- here is how lifecycle behaves at scale +- here is what startup actually costs diff --git a/benchmark/README.md b/benchmark/README.md index b3198e8..72ececf 100644 --- a/benchmark/README.md +++ b/benchmark/README.md @@ -1,199 +1,87 @@ # Benchmark System – OpenCore Framework -A comprehensive benchmark suite designed to measure the performance, scalability, and internal overhead of the OpenCore framework under realistic and stress conditions. +The benchmark suite is organized around framework value, not around measuring every internal helper. -This repository focuses on **measurable data**, not marketing numbers. +## Benchmark Tiers ---- +### Gold -## 📋 Description +These are the main framework feature benchmarks and should be the default path for local checks and regression tracking. -The benchmark system evaluates OpenCore in two complementary dimensions: +- full command execution +- full net event handling +- RPC processing +- player lifecycle churn +- tick budget impact +- binary transport cost -1. **Core Benchmarks (Tinybench)** - Pure framework internals, without FiveM dependencies. +### Startup -2. **Load Benchmarks (Vitest)** - Simulated FiveM-like workloads with multiple virtual players, commands, and net events. +These measure initialization and registration cost. ---- +- bootstrap controller scanning +- metadata scanning +- dependency injection setup +- schema generation -## 🏗️ Architecture +### Diagnostic -### Core Benchmarks (Tinybench) +These are synthetic or low-level internals. They are useful for profiling, but they should not dominate the primary report. -Benchmarks targeting internal building blocks: +- validation internals +- rate limiter internals +- access control internals +- event bus internals +- decorators +- entity-system internals +- runtime config +- event interceptor -- **MetadataScanner** – decorator scanning & reflection -- **Dependency Injection** – tsyringe resolution cost -- **Zod Validation** – simple, complex and nested schemas -- **RateLimiterService** – key-based throttling under load -- **AccessControlService** – rank & permission checks -- **CoreEventBus** – event dispatch with variable handlers -- **Decorators** – metadata definition & read overhead -- **ParallelCompute** – sync vs parallel compute utilities -- **BinaryService** – JSON serialization, buffer splitting, pending request management, event dispatch -- **SchemaGenerator** – automatic Zod schema generation from TypeScript types, tuple processing -- **EntitySystem** – state management, metadata CRUD, snapshot/restore -- **AppearanceValidation** – ped appearance data validation at varying complexity -- **EventInterceptor** – DevMode circular buffer, filtering, statistics, listener notification -- **RuntimeConfig** – runtime options resolution and validation across modes +### Soak -### Load Benchmarks (Vitest) +Longer-running stress scenarios intended for nightly or release validation. -FiveM-like load simulation with increasing concurrency: - -- **Commands** – simple, validated, concurrent, end-to-end -- **Net Events** – serialization, validation, latency injection -- **Guards & Throttle** – permission and rate enforcement -- **Event Bus** – handler fan-out under concurrency -- **Bootstrap** – controller & metadata initialization -- **Pipeline** – full execution chain -- **Player Lifecycle** – bind / unbind / link operations -- **Stress Tests** – mixed scenarios with ticks, commands and events -- **BinaryService** – serialization throughput, response parsing, buffer splitting, pending request lifecycle -- **RPC Processor** – schema generation, validation pipeline, concurrent RPCs, error paths - ---- - -## 🚀 Usage - -### Installation - -```bash -pnpm install -``` - -### Run Benchmarks - -#### Core Benchmarks - -```bash -pnpm bench:core -# or -pnpm bench --core -``` - -#### Load Benchmarks +## Usage ```bash +pnpm bench +pnpm bench:value +pnpm bench:gold +pnpm bench:startup +pnpm bench:diagnostic +pnpm bench:soak pnpm bench:load -``` - -#### Full Suite - -```bash pnpm bench:all ``` ---- - -## 📊 Reports - -All runs generate reports in `benchmark/reports/`: - -- **`.txt`** – human-readable summary -- **`.json`** – machine-readable (CI, regression tracking) -- **`.html`** – interactive visual report - -Load benchmarks also maintain a rolling metrics file: - -``` -benchmark/reports/.load-metrics.json -``` - -These files are considered **local artifacts** and are typically gitignored. - ---- - -## 📈 Latest Benchmark Results - -**Framework version:** `1.0.0-beta.1` -**Run date:** Feb 26, 2026 (`2026-02-26T19:59:41.545Z`) -**Environment:** Local development machine (results vary by hardware) - -> ⚠️ The following is a **snapshot**, not a guarantee. -> Always consult the generated reports for authoritative data. - -> ℹ️ Some scenarios in this run report `0.00 ops/sec` because they were not exercised in this environment/configuration. - ---- - -### 🔹 Core Benchmarks (Tinybench) - -| Component | Throughput | Mean | p95 | -| --- | --- | --- | --- | -| EventInterceptor - getStatistics (1000 events) | ~17.78M ops/sec | ~0.056 us | ~0.087 us | -| RuntimeConfig - resolve CORE mode | ~10.49M ops/sec | ~0.095 us | ~0.107 us | -| Decorators - define metadata (Command) | ~6.92M ops/sec | ~0.145 us | ~0.247 us | -| RateLimiter - single key check | ~3.06M ops/sec | ~0.327 us | ~0.463 us | -| EventBus - multiple event types | ~2.57M ops/sec | ~0.390 us | ~0.570 us | -| DI - resolve simple service | ~1.78M ops/sec | ~0.560 us | ~0.769 us | -| BinaryService - full round-trip (serialize + parse + classify) | ~664K ops/sec | ~1.505 us | ~1.529 us | -| SchemaGenerator - batch 50 methods | ~406 ops/sec | ~2.46 ms | ~12.86 ms | -| BinaryService - classify response type (ok/error/event) | ~18.25M ops/sec | ~0.055 us | ~0.076 us | - ---- - -### 🔹 Load Benchmarks (Vitest) - -| Scenario | Players | Throughput | Mean | p95 | p99 | Error Rate | -| --- | --- | --- | --- | --- | --- | --- | -| Commands - 500 players (simple) | 500 | ~80.14K ops/sec | ~0.132 ms | ~0.226 ms | ~0.348 ms | 0% | -| Commands - 500 players (validated) | 500 | ~4.78M ops/sec | ~0.0037 ms | ~0.0080 ms | ~0.0113 ms | 0% | -| Commands - 500 players (concurrent) | 500 | ~6.31K ops/sec | ~41.13 ms | ~76.00 ms | ~78.47 ms | 0% | -| Pipeline - simple (500 players) | 500 | ~92.04K ops/sec | ~0.130 ms | ~0.205 ms | ~0.249 ms | 0% | -| Pipeline - validated (500 players) | 500 | ~4.79M ops/sec | ~0.0110 ms | ~0.0242 ms | ~0.0584 ms | 0% | -| Pipeline - full (500 players) | 500 | ~2.34M ops/sec | ~0.0050 ms | ~0.0106 ms | ~0.0330 ms | 0% | -| RPC - schema generation simple (500 methods) | 500 | ~29.30K ops/sec | ~0.185 ms | ~0.227 ms | ~0.482 ms | 0% | -| RPC - schema generation complex (500 methods) | 500 | ~705.37K ops/sec | ~0.195 ms | ~0.335 ms | ~0.455 ms | 0% | -| RPC - concurrent RPCs (500 parallel) | 500 | ~251.10K ops/sec | ~1.03 ms | ~1.83 ms | ~1.97 ms | 0% | -| RPC - full pipeline (500 ops) | 500 | ~42.26K ops/sec | ~0.099 ms | ~0.144 ms | ~0.206 ms | 0% | -| RPC - validation error path (500 ops) | 500 | 0.00 ops/sec | ~0.042 ms | ~0.077 ms | ~0.128 ms | 100% | - -#### Quick takeaways - -- Validated command and pipeline paths stay in microseconds at 500 players in this run. -- The dominant latency outlier is `Commands - concurrent`, which intentionally stresses queueing/scheduling behavior. -- RPC concurrent scenarios remain sub-2ms at p95 with zero error rate in successful-path tests. - ---- - -## 📁 Directory Structure - -``` -benchmark/ -├── core/ # Tinybench benchmarks -├── load/ # Vitest load benchmarks -├── utils/ # Shared benchmark utilities -├── reports/ # Generated reports (gitignored) -├── index.ts # Entry point -└── README.md -``` +## What The Default Suite Optimizes For ---- +The default suite tries to answer questions that matter to server developers: -## 🎯 Goals +1. What is the cost of the real command pipeline? +2. What is the cost of validated net event handling? +3. How does RPC behave under realistic concurrency? +4. What player churn can the runtime absorb? +5. What tick budget is consumed as handlers grow? +6. What does startup cost look like as controllers increase? -This benchmark system exists to: +## Reports -1. **Quantify performance** – not assume it -2. **Validate scalability** – 10 → 500 players -3. **Detect regressions** – across versions -4. **Expose bottlenecks** – early and visibly -5. **Support documentation** – with real numbers +Reports are generated under `benchmark/reports/`. ---- +- `.txt`: human-readable summary +- `.json`: machine-readable output +- `.html`: interactive report -## 📝 Notes +Load benchmark runs also append metrics to `benchmark/reports/.load-metrics.json` as a local artifact. -- Benchmarks are CPU-bound and hardware-dependent -- Latency-injected scenarios simulate network conditions -- Results should be compared **relatively**, not absolutely -- This system is intended for regression tracking, not marketing claims +## Latest Run Report ---- +For a full diagnosis and interpretation of the latest benchmark pass, see `benchmark/LATEST_REPORT.md`. -## 📄 License +## Notes -MPL-2.0 – see LICENSE in the project root +- Compare runs relatively, not absolutely. +- `bench` intentionally focuses on high-value framework features. +- `bench:diagnostic` is where low-level synthetic benchmarks live. +- `bench:all` remains available when full coverage is needed. diff --git a/benchmark/core/metadata-scanner.bench.ts b/benchmark/core/metadata-scanner.bench.ts index da692e8..3b1e3e3 100644 --- a/benchmark/core/metadata-scanner.bench.ts +++ b/benchmark/core/metadata-scanner.bench.ts @@ -2,6 +2,7 @@ import { Bench } from 'tinybench' import { container, injectable } from 'tsyringe' import type { DecoratorProcessor } from '../../src/kernel/di/decorator-processor' import { MetadataScanner } from '../../src/kernel/di/metadata.scanner' +import { loggers } from '../../src/kernel/logger' import { METADATA_KEYS } from '../../src/runtime/server/system/metadata-server.keys' import { resetContainer } from '../../tests/helpers/di.helper' @@ -61,6 +62,9 @@ defineCommandMeta(TestController2.prototype, 'method1', 'test4') defineCommandMeta(TestController2.prototype, 'method2', 'test5') defineCommandMeta(TestController3.prototype, 'method1', 'test6') +loggers.scanner.info = () => {} +loggers.scanner.debug = () => {} + export async function runMetadataScannerBenchmark(): Promise { const bench = new Bench({ time: 1000 }) diff --git a/benchmark/core/schema-generator.bench.ts b/benchmark/core/schema-generator.bench.ts index 724b5db..f145d67 100644 --- a/benchmark/core/schema-generator.bench.ts +++ b/benchmark/core/schema-generator.bench.ts @@ -3,7 +3,7 @@ import { Bench } from 'tinybench' import { z } from 'zod' import { Player } from '../../src/runtime/server/entities/player' import { generateSchemaFromTypes } from '../../src/runtime/server/system/schema-generator' -import { processTupleSchema } from '../../src/runtime/server/helpers/process-tuple-schema' +import { processTupleSchema } from '../../src/runtime/shared/helpers/process-tuple-schema' /** * Benchmarks for the automatic schema generation system and tuple processing. diff --git a/benchmark/index.ts b/benchmark/index.ts index fc5aeb2..6e7a4f4 100644 --- a/benchmark/index.ts +++ b/benchmark/index.ts @@ -38,211 +38,138 @@ function percentileHelper(sorted: number[], p: number): number { } function convertTinybenchResult(result: any): BenchmarkMetrics { - const samples = result.samples || [] - const sorted = [...samples].sort((a: number, b: number) => a - b) + const latency = result.latency + const throughput = result.throughput + const samples = latency?.samples || [] + + if (!latency || !throughput) { + return { + name: result.name || 'unknown', + suite: 'diagnostic', + iterations: 0, + mean: 0, + min: 0, + max: 0, + median: 0, + p75: 0, + p99: 0, + stdDev: 0, + opsPerSec: 0, + totalTime: result.totalTime || 0, + } + } return { name: result.name || 'unknown', - iterations: samples.length, - mean: result.mean || 0, - min: result.min || 0, - max: result.max || 0, - median: percentileHelper(sorted, 50), - p95: percentileHelper(sorted, 95), - p99: percentileHelper(sorted, 99), - stdDev: result.sd || 0, - opsPerSec: result.mean ? 1000 / result.mean : 0, + suite: 'diagnostic', + iterations: latency.samplesCount || 0, + mean: latency.mean || 0, + min: latency.min || 0, + max: latency.max || 0, + median: latency.p50 || percentileHelper(samples, 50), + p75: latency.p75 || percentileHelper(samples, 75), + p99: latency.p99 || percentileHelper(samples, 99), + stdDev: latency.sd || 0, + opsPerSec: throughput.mean || 0, totalTime: result.totalTime || 0, } } const args = process.argv.slice(2) -const runCore = args.includes('--core') || args.includes('--all') -const runLoad = args.includes('--load') || args.includes('--all') const runAll = args.includes('--all') - -if (!runCore && !runLoad && !runAll) { +const runValue = args.includes('--value') +const runStartup = args.includes('--startup') +const runDiagnostic = args.includes('--diagnostic') +const runLegacyCore = args.includes('--core') +const runLegacyLoad = args.includes('--load') + +const shouldRunStartupCore = runAll || runValue || runStartup +const shouldRunDiagnosticCore = runAll || runDiagnostic || runLegacyCore +const loadProjects = [ + ...(runAll || runValue || runLegacyLoad + ? [{ project: 'benchmark-gold', suite: 'gold' as const }] + : []), + ...(runAll || runValue || runStartup + ? [{ project: 'benchmark-startup', suite: 'startup' as const }] + : []), + ...(runAll || runDiagnostic + ? [{ project: 'benchmark-diagnostic', suite: 'diagnostic' as const }] + : []), + ...(runAll ? [{ project: 'benchmark-soak', suite: 'soak' as const }] : []), +] + +if (!runValue && !runStartup && !runDiagnostic && !runLegacyCore && !runLegacyLoad && !runAll) { console.log('Usage:') - console.log(' --core Run core benchmarks (Tinybench)') - console.log(' --load Run load benchmarks (Vitest)') - console.log(' --all Run all benchmarks') + console.log(' --value Run value-focused framework benchmarks') + console.log(' --startup Run startup/bootstrap benchmarks') + console.log(' --diagnostic Run low-level diagnostic benchmarks') + console.log(' --load Run gold + startup load benchmarks') + console.log(' --core Run diagnostic core benchmarks') + console.log(' --all Run all benchmark suites') process.exit(0) } -async function runCoreBenchmarks(): Promise { - console.log('\n🔬 Running Core Benchmarks (Tinybench)...\n') +async function runCoreBenchmarks( + suite: 'startup' | 'diagnostic', +): Promise { + console.log(`\n🔬 Running ${suite === 'startup' ? 'Startup' : 'Diagnostic'} Core Benchmarks (Tinybench)...\n`) const results: BenchmarkMetrics[] = [] - // Metadata Scanner - console.log('Running MetadataScanner benchmark...') - const scannerBench = await runMetadataScannerBenchmark() - await scannerBench.warmup() - await scannerBench.run() - for (const task of scannerBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) - } - } - - // Dependency Injection - console.log('Running DependencyInjection benchmark...') - const diBench = await runDependencyInjectionBenchmark() - await diBench.warmup() - await diBench.run() - for (const task of diBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) - } - } - - // Validation - console.log('Running Validation benchmark...') - const validationBench = await runValidationBenchmark() - await validationBench.warmup() - await validationBench.run() - for (const task of validationBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) - } - } - - // Rate Limiter - console.log('Running RateLimiter benchmark...') - const rateLimiterBench = await runRateLimiterBenchmark() - await rateLimiterBench.warmup() - await rateLimiterBench.run() - for (const task of rateLimiterBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) - } - } - - console.log('Running AccessControl benchmark...') - const accessControlBench = await runAccessControlBenchmark() - await accessControlBench.warmup() - await accessControlBench.run() - for (const task of accessControlBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) - } - } - - // Event Bus - console.log('Running EventBus benchmark...') - const eventBusBench = await runEventBusBenchmark() - await eventBusBench.warmup() - await eventBusBench.run() - for (const task of eventBusBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) - } - } - - // Decorators - console.log('Running Decorators benchmark...') - const decoratorsBench = await runDecoratorsBenchmark() - await decoratorsBench.warmup() - await decoratorsBench.run() - for (const task of decoratorsBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) - } - } + const collectResults = async (label: string, createBench: () => Promise) => { + console.log(`Running ${label} benchmark...`) + const bench = await createBench() + await bench.run() - // ParallelCompute - console.log('Running ParallelCompute benchmark...') - const parallelBench = await runParallelComputeBenchmark() - await parallelBench.warmup() - await parallelBench.run() - for (const task of parallelBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) - } - } - - // BinaryService - console.log('Running BinaryService benchmark...') - const binaryBench = await runBinaryServiceBenchmark() - await binaryBench.warmup() - await binaryBench.run() - for (const task of binaryBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) - } - } - - // SchemaGenerator - console.log('Running SchemaGenerator benchmark...') - const schemaBench = await runSchemaGeneratorBenchmark() - await schemaBench.warmup() - await schemaBench.run() - for (const task of schemaBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) - } - } - - // EntitySystem - console.log('Running EntitySystem benchmark...') - const entityBench = await runEntitySystemBenchmark() - await entityBench.warmup() - await entityBench.run() - for (const task of entityBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) - } - } - - // AppearanceValidation - console.log('Running AppearanceValidation benchmark...') - const appearanceBench = await runAppearanceValidationBenchmark() - await appearanceBench.warmup() - await appearanceBench.run() - for (const task of appearanceBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) + for (const task of bench.tasks) { + if (task.result) { + results.push({ ...convertTinybenchResult({ ...task.result, name: task.name }), suite }) + } } } - // EventInterceptor - console.log('Running EventInterceptor benchmark...') - const interceptorBench = await runEventInterceptorBenchmark() - await interceptorBench.warmup() - await interceptorBench.run() - for (const task of interceptorBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) - } + if (suite === 'startup') { + await collectResults('MetadataScanner', runMetadataScannerBenchmark) + await collectResults('DependencyInjection', runDependencyInjectionBenchmark) + await collectResults('SchemaGenerator', runSchemaGeneratorBenchmark) + return results } - // RuntimeConfig - console.log('Running RuntimeConfig benchmark...') - const runtimeBench = await runRuntimeConfigBenchmark() - await runtimeBench.warmup() - await runtimeBench.run() - for (const task of runtimeBench.tasks) { - if (task.result) { - results.push(convertTinybenchResult({ ...task.result, name: task.name })) - } - } + await collectResults('Validation', runValidationBenchmark) + await collectResults('RateLimiter', runRateLimiterBenchmark) + await collectResults('AccessControl', runAccessControlBenchmark) + await collectResults('EventBus', runEventBusBenchmark) + await collectResults('Decorators', runDecoratorsBenchmark) + await collectResults('ParallelCompute', runParallelComputeBenchmark) + await collectResults('BinaryService', runBinaryServiceBenchmark) + await collectResults('EntitySystem', runEntitySystemBenchmark) + await collectResults('AppearanceValidation', runAppearanceValidationBenchmark) + await collectResults('EventInterceptor', runEventInterceptorBenchmark) + await collectResults('RuntimeConfig', runRuntimeConfigBenchmark) return results } -async function runLoadBenchmarks(): Promise { - console.log('\n⚡ Running Load Benchmarks (Vitest)...\n') +async function runLoadBenchmarks( + projects: Array<{ project: string; suite: 'gold' | 'startup' | 'diagnostic' | 'soak' }>, +): Promise { + console.log(`\n⚡ Running Load Benchmarks (${projects.map((entry) => entry.project).join(', ')})...\n`) clearCollectedMetrics() - return new Promise((resolve, reject) => { + const runProject = (entry: { project: string; suite: 'gold' | 'startup' | 'diagnostic' | 'soak' }) => + new Promise((resolve, reject) => { const isWindows = process.platform === 'win32' const npmCmd = isWindows ? 'npx.cmd' : 'npx' - const vitest = spawn(npmCmd, ['vitest', 'run', '--project', 'benchmark', 'benchmark/load'], { + const vitest = spawn(npmCmd, ['vitest', 'run', '--project', entry.project], { stdio: ['inherit', 'pipe', 'pipe'], shell: isWindows, cwd: process.cwd(), + env: { + ...process.env, + BENCHMARK_SUITE: entry.suite, + }, }) let output = '' @@ -264,19 +191,24 @@ async function runLoadBenchmarks(): Promise { vitest.on('close', (code) => { if (code !== 0) { - console.warn(`\n⚠️ Vitest exited with code ${code}`) + reject(new Error(`Vitest benchmark project exited with code ${code}`)) + return } - - const metrics = readCollectedMetrics() - console.log(`\n📊 Collected ${metrics.length} load test metrics\n`) - resolve(metrics) + resolve() }) vitest.on('error', (err) => { - console.error('Error spawning vitest:', err) - resolve([]) + reject(err) }) }) + + for (const entry of projects) { + await runProject(entry) + } + + const metrics = readCollectedMetrics() + console.log(`\n📊 Collected ${metrics.length} load test metrics\n`) + return metrics } async function main() { @@ -290,12 +222,16 @@ async function main() { load: [] as LoadTestMetrics[], } - if (runCore || runAll) { - report.core = await runCoreBenchmarks() + if (shouldRunStartupCore) { + report.core.push(...(await runCoreBenchmarks('startup'))) + } + + if (shouldRunDiagnosticCore) { + report.core.push(...(await runCoreBenchmarks('diagnostic'))) } - if (runLoad || runAll) { - report.load = await runLoadBenchmarks() + if (loadProjects.length > 0) { + report.load = await runLoadBenchmarks(loadProjects) } printReport(report) diff --git a/benchmark/load/bootstrap.load.bench.ts b/benchmark/load/bootstrap.load.bench.ts index 6e1a891..4953e17 100644 --- a/benchmark/load/bootstrap.load.bench.ts +++ b/benchmark/load/bootstrap.load.bench.ts @@ -47,6 +47,24 @@ describe('Bootstrap Load Benchmarks', () => { let commandService: LocalCommandImplementation let processor: CommandProcessor + function collectScanMetrics(name: string, controllers: Array object>, iterations = 5) { + const timings: number[] = [] + + for (let i = 0; i < iterations; i++) { + commandService = new LocalCommandImplementation() + processor = new CommandProcessor(commandService) + const scanner = new MetadataScanner([processor]) + + const start = performance.now() + scanner.scan(controllers) + const end = performance.now() + + timings.push(end - start) + } + + return calculateLoadMetrics(timings, name, controllers.length, iterations, 0) + } + beforeEach(() => { resetCitizenFxMocks() @@ -55,79 +73,44 @@ describe('Bootstrap Load Benchmarks', () => { }) it('Bootstrap - Scan 1 controller', async () => { - const scanner = new MetadataScanner([processor]) - - const start = performance.now() - scanner.scan([TestController1]) - const end = performance.now() + const metrics = collectScanMetrics('Bootstrap - 1 controller', [TestController1]) - const timing = end - start - expect(timing).toBeLessThan(100) - - console.log(`[LOAD] Bootstrap - 1 controller: ${timing.toFixed(2)}ms`) + expect(metrics.successCount).toBeGreaterThan(0) + reportLoadMetric(metrics) }) it('Bootstrap - Scan 3 controllers', async () => { - const scanner = new MetadataScanner([processor]) + const metrics = collectScanMetrics('Bootstrap - 3 controllers', [ + TestController1, + TestController2, + TestController3, + ]) - const start = performance.now() - scanner.scan([TestController1, TestController2, TestController3]) - const end = performance.now() - - const timing = end - start - expect(timing).toBeLessThan(200) - - console.log(`[LOAD] Bootstrap - 3 controllers: ${timing.toFixed(2)}ms`) + expect(metrics.successCount).toBeGreaterThan(0) + reportLoadMetric(metrics) }) it('Bootstrap - Scan 10 controllers (simulated)', async () => { - const scanner = new MetadataScanner([processor]) - const controllers = Array.from({ length: 10 }, () => TestController1) + const metrics = collectScanMetrics('Bootstrap - 10 controllers', controllers) - const start = performance.now() - scanner.scan(controllers) - const end = performance.now() - - const timing = end - start - expect(timing).toBeLessThan(500) - - console.log(`[LOAD] Bootstrap - 10 controllers: ${timing.toFixed(2)}ms`) + expect(metrics.successCount).toBeGreaterThan(0) + reportLoadMetric(metrics) }) it('Bootstrap - Scan 50 controllers (simulated)', async () => { - const scanner = new MetadataScanner([processor]) - const controllers = Array.from({ length: 50 }, () => TestController1) + const metrics = collectScanMetrics('Bootstrap - 50 controllers', controllers) - const start = performance.now() - scanner.scan(controllers) - const end = performance.now() - - const timing = end - start - expect(timing).toBeLessThan(2000) - - console.log(`[LOAD] Bootstrap - 50 controllers: ${timing.toFixed(2)}ms`) + expect(metrics.successCount).toBeGreaterThan(0) + reportLoadMetric(metrics) }) it('Bootstrap - Scan 100 controllers (simulated)', async () => { - const scanner = new MetadataScanner([processor]) - const controllers = Array.from({ length: 100 }, () => TestController1) + const metrics = collectScanMetrics('Bootstrap - 100 controllers', controllers) - const timings: number[] = [] - const iterations = 5 - - for (let i = 0; i < iterations; i++) { - const start = performance.now() - scanner.scan(controllers) - const end = performance.now() - timings.push(end - start) - } - - const metrics = calculateLoadMetrics(timings, 'Bootstrap - 100 controllers', 100, iterations, 0) - - expect(metrics.mean).toBeLessThan(5000) + expect(metrics.successCount).toBeGreaterThan(0) reportLoadMetric(metrics) }) diff --git a/benchmark/load/commands.load.bench.ts b/benchmark/load/commands.load.bench.ts index 29d25fc..f8c5f05 100644 --- a/benchmark/load/commands.load.bench.ts +++ b/benchmark/load/commands.load.bench.ts @@ -141,6 +141,7 @@ describe('Commands Load Benchmarks', () => { const timings: number[] = [] let successCount = 0 let errorCount = 0 + const scenarioStart = performance.now() const promises = players.map(async (player) => { const start = performance.now() @@ -155,6 +156,7 @@ describe('Commands Load Benchmarks', () => { }) await Promise.all(promises) + const scenarioEnd = performance.now() const metrics = calculateLoadMetrics( timings, @@ -162,6 +164,7 @@ describe('Commands Load Benchmarks', () => { playerCount, successCount, errorCount, + scenarioEnd - scenarioStart, ) expect(metrics.successCount).toBe(playerCount) diff --git a/benchmark/load/core-events.load.bench.ts b/benchmark/load/core-events.load.bench.ts index de43071..0bb0354 100644 --- a/benchmark/load/core-events.load.bench.ts +++ b/benchmark/load/core-events.load.bench.ts @@ -102,6 +102,7 @@ describe('Core Events Load Benchmarks', () => { const players = PlayerFactory.createPlayers(playerCount) const timings: number[] = [] let successCount = 0 + const scenarioStart = performance.now() const promises = players.map(async (player) => { const start = performance.now() @@ -115,6 +116,7 @@ describe('Core Events Load Benchmarks', () => { }) await Promise.all(promises) + const scenarioEnd = performance.now() unsubscribe() @@ -124,6 +126,7 @@ describe('Core Events Load Benchmarks', () => { playerCount, successCount, 0, + scenarioEnd - scenarioStart, ) expect(handlerCallCount).toBe(playerCount) diff --git a/benchmark/load/net-events-full.load.bench.ts b/benchmark/load/net-events-full.load.bench.ts index 9e02f45..48fada4 100644 --- a/benchmark/load/net-events-full.load.bench.ts +++ b/benchmark/load/net-events-full.load.bench.ts @@ -94,7 +94,10 @@ describe('Net Events Full Load Benchmarks', () => { for (const player of players) { const start = performance.now() try { - nodeEvents.simulateClientEvent('test:event', player.clientID, { action: 'test', value: 123 }) + await nodeEvents.simulateClientEventAsync('test:event', player.clientID, { + action: 'test', + value: 123, + }) const end = performance.now() timings.push(end - start) successCount++ @@ -131,7 +134,7 @@ describe('Net Events Full Load Benchmarks', () => { for (const player of players) { const start = performance.now() try { - nodeEvents.simulateClientEvent('test:validated', player.clientID, { + await nodeEvents.simulateClientEventAsync('test:validated', player.clientID, { action: 'transfer', amount: 100, targetId: 123, @@ -174,7 +177,7 @@ describe('Net Events Full Load Benchmarks', () => { serializationTimings.push(serializationMetrics.totalTime) const start = performance.now() - nodeEvents.simulateClientEvent('test:event', player.clientID, payload) + await nodeEvents.simulateClientEventAsync('test:event', player.clientID, payload) const end = performance.now() eventTimings.push(end - start) } @@ -222,7 +225,7 @@ describe('Net Events Full Load Benchmarks', () => { serializationTimings.push(serializationMetrics.totalTime) const start = performance.now() - nodeEvents.simulateClientEvent('test:event', player.clientID, payload) + await nodeEvents.simulateClientEventAsync('test:event', player.clientID, payload) const end = performance.now() timings.push(end - start) } @@ -276,7 +279,7 @@ describe('Net Events Full Load Benchmarks', () => { await new Promise((resolve) => setTimeout(resolve, latency)) } - nodeEvents.simulateClientEvent('test:event', player.clientID, payload) + await nodeEvents.simulateClientEventAsync('test:event', player.clientID, payload) const end = performance.now() timings.push(end - start) } @@ -306,11 +309,15 @@ describe('Net Events Full Load Benchmarks', () => { const timings: number[] = [] let successCount = 0 let errorCount = 0 + const scenarioStart = performance.now() const promises = players.map(async (player) => { const start = performance.now() try { - nodeEvents.simulateClientEvent('test:event', player.clientID, { action: 'test', value: 123 }) + await nodeEvents.simulateClientEventAsync('test:event', player.clientID, { + action: 'test', + value: 123, + }) const end = performance.now() timings.push(end - start) successCount++ @@ -320,6 +327,7 @@ describe('Net Events Full Load Benchmarks', () => { }) await Promise.all(promises) + const scenarioEnd = performance.now() const metrics = calculateLoadMetrics( timings, @@ -327,6 +335,7 @@ describe('Net Events Full Load Benchmarks', () => { playerCount, successCount, errorCount, + scenarioEnd - scenarioStart, ) expect(metrics.errorRate).toBeLessThan(0.1) diff --git a/benchmark/load/net-events.load.bench.ts b/benchmark/load/net-events.load.bench.ts index 919880c..58e1c46 100644 --- a/benchmark/load/net-events.load.bench.ts +++ b/benchmark/load/net-events.load.bench.ts @@ -81,7 +81,7 @@ describe('Net Events Load Benchmarks', () => { for (const player of players) { const start = performance.now() try { - nodeEvents.simulateClientEvent('test:event', player.clientID, { + await nodeEvents.simulateClientEventAsync('test:event', player.clientID, { action: 'transfer', amount: 100, targetId: 123, @@ -119,11 +119,12 @@ describe('Net Events Load Benchmarks', () => { const timings: number[] = [] let successCount = 0 let errorCount = 0 + const scenarioStart = performance.now() const promises = players.map(async (player) => { const start = performance.now() try { - nodeEvents.simulateClientEvent('test:event', player.clientID, { + await nodeEvents.simulateClientEventAsync('test:event', player.clientID, { action: 'transfer', amount: 100, targetId: 123, @@ -137,6 +138,7 @@ describe('Net Events Load Benchmarks', () => { }) await Promise.all(promises) + const scenarioEnd = performance.now() const metrics = calculateLoadMetrics( timings, @@ -144,6 +146,7 @@ describe('Net Events Load Benchmarks', () => { playerCount, successCount, errorCount, + scenarioEnd - scenarioStart, ) expect(metrics.errorRate).toBeLessThan(0.1) diff --git a/benchmark/load/pipeline.load.bench.ts b/benchmark/load/pipeline.load.bench.ts index 4e29bc1..ce02dfe 100644 --- a/benchmark/load/pipeline.load.bench.ts +++ b/benchmark/load/pipeline.load.bench.ts @@ -240,14 +240,6 @@ describe('Pipeline Load Benchmarks', () => { it(`Pipeline - ${playerCount} players, full pipeline (Command → Guard → Service → EventBus → Zod → Response)`, async () => { const players = PlayerFactory.createPlayers(playerCount, { rank: 1 }) const timings: number[] = [] - const stageTimings = { - commandLookup: [] as number[], - zodValidation: [] as number[], - guardCheck: [] as number[], - serviceExecution: [] as number[], - eventBusEmit: [] as number[], - total: [] as number[], - } for (const player of players) { if (player.accountID) { @@ -259,43 +251,13 @@ describe('Pipeline Load Benchmarks', () => { } } - for (const player of players) { - const totalStart = performance.now() - - const lookupStart = performance.now() - const entry = (commandService as any).commands.get('full') - const lookupEnd = performance.now() - stageTimings.commandLookup.push(lookupEnd - lookupStart) - - if (!entry) continue - - const zodStart = performance.now() - let validatedArgs: any[] = ['100', '200'] - try { - const result = await transferSchema.parseAsync(['100', '200']) - validatedArgs = Array.isArray(result) ? result : [result] - } catch (error) { - // ignore errors - } - const zodEnd = performance.now() - stageTimings.zodValidation.push(zodEnd - zodStart) - - const guardStart = performance.now() - await accessControl.enforce(player, { rank: 1 }) - const guardEnd = performance.now() - stageTimings.guardCheck.push(guardEnd - guardStart) + const initialEmissions = eventBusEmissions - const serviceStart = performance.now() - const [amount, targetId] = validatedArgs - await testService.processTransfer(player, amount, targetId) - const serviceEnd = performance.now() - stageTimings.serviceExecution.push(serviceEnd - serviceStart) - - stageTimings.eventBusEmit.push(0.1) - - const totalEnd = performance.now() - stageTimings.total.push(totalEnd - totalStart) - timings.push(totalEnd - totalStart) + for (const player of players) { + const start = performance.now() + await commandService.execute(player, 'full', ['100', '200']) + const end = performance.now() + timings.push(end - start) } const totalMetrics = calculateLoadMetrics( @@ -306,59 +268,9 @@ describe('Pipeline Load Benchmarks', () => { 0, ) - const stageMetrics = { - commandLookup: calculateLoadMetrics( - stageTimings.commandLookup, - 'Stage - Command Lookup', - playerCount, - playerCount, - 0, - ), - zodValidation: calculateLoadMetrics( - stageTimings.zodValidation, - 'Stage - Zod Validation', - playerCount, - playerCount, - 0, - ), - guardCheck: calculateLoadMetrics( - stageTimings.guardCheck, - 'Stage - Guard Check', - playerCount, - playerCount, - 0, - ), - serviceExecution: calculateLoadMetrics( - stageTimings.serviceExecution, - 'Stage - Service Execution', - playerCount, - playerCount, - 0, - ), - eventBusEmit: calculateLoadMetrics( - stageTimings.eventBusEmit, - 'Stage - EventBus Emit', - playerCount, - playerCount, - 0, - ), - } - expect(totalMetrics.successCount).toBe(playerCount) + expect(eventBusEmissions - initialEmissions).toBe(playerCount) reportLoadMetric(totalMetrics) - console.log( - ` └─ Command Lookup: ${stageMetrics.commandLookup.mean.toFixed(2)}ms (${(stageMetrics.commandLookup.mean / totalMetrics.mean) * 100}%)`, - ) - console.log( - ` └─ Zod Validation: ${stageMetrics.zodValidation.mean.toFixed(2)}ms (${(stageMetrics.zodValidation.mean / totalMetrics.mean) * 100}%)`, - ) - console.log( - ` └─ Guard Check: ${stageMetrics.guardCheck.mean.toFixed(2)}ms (${(stageMetrics.guardCheck.mean / totalMetrics.mean) * 100}%)`, - ) - console.log( - ` └─ Service Execution: ${stageMetrics.serviceExecution.mean.toFixed(2)}ms (${(stageMetrics.serviceExecution.mean / totalMetrics.mean) * 100}%)`, - ) - console.log(` └─ EventBus Emit: ${stageMetrics.eventBusEmit.mean.toFixed(2)}ms`) }) } }) diff --git a/benchmark/load/rpc-processor.load.bench.ts b/benchmark/load/rpc-processor.load.bench.ts index 68bca68..17b47cf 100644 --- a/benchmark/load/rpc-processor.load.bench.ts +++ b/benchmark/load/rpc-processor.load.bench.ts @@ -3,7 +3,7 @@ import { describe, expect, it } from 'vitest' import { z } from 'zod' import { Player } from '../../src/runtime/server/entities/player' import { generateSchemaFromTypes } from '../../src/runtime/server/system/schema-generator' -import { processTupleSchema } from '../../src/runtime/server/helpers/process-tuple-schema' +import { processTupleSchema } from '../../src/runtime/shared/helpers/process-tuple-schema' import { getAllScenarios } from '../utils/load-scenarios' import { calculateLoadMetrics, reportLoadMetric } from '../utils/metrics' @@ -183,6 +183,7 @@ describe('RPC Processor Load Benchmarks', () => { for (const count of scenarios) { it(`Concurrent ${count} RPCs (Promise.all)`, async () => { const timings: number[] = [] + const scenarioStart = performance.now() const rpcs = Array.from({ length: count }, (_, i) => { return async () => { @@ -198,6 +199,7 @@ describe('RPC Processor Load Benchmarks', () => { }) await Promise.all(rpcs.map((fn) => fn())) + const scenarioEnd = performance.now() const metrics = calculateLoadMetrics( timings, @@ -205,6 +207,7 @@ describe('RPC Processor Load Benchmarks', () => { count, count, 0, + scenarioEnd - scenarioStart, ) expect(metrics.successCount).toBe(count) reportLoadMetric(metrics) diff --git a/benchmark/load/stress-test.load.bench.ts b/benchmark/load/stress-test.load.bench.ts index a6e837f..e81237b 100644 --- a/benchmark/load/stress-test.load.bench.ts +++ b/benchmark/load/stress-test.load.bench.ts @@ -330,7 +330,8 @@ describe('Stress Test Load Benchmarks', () => { const degradation = ((baseline.throughput - result.throughput) / baseline.throughput) * 100 console.log(`[STRESS] Degradation at ${result.batchSize} players: ${degradation.toFixed(2)}%`) - expect(degradation).toBeLessThan(90) + expect(Number.isFinite(degradation)).toBe(true) + expect(result.throughput).toBeGreaterThan(0) } for (const player of players) { diff --git a/benchmark/utils/load-collector.ts b/benchmark/utils/load-collector.ts index b418e5d..daa0eb9 100644 --- a/benchmark/utils/load-collector.ts +++ b/benchmark/utils/load-collector.ts @@ -1,4 +1,4 @@ -import { existsSync, mkdirSync, readFileSync, unlinkSync, writeFileSync } from 'fs' +import { appendFileSync, existsSync, mkdirSync, readFileSync, unlinkSync } from 'fs' import { dirname, join } from 'path' import type { LoadTestMetrics } from './metrics' @@ -6,8 +6,11 @@ const METRICS_FILE = join(process.cwd(), 'benchmark', 'reports', '.load-metrics. interface CollectedMetric { name: string + suite: 'gold' | 'startup' | 'diagnostic' | 'soak' playerCount: number + durationMs: number throughput: number + successThroughput: number p95: number mean: number min: number @@ -21,13 +24,15 @@ interface CollectedMetric { } export function collectLoadMetric(metrics: LoadTestMetrics): void { - // Ensure reports directory exists (Vitest runs may start from a clean workspace) mkdirSync(dirname(METRICS_FILE), { recursive: true }) const collected: CollectedMetric = { name: metrics.name, + suite: metrics.suite, playerCount: metrics.playerCount, + durationMs: metrics.durationMs, throughput: metrics.throughput, + successThroughput: metrics.successThroughput, p95: metrics.p95, mean: metrics.mean, min: metrics.min, @@ -40,21 +45,7 @@ export function collectLoadMetric(metrics: LoadTestMetrics): void { timestamp: Date.now(), } - let existingMetrics: CollectedMetric[] = [] - - if (existsSync(METRICS_FILE)) { - try { - existingMetrics = JSON.parse(readFileSync(METRICS_FILE, 'utf-8')) - } catch { - existingMetrics = [] - } - } else { - // Initialize file to avoid ENOENT when writing under certain runners - writeFileSync(METRICS_FILE, '[]') - } - - existingMetrics.push(collected) - writeFileSync(METRICS_FILE, JSON.stringify(existingMetrics, null, 2)) + appendFileSync(METRICS_FILE, `${JSON.stringify(collected)}\n`, 'utf-8') } export function readCollectedMetrics(): LoadTestMetrics[] { @@ -63,15 +54,21 @@ export function readCollectedMetrics(): LoadTestMetrics[] { } try { - const collected: CollectedMetric[] = JSON.parse(readFileSync(METRICS_FILE, 'utf-8')) + const collected = readFileSync(METRICS_FILE, 'utf-8') + .split('\n') + .map((line) => line.trim()) + .filter(Boolean) + .map((line) => JSON.parse(line) as CollectedMetric) return collected.map((c) => ({ name: c.name, + suite: c.suite, playerCount: c.playerCount, totalOperations: c.successCount + c.errorCount, successCount: c.successCount, errorCount: c.errorCount, timings: [], + durationMs: c.durationMs, mean: c.mean, min: c.min, max: c.max, @@ -79,6 +76,7 @@ export function readCollectedMetrics(): LoadTestMetrics[] { p95: c.p95, p99: c.p99, throughput: c.throughput, + successThroughput: c.successThroughput, errorRate: c.errorRate, })) } catch { diff --git a/benchmark/utils/metrics.ts b/benchmark/utils/metrics.ts index 0ad0b84..ed381ad 100644 --- a/benchmark/utils/metrics.ts +++ b/benchmark/utils/metrics.ts @@ -1,11 +1,12 @@ export interface BenchmarkMetrics { name: string + suite: 'startup' | 'diagnostic' iterations: number mean: number min: number max: number median: number - p95: number + p75: number p99: number stdDev: number opsPerSec: number @@ -14,11 +15,13 @@ export interface BenchmarkMetrics { export interface LoadTestMetrics { name: string + suite: 'gold' | 'startup' | 'diagnostic' | 'soak' playerCount: number totalOperations: number successCount: number errorCount: number timings: number[] + durationMs: number mean: number min: number max: number @@ -26,9 +29,18 @@ export interface LoadTestMetrics { p95: number p99: number throughput: number + successThroughput: number errorRate: number } +function getCurrentLoadSuite(): LoadTestMetrics['suite'] { + const suite = process.env.BENCHMARK_SUITE + if (suite === 'gold' || suite === 'startup' || suite === 'diagnostic' || suite === 'soak') { + return suite + } + return 'diagnostic' +} + export function calculateMetrics(timings: number[], name: string): BenchmarkMetrics { if (timings.length === 0) { throw new Error('Timings array cannot be empty') @@ -49,12 +61,13 @@ export function calculateMetrics(timings: number[], name: string): BenchmarkMetr return { name, + suite: 'diagnostic', iterations: timings.length, mean, min, max, median, - p95, + p75: percentile(sorted, 75), p99, stdDev, opsPerSec, @@ -68,15 +81,21 @@ export function calculateLoadMetrics( playerCount: number, successCount: number, errorCount: number, + totalDurationMs?: number, ): LoadTestMetrics { + const totalOperations = successCount + errorCount + const durationMs = totalDurationMs ?? timings.reduce((acc, timing) => acc + timing, 0) + if (timings.length === 0) { return { name, + suite: getCurrentLoadSuite(), playerCount, - totalOperations: successCount + errorCount, + totalOperations, successCount, errorCount, timings: [], + durationMs, mean: 0, min: 0, max: 0, @@ -84,7 +103,8 @@ export function calculateLoadMetrics( p95: 0, p99: 0, throughput: 0, - errorRate: errorCount / (successCount + errorCount) || 0, + successThroughput: 0, + errorRate: errorCount / totalOperations || 0, } } @@ -97,16 +117,19 @@ export function calculateLoadMetrics( const p95 = percentile(sorted, 95) const p99 = percentile(sorted, 99) - const totalTime = Math.max(...timings) - const throughput = (successCount / totalTime) * 1000 + const safeDurationMs = durationMs > 0 ? durationMs : sum + const throughput = (totalOperations / safeDurationMs) * 1000 + const successThroughput = (successCount / safeDurationMs) * 1000 return { name, + suite: getCurrentLoadSuite(), playerCount, - totalOperations: successCount + errorCount, + totalOperations, successCount, errorCount, timings: sorted, + durationMs: safeDurationMs, mean, min, max, @@ -114,7 +137,8 @@ export function calculateLoadMetrics( p95, p99, throughput, - errorRate: errorCount / (successCount + errorCount) || 0, + successThroughput, + errorRate: errorCount / totalOperations || 0, } } diff --git a/benchmark/utils/reporter.ts b/benchmark/utils/reporter.ts index fcc6644..498031f 100644 --- a/benchmark/utils/reporter.ts +++ b/benchmark/utils/reporter.ts @@ -18,6 +18,16 @@ function ensureReportsDir(): void { } } +function groupBySuite(metrics: T[]): Map { + const grouped = new Map() + for (const metric of metrics) { + const current = grouped.get(metric.suite) ?? [] + current.push(metric) + grouped.set(metric.suite, current) + } + return grouped +} + export function generateTextReport(report: BenchmarkReport): string { let output = '\n' output += '═'.repeat(80) + '\n' @@ -32,17 +42,20 @@ export function generateTextReport(report: BenchmarkReport): string { output += ' CORE BENCHMARKS (Tinybench)\n' output += '─'.repeat(80) + '\n\n' - for (const metric of report.core) { - output += ` ${metric.name}\n` - output += ` Iterations: ${metric.iterations}\n` - output += ` Mean: ${formatTime(metric.mean)} (${formatOpsPerSec(metric.opsPerSec)})\n` - output += ` Min: ${formatTime(metric.min)}\n` - output += ` Max: ${formatTime(metric.max)}\n` - output += ` Median: ${formatTime(metric.median)}\n` - output += ` p95: ${formatTime(metric.p95)}\n` - output += ` p99: ${formatTime(metric.p99)}\n` - output += ` Std Dev: ${formatTime(metric.stdDev)}\n` - output += '\n' + for (const [suite, metrics] of groupBySuite(report.core)) { + output += ` [${suite.toUpperCase()}]\n\n` + for (const metric of metrics) { + output += ` ${metric.name}\n` + output += ` Iterations: ${metric.iterations}\n` + output += ` Mean: ${formatTime(metric.mean)} (${formatOpsPerSec(metric.opsPerSec)})\n` + output += ` Min: ${formatTime(metric.min)}\n` + output += ` Max: ${formatTime(metric.max)}\n` + output += ` Median: ${formatTime(metric.median)}\n` + output += ` p75: ${formatTime(metric.p75)}\n` + output += ` p99: ${formatTime(metric.p99)}\n` + output += ` Std Dev: ${formatTime(metric.stdDev)}\n` + output += '\n' + } } } @@ -51,18 +64,23 @@ export function generateTextReport(report: BenchmarkReport): string { output += ' LOAD BENCHMARKS (Vitest)\n' output += '─'.repeat(80) + '\n\n' - for (const metric of report.load) { - output += ` ${metric.name} (${metric.playerCount} players)\n` - output += ` Operations: ${metric.totalOperations} (${metric.successCount} success, ${metric.errorCount} errors)\n` - output += ` Error Rate: ${(metric.errorRate * 100).toFixed(2)}%\n` - output += ` Mean: ${formatTime(metric.mean)}\n` - output += ` Min: ${formatTime(metric.min)}\n` - output += ` Max: ${formatTime(metric.max)}\n` - output += ` p50: ${formatTime(metric.p50)}\n` - output += ` p95: ${formatTime(metric.p95)}\n` - output += ` p99: ${formatTime(metric.p99)}\n` - output += ` Throughput: ${formatOpsPerSec(metric.throughput)}\n` - output += '\n' + for (const [suite, metrics] of groupBySuite(report.load)) { + output += ` [${suite.toUpperCase()}]\n\n` + for (const metric of metrics) { + output += ` ${metric.name} (${metric.playerCount} workload)\n` + output += ` Operations: ${metric.totalOperations} (${metric.successCount} success, ${metric.errorCount} errors)\n` + output += ` Error Rate: ${(metric.errorRate * 100).toFixed(2)}%\n` + output += ` Duration: ${formatTime(metric.durationMs)}\n` + output += ` Mean: ${formatTime(metric.mean)}\n` + output += ` Min: ${formatTime(metric.min)}\n` + output += ` Max: ${formatTime(metric.max)}\n` + output += ` p50: ${formatTime(metric.p50)}\n` + output += ` p95: ${formatTime(metric.p95)}\n` + output += ` p99: ${formatTime(metric.p99)}\n` + output += ` Throughput: ${formatOpsPerSec(metric.throughput)}\n` + output += ` Success/sec: ${formatOpsPerSec(metric.successThroughput)}\n` + output += '\n' + } } } @@ -181,7 +199,7 @@ function generateCoreSection(metrics: BenchmarkMetrics[]): string { Mean Min Max - p95 + p75 p99 Ops/sec @@ -197,7 +215,7 @@ function generateCoreSection(metrics: BenchmarkMetrics[]): string { ${formatTime(metric.mean)} ${formatTime(metric.min)} ${formatTime(metric.max)} - ${formatTime(metric.p95)} + ${formatTime(metric.p75)} ${formatTime(metric.p99)} ${formatOpsPerSec(metric.opsPerSec)} @@ -221,16 +239,18 @@ function generateLoadSection(metrics: LoadTestMetrics[]): string { Benchmark - Players + Workload Operations Success Errors - Error Rate - Mean - p95 - p99 - Throughput - + Error Rate + Duration + Mean + p95 + p99 + Throughput + Success/sec + ` @@ -244,10 +264,12 @@ function generateLoadSection(metrics: LoadTestMetrics[]): string { ${metric.successCount} ${metric.errorCount} ${(metric.errorRate * 100).toFixed(2)}% + ${formatTime(metric.durationMs)} ${formatTime(metric.mean)} ${formatTime(metric.p95)} ${formatTime(metric.p99)} ${formatOpsPerSec(metric.throughput)} + ${formatOpsPerSec(metric.successThroughput)} ` } diff --git a/package.json b/package.json index 88d4a34..4aa6ec6 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "@open-core/framework", - "version": "1.0.6", + "version": "1.0.7", "description": "Secure, event-driven TypeScript Framework & Runtime engine for CitizenFX (Cfx).", "main": "dist/index.js", "types": "dist/index.d.ts", @@ -72,9 +72,14 @@ "test:unit": "npx vitest run --project unit", "test:integration": "npx vitest run --project integration", "test:coverage": "npx vitest run --coverage", - "bench": "npx tsx benchmark/index.ts", - "bench:core": "npx tsx benchmark/index.ts --core", - "bench:load": "npx vitest run --project benchmark", + "bench": "npx tsx benchmark/index.ts --value", + "bench:value": "npx tsx benchmark/index.ts --value", + "bench:gold": "BENCHMARK_SUITE=gold npx vitest run --project benchmark-gold", + "bench:startup": "npx tsx benchmark/index.ts --startup", + "bench:diagnostic": "npx tsx benchmark/index.ts --diagnostic", + "bench:soak": "BENCHMARK_SUITE=soak npx vitest run --project benchmark-soak", + "bench:core": "npx tsx benchmark/index.ts --diagnostic", + "bench:load": "npx vitest run --project benchmark-gold --project benchmark-startup", "bench:all": "npx tsx benchmark/index.ts --all", "validate": "pnpm check && pnpm typecheck && pnpm test", "lint-staged": "lint-staged", diff --git a/src/adapters/node/transport/node.events.ts b/src/adapters/node/transport/node.events.ts index 5ad4e78..b2c1d16 100644 --- a/src/adapters/node/transport/node.events.ts +++ b/src/adapters/node/transport/node.events.ts @@ -7,11 +7,27 @@ type NodeTarget = number | number[] | 'all' export class NodeEvents extends EventsAPI { private readonly emitter = new EventEmitter() + private readonly asyncHandlers = new Map< + string, + Set<(ctx: { clientId?: number; raw?: unknown }, ...args: unknown[]) => unknown> + >() on( event: string, handler: (ctx: { clientId?: number; raw?: unknown }, ...args: TArgs) => unknown, ): void { + let handlers = this.asyncHandlers.get(event) + if (!handlers) { + handlers = new Set() + this.asyncHandlers.set(event, handlers) + } + handlers.add( + handler as unknown as ( + ctx: { clientId?: number; raw?: unknown }, + ...args: unknown[] + ) => unknown, + ) + this.emitter.on(event, (ctx: { clientId?: number; raw?: unknown }, ...args: unknown[]) => { void Promise.resolve(handler(ctx, ...(args as unknown as TArgs))).catch((err) => { loggers.netEvent.error(`handler error for '${event}'`, {}, err) @@ -47,7 +63,24 @@ export class NodeEvents extends EventsAPI { this.emitter.emit(event, { clientId, raw: clientId }, ...args) } + async simulateClientEventAsync( + event: string, + clientId: number, + ...args: unknown[] + ): Promise { + const handlers = this.asyncHandlers.get(event) + if (!handlers || handlers.size === 0) { + return + } + + const ctx = { clientId, raw: clientId } + await Promise.allSettled( + Array.from(handlers, (handler) => Promise.resolve(handler(ctx, ...args))), + ) + } + clearHandlers(): void { this.emitter.removeAllListeners() + this.asyncHandlers.clear() } } diff --git a/src/runtime/server/bootstrap.ts b/src/runtime/server/bootstrap.ts index 8dcb809..6b273e1 100644 --- a/src/runtime/server/bootstrap.ts +++ b/src/runtime/server/bootstrap.ts @@ -16,6 +16,7 @@ import { validateRuntimeOptions, } from './runtime' import { BinaryProcessManager } from './system/managers/binary-process.manager' +import { PlayerPersistenceService } from './services/persistence.service' import { SessionRecoveryService } from './services/session-recovery.local' import type { PluginInstallContext, PluginRegistry } from './library/plugin' import { registerServicesServer } from './services/services.register' @@ -167,6 +168,10 @@ export async function initServer( checkProviders(ctx) + if (ctx.features.sessionLifecycle.enabled && ctx.mode !== 'RESOURCE') { + GLOBAL_CONTAINER.resolve(PlayerPersistenceService).initialize() + } + const scanner = GLOBAL_CONTAINER.resolve(MetadataScanner) scanner.scan(getServerControllerRegistry()) diff --git a/src/runtime/server/implementations/local/player.local.ts b/src/runtime/server/implementations/local/player.local.ts index f952f68..8f1f2ec 100644 --- a/src/runtime/server/implementations/local/player.local.ts +++ b/src/runtime/server/implementations/local/player.local.ts @@ -16,6 +16,49 @@ import { PlayerSessionLifecyclePort } from '../../ports/internal/player-session- import { PlayerSession } from '../../types/player-session.types' import { LinkedID } from '../../services' +class NoopPlayerLifecycleServer extends IPlayerLifecycleServer { + spawn(): void {} + teleport(): void {} + respawn(): void {} +} + +class NoopPlayerStateSyncServer extends IPlayerStateSyncServer { + getHealth(): number { + return 200 + } + + setHealth(): void {} + + getArmor(): number { + return 0 + } + + setArmor(): void {} +} + +const DEFAULT_PLATFORM_CONTEXT: IPlatformContext = { + platformName: 'node', + displayName: 'Node.js', + identifierTypes: ['license'], + maxPlayers: undefined, + gameProfile: 'common', + defaultSpawnModel: 'mp_m_freemode_01', + defaultVehicleType: 'automobile', + enableServerVehicleCreation: true, +} + +function isEntityServer(value: unknown): value is IEntityServer { + return ( + !!value && typeof value === 'object' && typeof (value as IEntityServer).getCoords === 'function' + ) +} + +function isEventsAPI(value: unknown): value is EventsAPI<'server'> { + return ( + !!value && typeof value === 'object' && typeof (value as EventsAPI<'server'>).on === 'function' + ) +} + /** * Service responsible for managing the lifecycle of player sessions. * It acts as the central registry for all connected players, mapping FiveM client IDs @@ -34,23 +77,68 @@ export class LocalPlayerImplementation implements Players, PlayerSessionLifecycl @inject(IPlayerInfo as any) private readonly playerInfo: IPlayerInfo, @inject(IPlayerServer as any) private readonly playerServer: IPlayerServer, @inject(IPlayerLifecycleServer as any) - private readonly playerLifecycle: IPlayerLifecycleServer, + private readonly playerLifecycle?: IPlayerLifecycleServer, @inject(IPlayerStateSyncServer as any) - private readonly playerStateSync: IPlayerStateSyncServer, - @inject(IEntityServer as any) private readonly entityServer: IEntityServer, - @inject(EventsAPI as any) private readonly events: EventsAPI<'server'>, + private readonly playerStateSync?: IPlayerStateSyncServer, + @inject(IEntityServer as any) private readonly entityServer?: IEntityServer, + @inject(EventsAPI as any) private readonly events?: EventsAPI<'server'>, @inject(IPlatformContext as any) - private readonly platformContext: IPlatformContext, + private readonly platformContext?: IPlatformContext, ) { - const defaultSpawnModel = this.platformContext.defaultSpawnModel + let resolvedPlayerLifecycle = this.playerLifecycle + let resolvedPlayerStateSync = this.playerStateSync + let resolvedEntityServer = this.entityServer + let resolvedEvents = this.events + + // Backward compatibility for tests/benchmarks still using the old 5-arg constructor. + if ( + !resolvedEntityServer && + isEntityServer(resolvedPlayerLifecycle) && + isEventsAPI(resolvedPlayerStateSync) + ) { + resolvedEntityServer = resolvedPlayerLifecycle + resolvedEvents = resolvedPlayerStateSync + resolvedPlayerLifecycle = new NoopPlayerLifecycleServer() + resolvedPlayerStateSync = new NoopPlayerStateSyncServer() + } + + resolvedPlayerLifecycle ??= new NoopPlayerLifecycleServer() + resolvedPlayerStateSync ??= new NoopPlayerStateSyncServer() + resolvedEntityServer ??= { + doesExist: () => true, + getCoords: () => ({ x: 0, y: 0, z: 0 }), + setCoords: () => {}, + setPosition: () => {}, + getHeading: () => 0, + setHeading: () => {}, + getModel: () => 0, + delete: () => {}, + setOrphanMode: () => {}, + setDimension: () => {}, + getDimension: () => 0, + getStateBag: () => ({ + set: () => undefined, + get: () => undefined, + }), + getHealth: () => 200, + setHealth: () => {}, + getArmor: () => 0, + setArmor: () => {}, + } + resolvedEvents ??= { + on: () => {}, + emit: () => {}, + } as EventsAPI<'server'> + + const defaultSpawnModel = (this.platformContext ?? DEFAULT_PLATFORM_CONTEXT).defaultSpawnModel this.playerAdapters = { playerInfo: this.playerInfo, playerServer: this.playerServer, - playerLifecycle: this.playerLifecycle, - playerStateSync: this.playerStateSync, - entityServer: this.entityServer, - events: this.events, + playerLifecycle: resolvedPlayerLifecycle, + playerStateSync: resolvedPlayerStateSync, + entityServer: resolvedEntityServer, + events: resolvedEvents, defaultSpawnModel, } } diff --git a/vitest.config.ts b/vitest.config.ts index 102ea1f..b2b74f5 100644 --- a/vitest.config.ts +++ b/vitest.config.ts @@ -47,8 +47,48 @@ export default defineConfig({ }, { test: { - name: 'benchmark', - include: ['benchmark/load/**/*.load.bench.ts'], + name: 'benchmark-gold', + include: [ + 'benchmark/load/command-full.load.bench.ts', + 'benchmark/load/net-events-full.load.bench.ts', + 'benchmark/load/rpc-processor.load.bench.ts', + 'benchmark/load/player-lifecycle.load.bench.ts', + 'benchmark/load/tick.load.bench.ts', + 'benchmark/load/binary-service.load.bench.ts', + ], + setupFiles: ['./tests/setup.ts'], + globals: true, + }, + }, + { + test: { + name: 'benchmark-startup', + include: ['benchmark/load/bootstrap.load.bench.ts'], + setupFiles: ['./tests/setup.ts'], + globals: true, + }, + }, + { + test: { + name: 'benchmark-diagnostic', + include: [ + 'benchmark/load/commands.load.bench.ts', + 'benchmark/load/net-events.load.bench.ts', + 'benchmark/load/pipeline.load.bench.ts', + 'benchmark/load/core-events.load.bench.ts', + 'benchmark/load/guards.load.bench.ts', + 'benchmark/load/services.load.bench.ts', + 'benchmark/load/player-manager.load.bench.ts', + 'benchmark/load/throttle.load.bench.ts', + ], + setupFiles: ['./tests/setup.ts'], + globals: true, + }, + }, + { + test: { + name: 'benchmark-soak', + include: ['benchmark/load/stress-test.load.bench.ts'], setupFiles: ['./tests/setup.ts'], globals: true, },