> **Coverage:** C# and Python tracked via Codecov. To activate the badge: visit [codecov.io](https://codecov.io), log in with GitHub, enable this repository, then re-run CI — the badge updates automatically after the first successful upload.
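Once the repository is enabled, Codecov's behavior can be pinned down with a `codecov.yml` at the repository root. The fragment below is a minimal sketch, not taken from this repository: the flag names `dotnet` and `python` and the `src/`/`spark/` path mapping are assumptions chosen to match the languages mentioned above.

```yaml
# codecov.yml — hypothetical configuration sketch
coverage:
  status:
    project:
      default:
        target: auto      # compare against the base commit
        threshold: 1%     # allow small drops without failing the check

# Separate flags so C# and Python coverage are reported independently
# (flag names and paths are assumptions, not from this repo's CI).
flags:
  dotnet:
    paths:
      - src/
  python:
    paths:
      - spark/
```

With separate flags, the badge and PR comments break coverage out per language instead of blending the two suites into one number.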
```diff
@@ -142,7 +142,7 @@ graph TB
 |**Distributed storage systems**| Delta Lake on ADLS Gen2, MinIO S3-compatible object storage |`spark/jobs/ingest_and_embed.py` (lines 80-95), `appsettings.json` storage config |
 |**Large-scale data processing**| PySpark batch pipeline, 100M+ record ingestion with partitioning |`spark/jobs/ingest_and_embed.py`, `docs/BENCHMARKS.md` scaling projections |
 |**High-performance services**| .NET 8 Web API: P50 152ms, P99 425ms at 500 qps |`src/VectorCatalog.Api/`, `docs/BENCHMARKS.md` latency tables |
-|**Azure-native tooling**| AKS Helm chart with HPA, managed disks, Azure Monitor integration |`helm/vector-catalog/` (11 files, 879 lines) |
+|**Azure-native tooling**| AKS Helm chart with HPA, managed disks, Azure Monitor integration |`helm/vectorscale/` (11 files, 879 lines) |
 |**Production observability**| OpenTelemetry distributed traces, Prometheus metrics, Serilog structured logs |`Infrastructure/Observability/`, correlation IDs in all requests |
```