autodiscovery: self-configuring check scheduling via trial-mode configs (PoC B) #50479
vitkyrka wants to merge 60 commits into
Conversation
The smoke procedure used to spell out the full manual sequence: docker compose up the krakend stack, docker run the agent with six bind mounts, manually inspect configcheck / agent status output. That setup predates `dda inv discovery-dev.build-image` (self-contained image with the same RUNPATH layout) and the integrations-core `test_e2e_discovery` harness driven by `ddev env test --dev` with `DDEV_E2E_AGENT`. Both pieces are in tree now, so the test phase collapses to a single command. Keep the build-phase rigour (agent.build → rtloader.install-with-bazel → restore bazel .so files → build image) — that part is still load-bearing — and the pitfalls that aren't obsoleted by the new flow: the agent.build-overwrites-rtloader gotcha, the empty-instances yaml guard, and Python init timing. Drop the bind-mount, docker-network, configcheck-grep, and docker-agent-run.sh sections; they only described the manual workaround. Drop the commit-IDs reproducibility footer; the dev-branch SHAs there were stale on the day they were written. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
For the advanced auto-config experiment. New optional field on integration.Config, populated by the auto_conf_discovery.yaml provider in a follow-up commit. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Recognise the discovery: block in the file format and populate integration.Config.Discovery. The file is picked up via the existing .yaml extension matcher; only the configFormat struct gains a new field and GetIntegrationConfigFromFile copies it into the returned integration.Config. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
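For reference, a minimal sketch of the file-format shape this commit describes; the `discoverySpec` type, its fields, and the parse helper are illustrative stand-ins rather than the committed identifiers.

```go
package providers

import "gopkg.in/yaml.v2"

// discoverySpec is an illustrative stand-in for the new discovery: block.
type discoverySpec struct {
	Path string `yaml:"path"`
}

// configFormat mirrors the on-disk check config; only Discovery is new.
type configFormat struct {
	ADIdentifiers []string       `yaml:"ad_identifiers"`
	InitConfig    interface{}    `yaml:"init_config"`
	Instances     []interface{}  `yaml:"instances"`
	Discovery     *discoverySpec `yaml:"discovery"` // new optional block
}

// parseConfigFile unmarshals a .yaml check config; a caller such as
// GetIntegrationConfigFromFile would copy Discovery into integration.Config.
func parseConfigFile(data []byte) (*configFormat, error) {
	var cf configFormat
	if err := yaml.Unmarshal(data, &cf); err != nil {
		return nil, err
	}
	return &cf, nil
}
```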
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Hints first (when exposed), then remaining exposed ports in declared order. Dedup-aware. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
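The ordering rule reads naturally as a small helper. A sketch, assuming a hypothetical `candidatePorts` function (the committed code may differ in names and port types):

```go
package discovery

// candidatePorts orders probe candidates: hinted ports that are actually
// exposed come first, then the remaining exposed ports in declared order;
// duplicates are emitted once.
func candidatePorts(hints, exposed []int) []int {
	isExposed := make(map[int]bool, len(exposed))
	for _, p := range exposed {
		isExposed[p] = true
	}
	seen := make(map[int]bool, len(exposed))
	out := make([]int, 0, len(exposed))
	add := func(p int) {
		if !seen[p] {
			seen[p] = true
			out = append(out, p)
		}
	}
	for _, p := range hints {
		if isExposed[p] { // a hint only counts when the port is exposed
			add(p)
		}
	}
	for _, p := range exposed {
		add(p)
	}
	return out
}
```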
Per-(serviceID, configHash) cache. Successes never expire; failures expire after caller-supplied TTL. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
HTTP-GET each candidate port + path with a 500ms per-probe budget and a 2s overall budget. Verify Content-Type is text/plain or application/openmetrics-text and that the body's first non-comment line is a Prometheus exposition line. Cache success/failure per (serviceID, config hash). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
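A rough sketch of that probe loop under the stated budgets; the function name, the 64 KiB body cap, and the deliberately loose exposition check are illustrative assumptions, not the committed implementation:

```go
package discovery

import (
	"context"
	"io"
	"net/http"
	"strings"
	"time"
)

// probe GETs each candidate URL until one looks like a Prometheus/OpenMetrics
// endpoint, with a 500 ms per-probe budget inside a 2 s overall budget.
func probe(ctx context.Context, candidates []string) (string, bool) {
	overall, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()
	client := &http.Client{Timeout: 500 * time.Millisecond}
	for _, url := range candidates {
		if overall.Err() != nil {
			return "", false // overall budget exhausted
		}
		req, err := http.NewRequestWithContext(overall, http.MethodGet, url, nil)
		if err != nil {
			continue
		}
		resp, err := client.Do(req)
		if err != nil {
			continue
		}
		ct := resp.Header.Get("Content-Type")
		body, _ := io.ReadAll(io.LimitReader(resp.Body, 64*1024))
		resp.Body.Close()
		if !strings.HasPrefix(ct, "text/plain") && !strings.HasPrefix(ct, "application/openmetrics-text") {
			continue
		}
		if looksLikeExposition(string(body)) {
			return url, true
		}
	}
	return "", false
}

// looksLikeExposition checks that the first non-comment line resembles a
// "metric_name ... value" exposition sample.
func looksLikeExposition(body string) bool {
	for _, line := range strings.Split(body, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		return len(strings.Fields(line)) >= 2
	}
	return false
}
```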
Tiny shim so %%discovered_port%% resolution can flow through the existing GetExtraConfig path; no resolver signature change required. Also tightens fakeService.GetExtraConfig in the prober tests to error on unknown keys (matches the contract of real Service impls). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Routes via Resolvable.GetExtraConfig("discovered_port"). Populated by
autodiscovery/discovery's serviceWithProbeResult wrapper after a
successful probe.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When a Config has Discovery set, run the OpenMetrics prober against the matched Service before configresolver.Resolve. On match wrap the service so %%discovered_port%% resolves; on no match skip scheduling the check (logged at DEBUG). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
SubstituteTemplateEnvVars is called at config-load time with a nil service. Without a nil check, GetDiscoveredPort panicked on res.GetExtraConfig. Match the pattern used by GetPort/GetPid/GetHostname: return a NoResolverError early when res is nil so the caller can ignore it (config_reader.go:517 already does). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
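The guard pattern, sketched with stand-in types; only the NoResolverError name and the GetExtraConfig("discovered_port") call are taken from the commit message:

```go
package tmplvar

import "errors"

// NoResolverError mirrors the existing GetPort/GetPid/GetHostname pattern:
// callers treat it as "no service attached yet, ignore".
var NoResolverError = errors.New("no resolver available for template variable")

// resolvable is an illustrative subset of the Service interface.
type resolvable interface {
	GetExtraConfig(key string) (string, error)
}

// getDiscoveredPort returns early when called at config-load time with a nil
// service, instead of panicking on res.GetExtraConfig.
func getDiscoveredPort(res resolvable) (string, error) {
	if res == nil {
		return "", NoResolverError
	}
	return res.GetExtraConfig("discovered_port")
}
```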
…plan Cross-language plan (Go + C++ + Python) for the Agent-side infrastructure that calls a Python discover() classmethod via rtloader, replacing the existing krakend-experiment Go prober and %%discovered_port%% template var. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
autoconfig.go calls discoverer.NewPythonBridge() unconditionally; without this stub the symbol is undefined in builds where the python tag is absent (e.g. cluster agent). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Records the exact build + bind-mount sequence that successfully validates the Plan B implementation against a real krakend container. Includes the pitfalls hit during the manual run (Python ABI mismatch, RUNPATH/RPATH bind mounts, conf.d vs data/ confusion, Python init race) so an automated harness can avoid each one. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The previous commit accidentally added "py" to ruff's exclude list to work around a pre-commit hook failure on a transient local working-tree directory. The directory is gone; revert the config change.
Surfaces ErrPythonNotReady from the Python bridge when rtloader has not yet initialised, and skips the negative cache for that error so the next AD reconcile event re-attempts the probe. Fixes a startup race where AD reconciles before Python init completes (~30s gap), caches the failure, and never re-probes in stable conditions — the krakend e2e smoke test previously had to bounce the target container to clear the cache. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Resolves the AD-vs-Python-init startup race for advanced auto-config templates. Previously, AutoDiscovery's first reconcile fired before rtloader.Initialize completed; the discoverer returned ErrPythonNotReady (uncached after the previous fix) and no future event triggered a retry in stable conditions, so the integration's check was never scheduled without manually bouncing the target container.
- pkg/collector/python: signalPythonReady closes a once-channel at the end of Initialize; WaitReady blocks on it.
- discoverer.WaitForPython is the public entry point (with a no-op stub for builds without the python tag, so cluster-agent compiles cleanly).
- configmgr.rescanDiscoveryTemplates iterates active services with Discovery templates and re-runs reconcileService for each.
- AutoConfig.start launches a fire-and-forget goroutine that waits for Python to be ready and then runs the rescan.
The bridge MUST NOT block on Python init in the AD reconcile path: fx hooks are sequential and that would deadlock against the very hook that triggers Initialize. Verified end-to-end against the krakend tests/docker compose: krakend check is now scheduled ~9 s after agent start without any manual container bounce, sourcing http://<container-ip>:9090/metrics from the Python discover() result. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Drops the manual krakend-bounce step now that AutoConfig automatically re-reconciles services with discovery templates once Python is ready. Adds a note on the "skipped — python not yet ready" startup log being expected and benign, plus the dev/lib rtloader restore step (needed after every agent rebuild because cmake links against host Python 3.12). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
`dda inv agent.build` re-links rtloader against the host's python3.X-dev headers and overwrites the bazel-built .so files in dev/lib/. The resulting agent fails inside the discovery-dev image with `libpython3.12.so.1.0: cannot open shared object file` because the container ships Python 3.13. Detect this by extracting the libpython version the rtloader is linked against and confirming the matching libpython exists in dev/embedded/lib/ (where bazel installs it). Fail with the exact remediation commands instead of letting the user discover the issue inside the running agent container.
This reverts commit 7a95910. The rescan-on-Python-ready mechanism is being replaced by an in-bridge lazy InitPython that mirrors the python check loader's existing convention (loader.go: pythonOnce.Do(InitPython) when python_lazy_loading is true). The lazy-init shape is simpler, also fixes the CLI agent check subcommand (which hits the same race in a fresh process), and removes ~111 lines of one-shot recovery plumbing. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
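A minimal sketch of the lazy-init shape being adopted; only the pythonOnce.Do(InitPython) convention comes from the loader referenced above, the surrounding names are placeholders:

```go
package discoverer

import "sync"

// Illustrative stand-ins; the real InitPython and probe logic live elsewhere.
var (
	pythonOnce sync.Once
	initPython = func() { /* rtloader initialisation happens here */ }
)

type result struct{ port int }

// discover initialises Python at most once, on the first probe that needs it,
// so a fresh process (agent start or `agent check`) never races AD against
// rtloader init and needs no separate "Python is ready" notification path.
func discover(probe func() (result, error)) (result, error) {
	pythonOnce.Do(initPython)
	return probe()
}
```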
Cache entries for failed probes now track an attempts counter and a
nextRetryAt computed from a schedule []time.Duration. Once the schedule
is exhausted, the entry transitions to givenUp and is never probed
again. cache.lookup returns one of {miss, hit, pending, givenUp}; the
discoverer's Discover() inspects this and decides probe-or-skip.
Default schedule is [5s, 5s, 30s × 8] — 11 attempts (1 initial + 10
retries) over ~4 min 10 s. The first two slots target common
~10-30 s container-startup races; the remaining slots match the
existing 30 s TTL value to keep the steady-state probe rate the same.
cache.forget(svcID) is added and tested; wiring to the config manager
comes in the next commit.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
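A sketch of that state machine; the state and field names follow the commit message, while the bookkeeping around them is illustrative:

```go
package discoverer

import "time"

type state int

const (
	stateMiss    state = iota // never probed, or a retry slot is due: probe now
	stateHit                  // succeeded earlier: successes never expire
	statePending              // failed recently: next retry not due yet
	stateGivenUp              // schedule exhausted: never probe again
)

type entry struct {
	success     bool
	attempts    int
	nextRetryAt time.Time
}

type cache struct {
	entries  map[string]*entry // keyed by (serviceID, configHash)
	schedule []time.Duration   // default: 5s, 5s, then 8 × 30s
	now      func() time.Time
}

// lookup tells Discover() whether to probe, skip, or stop for good.
func (c *cache) lookup(key string) state {
	e, ok := c.entries[key]
	switch {
	case !ok:
		return stateMiss
	case e.success:
		return stateHit
	case e.attempts > len(c.schedule):
		return stateGivenUp
	case c.now().Before(e.nextRetryAt):
		return statePending
	default:
		return stateMiss
	}
}
```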
- forget: use strings.HasPrefix instead of manual slice-and-compare; the explicit len(k) >= len(prefix) guard was redundant given the cacheKey format and made the intent harder to read.
- Strengthen TestCacheForgetClearsAllEntriesForService to include a failure entry, so a regression that only deleted success entries would be caught.
- Add TestDiscoverGivenUpNeverProbesAgain so the stateGivenUp branch in Discover() has explicit coverage at the discoverer level (cache_test only exercised it at the cache level).
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Adds two methods to the Discoverer interface so configmgr can:
- ask whether a (svcID, integration) pair still has retries pending (used to populate the retry-loop's pending set), and
- drop cache entries when a service is removed (so a restarted container with a new svcID isn't affected, and an in-place restart with the same svcID doesn't see stale state).
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- TestDiscoverGivenUpNeverProbesAgain and TestDiscoverIsPendingFalseAfterGiveUp inject a deterministic d.now instead of relying on the wall-clock advancing between two adjacent Discover calls. Both used retrySchedule = [0]; with the real clock, two time.Now() calls in a tight loop are not guaranteed to differ, so .Before could spuriously return true for the second call and skip the probe.
- Add TestDiscoverForgetNoop to document that Forget on a never-seen svcID is a no-op.
- Align IsPending implementation doc wording with the interface doc.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
reconcilingConfigManager now maintains a pendingDiscovery set — svcIDs with at least one discovery template whose cache entry is still pending (not given up). Membership is recomputed at the end of every reconcileService via updatePendingDiscovery. processDelService calls discoverer.Forget(svcID) and removes the entry from pendingDiscovery, so a stopped service doesn't leak state and a same-svcID restart starts fresh. The retry loop that consumes this set comes in the next commit. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- Rewrite TestPendingDiscoveryPrunedOnGiveUp to actually exercise the pending → given-up transition. The previous version asserted the correct outcome but didn't test the transition the name implied (it tested "never-pending", not "once-pending then given-up").
- Add TestPendingDiscoveryPrunedOnSuccess to cover the path the retry loop (Task 5) depends on most: a successful late discovery prunes the svcID from pendingDiscovery.
- Add TestPendingDiscoveryNotPopulatedForNonDiscoveryTemplate so the "skip plain templates" guard is regression-tested.
- Extend reconcileService doc comment to mention the pendingDiscovery side effect.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Walks pendingDiscovery under cm.m, reruns reconcileService for each svcID, returns merged ConfigChanges for the caller to apply via the scheduler outside the lock. Snapshot-then-iterate to handle reconcileService mutating pendingDiscovery via updatePendingDiscovery. The goroutine that calls this on a 5 s tick comes in the next commit. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
retryPendingDiscoveries was reconcileService'ing per service and returning the merged ConfigChanges, but skipping cm.applyChanges — so cm.scheduledConfigs stayed out of sync after a successful late discovery. A subsequent processDelService for the same svcID would try to unschedule a config that scheduledConfigs never tracked, silently emitting a zero-value unschedule entry. Wrap each per-service result in cm.applyChanges, matching the existing pattern in processNewService / processDelConfigs. Strengthen TestRetryPendingDiscoveriesScheduledOnLateMatch to verify scheduledConfigs membership via mapOverLoadedConfigs, so future regressions of this kind get caught. Add TestRetryPendingDiscoveriesNoChangeWhenStillPending to cover the steady-state "still pending after retry" path. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
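The corrected walk, sketched with stand-in types; the per-service applyChanges call is the fix this commit describes, everything else is scaffolding so the sketch compiles on its own:

```go
package autodiscoveryimpl

import "sync"

// Illustrative stand-ins for the real types and helpers.
type ConfigChanges struct{ Schedule, Unschedule []string }

func (c *ConfigChanges) Merge(o ConfigChanges) {
	c.Schedule = append(c.Schedule, o.Schedule...)
	c.Unschedule = append(c.Unschedule, o.Unschedule...)
}

type configManager struct {
	m                sync.Mutex
	pendingDiscovery map[string]struct{}
}

func (cm *configManager) reconcileService(svcID string) ConfigChanges { return ConfigChanges{} }
func (cm *configManager) applyChanges(c ConfigChanges)                { /* track scheduledConfigs */ }

// retryPendingDiscoveries re-runs reconcileService for every service that
// still has a pending discovery template and returns the merged changes for
// the caller to hand to the scheduler outside the lock.
func (cm *configManager) retryPendingDiscoveries() ConfigChanges {
	cm.m.Lock()
	defer cm.m.Unlock()

	// Snapshot first: reconcileService mutates pendingDiscovery via
	// updatePendingDiscovery while we iterate.
	svcIDs := make([]string, 0, len(cm.pendingDiscovery))
	for id := range cm.pendingDiscovery {
		svcIDs = append(svcIDs, id)
	}

	var merged ConfigChanges
	for _, id := range svcIDs {
		changes := cm.reconcileService(id)
		cm.applyChanges(changes) // keep scheduledConfigs in sync (the fix above)
		merged.Merge(changes)
	}
	return merged
}
```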
5 s ticker in AutoConfig.start() calls cfgMgr.retryPendingDiscoveries and applies the resulting ConfigChanges via the existing scheduler path. The discoverer's cache decides per-call whether to actually probe; ticks while no entry is due short-circuit cheaply. 5 s matches the fastest slot in the default retry schedule. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
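The loop itself is small; a sketch with an illustrative ConfigChanges stand-in and a free-function signature (the real code hangs off AutoConfig.start):

```go
package autodiscoveryimpl

import "time"

// ConfigChanges is an illustrative stand-in for the real type.
type ConfigChanges struct{ Schedule, Unschedule []string }

// discoveryRetryLoop ticks every 5 s (the fastest slot in the default retry
// schedule) and applies whatever the retry pass produced via the existing
// scheduler path; the discoverer's cache short-circuits ticks with nothing due.
func discoveryRetryLoop(stop <-chan struct{}, retry func() ConfigChanges, apply func(ConfigChanges)) {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			apply(retry())
		}
	}
}
```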
- Add a debug log line in discoveryRetryLoop when the retry tick produces non-empty ConfigChanges, so the retry path is distinguishable from the initial-discovery path in production logs.
- Add a comment to getAutoConfig flagging that the goroutines it launches must stay in sync with ac.start(). Without this, adding a new goroutine in start() silently breaks TestStop via the unbuffered stop-channel send.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Captures the manual smoke that validates the discoverer retry loop: a krakend container with a 60 s sleep before exec'ing the binary, expected log sequence (initial probe miss → retries through 5 s and 30 s slots → late-arriving match → check [OK]). Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The previous stub always errored, so on cluster agent (no python build tag) every discovery template would probe via the stub, get an error, and the new retry loop would burn through all 11 retry attempts (~4 min per svcID×integration pair) before giving up. discoverer.New(nil) already returns a nil Discoverer, and configmgr nil-checks before every call (Discover, IsPending, Forget). With NewPythonBridge returning nil, no-python builds fail-closed cleanly: templates with Discovery set are skipped at resolve-time with the existing "no discoverer is configured" warning, no retry traffic. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
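The stub shape this lands on, sketched; the Discoverer method signatures shown are assumptions included only so the sketch stands alone:

```go
//go:build !python

package discoverer

// Discoverer is defined alongside the python-tagged implementation; an
// assumed version is repeated here only so the sketch is self-contained.
type Discoverer interface {
	Discover(svcID, configHash string) (int, error)
	IsPending(svcID, integration string) bool
	Forget(svcID string)
}

// NewPythonBridge returns a nil Discoverer on builds without the python tag
// (e.g. the cluster agent). configmgr nil-checks before every call, so
// discovery templates are skipped at resolve time with no retry traffic.
func NewPythonBridge() Discoverer {
	return nil
}
```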
The smoke doc referenced a /tmp directory that nobody else can reproduce from. Move the docker-compose + run_repro.sh into the tree under test/dockerfiles/discovery-dev/krakend-delayed/, with hard-coded host paths replaced by INTEGRATIONS_CORE_REPO (defaults to ../integrations-core next to the agent checkout). Update the smoke doc's "Late-arriving service: delayed-startup retry" section to point at the committed location. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Remove the discoverer package, the python.RunDiscover bridge, the rtloader run_discover/_run_discover symbols, and the discoveryRetryLoop + pendingDiscovery state machine. The integration.Discovery template field and YAML parsing are kept; resolveTemplateForService's discovery branch is stubbed to return (tpl, false) and will be filled in by the alt mechanism in a follow-up commit.
Resolve a Discovery template directly into a synthetic config carrying the Service info under __discovery_service__ and a TrialMode flag. Replaces the discoverer-package call path stripped in the previous commit. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ecks After 5 consecutive failures of a trial-mode (discovery) check, AD removes the scheduled config and forgets its trial state. The runner calls AutoConfig.RecordTrialResult after each trial-mode check run. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Address code-review polish: warn when a trial check is past threshold but no scheduled config is found (was silent), and replace the magic number 5 with a named constant. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Trial-mode checks (scheduled by AD config discovery) report each run outcome to AutoConfig.RecordTrialResult and do not contribute to the integration_errors expvar. This produces the "silent retry until first success" UX the design calls for.
- Add trialMode field + IsTrialMode() to PythonCheck; set from loader via integration.Config.TrialMode.
- Add worker.SetTrialResultCallback / notifyTrialResult (trial.go) for a cycle-free callback from AutoConfig into the worker.
- In worker.Run, after check.Run(): type-assert IsTrialMode(), call the callback, and on failure skip AddErrorsCount / service-check / stats.
- In createNewAutoConfig, register ac.RecordTrialResult as the callback.
Snapshot the callback slice while holding the lock and invoke without the lock held to avoid blocking other registrations during slow callbacks. Rename SetTrialResultCallback to RegisterTrialResultCallback to reflect append-not-replace semantics. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
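The registry after this change, in sketch form; the callback signature is an assumption, the snapshot-then-invoke locking is the point:

```go
package worker

import "sync"

// trialResultCallback uses an assumed signature for illustration.
type trialResultCallback func(checkID string, passed bool)

var (
	trialMu        sync.Mutex
	trialCallbacks []trialResultCallback
)

// RegisterTrialResultCallback appends (not replaces) a callback, hence the
// rename away from SetTrialResultCallback.
func RegisterTrialResultCallback(cb trialResultCallback) {
	trialMu.Lock()
	defer trialMu.Unlock()
	trialCallbacks = append(trialCallbacks, cb)
}

// notifyTrialResult snapshots the slice while holding the lock and invokes
// the callbacks without it, so a slow callback cannot block registrations.
func notifyTrialResult(checkID string, passed bool) {
	trialMu.Lock()
	cbs := append([]trialResultCallback(nil), trialCallbacks...)
	trialMu.Unlock()

	for _, cb := range cbs {
		cb(checkID, passed)
	}
}
```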
The alt-PoC removes the run_discover/runDiscover bridge entirely; the Dockerfile's sanity check was requiring those symbols. Replace it with a plain file-existence check. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
CollectorImpl.RunCheck wraps every check in middleware.CheckWrapper before
handing it off to the scheduler. The worker's trial-mode hook does a type
assertion on the runtime check value (interface{ IsTrialMode() bool }),
and the wrapper does not satisfy that interface, so the assertion failed
silently and trial-mode failures were logged at ERROR + reported as
integration errors instead of being suppressed.
Add a forwarding IsTrialMode method on CheckWrapper that delegates to
the inner check via the same anonymous-interface assertion. Verified
against the krakend-delayed reproducer: failures now emit at DEBUG with
a "suppressing integration error" prefix and never reach AddErrorsCount.
The smoke test revealed a missed integration point in middleware.CheckWrapper that defeated trial-mode suppression. Document the finding and the trial-mode trace so the audit-cost concern is backed by a concrete example.
The previous version of the doc described the OM-base-class trial-mode
plumbing (placeholder URLs, _post_discovery_hook, _resolve_discovery,
ensure_discovery_resolved) which has since been replaced by an
AgentCheck-level mechanism (__new__ dispatch + _TrialModeProxy +
generate_configs classmethod).
Key changes from the previous version:
- Per-integration cost is now matched line-for-line with PoC A. The
earlier draft's claim of "PoC B krakend has +40 LOC vs main" is no
longer true: krakend is unchanged from main, n8n is unchanged, port-
hinted integrations are 1 line, boundary is ~5 lines (same shape as
PoC A's super().discover() override).
- PoC B uses the actual scraper as the verifier — there is no
probe/scraper asymmetry possible. Removes a class of failure modes
PoC A is structurally exposed to.
- All 7 e2e_discovery tests pass: krakend, boundary, cockroachdb, n8n,
pulsar, ray, temporal. (kuma is kind-based, kong has no test_e2e.py.)
- Added "Findings from running the PoC end-to-end" section documenting
five non-obvious issues surfaced by running the implementation
against real test environments: CheckWrapper IsTrialMode forwarding,
rtloader's "no subclasses" rule vs dynamic proxy classes, Python's
__new__/__init__ chaining for non-subclass returns, dd_agent_check
surfacing per-instance errors with multi-match discovery, and
discovery_min_instances semantics.
- Recommendation flipped from conditional ("ship PoC B IF multi-instance
isn't near-term") to direct ("ship PoC B"). Multi-instance is still
PoC A's structural advantage but PoC B's ergonomic gap that was the
main concession has closed.
Files inventory check summary
File checks results against ancestor 80e785f4: Results for datadog-agent_7.80.0~devel.git.499.0a16ada.pipeline.111974053-1_amd64.deb: No change detected
Static quality checks
✅ Please find below the results from static quality gates.
Successful checks
Info: 9 successful checks with minimal change (< 2 KiB)
On-wire sizes (compressed)
Regression Detector
Regression Detector Results
Metrics dashboard
Baseline: 80e785f
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | +0.42 | [-2.51, +3.35] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | +0.63 | [+0.58, +0.68] | 1 | Logs |
| ➖ | otlp_ingest_metrics | memory utilization | +0.58 | [+0.42, +0.74] | 1 | Logs |
| ➖ | docker_containers_cpu | % cpu utilization | +0.42 | [-2.51, +3.35] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | +0.39 | [+0.20, +0.59] | 1 | Logs |
| ➖ | docker_containers_memory | memory utilization | +0.37 | [+0.27, +0.47] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +0.33 | [+0.18, +0.47] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.29 | [+0.24, +0.35] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_metrics_sum_delta | memory utilization | +0.25 | [+0.06, +0.44] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | +0.23 | [+0.19, +0.26] | 1 | Logs bounds checks dashboard |
| ➖ | file_tree | memory utilization | +0.18 | [+0.13, +0.23] | 1 | Logs |
| ➖ | ddot_logs | memory utilization | +0.08 | [+0.03, +0.14] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.07 | [-0.05, +0.20] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | +0.07 | [-0.91, +1.06] | 1 | Logs bounds checks dashboard |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.02 | [-0.43, +0.46] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | +0.01 | [-0.37, +0.39] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.09, +0.09] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | -0.01 | [-0.50, +0.48] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.02 | [-0.21, +0.18] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | -0.02 | [-0.21, +0.18] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | -0.07 | [-0.31, +0.16] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | -0.13 | [-0.29, +0.04] | 1 | Logs |
| ➖ | quality_gate_metrics_logs | memory utilization | -0.31 | [-0.55, -0.06] | 1 | Logs bounds checks dashboard |
| ➖ | otlp_ingest_logs | memory utilization | -0.69 | [-0.79, -0.59] | 1 | Logs |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | observed_value | links |
|---|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | 685 ≥ 26 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | 244.96MiB ≤ 370MiB | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | 719 ≥ 26 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | 0.16GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_0ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | 0.22GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_1000ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | 0.16GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_100ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | 0.18GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_500ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | 3 ≤ 4 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | 142.78MiB ≤ 147MiB | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | 3 ≤ 4 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | 468.59MiB ≤ 495MiB | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | 3 ≤ 6 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | 173.10MiB ≤ 195MiB | bounds checks dashboard |
| ✅ | quality_gate_logs | missed_bytes | 10/10 | 0B = 0B | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | 334.80 ≤ 2000 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | 4 ≤ 6 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | 384.54MiB ≤ 430MiB | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | missed_bytes | 10/10 | 0B = 0B | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
Summary
PoC B for Configuration Discovery for Agent Integrations. Compare with PoC A in #50372.
Tracks Confluence ticket DSCVR/6650004331.
Companion integrations-core PR: DataDog/integrations-core#23624 (branch vitkyrka/disco-autoconfig-alt).
Approach
Instead of a new rtloader symbol that bridges into Python probe logic (PoC A), PoC B reuses the existing check runner as the probe mechanism. When AutoDiscovery matches a container via
auto_conf_discovery.yaml, it schedules a lightweight trial-mode config — a synthetic check instance that carries the matched service information. The check itself (on the Python side) uses that information to find a working configuration, running as a normal check run. The runner reports the outcome back to AD via a post-run callback.
On success AD unschedules the trial config — the check has already self-configured and is running for real. On consecutive failures AD applies a budget (5 failures) before giving up and unscheduling.
No new rtloader symbols. The discovery logic lives entirely in the check, which the runner drives as usual.
Changes
- `comp/core/autodiscovery/integration/config.go`: `TrialMode bool` field; included in `Digest`
- `comp/core/autodiscovery/autodiscoveryimpl/configmgr.go`: builds synthetic trial-mode configs in the `resolveTemplateForService` discovery branch
- `comp/core/autodiscovery/autodiscoveryimpl/trial.go` (new): consecutive-failure counter per check; 5-failure threshold triggers unschedule
- `comp/core/autodiscovery/autodiscoveryimpl/autoconfig.go`: `RecordTrialResult`, `unscheduleCheckByID`
- `comp/core/autodiscovery/component.go`: `RecordTrialResult` in public interface
- `pkg/collector/python/check.go` + `loader.go`: propagate `TrialMode` from config to the Python check object
- `pkg/collector/worker/trial.go` (new): post-run callback registry (`notifyTrialResult`)
- `pkg/collector/worker/worker.go`: after each run, if the check is in trial mode call `notifyTrialResult`; suppress error expvars on trial failure
- `comp/collector/collector/collectorimpl/internal/middleware/check_wrapper.go`: forward trial-mode flag through the collector middleware
Design doc / comparison with PoC A
docs/superpowers/2026-05-06-disco-autoconfig-alt-comparison.md
Test plan
- `dda inv test --targets=./comp/core/autodiscovery/...` — unit tests pass
- `test_e2e_discovery` tests pass (ray, kuma, cockroachdb, temporal, boundary, pulsar, kong)
- `dda inv linter.go` — clean on touched packages

🤖 Generated with Claude Code