PoC: feat(profiling): add heap-live profiling for memory leak detection #3623
realFlowControl wants to merge 4 commits into master
Conversation
Track allocations that survive across profile exports using the `heap-live-samples` and `heap-live-size` sample types. Samples are emitted in batches at export time. Enabled via `DD_PROFILING_HEAP_LIVE_ENABLED` when allocation profiling is active.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
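The mechanism in this commit can be sketched roughly as follows. This is a hypothetical single-threaded simplification: the PR uses a concurrent map (DashMap) and emits real profile samples rather than a tuple, and all names here are invented for illustration.

```rust
use std::collections::HashMap;

/// Hypothetical sketch of heap-live tracking: allocations are tracked by
/// address, frees untrack them, and anything still present at export time
/// is "live" and is emitted as batched heap-live-samples / heap-live-size
/// values.
struct HeapLiveTracker {
    live: HashMap<usize, u64>, // address -> allocation size in bytes
}

impl HeapLiveTracker {
    fn new() -> Self {
        Self { live: HashMap::new() }
    }

    /// Called when allocation profiling samples an allocation.
    fn track(&mut self, ptr: usize, size: u64) {
        self.live.insert(ptr, size);
    }

    /// Called on free(); allocations freed before export never show up.
    fn untrack(&mut self, ptr: usize) {
        self.live.remove(&ptr);
    }

    /// Called at profile export: surviving allocations become one batch of
    /// (heap-live-samples, heap-live-size) values. The map is not cleared,
    /// so long-lived allocations keep appearing in later exports.
    fn export(&self) -> (u64, u64) {
        let count = self.live.len() as u64;
        let bytes: u64 = self.live.values().sum();
        (count, bytes)
    }
}

fn main() {
    let mut tracker = HeapLiveTracker::new();
    tracker.track(0x1000, 64);
    tracker.untrack(0x1000);
    let (_samples, _bytes) = tracker.export();
}
```

Allocations that are freed between exports drop out of the map, so only survivors contribute to the leak signal.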
Benchmarks [profiler]: benchmark execution time 2026-02-06 08:06:24. Comparing candidate commit 49b5835 in PR branch. Found 0 performance improvements and 0 performance regressions! Performance is the same for 28 metrics, 8 unstable metrics.
Branch updated from bc087f2 to 817465a.
Codecov Report ✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##           master    #3623      +/-   ##
==========================================
- Coverage   62.21%   62.14%    -0.08%
==========================================
  Files         141      141
  Lines       13387    13387
  Branches     1753     1753
==========================================
- Hits         8329     8319       -10
- Misses       4260     4270       +10
  Partials      798      798

See 4 files with indirect coverage changes. Continue to review the full report in Codecov by Sentry.
- Use functional style (map + match) in collect_batched_heap_live_samples
- Only create ProfileIndex when heap-live tracking is enabled
- Replace 32 repetitive I/O profiling lines with a loop
- Use filter_map in the sample type filter method
- Add an early bail-out in free_allocation when heap-live is disabled

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace the default SipHash with a simple bit-mixing hasher optimized for pointer addresses. Since pointers are already well-distributed, we use `ptr ^ (ptr >> 4)` instead of expensive cryptographic hashing. This reduces overhead in `untrack_allocation()`, which is called on every free.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
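A minimal sketch of such a hasher, assuming the map key is the pointer address fed in through `write_usize`; `PtrHasher` is a hypothetical name for illustration, not the type used in the PR.

```rust
use std::hash::Hasher;

/// Hypothetical pointer-address hasher: addresses are already well
/// distributed, so a single xor-shift mix (`ptr ^ (ptr >> 4)`) replaces
/// the cost of SipHash on the free() path.
#[derive(Default)]
struct PtrHasher(u64);

impl Hasher for PtrHasher {
    // Generic byte path: fold bytes into the state. Pointer keys normally
    // arrive through write_usize below, so this is a fallback.
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 = (self.0 << 8) | u64::from(b);
        }
    }

    // Fast path used when hashing a usize pointer key.
    fn write_usize(&mut self, ptr: usize) {
        self.0 = ptr as u64;
    }

    fn finish(&self) -> u64 {
        // The bit mix from the commit message.
        self.0 ^ (self.0 >> 4)
    }
}

fn main() {
    let mut h = PtrHasher::default();
    h.write_usize(0x7f00_1234_5678);
    let _hash = h.finish();
}
```

A map would plug this in via `std::hash::BuildHasherDefault<PtrHasher>`. The mix is not collision-resistant, so it is only appropriate when keys are raw addresses rather than attacker-controlled input.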
Benchmarks [tracer]: benchmark execution time 2026-02-06 08:47:53. Comparing candidate commit 49b5835 in PR branch. Found 1 performance improvement and 4 performance regressions! Performance is the same for 186 metrics, 3 unstable metrics.

- scenario:ComposerTelemetryBench/benchTelemetryParsing
- scenario:SamplingRuleMatchingBench/benchRegexMatching1
- scenario:SamplingRuleMatchingBench/benchRegexMatching2
- scenario:SamplingRuleMatchingBench/benchRegexMatching3
- scenario:SamplingRuleMatchingBench/benchRegexMatching4
Add an AllocationFilter (lock-free bloom filter) that checks whether a pointer could possibly be tracked before doing the expensive DashMap lookup in free_allocation(). Uses atomic operations with Relaxed ordering; no locks needed. This provides a fast path for the 99.9%+ of free() calls that are for non-tracked allocations, reducing the overhead from hash computation and lock acquisition to just two atomic loads and bit tests.

- AllocationFilter: 4 KB fixed-size array of AtomicU64 (32768 bits)
- Two hash functions for a ~5% false-positive rate at max capacity
- Completely lock-free: fetch_or to set bits, load to test
- Mark the filter BEFORE the DashMap insert to avoid false negatives
- False positives are acceptable (just an extra DashMap lookup)
- Cleared on profile export and fork

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
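A rough sketch of a filter under the constraints listed above. The struct layout, the second hash function (a multiplicative mix), and the exact bit indexing are assumptions for illustration, not the PR's code.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

const WORDS: usize = 512;                 // 512 * 8 bytes = 4 KiB
const BITS: u64 = (WORDS as u64) * 64;    // 32768 bits

/// Hypothetical lock-free bloom filter over pointer addresses: fetch_or
/// sets bits, load tests them, all with Relaxed ordering.
pub struct AllocationFilter {
    bits: [AtomicU64; WORDS],
}

impl AllocationFilter {
    pub fn new() -> Self {
        Self { bits: std::array::from_fn(|_| AtomicU64::new(0)) }
    }

    // Two independent bit positions derived from the pointer address.
    // The multiplier is an assumed 64-bit mixing constant.
    fn positions(ptr: usize) -> (usize, usize) {
        let p = ptr as u64;
        let h1 = p ^ (p >> 4);
        let h2 = p.wrapping_mul(0x9E37_79B9_7F4A_7C15);
        ((h1 % BITS) as usize, ((h2 >> 32) % BITS) as usize)
    }

    /// Mark a pointer as possibly tracked. Called BEFORE the map insert
    /// so a concurrent free() can never see a false negative.
    pub fn insert(&self, ptr: usize) {
        let (a, b) = Self::positions(ptr);
        self.bits[a / 64].fetch_or(1u64 << (a % 64), Ordering::Relaxed);
        self.bits[b / 64].fetch_or(1u64 << (b % 64), Ordering::Relaxed);
    }

    /// Fast path for free(): false means definitely not tracked, so the
    /// expensive map lookup can be skipped entirely.
    pub fn may_contain(&self, ptr: usize) -> bool {
        let (a, b) = Self::positions(ptr);
        (self.bits[a / 64].load(Ordering::Relaxed) & (1u64 << (a % 64)) != 0)
            && (self.bits[b / 64].load(Ordering::Relaxed) & (1u64 << (b % 64)) != 0)
    }

    /// Reset on profile export or fork (bloom filters cannot delete
    /// individual entries, only be cleared wholesale).
    pub fn clear(&self) {
        for w in &self.bits {
            w.store(0, Ordering::Relaxed);
        }
    }
}

fn main() {
    let filter = AllocationFilter::new();
    filter.insert(0xdead_beef);
    assert!(filter.may_contain(0xdead_beef));
    filter.clear();
}
```

Clearing on export matches the tracking lifecycle: stale bits only cause harmless extra map lookups, never missed frees.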
Warning
Do not merge; this is a PoC.
Description
Track allocations that survive across profile exports using the `heap-live-samples` and `heap-live-size` sample types. Samples are emitted in batches at export time.

Enable via `DD_PROFILING_HEAP_LIVE_ENABLED` or `datadog.profiling.heap_live_enabled` (default disabled); this only works when allocation profiling is active.

Reviewer checklist
PROF-13688
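As a configuration sketch, enabling the new flag in an environment might look like this; `DD_PROFILING_ENABLED` and `DD_PROFILING_ALLOCATION_ENABLED` are assumed pre-existing profiler settings, and the entrypoint is hypothetical.

```shell
# Heap-live tracking only takes effect when allocation profiling is active.
export DD_PROFILING_ENABLED=1             # assumed existing setting
export DD_PROFILING_ALLOCATION_ENABLED=1  # assumed existing setting
export DD_PROFILING_HEAP_LIVE_ENABLED=1   # new flag from this PR (default: off)
# php my-app.php                          # hypothetical entrypoint
```

The `datadog.profiling.heap_live_enabled` INI setting is the equivalent per-process switch.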