
implement mvp for span derived primary tags #11358

Draft
Sam-Maya wants to merge 4 commits into master from sam-maya/span-derived-primary-tags-v1

Conversation


@Sam-Maya Sam-Maya commented May 12, 2026

What Does This Do

Adds an MVP implementation of span-derived primary tags for client-side stats in the Java tracer (spec). Users can configure a list of span tag keys via DD_TRACE_STATS_ADDITIONAL_TAGS; the tracer extracts the matching values from each eligible span, adds them as an additional aggregation dimension on MetricKey, and emits them in the v0.6/stats payload under the new AdditionalMetricTags field (msgpack array of "key:value" strings, mirroring PeerTags).

Motivation

Customers want to aggregate APM stats by custom dimensions beyond the standard env primary tag (e.g. region, tenant_id) for more granular per-segment visibility. Previously this was only available via the deprecated agent-side config; this PR moves it tracer-side per the v1.3.0 spec.

Behavior summary

  • Config: DD_TRACE_STATS_ADDITIONAL_TAGS=region,tenant_id (env), dd.trace.stats.additional.tags=region,tenant_id (sysprop), or trace.stats.additional.tags=region,tenant_id in dd-java-tracer.properties. Comma-separated list of tag keys.
  • Normalization: configured keys are deduplicated and sorted alphabetically once at startup so config order/duplicates don't affect aggregation keys.
  • Per-span extraction: for each eligible span (measured / top-level / configured span-kind), the configured tag keys are looked up via span.unsafeGetTag(...); only present values are included. Trace/chunk-level tags are visible through the same call.
  • MetricKey participation: extracted tags become part of the aggregation key — spans with region:us-east-1 aggregate separately from region:eu-west-1; spans without the configured tag aggregate together (empty list).
  • Wire format: emitted as AdditionalMetricTags (repeated string, each entry "key:value") in the per-metric msgpack map. The field is only included when at least one tag was extracted, so the 99% of customers who don't configure the feature pay zero payload overhead.
  • Backwards compatible: older agents that don't recognize the field ignore it; this PR does not add a discovery flag.
  • Soft cap: MAX_ADDITIONAL_TAG_KEYS = 10. The backend stats pipeline accepts only 4 primary tag dimensions by default (occasionally up to ~10 for elevated quotas); going beyond gets dropped downstream while inflating tracer-side cardinality. Extras are dropped post-sort with a log.warn.
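The normalization and per-span extraction steps above can be sketched roughly as follows. This is an illustrative model only: `AdditionalTagExtractor` and its method names are placeholders, not the PR's actual classes.

```java
import java.util.*;

// Hypothetical sketch of the startup normalization (dedupe + sort) and the
// per-span "key:value" extraction described above. Names are illustrative;
// the PR's real code lives in the metrics aggregator.
final class AdditionalTagExtractor {
  private final List<String> keys; // deduplicated and sorted once at startup

  AdditionalTagExtractor(Collection<String> configuredKeys) {
    this.keys = new ArrayList<>(new TreeSet<>(configuredKeys));
  }

  // Only keys present on the span are emitted, so spans without any of the
  // configured tags produce an empty list and aggregate together.
  List<String> extract(Map<String, Object> spanTags) {
    List<String> out = new ArrayList<>(keys.size());
    for (String key : keys) {
      Object value = spanTags.get(key);
      if (value != null) {
        out.add(key + ":" + value);
      }
    }
    return out;
  }
}
```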

Additional Notes

This is an MVP. Out of scope for this PR:

  • Tag-value cardinality protection (the per-value-per-tag cap and blocked_by_tracer overflow behavior from the spec). The cap above is on the number of configured keys, not on the number of unique values per key. Cardinality protection will land separately as it requires an HLL/bounded-set estimator and a DD_TRACE_STATS_ADDITIONAL_TAGS_CARDINALITY_LIMIT config.
  • `setMeasuredTag(key, value)` programmatic API (spec marks this as future work).
  • Tags not available during aggregation: tags that are not present on spans at stat-calculation time are not included in this MVP.

Contributor Checklist

Jira ticket: [PROJ-IDENT]

public List<String> getTraceStatsAdditionalTags() {
  return tryMakeImmutableList(configProvider.getList(TRACE_STATS_ADDITIONAL_TAGS));
}
Contributor

@sarahchen6 sarahchen6 May 12, 2026

Instead of a List, where something like region,tenant_id and tenant_id,region would be seen as different values and duplicates would be preserved, could we define this as a Set (assuming order and duplicates do not matter)? For example,

public Set<String> getTraceStatsAdditionalTags() {
  return tryMakeImmutableSet(configProvider.getList(TRACE_STATS_ADDITIONAL_TAGS));
}

similar to https://github.com/DataDog/dd-trace-java/blob/master/internal-api/src/main/java/datadog/trace/api/Config.java#L5107. Claude also recommended the following change to dd-trace-core/src/main/java/datadog/trace/common/metrics/ConflatingMetricsAggregator.java so that additionalTagKeys stays iterable while guaranteeing no duplicates, no config-order sensitivity, and stable payload/key ordering:

List<String> orderedAdditionalTagKeys = new ArrayList<>(additionalTagKeys);
Collections.sort(orderedAdditionalTagKeys);
this.additionalTagKeys = Collections.unmodifiableList(orderedAdditionalTagKeys);

and corresponding tests that check that duplicate keys, or the same keys re-ordered, return the expected result 🤔
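A minimal sketch of those tests, assuming the Set-backed getter plus the sort shown above (the stand-alone `normalize()` helper is illustrative, not a method in the PR):

```java
import java.util.*;

// Sketch of the duplicate/re-order tests being asked for, using a
// stand-alone normalize() that mirrors the suggested Set + sort combination.
public class AdditionalTagKeyNormalization {
  static List<String> normalize(List<String> configured) {
    List<String> ordered = new ArrayList<>(new LinkedHashSet<>(configured));
    Collections.sort(ordered);
    return Collections.unmodifiableList(ordered);
  }

  public static void main(String[] args) {
    // Re-ordered configs normalize to the same key list...
    assert normalize(Arrays.asList("tenant_id", "region"))
        .equals(normalize(Arrays.asList("region", "tenant_id")));
    // ...and duplicates are dropped.
    assert normalize(Arrays.asList("region", "tenant_id", "region"))
        .equals(Arrays.asList("region", "tenant_id"));
  }
}
```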

  final boolean hasGrpcStatusCode = key.getGrpcStatusCode() != null;
  final int mapSize =
-     15
+     16
Contributor

A future improvement could be to make this conditional, like hasServiceSource and hasHttpMethod below. Otherwise, most customers will always get an empty additional-tags field and thus an unnecessarily large payload.
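The hasX pattern being suggested could look roughly like this toy model; the base size of 15 and the method shape are placeholders, not the writer's real layout:

```java
import java.util.*;

// Toy model of the conditional map-size computation: the map only grows by
// one entry when there is actually something to write, mirroring how
// hasGrpcStatusCode is handled. Constants here are placeholders.
public class ConditionalMapSize {
  static int mapSize(List<String> additionalTags, boolean hasGrpcStatusCode) {
    int size = 15; // hypothetical count of always-present entries
    if (hasGrpcStatusCode) size++;
    if (!additionalTags.isEmpty()) size++; // pay for the field only when non-empty
    return size;
  }

  public static void main(String[] args) {
    assert mapSize(Collections.emptyList(), false) == 15; // common case: no overhead
    assert mapSize(Arrays.asList("region:us-east-1"), true) == 17;
  }
}
```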

    boolean includeEndpointInMetrics) {
  this.ignoredResources = ignoredResources;
  this.additionalTagKeys =
      additionalTagKeys == null ? Collections.emptyList() : additionalTagKeys;
Contributor

@sarahchen6 sarahchen6 May 12, 2026

We could also add a MAX_ADDITIONAL_TAG_KEYS value limiting the number of tag keys customers can set, to prevent "infinite" tag keys and possible exploitation (not sure exploitation is actually possible here, but a limit plus a warning once you hit it seems safer).

Author

We only allow 4 unique primary tag keys in the UI, so if a user sets more than that they are aggregating their stats on an additional dimension only for that value to be discarded in our stats pipeline.
IIRC we do increase the number of keys a customer can set, but it's rare; a max value of 10 I think makes sense here. It provides protection against the extreme while leaving some room for customers with an increased key count.
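A rough sketch of how that cap could be enforced post-sort with a warning; the logger is stubbed with System.err since the tracer's logging setup isn't shown in this excerpt:

```java
import java.util.*;

// Illustrative enforcement of the MAX_ADDITIONAL_TAG_KEYS soft cap discussed
// above: extras are dropped after sorting, with a warning. Not the PR's code.
public class TagKeyCap {
  static final int MAX_ADDITIONAL_TAG_KEYS = 10;

  static List<String> capKeys(List<String> sortedKeys) {
    if (sortedKeys.size() <= MAX_ADDITIONAL_TAG_KEYS) {
      return sortedKeys;
    }
    // stand-in for log.warn; extra keys would be discarded downstream anyway
    System.err.println("Configured " + sortedKeys.size()
        + " additional stats tag keys; keeping the first " + MAX_ADDITIONAL_TAG_KEYS);
    return sortedKeys.subList(0, MAX_ADDITIONAL_TAG_KEYS);
  }

  public static void main(String[] args) {
    List<String> twelve = new ArrayList<>();
    for (int i = 0; i < 12; i++) twelve.add("key" + i);
    assert capKeys(twelve).size() == MAX_ADDITIONAL_TAG_KEYS;
    assert capKeys(Arrays.asList("region")).size() == 1;
  }
}
```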

Contributor

@dougqh dougqh left a comment

Given that we've already had issues with cardinality in this code, for me a minimum viable solution needs to include a more robust approach to tag cardinality.

Right now there is a size cap, but when we hit it we silently drop stats, which leads to a bad user experience.

I think we need to sort those issues out; otherwise, adding more dimensions will just exacerbate the problem.

      ADDITIONAL_TAG_VALUES_CACHE = DDCaches.newFixedSizeCache(64);
  private static final Function<
          String, Pair<DDCache<String, UTF8BytesString>, Function<String, UTF8BytesString>>>
      ADDITIONAL_TAG_VALUES_CACHE_ADDER =
Contributor

In hindsight, I wish we hadn't chosen to concatenate key + ":" + value in the payload. For keeping memory down, separate fields would probably have been better.

return Collections.emptyList();
}

private List<UTF8BytesString> getAdditionalTags(CoreSpan<?> span) {
Contributor

We already have high allocation from creating MetricKey-s just to perform Map look-ups.
Producing an intermediate list with UTF8BytesStrings is going to compound that problem significantly.

Author

I copied how we interact with peer tags, to keep behavior consistent. What do you propose I do differently?

key ->
    Pair.of(
        DDCaches.newFixedSizeCache(512),
        value -> UTF8BytesString.create(key + ":" + value));
Contributor

I think as-is this degenerates into hot allocation when a tag has high cardinality. In a high-cardinality situation, we need to avoid the concatenation before it happens.

Right now, we're still paying the allocation cost and undoing the benefit of the cache.

Contributor

@dougqh dougqh May 13, 2026

Just to verify, I ran this through Claude as well...

Vulnerabilities / Misconfiguration risk

  1. No per-value cardinality cap (acknowledged in PR description)
     The MAX_ADDITIONAL_TAG_KEYS = 10 cap protects against the number of configured keys. It does nothing about the number of unique values per key. A single misconfiguration, e.g. DD_TRACE_STATS_ADDITIONAL_TAGS=user_id or request_id or trace_id, gives the customer an unbounded MetricKey set, unbounded keys map growth, and an oversized payload to the agent. The PR explicitly defers cardinality protection ("requires an HLL/bounded-set estimator"), which is reasonable for MVP, but in its current form this config is a foot-gun and should not be enabled by default, which it isn't. Two suggestions:
     • Add a log.warn (one-shot, at startup) when additionalTagKeys is non-empty, calling out the cardinality risk and pointing at the future DD_TRACE_STATS_ADDITIONAL_TAGS_CARDINALITY_LIMIT config.
     • Consider a stop-gap "max unique MetricKeys before we stop adding" guard rather than waiting for the full HLL implementation.
  2. Tag values are not length-bounded before being interned and serialized
     UTF8BytesString.create(key + ":" + value) will faithfully serialize whatever the customer set on the span tag. If application code stuffs a JSON body or a stack trace into a tag named in DD_TRACE_STATS_ADDITIONAL_TAGS, the per-value cache entry, the MetricKey, and the msgpack payload all carry it. This is pre-existing for peer tags too, but it is amplified here because the user controls which keys feed this path. A value.length() < N guard with a one-shot warn would be cheap insurance.

Overhead
5. Cache footprint scales with MAX_ADDITIONAL_TAG_KEYS × per-key-cache-size
ADDITIONAL_TAG_VALUES_CACHE is newFixedSizeCache(64) (plenty), but each entry holds an inner newFixedSizeCache(512). With 10 configured keys that's up to 10 × 512 = 5,120 cached UTF8BytesString entries plus the lambda closures, vs. the existing peer-tags cache of similar size. Acceptable, but worth a comment near ADDITIONAL_TAG_VALUES_CACHE explaining the bound (mirroring the existing comment on PEER_TAGS_CACHE).
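The length guard from point 2 is cheap to sketch; the 256-character limit below is an assumed placeholder, not a number from the spec or the PR:

```java
// Sketch of the value-length guard suggested in point 2 above. The limit is
// an assumption; a real implementation would also emit a one-shot log.warn.
public class TagValueLengthGuard {
  static final int MAX_TAG_VALUE_LENGTH = 256; // placeholder, not from the spec

  static String bounded(String value) {
    return value.length() <= MAX_TAG_VALUE_LENGTH
        ? value
        : value.substring(0, MAX_TAG_VALUE_LENGTH);
  }

  public static void main(String[] args) {
    assert bounded("us-east-1").equals("us-east-1");
    StringBuilder big = new StringBuilder();
    for (int i = 0; i < 10_000; i++) big.append('x'); // e.g. a JSON blob in a tag
    assert bounded(big.toString()).length() == MAX_TAG_VALUE_LENGTH;
  }
}
```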

Contributor

@dougqh dougqh May 13, 2026

Also worth noting that the cache is still unbounded in terms of bytes. That has bitten us with SQL statements previously.

And I guess part of what Claude is pointing out is that tags may also contain complicated objects; we might need to filter those out, otherwise allocation could explode if misconfigured. Although that was definitely a pre-existing issue.

Author

I added basic cardinality control and will add a character limit to the string values; would that address all your concerns here?

Contributor

@dougqh dougqh May 14, 2026

I'm experimenting with a revised approach.

I think the simplest solution might be to combine the cardinality tracking and the caching into a single class.

In hindsight, I'd argue that DDCache isn't a great fit for metrics. DDCache works well with low cardinality, but doesn't work well in situations where cardinality can be high.

In a strict sense, that's okay because the GC will be able to reclaim the memory, so memory isn't unbounded. However, a solution that deals with high cardinality more gracefully would be preferable.


private final int limitPerTag;
private final HealthMetrics healthMetrics;
private final ConcurrentHashMap<String, Set<String>> seenValuesPerTag = new ConcurrentHashMap<>();
Contributor

@dougqh dougqh May 13, 2026

I think we're going to want a lighter-weight collection for this purpose; ConcurrentHashMap is very allocation-heavy.

As mentioned elsewhere, I think a better approach would be an AdditionalTagCardinalityLimiter per tag. Then we don't need the extra allocation heft of the ConcurrentHashMap.

I'm also debating whether we can do something cheaper for the Set, but I think we could leave that for later.

I'm also hoping to get away from the extra coordination overhead of ConcurrentHashMap by moving more of the metrics work onto a background thread.

Author

What do you propose we use instead? An array of length 100 would work, but then our lookup time is less efficient.


private final Set<String> ignoredResources;
private final List<String> additionalTagKeys;
private final AdditionalTagsCardinalityLimiter cardinalityLimiter;
Contributor

@dougqh dougqh May 13, 2026

I think rather than a single AdditionalTagsCardinalityLimiter that handles all the extra tags, a better design would be a limiter per configured tag.

That would avoid the Map-of-Set design, which is adding undue allocation.

Although for the most part, I think we just need to take a step back and rework this code before we add more to it. Rather than continue to review this PR, I'd like to work on solutions to these issues more directly.
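For illustration, a per-tag limiter along these lines might look like the following; the class name and the overflow semantics are assumptions, not code from this PR:

```java
import java.util.*;

// Hypothetical per-tag cardinality limiter: one instance per configured key,
// avoiding the ConcurrentHashMap<String, Set<String>> indirection. Once the
// limit is reached, previously unseen values are rejected.
final class PerTagCardinalityLimiter {
  private final int limit;
  private final Set<String> seen;

  PerTagCardinalityLimiter(int limit) {
    this.limit = limit;
    this.seen = new HashSet<>(limit * 2);
  }

  // Returns true if the value may participate as an aggregation dimension.
  synchronized boolean admit(String value) {
    if (seen.contains(value)) {
      return true;
    }
    if (seen.size() >= limit) {
      return false; // over the cap: caller could fall back to an overflow bucket
    }
    seen.add(value);
    return true;
  }
}
```

A single-threaded variant could drop the synchronized keyword if, as suggested above, the metrics work moves onto one background thread.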
