Conversation

@kwasniew (Contributor) commented Jan 21, 2026

Description

Adds full impact metrics support to this SDK

This PR is the first example of impact metrics implemented mostly in Yggdrasil, with the SDK doing the wrapper work.
The actual change is <100 LOC. Most of the PR is README and tests.

What the SDK has to do is:

  • orchestrate calls to the Ygg engine
  • add label resolution for impact metrics
  • verify everything works together in a test

Interesting decisions:

  • The Node SDK restores regular Unleash bucket metrics when a send fails; the Python SDK doesn't. That is pre-existing behavior and I didn't want to change it. Impact metrics, however, do get restored on failure, identically to the Node SDK's impact-metric restoration (see the sketch after this list). Please shout if you disagree with this decision :)
  • The ImpactMetrics class is a wrapper around the Ygg engine with extra label resolution added on top.
  • For testing, I decided to use file bootstrap to simplify setup and avoid mocks. I also exercise the Unleash Client API and assert on what was sent over the wire, which gives high ROI without testing internals. The only thing I didn't like (though I found similar examples in other tests) is how we trigger metric sending: the public API doesn't expose an explicit send-metrics method, so I call the internal aggregate_and_send_metrics. I still prefer that to an arbitrary wait for the scheduler.
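
Roughly, the restoration decision above boils down to this (a sketch, not the final code; restore_impact_metrics is an illustrative name, not a confirmed engine API):

import logging

LOGGER = logging.getLogger(__name__)


def send_impact_metrics(engine, send) -> None:
    # Sketch: collect impact metrics from the Ygg engine and, if the
    # send fails, hand them back so the next interval retries them.
    impact_metrics = engine.collect_impact_metrics()
    try:
        send(impact_metrics)
    except Exception:
        LOGGER.warning("Metrics send failed; restoring impact metrics")
        engine.restore_impact_metrics(impact_metrics)
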
Type of change

  • New feature (non-breaking change which adds functionality)

How Has This Been Tested?

  • Unit tests
  • Spec Tests
  • Integration tests / Manual Tests

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules

@coveralls commented Jan 21, 2026

Pull Request Test Coverage Report for Build 21248587395

Details

  • 65 of 70 (92.86%) changed or added relevant lines in 4 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-0.07%) to 93.342%

Changes Missing Coverage                      Covered Lines  Changed/Added Lines       %
UnleashClient/__init__.py                                 6                    7  85.71%
UnleashClient/environment_resolver.py                    14                   15  93.33%
UnleashClient/periodic_tasks/send_metrics.py              8                   11  72.73%

Totals
  Change from base Build 19699333839: -0.07%
  Covered Lines: 701
  Relevant Lines: 751

💛 - Coveralls

@kwasniew force-pushed the impact-metrics-with-ygg branch from 850ddee to 41adbe3 on January 21, 2026 09:00
@kwasniew force-pushed the impact-metrics-with-ygg branch from 8642418 to 5730f0c on January 21, 2026 09:09
@kwasniew force-pushed the impact-metrics-with-ygg branch from cb22b2e to 8b21608 on January 21, 2026 09:15
@kwasniew force-pushed the impact-metrics-with-ygg branch from f59beb5 to a637b07 on January 21, 2026 09:19
@kwasniew force-pushed the impact-metrics-with-ygg branch from 1cd0a29 to a9fa478 on January 21, 2026 09:39
@kwasniew requested a review from sighphyre on January 21, 2026 09:46
@sighphyre (Member) left a comment

Yeah cool, this shaped up really nicely

    engine: UnleashEngine,
) -> None:
    metrics_bucket = engine.get_metrics()
    impact_metrics = engine.collect_impact_metrics()
@sighphyre (Member):

If I remember the binding code, this can potentially raise an exception. Think that's pretty unlikely but how do you feel about making this flow not bork metrics if this fails?

@kwasniew (Contributor, Author):

Hmm, I thought it should never happen. But to be safe against some weird internal error I can wrap it with an error handler.
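
Something like this, roughly (an illustrative helper, not the final code):

import logging

LOGGER = logging.getLogger(__name__)


def collect_impact_metrics_safely(engine) -> list:
    # The binding can, in rare cases, raise here; don't let that take
    # the regular metrics flow down with it.
    try:
        return engine.collect_impact_metrics() or []
    except Exception:
        LOGGER.warning("Failed to collect impact metrics", exc_info=True)
        return []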

        self.metric_job: Job = None
        self.engine = UnleashEngine()
        self._impact_metrics = ImpactMetrics(
            self.engine, self.unleash_app_name, self.unleash_environment
@sighphyre (Member):

We don't want to do what we do in Node and get this from the token? I'm going to deprecate this environment across the SDKs over the next few months, just a heads up

@kwasniew (Contributor, Author):

That's a good point. I remember fixing it in Node so we can do the same here.
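
For reference, server-side Unleash API tokens are shaped like [project]:[environment].[hash], so the resolution could look roughly like this (a hypothetical helper mirroring the Node SDK approach, not code from this PR):

from typing import Optional


def environment_from_token(token: str) -> Optional[str]:
    # Hypothetical sketch: pull the environment out of a token shaped
    # like '[project]:[environment].[hash]'. Tokens without that shape
    # (e.g. admin tokens) yield None so callers can fall back.
    if ":" not in token:
        return None
    remainder = token.split(":", 1)[1]
    environment = remainder.split(".", 1)[0]
    return environment or None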

        return self._run_state == _RunState.INITIALIZED

    @property
    def impact_metrics(self) -> ImpactMetrics:
@sighphyre (Member):

Ah, found the Java developer! So I don't think this adds anything to our lives in its current shape. This is a necessary pattern in Java, but the magic of Python properties is that we can do this later, if we ever feel the need, without impacting the public API. I would just expose the field directly, to be honest.

@kwasniew (Contributor, Author):

Honestly, I didn't want to do this but got inspired by def connection_id(self), which should probably be a regular property.

@sighphyre (Member):

It probably should. Dunno why it's like that haha, probably AI code that got a bit too excited
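
To spell the point out with a toy example (names made up): exposing the attribute directly today doesn't lock anything in, because a property can replace it later without touching call sites.

class ImpactMetricsStub:
    # Stand-in for the real wrapper, just for this illustration.
    pass


class ClientToday:
    def __init__(self) -> None:
        # Plain attribute: client.impact_metrics just works.
        self.impact_metrics = ImpactMetricsStub()


class ClientLater:
    def __init__(self) -> None:
        self._impact_metrics = ImpactMetricsStub()

    @property
    def impact_metrics(self) -> ImpactMetricsStub:
        # Same public API as ClientToday; call sites are unchanged.
        return self._impact_metrics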


### Impact metrics
@sighphyre (Member):

Thank you!

        labels = self._resolve_labels(flag_context)
        self._engine.observe_histogram(name, value, labels)

    def _resolve_labels(
@sighphyre (Member):

I don't think this is wrong but it does look like "Java developer slipped, fell and landed in the Python runtime" to me. What you have is clear so I'm going to make some recommendations based on what I would do given my Python background and you can pick and choose some or none of this

I think the creation of the dict here is a bit unfortunate. We close over the environment and appName fields but literally only use them here to new this dict up. Smells like a way to work around Python's lack of destructuring. Good news, Python does have destructuring though!

This is also definitely begging to be a list comprehension with destructuring. That may or may not read okay to someone without a Python background but it reads fine to me

This is how I would have done this:

def __init__(self, engine: UnleashEngine, app_name: str, environment: str):
    self._engine = engine
    self._base_labels = {
        "appName": app_name,
        "environment": environment,
    }


def _variant_label(self, flag_name: str, ctx) -> str:
    variant = self._engine.get_variant(flag_name, ctx)
    if variant and variant.enabled:
        return variant.name
    if variant and variant.feature_enabled:
        return "enabled"
    return "disabled"


def _resolve_labels(
    self, flag_context: Optional[MetricFlagContext]
) -> Dict[str, str]:
    if not flag_context:
        # Just a lil defensive copying so we don't leak mutable state
        return dict(self._base_labels)

    return {
        **self._base_labels,
        **{
            flag: self._variant_label(flag, flag_context.context)
            for flag in flag_context.flag_names
        },
    }
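
For what it's worth, assuming these methods are assembled onto the ImpactMetrics class, a quick stub-driven check of the result (values made up):

class _StubVariant:
    name = "blue"
    enabled = False
    feature_enabled = False


class _StubEngine:
    # Minimal stand-in for UnleashEngine.get_variant, demo only.
    def get_variant(self, flag_name, ctx):
        return _StubVariant()


metrics = ImpactMetrics(_StubEngine(), "my-app", "production")
labels = metrics._resolve_labels(
    MetricFlagContext(flag_names=["checkout-flow"], context={})
)
# -> {'appName': 'my-app', 'environment': 'production', 'checkout-flow': 'disabled'}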

@kwasniew (Contributor, Author):

Yup, I like your version more. Thanks for pointing it towards more idiomatic Python.

"""Context for resolving feature flag values as metric labels."""

flag_names: List[str] = field(default_factory=list)
context: Dict[str, Any] = field(default_factory=dict)
@sighphyre (Member):

Getting some mixed messages from this field. The class is a context but so is this. Is it contexts all the way down?

Weirdly enough, because of this, I have no context for what this actually represents.

@kwasniew (Contributor, Author):

This is consistent with what we called it in the Node SDK. It's our user context, so we can evaluate flag_names to enabled/disabled/variant_name against it. I agree it may be confusing here, since in Node the Context has a proper shape:

export interface Context {
  [key: string]: string | Date | undefined | number | Properties;
  currentTime?: Date;
  userId?: string;
  sessionId?: string;
  remoteAddress?: string;
  environment?: string;
  appName?: string;
  properties?: Properties;
}

@kwasniew (Contributor, Author) commented Jan 21, 2026:

I think this SDK uses a plain dict for the user context everywhere, and I don't want to make a big change in this PR.
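
For concreteness, a call site with the plain-dict context would look something like this (flag name and values are made up; observe_histogram matching the wrapper hunk shown earlier):

flag_context = MetricFlagContext(
    flag_names=["checkout-flow"],
    context={"userId": "123", "remoteAddress": "127.0.0.1"},
)
# client is an initialized UnleashClient
client.impact_metrics.observe_histogram("request_time_ms", 125.0, flag_context)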

@sighphyre (Member):

Ah! It's the actual context. Yep, makes sense, let's keep it the same. Context needs a proper type some day, today is not that day

@github-project-automation bot moved this from New to Approved PRs in Issues and PRs, Jan 21, 2026
@kwasniew force-pushed the impact-metrics-with-ygg branch from 75d07f3 to d4b9794 on January 21, 2026 12:55
@kwasniew force-pushed the impact-metrics-with-ygg branch from 37db13b to d41836b on January 21, 2026 13:07
@kwasniew force-pushed the impact-metrics-with-ygg branch from 1cefe13 to bb4445f on January 22, 2026 11:45
@kwasniew force-pushed the impact-metrics-with-ygg branch from 6445a59 to a935bd3 on January 22, 2026 12:32
@kwasniew closed this Jan 22, 2026
@github-project-automation bot moved this from Approved PRs to Done in Issues and PRs, Jan 22, 2026
@kwasniew reopened this Jan 22, 2026
@github-project-automation bot moved this from Done to New in Issues and PRs, Jan 22, 2026
@sighphyre merged commit 9082c32 into main Jan 23, 2026
92 of 118 checks passed
@github-project-automation bot moved this from New to Done in Issues and PRs, Jan 23, 2026
@sighphyre deleted the impact-metrics-with-ygg branch January 23, 2026 08:35