
This is not generally recommended, but for advanced use cases - e.g. a series of related experiments that needs to reuse the control and test buckets, we now expose the ability to copy and set the salts used for deterministic hashing. This is meant to be used with care and is only available to Project Administrators. It is available in the Overflow (...) menu in Experiments.

## Evaluation Order

When evaluating gates, experiments, and layers, the SDK iterates through a list of rules generated by the server. Rules are evaluated sequentially, and the first matching rule determines the result. Overrides always take precedence because they appear first in the rule list.

<Note>
Each step uses the hash-based bucketing described above. Layer allocation and group assignment use different salts, so a user's position in the layer is independent of their group assignment within the experiment.
</Note>
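
Conceptually, every check reduces to a first-match loop over the server-generated rule list. Below is a minimal sketch, not actual SDK source; `Rule`, `StatsigUser`, and the field names are illustrative:

```typescript
type StatsigUser = { userID: string; [key: string]: unknown };

interface Rule {
  matches: (user: StatsigUser) => boolean; // the rule's conditions
  result: unknown;                         // value/group returned on a match
}

// Rules are evaluated in order; the first match determines the result.
// Overrides win simply because the server places them first in the list.
function evaluate(rules: Rule[], user: StatsigUser, defaultValue: unknown): unknown {
  for (const rule of rules) {
    if (rule.matches(user)) return rule.result;
  }
  return defaultValue;
}
```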

### Experiments

When an experiment is evaluated (i.e., when you call `getExperiment`), evaluation proceeds in this order:

1. **ID overrides** — Specific user/unit IDs mapped to a group
2. **Conditional overrides** — Segment or gate-based overrides, evaluated in order
3. **Layer holdouts** — If the experiment is in a layer, layer-level holdout gates are checked
4. **Holdout gates** — Experiment-level holdout gates; users in a holdout receive default values
5. **Experiment exclusion** — Mutual exclusion segments that prevent users from being in multiple experiments
6. **Start status** — If the experiment is not started, users receive default values (with optional non-production environment exceptions)
7. **Layer allocation** — For experiments in a layer, the user's bucket (based on the layer's universe salt) must fall within the experiment's allocated segments. This is checked **before** targeting.
8. **Targeting gate** — Users who fail the targeting gate receive default values. This is checked **after** layer allocation.
9. **Group assignment** — The user's bucket (based on the experiment salt) determines which group they fall into. Groups are cumulative ranges across 1000 buckets; see the bucketing sketch after this list.
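
Steps 7 and 9 both rely on the deterministic hashing described earlier. Here is a minimal bucketing sketch; the SHA-256 construction over `salt.unitID` and the bucket counts are assumptions about implementation details that may differ across SDK versions:

```typescript
import { createHash } from "crypto";

// Deterministic bucketing sketch: hash `${salt}.${unitID}` with SHA-256 and
// read the first 8 bytes as a big-endian integer (illustrative, not SDK source).
function bucket(salt: string, unitID: string, numBuckets: number): number {
  const digest = createHash("sha256").update(`${salt}.${unitID}`).digest();
  return Number(digest.readBigUInt64BE(0) % BigInt(numBuckets));
}

// Layer allocation and group assignment use different salts, so the two
// bucket positions are independent (as noted above).
const layerBucket = bucket("layer_salt", "user_123", 1000);
const groupBucket = bucket("experiment_salt", "user_123", 1000);
```

Because group ranges are cumulative, a 50/50 experiment maps buckets 0–499 to the first group and 500–999 to the second.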

### Layers

When a layer is evaluated (i.e., when you call `getLayer`), evaluation proceeds in this order:

1. **Override rules** — ID overrides from all experiments in the layer
2. **Layer holdout gates** — Holdout gates attached to the layer
3. **Experiment allocation** — Each experiment in the layer has a `configDelegate` rule. The user's bucket determines which experiment they are delegated to.
4. **Delegated experiment evaluation** — Once delegated, the experiment's own evaluation runs (start status, targeting gate, group bucketing as described above)

If no experiment allocation rule matches, the user receives the layer's default values.
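
A minimal sketch of the allocation step (step 3), assuming each `configDelegate` rule maps a bucket range to an experiment; the shapes and names here are illustrative:

```typescript
interface AllocationRule {
  bucketRange: [number, number]; // inclusive start, exclusive end
  experimentName: string;        // the experiment this rule delegates to
}

// Returns the experiment the user is delegated to, or null to signal that
// the caller should fall back to the layer's default values.
function delegate(rules: AllocationRule[], layerBucket: number): string | null {
  for (const { bucketRange: [start, end], experimentName } of rules) {
    if (layerBucket >= start && layerBucket < end) return experimentName;
  }
  return null; // no allocation rule matched, so layer defaults apply
}
```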

### Holdouts

Holdout gates evaluate in this order:

1. **Experiment exclusion** — Exclusion segments (if applicable)
2. **ID overrides** — Specific user/unit IDs
3. **Population targeting gate** — If the holdout has a targeting gate, users who fail it are not held out
4. **Holdout percentage** — The pass percentage on the holdout rule determines the holdout rate
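
Step 4 is a plain percentage check. A sketch reusing the `bucket` helper from the experiments section above; the 10000-bucket granularity is an assumption, not a documented constant:

```typescript
// A user is held out when their bucket falls below the holdout rate.
// passPercentage is 0-100 as configured on the holdout rule.
function isHeldOut(holdoutSalt: string, unitID: string, passPercentage: number): boolean {
  return bucket(holdoutSalt, unitID, 10000) < passPercentage * 100;
}
```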

### Gates

When a feature gate is evaluated (i.e., when you call `checkGate`), evaluation proceeds in this order:

1. **ID overrides** — Specific user/unit IDs mapped to pass/fail
2. **Conditional overrides** — Segment or gate-based overrides
3. **Holdout rules** — If the gate has holdouts attached
4. **Rules** — The gate's targeting rules, evaluated in the order they appear in the console. Each rule has its own conditions and pass percentage.
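
Putting steps 1–4 together, gate evaluation is the same first-match loop with a per-rule pass percentage applied at the end. A sketch reusing `StatsigUser` and `bucket` from the sections above (illustrative names, assumed granularity):

```typescript
interface GateRule {
  matches: (user: StatsigUser) => boolean; // the rule's conditions
  passPercentage: number;                  // 0-100, as set in the console
  salt: string;                            // per-rule salt for bucketing
}

function checkGateSketch(rules: GateRule[], user: StatsigUser): boolean {
  for (const rule of rules) {
    if (rule.matches(user)) {
      // First matching rule wins; its pass percentage decides the result.
      return bucket(rule.salt, user.userID, 10000) < rule.passPercentage * 100;
    }
  }
  return false; // no rule matched, so the gate fails
}
```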

## When Evaluation Happens

On server SDKs, evaluation happens when the gate or experiment is checked. To make this possible, server SDKs hold your project's entire ruleset in memory - a JSON representation of each gate and experiment. On client SDKs, all gates and experiments are evaluated on Statsig's servers when you call `initialize`.
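
For example, with a server SDK the check itself runs in-process against the cached ruleset. This is a sketch only; method names and signatures vary across Statsig's server SDKs:

```typescript
// Sketch of the server-side flow; the interface is hypothetical, standing in
// for whichever Statsig server SDK you use.
interface ServerSDK {
  initialize(serverSecretKey: string): Promise<void>; // downloads ruleset into memory
  checkGate(user: { userID: string }, gateName: string): boolean; // local evaluation
}

declare const statsig: ServerSDK; // assume an initialized SDK instance

async function main(): Promise<void> {
  await statsig.initialize("server-secret-key"); // fetch and cache the ruleset
  // No network call here: evaluated against the in-memory ruleset.
  const passed = statsig.checkGate({ userID: "user_123" }, "my_gate");
  console.log(passed);
}
```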


## Evaluation of null/empty unitIDs

Note that we do not apply any filtering or business logic before assigning a bucket to an individual unitID. This means that even a null or empty unitID will be bucketed deterministically based on the salt.
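
Using the `bucket` sketch from earlier, an empty unitID still yields a stable bucket, which means every user with a missing ID lands in the same bucket for a given salt:

```typescript
// An empty unitID hashes like any other string: deterministic, and identical
// for all users who lack an ID.
const emptyBucket = bucket("experiment_salt", "", 1000);
console.log(emptyBucket); // same value on every call
```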