Anchor consumption grid lower bound to consumption_floor parameter #8
hmgaudecker wants to merge 42 commits into main from
Conversation
Consumption is now declared as `IrregSpacedGrid(n_points=N)` (no fixed points). Callers inject log-spaced gridpoints from `consumption_floor` to $300k via `aca_model.consumption_grid.inject_consumption_points(params=..., model=...)` before solving. This means the lowest consumption choice equals the per-iteration floor, removing a degree of freedom from the grid and eliminating the previous mismatch where `c < floor` was a legal grid choice.

Requires pylcm support for runtime-supplied points on continuous action grids (PR OpenSourceEconomics/pylcm#338). aca-model CI now installs pylcm from the matching `feature/runtime-action-grids` branch.

Other changes:
- `consumption_grid.py`: new module with `compute_consumption_points` and `inject_consumption_points` helpers.
- `benchmark.get_benchmark_params(*, model=None)`: when `model` is given, returns params with consumption points injected.
- `benchmark.get_benchmark_initial_conditions`: switch from `.start` / `.stop` to `to_jax().min()` / `.max()` so it works on both `LinSpacedGrid` and `PiecewiseLinSpacedGrid` (the AIME grid is now piecewise; this was a pre-existing bug surfacing as `AttributeError`).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
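The helper bodies aren't shown in this PR text; a minimal NumPy sketch of what `compute_consumption_points` plausibly computes (the function name is from the change list above; the body, the NumPy stand-in, and the example floor value are assumptions):

```python
import numpy as np

def compute_consumption_points(consumption_floor, max_consumption, n_points):
    # Log-spaced gridpoints anchored at the per-iteration floor, so the
    # lowest consumption choice equals the floor exactly (sketch only;
    # the real helper operates on JAX arrays).
    return np.geomspace(consumption_floor, max_consumption, num=n_points)

pts = compute_consumption_points(1597.09, 300_000.0, 70)
# pts[0] is the floor itself: nothing below the transfer floor is on-grid.
```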
`utility_scale_factor` was registered as a regime function returning a (n_pref_types,) array, then re-indexed by `pref_type` inside `bequest` and `utility`. pylcm broadcasts function outputs to per-cell scalars before they are consumed, so the `[pref_type]` indexing produced silent NaN in the dead regime's V — surfaced as the all-NaN failure on the ASV benchmark.

Mirror the `discount_factor` pattern: take the state as input and return a per-cell scalar. Drop the `[pref_type]` indexing on `utility_scale_factor` from `utility` and `bequest` (those still index the params-Series `consumption_weight` and `coefficient_rra`, which is the supported pattern — only DAG function outputs are pre-broadcast).

The matching pylcm validator (PR #338) now raises a clear `RegimeInitializationError` when a function output is consumed via state-indexing in a downstream consumer; this aca-model change is the fix that lets the dead regime construct under that validator.

Tests in `test_preferences.py` and `test_model_components.py` updated to pass a scalar `utility_scale_factor` and supply the new `pref_type` arg to `utility_scale_factor`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reduce n_assets_batch_size from 2 to 1 in MODEL_CONFIG so the assets state axis is streamed one slice at a time, lowering peak GPU memory during solve on the V100-PCIE-16GB. Benchmark grid config is unchanged. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The OOM at production grid sizes came from pylcm's deferred diagnostics flush in solve_brute (`_emit_deferred_diagnostics` materialising a fused per-period reduction graph at end-of-solve), not from per-period peak. Halving the assets batch did not address that; reverting so the production loop runs at its previous throughput. Workaround for the diagnostics OOM lives in aca-estimation's simulate tasks (log_level="off"). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #339's per-period `block_until_ready` made the OOM surface inside the loop instead of at the post-loop diagnostic flush, but the 7.26 GiB allocation request was the same — it isn't the diagnostic accumulator, it's a real per-period `max_Q_over_a` working set at production grid sizes (`n_consumption=70`, `n_assets=24`, `n_aime=12`, plus the per-target next-V gather across reachable regimes). Cutting the assets-axis chunk back to 1 reduces the per-kernel peak. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Production solve allocates a per-period Q intermediate of shape `(non-assets-states × actions)` per assets-batch slot. With `n_assets_batch_size=1` we already chunk that axis to the minimum; the remaining outer-state product (aime × wage_res × hcc × pref_type × health × ...) times the action grid still pushes past the V100 16 GB once `pref_type` is split off into its own partition lift, which removes a free factor that previously thinned the kernel.

Add a sibling `n_aime_batch_size` knob (default 1, 0 in `BENCHMARK_GRID_CONFIG`) and thread it through both AIME grid types in `_build_aime_grid`. AIME has 12 prod gridpoints in the LinSpaced fallback and 32 in the PiecewiseLinSpaced production path, so a unit batch shrinks the live Q intermediate by roughly that factor — enough headroom to land back inside V100 memory.

Pairs with the pylcm-side fix that stops `_DiagnosticRow` pinning per-period V templates in device memory (lazy-solve-diagnostics branch). The diagnostic leak masked the underlying batching gap; once it's gone, the Q intermediate is the next thing to size for the device.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The grid floor already tracks the per-iteration `consumption_floor` parameter; the ceiling was a hardcoded 300k constant. Surface it as a fixed param via a marker function (`consumption_grid_upper_bound`) so callers can declare the bracket per model creation, and read it back at inject time from each regime's `resolved_fixed_params`. The marker function's output is intentionally unused — its only job is to put `max_consumption` in the regime params template so pylcm's fixed-param machinery captures it. dags.tree pruning drops the call at solve / simulate. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The runtime-upper-bound change requires every caller to supply `max_consumption` via `fixed_params`; estimation tasks (e.g. `task_simulate_aca`) hit a `KeyError` mid-pipeline because they construct the model from data-derived `fixed_params` that have no reason to mention a grid bracket. Centralise the default in both `baseline.model.create_model` and `aca.model.create_model` so existing callers keep working with the prior 300k bracket and only opt-in callers need to override.
Lets callers opt in to pylcm's simulate-AOT path (`Model(n_subjects=...)`) without bypassing the aca-model factories.
The aca-model factories now require `n_subjects` as a kw-only int with no default — there's never a good reason for an aca-model caller to leave it unspecified, and silently letting it default to `None` (= no AOT, lazy-compile path) was exactly how the simulate-AOT benefit went unused on the prod estimation loop. Forcing each caller to make a deliberate choice catches that. Tests pass `n_subjects=1` for bare `get_params_template()` / shock-grid-inspection paths that never simulate.
The marker-function-via-DAG pattern didn't survive pylcm's pruning: `consumption_grid_upper_bound`'s output is unused, so dags.tree drops it before its `max_consumption` parameter reaches the params template, and `broadcast_to_template` has nowhere to put the value. Result: `resolved_fixed_params["max_consumption"]` was always missing, and `inject_consumption_points` raised `KeyError`.

Sidestep pylcm's params machinery for this knob:
- Drop the `consumption_grid_upper_bound` marker function and the `_with_max_consumption_default` helper.
- Add `max_consumption: float` (kw-only, required, no default) to all three factories: `baseline.create_model`, `aca.create_model`, `create_benchmark_model`.
- Each factory attaches the value directly to the returned `Model` instance (`model.max_consumption = ...`).
- `inject_consumption_points` reads `model.max_consumption` directly.

No defaults — every caller passes the bracket explicitly.
Adds `MAX_CONSUMPTION = 300_000.0` to `baseline/regimes/_common.py` next to the other grid bounds (assets `stop=500_000.0`, AIME `stop=8_000.0`). The two `create_model` factories and `create_benchmark_model` no longer take `max_consumption` as a kwarg; each factory reads the constant directly and attaches it onto `model.max_consumption`. `inject_consumption_points` is unchanged — it still reads `model.max_consumption` (the legitimate consumer that combines it with the per-iteration `consumption_floor`).

Routed via the Model attribute rather than `fixed_params` because pylcm validates fixed_params keys against the regime DAG and rejects entries no function consumes (`InvalidParamsError: Unknown keys: ['max_consumption']`).

Also pins the pylcm CI ref to 6c610d1 — the squash-merge of pylcm #341 (int32 lock-in) into feat/simulate-aot-n-subjects — to make this build deterministic against pylcm drift.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
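A minimal sketch of the attribute routing, with a bare stand-in class in place of pylcm's `Model` (everything except `MAX_CONSUMPTION` and the attribute name is an assumption):

```python
MAX_CONSUMPTION = 300_000.0  # from baseline/regimes/_common.py


class Model:
    """Stand-in for pylcm's Model; only attribute assignment matters here."""


def create_model():
    model = Model()
    # Attach the bracket as a plain attribute: fixed_params would reject it,
    # since pylcm validates fixed_params keys against the regime DAG and no
    # regime function consumes `max_consumption`.
    model.max_consumption = MAX_CONSUMPTION
    return model


# inject_consumption_points reads it back directly:
bracket = create_model().max_consumption
```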
With consumption now declared as `IrregSpacedGrid(n_points=N)` and points filled at runtime from `geomspace(consumption_floor, max_consumption, N)`, the grid clusters densely just above `consumption_floor`. At the lowest-asset / highest-OOP-shock corner, those near-floor consumption choices push `next_assets = cash_on_hand - OOP - consumption` slightly below the assets grid's old lower bound (`0` for the bare model, `-max_annual_labor_income` when wage_params are available). Out-of-bounds interpolation of next-period V then injects NaN, which propagates back through E[V] and eventually fails `validate_V`.

Symptom on the production solve: `Value function at age 93 in regime 'retiree_oamc_forced_forcedout': 7317 of 207360 values are NaN`, with the `[NOTE]` showing E[V] NaN concentrated at the lowest assets indices and the highest hcc_transitory shock.

Subtract `MAX_CONSUMPTION` from the assets floor to give a worst-case single-period drain margin. With 24 linspace points spanning the wider range, the per-point density change is negligible; the dead state and the bare-model fallback get the margin too. This fix is the cheapest one — no change to the consumption grid type, no change to per-iteration parameters, no new constraints.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
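The margin argument reduces to one inequality: if `cash_on_hand - OOP` never falls below the old assets floor, then any on-grid consumption `c <= MAX_CONSUMPTION` keeps `next_assets` above `old_floor - MAX_CONSUMPTION`. A worked check with the bare-model bound (the worst-case numbers are illustrative assumptions):

```python
MAX_CONSUMPTION = 300_000.0

old_floor = 0.0                    # bare-model assets lower bound (from the text)
new_floor = old_floor - MAX_CONSUMPTION

# Worst single-period drain: resources exactly at the old bound, maximal
# on-grid consumption. next_assets = cash_on_hand - OOP - consumption.
cash_minus_oop = old_floor
consumption = MAX_CONSUMPTION      # highest possible gridpoint
next_assets = cash_minus_oop - consumption

assert next_assets >= new_floor    # stays on-grid: no out-of-bounds NaN
```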
This reverts commit 714fee0.
Two new DAG functions in canwork & ss != "forced" regimes:
- `target_his(his, labor_supply, is_medicaid_eligible)`: HIS class of the surviving target regime. Mirrors the cross-HIS branches inside `_make_transition_canwork` (tied → nongroup when stopping work, Medicaid override → nongroup).
- `imputed_pension_wealth_next_period(next_aime, target_his, period, ...)`: computes `pw_next_imputed = benefit_imputed(next_pia, next_period, target_his) · epdv_constant_pension[next_period]` using bare-name parameters into 1-period-shifted views of the imputation arrays (`*_next_period`). Inlining is required because pylcm's AST shape inference doesn't trace nested calls into `pensions.benefit`.

`next_assets` continues to consume `pension_assets_adjustment`, which now sees a real `imputed_pension_wealth_next_period` via the DAG (previously fixed to 0.0 in aca-estimation). The chained dependency `next_aime` → `imputed_pension_wealth_next_period` → `pension_assets_adjustment` is unblocked by pylcm exempting `next_<state>` names from fixed_param extraction (PR pylcm#342).

Also drops `pension_assets_adjustment` from `borrowing_constraint`: a negative correction at a cross-HIS transition can leave no feasible action and inject `-inf` into V via `argmax_and_max(initial=-inf, where=F_arr)`, which then cancels with `0 * -inf = NaN`. The correction is a post-decision shift on next-period assets and must not gate the current consumption choice.
The frozen `benchmark_params.pkl` was generated when aca-estimation's `_assemble_params.py` still wrote the placeholder `fp["imputed_pension_wealth_next_period"] = 0.0` into fixed_params. Now that the regime registers `imputed_pension_wealth_next_period` as a DAG function (pension imputation correction in 4ae4446), pylcm's `_resolve_fixed_params` rejects the stale key with `InvalidParamsError: Unknown keys: ['imputed_pension_wealth_next_period']`. Drop the key on load so the snapshot stays valid. Regenerating `benchmark_params.pkl` end-to-end would also remove it; the filter is a no-op for a fresh snapshot.
The frozen `benchmark_params.pkl` predates aca-data's `_shift_one_period_forward` change, so the 1-period-shifted views the pension correction consumes are missing. Synthesise them on load with the same transformation aca-data applies. Regenerating the snapshot end-to-end would also produce the keys; this filter is a no-op for a fresh snapshot.
`target_his` is a DAG function returning a `HealthInsuranceState` int, used to index 2D imputation arrays inside `imputed_pension_wealth_next_period`. pylcm needs the categorical mapping declared so `array_from_series` can reshape `(age, target_his)`-indexed Series correctly. Mirrors the existing 'his' entry — same enum class.
The shifted imputation arrays (`imp_*_next_period`) are consumed by `imputed_pension_wealth_next_period(target_his, period, ...)`. pylcm's `_validate_and_reorder_levels` matches Series MultiIndex level names against the function's parameter names, so the level needs to be `target_his`, not `his`.
`state_transitions["assets"]` becomes a per-target dict. The dead target gets a simpler `next_assets_terminal` (cash + transfers - consumption - oop) without the `pension_assets_adjustment` chain, because:
1. There is no future for a dead agent — the imputation correction is meaningless.
2. `pension_assets_adjustment` consumes `imputed_pension_wealth_next_period`, which consumes `next_aime`. The dead per-target transitions don't include `next_aime` (dead has no aime state), so dags can't resolve it and pylcm leaks `next_aime` into the kernel signature with no value to pass.

Non-dead targets keep `assets_and_income.next_assets` (the full version with the pension correction).
The pension imputation correction's `imputed_pension_wealth_next_period` indexes shifted arrays via `arr[period, target_his]`, where `target_his` is a DAG output (computed by `health_insurance.target_his` on nongroup/tied/retiree regimes), not a state. pylcm reads the level name `target_his` off the function body via AST inference and rejects matching `pd.Series` fixed_params unless `target_his` is declared as a derived categorical.

Production `task_simulate_baseline` calls `create_model(...)` directly, which previously only forwarded the user's `derived_categoricals` arg. The benchmark module was masking this by injecting `target_his` via `_DERIVED_CATEGORICALS`. Move the declaration to `create_model` itself so the correction works in production without per-caller setup.

Tighten the param annotation: pylcm's `Model.derived_categoricals` is a flat `Mapping[str, DiscreteGrid]`, never the nested form.
Same fix as baseline.model.create_model e1a3eb2: ACA variant model creation also takes its own path through `Model(...)`, so the production `task_simulate_aca_*` flows hit the same "Unrecognised indexing parameter 'target_his'" error after the pension correction landed. Move the derived-categorical declaration into the function itself rather than relying on per-caller setup. Tighten the param annotation to match pylcm's flat `Mapping[str, DiscreteGrid]`.
Asserts that `validate_initial_conditions` admits a subject placed at `assets = -1_000_000` in `retiree_nomc_inelig_canwork` under the benchmark model. Encodes the economic story: with the consumption floor / transfer system, any past assets level is representable — `c = c_floor` is always feasible because `transfers` tops up cash-on-hand to the floor.

The test passes today on benchmark params; it doesn't reproduce the gpu-01 failure (production-side, separate setup loaded by `aca-estimation`'s `assemble_fixed_params`). Kept as a permanent regression guard so a future change that re-introduces a constraint shape that rejects extreme negatives is caught immediately at benchmark scale.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The expression `cash_on_hand + transfers` suffers float32 catastrophic
cancellation when `|cash_on_hand|` is much larger than `consumption_floor`.
For a subject at -$1,000,000 in starting assets:
cash_on_hand ≈ -1e6 (dominated by assets)
transfers = max(0, c_floor - cash_on_hand) ≈ c_floor + 1e6
cash_on_hand + transfers ≈ c_floor ± 0.1 (fp32 error at 1e6 magnitude)
The lowest grid `c` is exactly `c_floor`. With unfavorable rounding,
`c_floor <= c_floor - 0.1` is False — every action gets rejected and
`validate_initial_conditions` raises. This is exactly the failure
gpu-01 hit on `task_simulate_aca_*`: the per-constraint diagnostic
showed `borrowing_constraint = False` (rejects every action by itself)
while `positive_leisure = True`.
The algebraic identity `cash_on_hand + transfers == max(cash_on_hand,
floor)` (where `floor = c_floor * equivalence_scale`) holds exactly
because `transfers` is defined as `max(0, floor - cash_on_hand)`.
Substituting in:
cash_on_hand + max(0, floor - cash_on_hand)
= max(cash_on_hand, cash_on_hand + floor - cash_on_hand)
= max(cash_on_hand, floor)
The `max` form has no cancellation: it returns `floor` exactly when
`cash_on_hand << floor`, and `cash_on_hand` exactly otherwise. Switch
the constraint to take `consumption_floor` and `equivalence_scale`
directly and compute `floor = consumption_floor * equivalence_scale`
in-line.
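The cancellation reproduces in isolation. A NumPy float32 sketch with an illustrative floor value (not the production parameter):

```python
import numpy as np

c_floor = np.float32(1597.0921)
cash_on_hand = np.float32(-1_000_000.0)

# transfers = max(0, floor - cash_on_hand) lives near 1e6, where the fp32
# ulp is ~0.06 -- the floor's low-order bits are already gone at that scale.
transfers = np.maximum(np.float32(0.0), c_floor - cash_on_hand)

additive = cash_on_hand + transfers           # suffers the cancellation
max_form = np.maximum(cash_on_hand, c_floor)  # algebraically identical, exact

assert max_form == c_floor   # returns the floor bit-for-bit
assert additive < c_floor    # ~0.03 short here: `c_floor <= additive` is False
```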
Add a precision-specific unit test asserting `c = c_floor` is admitted
at `cash_on_hand = -$1M` in fp32. The pre-existing benchmark-based
regression guard (`test_extreme_negative_assets_subject_passes_validation`)
didn't catch the bug because benchmark params land on the favorable side
of the rounding; the new test exercises the exact cancellation case.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The previous pin (6c610d1, "Lock integer dtype to int32 end-to-end")
predates pylcm #342, so the test_initial_conditions_extreme_assets
test (and any other test that solves a benchmark regime carrying the
pension-imputation correction) raised:
InvalidParamsError: Missing required parameter:
'retiree_nomc_inelig_canwork__imputed_pension_wealth_next_period__next_aime'
#342's `regime_template` change exempts `next_<state>` references
inside transition signatures from `fixed_param` extraction, which the
correction's `imputed_pension_wealth_next_period(next_aime, ...)`
signature relies on. The new pin tracks `feat/simulate-aot-n-subjects`,
which carries #342, #339, #340 (n_subjects API used by
`create_benchmark_model`), and the per-constraint validation
diagnostic.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Production failure root cause: `consumption_floor` is a Python fp64 float (≈ 1597.0921419521899); `consumption` arrives from the model's fp32 grid (`jnp.geomspace(consumption_floor, ...)`), quantized to 1597.0921630859375 — one fp32 ulp above the input. Without an explicit dtype cast on the floor, `consumption_floor * equivalence_scale` keeps its fp64 type, the comparison promotes to fp64, and the lowest grid point evaluates as 1597.0921630859375 > 1597.0921419521899 → False. Constraint rejects every action.

Cast `consumption_floor` to `consumption.dtype` before the multiply so both sides of the `max` use the same precision. The constraint then admits c=c_floor by exact equality in fp32.

Diagnosed via the per-constraint admissibility table (pylcm 838473e/e4cae2a): production showed `borrowing_constraint=False` at modest asset levels (e.g. -$42k), where neither cash_on_hand magnitude nor NaN propagation could explain the rejection. Local repro pinned the ulp mismatch.

Add `test_borrowing_constraint_admits_c_floor_with_python_float_floor` as a regression guard at the precise production scenario. Drop the debug script; it served its purpose.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
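The ulp mismatch reproduces with plain NumPy, using the exact values quoted above:

```python
import numpy as np

consumption_floor = 1597.0921419521899        # Python fp64 parameter
grid_point = np.float32(consumption_floor)    # fp32-quantized gridpoint

# Quantization lands one fp32 ulp above the fp64 input; a comparison that
# promotes to fp64 therefore sees the gridpoint strictly above the floor.
assert float(grid_point) == 1597.0921630859375
assert float(grid_point) > consumption_floor

# The fix: cast the floor to the grid's dtype before comparing.
floor_cast = np.asarray(consumption_floor, dtype=grid_point.dtype)
assert grid_point == floor_cast               # exact equality in fp32
```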
`jnp.asarray(consumption_floor, dtype=consumption.dtype)` quantized the Python-float `consumption_floor` to the action grid's dtype to match the fp32-quantized consumption grid, so the `c == c_floor` boundary compared as exact equality.

The pylcm canonical-float boundary cast (#345) routes every continuous-grid `to_jax()` through `canonical_float_dtype()`. Under `jax_enable_x64=True` (set in `aca_model/__init__.py`) that's fp64, so the action grid no longer quantizes the floor, and the Python-float floor and grid values cannot disagree on dtype in the first place.

Drop the regression test pinned to the cast workaround; the `max(cash_on_hand, floor)` cancellation guard and the full validate-initial-conditions integration test stay in place.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
`from tests.helpers.social_security import …` collided with the sibling `tests/__init__.py` packages in aca-data and aca-estimation when pytest collected from the aca-dev workspace root — whichever `tests` package got imported first shadowed the others. Use a relative import so each test module resolves its own helpers package unambiguously. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reverts the relative-import attempt and instead removes the empty tests/__init__.py (which was colliding with aca-data and aca-estimation's identically named stubs across the aca-dev workspace). A new tests/conftest.py prepends the tests directory to sys.path so `from helpers.social_security import ...` resolves unambiguously. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
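A sketch of what that `tests/conftest.py` plausibly contains (the helper-package import is from the text; the exact file contents are an assumption):

```python
# tests/conftest.py
import sys
from pathlib import Path

# Prepend this tests directory so `from helpers.social_security import ...`
# resolves to this package's helpers even when pytest collects from the
# aca-dev workspace root alongside aca-data's and aca-estimation's tests.
tests_dir = str(Path(__file__).resolve().parent)
if tests_dir not in sys.path:
    sys.path.insert(0, tests_dir)
```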
Cleanup driven by pylcm's canonical-float boundary cast (#345). With every input pinned to fp64 under `jax_enable_x64=True` (which `aca_model/__init__.py` sets at import), aca-side precision workarounds no longer have a hook.

Source:
- `borrowing_constraint`: switch from `consumption <= max(cash_on_hand, floor)` to `consumption <= cash_on_hand + transfers`. The two are algebraically identical (`cash_on_hand + transfers == max(cash_on_hand, floor)`); the `max` form was justified by float32 catastrophic cancellation at extreme negative cash_on_hand, which cannot occur under fp64. The constraint now consumes `transfers` directly instead of recomputing `consumption_floor * equivalence_scale` — `transfers` is already a DAG node, so the resolved interface is shorter.

Defaults dropped (callers must pass everything explicitly):
- `aca_model.benchmark.create_benchmark_model`: `pref_type_grid`.
- `aca_model.benchmark.get_benchmark_params`: `model`.
- `aca_model.benchmark.get_benchmark_initial_conditions`: `n_subjects`, `seed`.
- `aca_model.baseline.model.create_model`: `fixed_params`, `wage_params`, `derived_categoricals`, `grid_config`, `pref_type_grid`.
- `aca_model.aca.model.create_model`: `policy`, `fixed_params`, `wage_params`, `derived_categoricals`, `grid_config`.
- `aca_model.baseline.regimes.build_all_regimes`: same five.
- `aca_model.aca.regimes.build_all_regimes`: same four.
- `aca_model.baseline.regimes._common.build_grids`: same four.
- Drop the `GRID_CONFIG` import where it was only used as a default value.

Tests:
- New `tests/helpers/model.py` exposes `make_baseline_model` and `make_aca_model` factories that wrap `create_model` with `None` for every optional input. Tests that don't need fixed params reach the factories through the helper rather than spelling out six `None`s each. Production code stays default-free.
- New `test_benchmark_simulate_obeys_borrowing_constraint`: pins the invariant `consumption <= cash_on_hand + transfers` on every alive row of the benchmark simulation. Catches a regression that drops the constraint from a regime, replaces transfers with something looser, or lets an action grid skip the floor.
- `test_initial_conditions_extreme_assets`: drop the fp32-specific cancellation regression test (the runtime no longer reaches that path); replace with a pair of unit tests for the new `borrowing_constraint(consumption, cash_on_hand, transfers)` signature.
The `consumption <= cash_on_hand + transfers` form (algebraically
identical to `consumption <= max(cash_on_hand, floor)`) rounds short by
sub-ULP at extreme `|cash_on_hand|` ~ 1e6 — for HRS-bottom-coded
subjects at `assets = -$1,000,000`, the additive RHS comes in at
`floor - 5.7e-11` (fp64), flipping the kink-boundary `<=` for the
lowest consumption gridpoint. Production `task_simulate_aca_no_mandate`
on HPC fails at `validate_initial_conditions` for those subjects.
The `max(cash_on_hand, floor)` form has no cancellation and returns
`floor` exactly when `cash_on_hand < floor`. This is a general
floating-point precision concern at extreme operands, not an
fp32-specific workaround. Docstring updated accordingly.
Reverts the signature back to
`(consumption, cash_on_hand, consumption_floor, equivalence_scale)`.
Tests:
- `test_borrowing_constraint_admits_floor_at_million_dollar_negative_cash`:
unit-level reproducer of the production failure — passes only with
the `max` form.
- The two new `_at_floor` / `_above_post_transfer_resources` unit tests
switch back to the new signature.
- `test_benchmark_simulate_obeys_borrowing_constraint`: post-hoc check
uses `max(cash_on_hand, floor)` rather than `cash_on_hand +
transfers` (the additive form has the same sub-ULP issue and would
spuriously trip on the same rows).
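A plain-Python fp64 illustration of the fragility, using the floor value quoted earlier in this PR (assuming `equivalence_scale = 1`; the exact sub-ULP offset depends on the operands):

```python
c_floor = 1597.0921419521899
cash_on_hand = -1_000_000.0

transfers = max(0.0, c_floor - cash_on_hand)  # ~1.0016e6, fp64 ulp ~1.2e-10
additive = cash_on_hand + transfers           # can land sub-ULP off the floor
max_form = max(cash_on_hand, c_floor)         # returns c_floor exactly

assert max_form == c_floor
# `additive` agrees with the floor only to within one ulp of the 1e6-scale
# intermediate; production observed floor - 5.7e-11, flipping `c_floor <= rhs`.
assert abs(additive - c_floor) < 1e-9
```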
`jnp.geomspace(consumption_floor, max_consumption, num=n)` returns `consumption_floor * r^0 == consumption_floor` mathematically, but some XLA backends drift the first point by sub-ULP. CUDA at n=70 produces `consumption_floor + 2.27e-13`. The borrowing_constraint compares `consumption[0]` against `max(cash_on_hand, consumption_floor)`, and any positive drift above `consumption_floor` flips the kink-boundary `<=` for subjects with very negative cash — explaining the HPC-only `task_simulate` failures (~250 subjects) that didn't reproduce on CPU.

Pin the first gridpoint back to `consumption_floor` after geomspace. The same drift exists at the upper end (`pts[-1] != max_consumption` exactly) but doesn't flip any constraint comparison, so it's left alone.

`tests/test_consumption_grid.py` parametrises the invariant over `n_points = 5, 16, 64, 70, 100` so a future XLA / JAX upgrade that introduces drift at any of these counts surfaces here rather than at `validate_initial_conditions` on HPC.
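The pinning can be sketched in NumPy (function name and the use of numpy in place of jnp are assumptions; with JAX the update would be `pts.at[0].set(...)`):

```python
import numpy as np

def consumption_points_pinned(consumption_floor, max_consumption, n_points):
    # geomspace's first point can drift by sub-ULP on some XLA backends,
    # so force it back to the floor the constraint compares against.
    pts = np.geomspace(consumption_floor, max_consumption, num=n_points)
    pts[0] = consumption_floor  # with jnp: pts = pts.at[0].set(consumption_floor)
    return pts

for n in (5, 16, 64, 70, 100):
    pts = consumption_points_pinned(1597.0921419521899, 300_000.0, n)
    assert pts[0] == 1597.0921419521899   # exact, regardless of backend drift
```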
Sweeps in the dtype-barrier polish, simulate AOT-during-solve, and the persistence/benchmark fixes from feat/canonical-float-dtype. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
`consumption` becomes an `IrregSpacedGrid(n_points=N)` action grid; the actual gridpoints are supplied per estimation iteration via the new `aca_model.consumption_grid.inject_consumption_points(params=..., model=...)` helper, log-spaced from the per-iteration `consumption_floor` parameter to $300k. Previously `c < floor` was a legal grid choice; the agent can now never consume below the transfer floor.

Depends on

pylcm's `feature/runtime-action-grids` branch, until that lands.

Notes

`get_benchmark_initial_conditions` now uses `to_jax().min()` / `.max()` instead of `.start` / `.stop` so it works on the piecewise AIME grid (a pre-existing `AttributeError` once AIME became piecewise).

Test plan

`pixi run -e tests-cpu tests aca-model/tests/` (199 passed)

🤖 Generated with Claude Code