Build OpenAI weekly reset fast-mode forecast #9

@cbusillo

Description

Objective

Help the user decide when it is safe to turn on high-burn OpenAI fast/thinking mode across multiple OpenAI accounts before the weekly reset.

Finish Line

Context Panel can show, per OpenAI account and across all enabled accounts: the estimated remaining weekly allowance, reset timing, forecast runway, and a clear recommendation such as "safe", "safe for about N hours", "save fast mode", or "needs calibration".
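The four recommendation states above could be classified along these lines. This is only a sketch: the function and field names, and the 10% default reserve, are assumptions, not the shipped model.

```python
from enum import Enum


class Recommendation(Enum):
    SAFE = "safe"
    SAFE_FOR_HOURS = "safe for about N hours"
    SAVE_FAST_MODE = "save fast mode"
    NEEDS_CALIBRATION = "needs calibration"


def recommend(remaining_pct, hours_to_reset, fast_burn_pct_per_hour,
              calibrated, reserve_pct=10.0):
    """Classify fast-mode safety from remaining allowance and reset timing.

    All names and the 10% reserve default are illustrative assumptions.
    Returns (recommendation, runway_hours_or_None).
    """
    if not calibrated or not fast_burn_pct_per_hour:
        return Recommendation.NEEDS_CALIBRATION, None
    # Hold back a reserve so the forecast never spends the last few percent.
    usable = remaining_pct - reserve_pct
    if usable <= 0:
        return Recommendation.SAVE_FAST_MODE, 0.0
    runway_hours = usable / fast_burn_pct_per_hour
    if runway_hours >= hours_to_reset:
        return Recommendation.SAFE, runway_hours
    return Recommendation.SAFE_FOR_HOURS, runway_hours
```

The key property is that "safe" means the runway outlasts the time remaining until the weekly reset, not that the allowance is merely nonzero.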

Current Status

State: Forecast model and tests already existed; PR #14 now wires the app shell headline to forecasts derived from the current cached OpenAI percent-window limits instead of sample-only data.
Next action: Add editable reserve/burn-rate calibration and account selection once the setup app has configurable accounts.
Blocked by: Editable account/configuration UI in #5.
Last verified: 2026-05-06.

Scope

  • Multiple OpenAI ChatGPT accounts.
  • Weekly reset windows and manually entered or observed reset times.
  • Local usage ledger from installation onward.
  • Optional manual correction when provider UI shows exhausted/remaining/reset state.
  • Burn-rate estimates for standard and fast/thinking modes.
  • Safety buffer and confidence labels.
  • Widget copy for safe/unsafe/uncalibrated states.
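The per-mode burn-rate estimates in scope could be derived from the local usage ledger roughly as follows. The ledger shape (timestamp, mode, percent consumed) is a guess, since the issue only says a ledger exists from installation onward.

```python
from datetime import datetime, timedelta


def estimate_burn_rates(ledger):
    """Estimate percent-per-hour burn for each mode from the usage ledger.

    `ledger` is a list of (timestamp, mode, pct_consumed) tuples -- an
    assumed shape for this sketch.
    """
    totals = {}  # mode -> total percent consumed
    spans = {}   # mode -> (first, last) observed timestamps
    for ts, mode, pct in ledger:
        totals[mode] = totals.get(mode, 0.0) + pct
        first, last = spans.get(mode, (ts, ts))
        spans[mode] = (min(first, ts), max(last, ts))
    rates = {}
    for mode, total in totals.items():
        first, last = spans[mode]
        # Floor the window at one hour so a single observation does not
        # produce a divide-by-zero or an absurdly high rate.
        hours = max((last - first).total_seconds() / 3600, 1.0)
        rates[mode] = total / hours
    return rates
```

Keeping standard and fast/thinking modes as separate keys lets the fast-mode rate stay "unknown" until enough fast-mode entries exist, which feeds the needs-calibration state.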

Acceptance Criteria

  • Forecast model distinguishes official, observed, manual, estimated, stale, and unknown data.
  • App can calibrate each OpenAI account reset time and starting allowance.
  • Widget can answer whether fast mode is safe now and for roughly how long.
  • Tests cover runway math, weekly reset behavior, reserve buffer, and multi-account aggregation.
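The data-quality levels and multi-account aggregation in the criteria above could look something like this sketch. The ordering, field names, and the worst-input-wins rule are assumptions for illustration, not the implemented model.

```python
from enum import IntEnum


class Quality(IntEnum):
    # Ordered from most to least trustworthy, matching the criteria's
    # official/observed/manual/estimated/stale/unknown distinction.
    OFFICIAL = 0
    OBSERVED = 1
    MANUAL = 2
    ESTIMATED = 3
    STALE = 4
    UNKNOWN = 5


def aggregate(accounts):
    """Combine per-account (runway_hours, quality) snapshots.

    Assumed portfolio rule: total runway is the sum across enabled
    accounts (fast mode can move between accounts), while the aggregate
    quality is the weakest quality among the inputs.
    """
    runway = sum(a["runway_hours"] for a in accounts)
    quality = max((a["quality"] for a in accounts), default=Quality.UNKNOWN)
    return {"runway_hours": runway, "quality": Quality(quality)}
```

Letting one estimated account drag the whole portfolio down to "estimated" keeps the headline honest, in line with the decision not to pretend local observations are exact.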

Relationships

Validation

  • scripts/commit-gate.sh passed locally with 36 tests.
  • Forecast headline now reads from cached OpenAI percent limits after connector refresh/store load.
  • Existing forecast tests cover safe-through-reset, limited runway, save-fast-mode, calibration-needed, and multi-account portfolio selection.
  • CI/CodeQL pending on PR #14 (Add OpenAI limit probe prototype) after commit b95d524.

Decisions

  • Do not pretend the estimate is exact when usage comes from local/manual observation.
  • Prefer useful confidence language over false precision.
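One way to honor both decisions is to key the widget copy off the data quality rather than the number alone. The mapping and every phrase below are hypothetical placeholder copy, not the shipped strings.

```python
# Hypothetical mapping from data quality to hedged widget copy.
CONFIDENCE_COPY = {
    "official":  "Safe for about {h:.0f} h",
    "observed":  "Likely safe for roughly {h:.0f} h",
    "manual":    "Roughly {h:.0f} h left, based on your last manual entry",
    "estimated": "Maybe {h:.0f} h left (estimate only)",
    "stale":     "Last estimate is stale; refresh before trusting it",
    "unknown":   "Needs calibration",
}


def headline(quality, runway_hours):
    """Render hedged headline copy for the widget (placeholder strings)."""
    return CONFIDENCE_COPY[quality].format(h=runway_hours)
```

Rounding to whole hours is itself a small act of honesty: showing "6 h" instead of "6.37 h" avoids implying precision the local ledger cannot support.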

Open Questions

  • What default reserve buffer should Context Panel use before saying fast mode is safe?
  • Should fast-mode burn rate start as a user-entered multiplier or only learn from history?

Metadata

Assignees

No one assigned

Labels

plan (Durable planning issue)

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
