From c588c5c5fd291444e7911f636132d99d30ddb43a Mon Sep 17 00:00:00 2001 From: Drew Morgan Date: Sat, 25 Apr 2026 00:17:24 +0100 Subject: [PATCH 1/4] chore: session handoff --- context/next-prompt.txt | 31 +- context/note-to-next-instance.md | 135 ++--- context/session-handoff.md | 516 ++---------------- .../sessions/2026-04-25-0014-next-prompt.txt | 28 + .../sessions/2026-04-25-0014-note-to-next.md | 79 +++ .../2026-04-25-0014-unusual-activity-pivot.md | 482 ++++++++++++++++ 6 files changed, 711 insertions(+), 560 deletions(-) create mode 100644 context/sessions/2026-04-25-0014-next-prompt.txt create mode 100644 context/sessions/2026-04-25-0014-note-to-next.md create mode 100644 context/sessions/2026-04-25-0014-unusual-activity-pivot.md diff --git a/context/next-prompt.txt b/context/next-prompt.txt index bfe851c..56499c9 100644 --- a/context/next-prompt.txt +++ b/context/next-prompt.txt @@ -1,28 +1,11 @@ -Hi Claude. Read context/session-handoff.md — the top "2026-04-24 end-of-session summary" block is the current state; the rest is historical narrative worth reading once but skimmable thereafter. +Hi Claude. Read context/session-handoff.md and context/note-to-next-instance.md before doing anything. -**Immediate state**: +Quick orientation: the previous session's arc shipped the API key security work end-to-end (Phases 1+2+3) and the Top Movers redesign including the Unusual Activity pivot — new card on the Home page, new item_unusual_candidates table, multi-horizon z-score ranking. All on main and development; tip is e2119e3. -- Development/main: all Phase 1+2+3 security work + Top Movers redesign Phase 1 (including the sign-gate fix and the percentile_cont::numeric cast) shipped to prod. Drew verified the widget against real item charts. -- Uncommitted tweaks in the working tree (ItemVolatilityStatsRepository.cs + TODO.md): trimmed-median baseline + 2-day baseline buffer + z-score threshold raised from 1.0 to 1.5. Build clean. 
Drew will review and commit when he's back. +The cards are done for now and need to live a few days before tuning. When Drew asks what's next, don't push the cards work — point at TODO.md (lots of options) or wait for him to nominate. -**Validated against Drew's data export**: these tweaks improve ranking noise filtering but don't fully eliminate the DSLR-Camera-style post-spike reversion. That's accepted as a known limitation because the next piece of work (the "Unusual activity" pivot) reframes the card so that reversion is a legitimate signal rather than misleading. +Two parked items needing explicit sign-off before action: +- Read-only prod DB access for offline data analysis. +- Cross-item spike correlation / Torn event-calendar analysis tool. -**Next piece of work: "Unusual activity" pivot**. - -Drew wants to keep the risers/fallers cards but add an "unusual activity" framing for markets departing from their normal trends. With ~29 polling keys this is honestly what we CAN detect (vs top-N movers which we can't). Architecture sketch in the handoff: - -- Two-step pass: Hangfire writes an `item_unusual_candidates` shortlist; home-page endpoint joins against fresh data and re-scores cheaply. -- Multi-horizon z-scores (1h / 6h / 24h / 7d) against 30d baseline; "unusualness" = max |z|. -- "Why flagged" chip per row. -- Mode-to-nearest-1%-of-range as a display metric for item pages (not a ranking signal). - -**Two open design questions Drew raised at end of session** (worth confirming before building): - -1. SQL in C# vs stored procedures — my vote: stay in C#. -2. Shrink item_change_log_summaries buckets from 6h to 1h — my vote: yes, 1h. Needs a migration plan for existing data. This unlocks genuine intraday resolution for the multi-horizon pivot. - -**Do not** start building the pivot without Drew's go-ahead. It touches Flyway, a new table, the rebuild job, a new endpoint, and the widget. Plan first. 
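The multi-horizon scoring in the architecture sketch above — per-horizon z-scores against a 30d baseline, "unusualness" = max |z| — can be illustrated in a few lines of Python. This is a sketch only: the horizon set and the 1/3/6/24 min-sample gates follow the handoff's description, but the function shape, names, and inputs are invented for illustration; the shipped implementation is a single SQL pass.

```python
from statistics import median

# Hypothetical scorer for the "unusual activity" ranking: z-score the
# recent move at several horizons against a 30d baseline, then take
# unusualness = max |z|. Horizons and min-sample gates per the handoff;
# everything else is illustrative.
HORIZON_HOURS = {"1h": 1, "6h": 6, "24h": 24, "7d": 168}
MIN_BUCKETS = {"1h": 1, "6h": 3, "24h": 6, "7d": 24}

def unusualness(bucket_prices, baseline_price, dispersion):
    """bucket_prices: newest-first hourly bucket medians for one item.
    baseline_price: trimmed-median 30d baseline; dispersion: CV-style
    spread of daily medians. Returns None when the item can't be scored."""
    if dispersion <= 0 or baseline_price <= 0:
        return None
    z_by_horizon = {}
    for name, hours in HORIZON_HOURS.items():
        window = bucket_prices[:hours]
        if len(window) < MIN_BUCKETS[name]:
            continue  # too few buckets at this horizon to trust a median
        move = (median(window) - baseline_price) / baseline_price
        z_by_horizon[name] = move / dispersion  # move vs the item's normal spread
    if not z_by_horizon:
        return None
    dominant = max(z_by_horizon, key=lambda h: abs(z_by_horizon[h]))
    return {
        "score": abs(z_by_horizon[dominant]),  # unusualness = max |z|
        "dominant_horizon": dominant,
        "direction": "up" if z_by_horizon[dominant] > 0 else "down",
    }
```

The same shape feeds the "why flagged" chip directly: the dominant horizon and direction are exactly the pieces a row label needs.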
- -Data exports from 2026-04-24 are in `data-exports/` (gitignored) and were useful for validating ranking tweaks. Python + pandas works fine on the summaries CSV (~550k rows, 33MB). - -Parked items needing explicit sign-off: read-only prod DB access; cross-item spike/event-calendar correlation tool. +Data exports from 2026-04-24 sit in data-exports/ (gitignored). Useful if any ranking work resurfaces. diff --git a/context/note-to-next-instance.md b/context/note-to-next-instance.md index 2e0938a..2f33258 100644 --- a/context/note-to-next-instance.md +++ b/context/note-to-next-instance.md @@ -1,79 +1,60 @@ # Note to next instance -This was a marathon. The UI overhaul plan was already written and thorough before I started, which -made Phase 1 mostly execution against a clear spec — not design. Phase 2 was a smaller focused round -Drew chose after reviewing the TODO. Phase 3 was the merge-and-rebase plumbing. - -A few things worth remembering: - -**The plan file matters.** When Drew wrote `context/plans/2026-04-23-ui-ux-overhaul.md` before the -session, he included not just _what_ to do but _how he'd thought about each decision_ — settled vs -still-open questions, aesthetic defaults, sequenced commits, escalation triggers. That turned -multi-hour autonomous work into something I could steer through with confidence. When a plan like -that is available, trust it — the work of thinking it through is already done. When it isn't -available, write one. - -**Codex caught two real bugs post-merge that I'd flagged to myself earlier.** P1 (sign-in in-flight -state never resetting after failure) was literally in my own mid-session handoff as a "worth a -follow-up" note, and I didn't action it. If you flag a bug you can see, fix it before shipping — -otherwise the next reviewer does it for you, which is worse than your own diff hygiene doing it. -Same with P2 — a session-check race I could have foreseen. 
The fix pattern (separate -`sessionChecking` flag for the initial getMe, versus `loadingDotNetUserDetails` for user-initiated -calls) is generally useful; copy it when you see the same shape elsewhere. - -**Backend-running-while-rebuilding is a constant low-grade annoyance.** Every `dotnet build` while -the dev API is running fails on file-locks. The CS-error grep trick (`grep -E 'error CS[0-9]+:'`) -gives a clean signal — if there are no CS errors, the compilation succeeded and only the copy-to- -output failed. Tell Drew "file-lock errors only, real compile clean" rather than dumping the raw -output. - -**Chrome MCP has a connection handshake.** First attempt in a session reported "No Chrome extension -connected" until Drew explicitly opened the browser. Ask about it explicitly rather than assuming -the `--chrome` flag is enough. - -**Rebase conflicts on this branch pair were almost all "drop the feat-branch code entirely".** The -feat branch had added `StaleDataBanner` in the header of a pre-drawer filter layout that doesn't -exist anymore in the merged dev. The conflict marker tempts you to preserve both sides — don't. The -ui branch's drawer layout is the survivor; the feat additions that hang off old structure just get -re-sited (banner into `mainContent`, checkbox into a new row). Take the structurally newer layout -and cherry-pick the logic onto it. - -**Drew reads commit messages.** Good commit bodies — what changed _and why_, flagging semantic -shifts — did real work in this session. When the "Show profitable only" default flipped to OFF, -calling that out in the commit message meant Drew didn't have to ask why tables suddenly showed -losses. Don't be terse. - -**The filter drawer animation was unexpectedly fiddly.** Two Boxes swapped via conditional render -lost the transition because the element identity changed. One Box with a dynamic width (conditional -contents inside) animates properly. 
Same lesson for any MUI collapsing / animating pattern: animate -the container; swap the contents. - -**MUI Chip filled vs outlined widths differ by 2px** because of `.MuiChip-label` padding (8 vs 7). I -chased border-box first, which was wrong. Always check the padding before the border when two -variants of the same MUI primitive sit next to each other and look different. The theme override -that equalises padding is at `client/src/theme/appTheme.ts` if this needs revisiting. - -**Memory thought**: saving a durable note about the Chrome-extension handshake would probably be -useful — "When `--chrome` is set, ask the user to confirm the browser is open before attempting -tabs_context_mcp". Worth persisting as user memory. I didn't do it mid-session because I wasn't sure -whether it was project-specific or general. - ---- - -**Top Movers follow-up (added 2026-04-24).** The `item_volatility_stats` table's name is -aspirational — the current rebuild query stores `current_price`, `changes_1d/1w`, and -`price_change_1d/1w`, but nothing that measures _dispersion_. The "latest" and "baseline" values are -both single-bucket averages out of `item_change_log_summaries`, which is why a one-off $10B Ski Mask -listing shows up as a real mover: the bucket average gets pulled into the stratosphere by a single -row, and the "1d ago" bucket often sits mid-spike for items that revert quickly (Scalpel, Rope, -Edomondo Localé). Any redesign should start by reading -`ItemVolatilityStatsRepository.RebuildStatsAsync` and the Flyway `V1.18` migration — the shape of -the output directly falls out of those two. The proposed direction is window-median latest - -- window-median baseline + a stored per-item dispersion measure (MAD of log returns or CV of daily - medians) used to z-score the ranked move. The "Top Movers review" section in `session-handoff.md` - has the full reasoning. 
Two things that may not be obvious when you get there: (a) the polling - ceiling of ~113 changes per item per 6h is a real constraint — the "Most active" widget saturates - and needs either a ceiling chip or a different activity measure; (b) "volatility" is the correct - term of art here (dispersion of returns), so don't talk yourself into "variability" or similar - fuzzy synonyms in code or UI. +This was a long session that did real work in three modes — security hardening, statistical +modelling, and the discipline of choosing not to overbuild. + +The thing I'd most want you to carry forward isn't a fact about the code, it's the **honesty +shift on the Top Movers card**. Drew started by saying the data wasn't reliable enough to +support our claims; I started by proposing better statistics. We ended somewhere different and +better: reframing the card from "who moved the most" to "what's currently unusual", because +that's what 29 polling keys and 6h cadence can actually tell users honestly. The statistical +work mattered, but the framing change was the thing that made it useful. Watch for that +pattern. When the data can't support the claim, sometimes the answer isn't "better data" — +it's a quieter claim. + +Practical things worth remembering: + +**Codex catches things.** Three P1s and a P2 in this session — sign-in flow exception swallowing, the +sign-gated z-score, the V1.22-without-V1.23-cutover-staleness, and the V1.20 self-gating +guard. All legitimate. None were things I'd have caught with my own diff hygiene at the speed I +was moving. Treat the PR review as a real backstop, not a formality. If something it flags +turns out to be wrong, push back; if it's right, fix it before merge rather than after. + +**Stacked commits are an anti-pattern when hotfixes are needed.** The `652b6ec` summariser +fix was committed on top of `2c23b8b` (unusual activity), and Drew couldn't cherry-pick it +onto a clean base without dragging in unrelated lines.
He solved it with branch surgery; I +helped resolve the conflict. Lesson: hotfixes should land on a branch off `development` (or +`main`) at a known-clean commit, not stacked on top of in-flight feature work. If Drew hadn't +been comfortable with git-fu we'd have been in trouble. + +**Validate against real data before shipping ranking changes.** Drew dropped CSV exports into +`data-exports/` for me. Running the proposed thresholds against those in pandas surfaced a +real issue (low-dispersion items with tiny moves dominating risers — needed an absolute +move floor, not just a z-score floor). The exact same risk would have existed in production. +Cheap pre-validation. Use the data when you have it. + +**Postgres `percentile_cont` returns double precision.** Burned us once with V1.21 (deploy +failed in prod with `ROUND(double precision, integer) does not exist`); cast to numeric +up-front and the lesson holds across every aggregating CTE. The unusual-activity rebuild +in V1.24 honours this from the start. + +**Drew has good instincts on framing and scope.** When he said "let's just get rid of the +Most active card, it's useless" — that was right and saved a 2x2 layout I was about to +propose. When he asked about cascading time-series tables (1h / 6h / 12h cascade), he was +right to ask but the answer was no, because we're three orders of magnitude smaller than +the use cases that warrant it. He pushed back on the "stored procedures vs SQL in C#" +question — keeping it in C# was the right call for our scale. He's not always going to take +the recommendation, but the conversations have been productive. Argue back when you have a +real reason; don't yes-man. + +**The handoff file gets long fast.** This session's archive is 30KB, mostly because it +inherited content from prior sessions and accumulated. The fresh `session-handoff.md` I'm +writing is deliberately tight — current state + next action + knobs.
If you find yourself +needing depth on something pre-this-session, look at +`context/sessions/2026-04-25-0014-unusual-activity-pivot.md`. + +The cards are done for now. Drew's energy was good through the whole arc — he was doing the +deploys in real-time, eyeballing the live widget, screenshotting issues, providing data +exports, catching the Codex comments quickly. Next session should pick something from +`TODO.md` rather than continuing to churn on the cards. They need to live a few days before +we know what to tune. diff --git a/context/session-handoff.md b/context/session-handoff.md index 3856cbb..7a150a5 100644 --- a/context/session-handoff.md +++ b/context/session-handoff.md @@ -1,482 +1,80 @@ -# Session Handoff — 2026-04-23 / 24 +# Session Handoff — 2026-04-25 00:14 -Long session in three distinct phases. All work landed on branches, `ui/new-design` and -`feat/todo-data-signals`. The first has been merged into `main` and `development`; the second is -rebased on top of the merged dev and pushed to origin, awaiting merge. +Previous handoff archived as `context/sessions/2026-04-25-0014-unusual-activity-pivot.md` — +read it if you need depth on the API key security work or the Top Movers redesign reasoning. +This file is a current-state snapshot. ---- +## Branch state -## 2026-04-24 end-of-session summary (read first) +- `main` and `development` are in sync (Drew has been deploying as we go). +- Tip is `e2119e3` (TODO note about ranking-threshold tuning). +- Local working tree clean. -**All of the following is on `development` (= `main` once Drew releases the final tweaks).** +## What landed in the previous arc -Shipped in chronological order: +Loose chronological order of the headline commits, all on `development` and either deployed or +about to be: -- `e5df589` — TODO quick-wins sweep (Torn market link, sortable tables, login refactor, polish). -- `b19421c` — API key security Phase 1+2 (at-rest AES-GCM + browser proxy). 
-- `11cb3fd` + `90f0234` — Phase 3 cleanup: drop plaintext `api_key` column, delete `tornapi.ts`; - Codex P1 fixes (unreadable-ciphertext graceful fall-through, V1.20 self-gating guard). -- `fefe466` — handoff refresh. -- `7d6844c` — **Top Movers Phase 1**: median-window latest/baseline, per-item dispersion (CV of - daily medians), z-scored ranking, new columns in V1.21, widget rewired. -- `75d52e8` — Codex P2 fix: sign-gated the z-score sort so risers are strictly positive and fallers - strictly negative. -- `629049e` — SQL fix: `percentile_cont` returns `double precision`, so cast to numeric before - `ROUND(x, n)` (the first deploy failed with - `function round(double precision, integer) does not exist`). +| Commit | What | +|---|---| +| `e5df589` | TODO quick-wins sweep (Torn market link, sortable tables, login refactor, polish). | +| `b19421c` | API key security Phase 1+2 (at-rest AES-GCM, browser proxy). | +| `11cb3fd` + `90f0234` | Phase 3 cleanup (drop plaintext column, delete `tornapi.ts`, Codex P1 fixes). | +| `7d6844c` + `75d52e8` + `629049e` + `1e2f884` | Top Movers Phase 1 (median-window latest/baseline, dispersion, z-score ranking, sign-gating, `percentile_cont` numeric cast, trimmed-median + 2-day buffer + z≥1.5 tweaks). | +| `46440d5` + `c79dfa8` | Bucket resize 6h → 1h (V1.22 + V1.23 reset of `item_volatility_stats`). | +| `2c23b8b` | Unusual Activity pivot — V1.24 + new `item_unusual_candidates` table + multi-horizon rebuild + Hangfire job + `/api/items/unusual` + widget rewire (Most active dropped, Unusual added). | +| `652b6ec` / `867738d` | Summariser chunked into 7-day windows + 10-min command timeout (post-V1.22 backfill was hitting Npgsql timeout in prod). | +| `459124b` | Hide "Today's movers" heading + subtitle when widget has no data. | +| `e2119e3` | TODO note about ranking threshold tuning. | -Top Movers Phase 1 then went to prod. Drew eyeballed it against real item charts. 
Conclusions: 5/5 -risers looked genuinely unusual (Horse's Head, Raw Ivory, Insulin, Lubricant, Samurai Sword). 2/5 -fallers were borderline (Cocktail Ring, Psycho Clown Mask — both under the 10% move floor anyway). -1/5 fallers (DSLR Camera) showed the post-spike reversion failure mode: the spike from 3-4 days ago -was still inside the 30d baseline, pulling it up, so the current reverted price reads as a "fall". -Same shape as the original Edomondo Localé problem. +## Current Hangfire / data state -### Uncommitted tweaks from end-of-session +- 1h summary buckets in use (`SummariseChangeLogs` runs every 6h, chunks 7-day windows during + the post-V1.22 backfill; checkpointed by `latestBucketStart` so a crash mid-backfill resumes). +- Three rebuild jobs: `SummariseChangeLogs` (00 every 6h), `RebuildVolatilityStats` (30 every + 6h), `RebuildUnusualCandidates` (45 every 6h). Manual triggers at `/hangfire`. +- Backfill from raw `item_change_logs` (earliest row 2025-11-19) is multi-week of work split + into 7-day chunks. Some rebuilds may run before the backfill is complete and produce partial + results — that's fine, the next run picks up more data. -One local commit pending, currently just files in the working tree — Drew to commit + deploy: +## Build state -1. **Trimmed-median baseline** — drop the top 10% and bottom 10% of bucket averages from the - baseline window before taking the median (via `percentile_cont(0.10/0.90)` as bounds). Makes a - single multi-day spike inside the baseline window less influential. -2. **2-day baseline buffer** — baseline is now `NOW-30d` to `NOW-3d` (was `NOW-1d`). Fresh spikes - take 2 extra days to rotate into the baseline. -3. **z-threshold raise** — `MinAbsZScore` bumped from `1.0` to `1.5`. Tightens the filter on - borderline "statistical noise" ranks. - -All three are in `ItemVolatilityStatsRepository.cs` only. Files to commit: that file + the TODO.md -edits. Build clean. 
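The trimmed-median baseline in tweak 1 reduces to a small amount of logic. A plain-Python equivalent, assuming the `percentile_cont(0.10/0.90)` bounds described above — the function name and min-bucket guard are invented for the sketch; the real version is SQL inside `ItemVolatilityStatsRepository.cs`:

```python
from statistics import median, quantiles

def trimmed_median(bucket_averages, min_buckets=10):
    """Median of bucket averages after discarding values outside the
    10th-90th percentile band — mirrors the percentile_cont(0.10) and
    percentile_cont(0.90) bounds used as a trim before the baseline
    median. Illustrative only."""
    if len(bucket_averages) < min_buckets:
        return None  # too little history to trim meaningfully
    # quantiles(n=10) returns the nine inner decile cut points; the
    # first and last are the 10th and 90th percentiles.
    deciles = quantiles(bucket_averages, n=10)
    lo, hi = deciles[0], deciles[-1]
    kept = [v for v in bucket_averages if lo <= v <= hi]
    return median(kept)
```

A single outlier bucket vanishes under the trim; a spike occupying more than roughly 10% of the window survives it, which is the limitation the caveat that follows accepts.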
- -**Honest caveat**: validating against Drew's data export (see `data-exports/`), these tweaks don't -fully fix the DSLR-Camera-style reversion — the spike lasted multiple days, so a 10% trim + 2-day -buffer aren't aggressive enough. Going further (longer baseline, heavier trim) would start excluding -items with less history. Accepting this limitation is the right move because the intended next piece -of work (see below) reframes the whole card around "unusual activity" rather than "who rose/fell", -which makes the reversion signal legitimate rather than misleading. - -### Unusual Activity pivot — shipped - -Pivoted the Top Movers widget from "who went up / down" to "markets departing from their normal -trends". With ~29 keys and 6-hour polling we can't reliably identify "the top N movers" — we -*can* identify items whose recent behaviour is statistically unusual, because the departures are -large relative to dispersion even with sparse data. Honest framing about what the data supports. - -What landed: - -- **Flyway V1.22** — summary buckets resized 6h → 1h (clean truncate + repopulate from raw - change logs). Unlocks intraday resolution for multi-horizon analysis. -- **Flyway V1.23** — Codex P1 follow-up; truncates `item_volatility_stats` so V1.22 + V1.23 - do an atomic cutover (both source and derived tables empty, both rebuild from scratch). -- **Flyway V1.24** — new `item_unusual_candidates` table. One row per `(item_id, source)` - with the multi-horizon stats flattened across columns. Partial index on - `(source, unusualness_score DESC)` for the ranking query. -- **`ItemUnusualCandidatesRepository.RebuildAsync`** — single SQL pass. Trimmed-median - baseline (10/90 percentile bounds, NOW-30d to NOW-1d, min 10 buckets); CV of daily medians - for dispersion (min 14 days). Per-horizon (1h / 6h / 24h / 7d) median + sample count + move - + z-score with min-sample thresholds 1/3/6/24 buckets. Derived `unusualness_score = max(|z|)`, - `dominant_horizon`, `direction`. 
UPSERT. -- **Hangfire `RebuildUnusualCandidates`** — runs at minute 45 past every 6h, after the - summariser (minute 0) and the volatility rebuild (minute 30) so they don't compete. -- **`GET /api/items/unusual?source=&limit=&minScore=`** — returns DTOs ordered by - `unusualness_score DESC`. Default minScore 1.5σ, limit 15. -- **Widget rewrite** — three cards now: Top risers, Top fallers, Unusual activity. Most-active - card removed (saturation made it useless per Drew's call). Unusual rows show name + a - server-formatted "↑ 3.4σ in last 24h vs month" string + the dominant-horizon's window price. - -Deferred to follow-ups (see TODO.md): re-score on home-page load, mode-to-bucket display on item -detail pages, volatility-of-volatility signal, confidence chips, dropping legacy -`current_price` / `price_change_1d` / `price_change_1w` columns. - -### Two open design questions from the end of this session - -1. **SQL-in-C# vs stored procedures.** My recommendation: keep in C#. One committer, app deploy is - cheap, versioned SQL with the app is easier to review. Only pivot to SPs if non-app consumers - need the logic or the SQL starts needing its own test harness. If individual queries (like the - volatility rebuild) grow much larger, worth moving to embedded `.sql` resources for syntax - highlighting / pgAdmin testing — but not SPs. -2. **Shrink `item_change_log_summaries` bucket from 6h to 1h (or 3h)?** Strongly yes, 1h is the - right target for the pivot. The multi-horizon z-score needs genuine intraday resolution — at 6h - buckets the "1h" and "6h" windows both collapse to "last bucket". Cost: ~6× the row count (~10M - rows/year instead of ~1.6M, still fine), and the summariser job runs proportionally. Migration - gotcha: either rebuild from raw `item_change_logs` (requires that data going back 30d+), run both - bucket sizes in parallel during a cutover, or accept some coarser historical data. Worth planning - carefully. 
3h buckets are a compromise that doesn't fully unlock intraday detection. - -### Deferred (still open, see TODO.md) - -- Top Movers review slices (3) volatility-bucket separation, (4) Most active saturation chip, (5) - confidence chips — the pivot above subsumes most of (3) and (5); (4) stands alone. -- Drop legacy `current_price` / `price_change_1d` / `price_change_1w` columns after the pivot. -- Data analysis tools / read-only prod DB access — still parked. - ---- - -## Historical — original session narrative preserved below - ---- - -## Phase 1 — `ui/new-design` (merged) - -Branch has been merged to `main` and `development` on origin. Started from the plan at -`context/plans/2026-04-23-ui-ux-overhaul.md`, which was a thorough UI/UX review written before the -session. **27 commits.** Headline changes: - -- **Sidebar regrouped** (Markets / Utilities / You), persistent brass "Sign in" button when signed - out, alphabetised Utilities (All Items → Time), settings promoted out of avatar menu. -- **Typography**: IBM Plex Sans body, JetBrains Mono `tabular` variant, Passero One display, subtle - grain overlay on body. -- **Shared primitives** created: `LoginRequired` (tool + requiredLevel; folded BPL's bespoke - access-level upgrade guard in), `SectionHeader`, `StatChip` (profit/loss/neutral/experimental/ - tradable/status variants), `EmptyState`, `PriceWithTax`, `MarketToolbar`, `FilterDrawer`, - `LazySparkline`, `LazyLatestMarketPrice`. -- **FilterDrawer**: persistent right-hand panel on md+, FAB + temporary drawer on sm, collapsible - with a 900ms width animation, state persisted. Active-filter count on badge. City/Foreign/All - Items all mount through it. Search field is first control; "All" chips toggle between all-on and - all-off for the "select everything but X" workflow. 
-- **Markets**: City + Foreign rewired with the drawer and MarketToolbar; profit chips migrated to - `StatChip`; hearts are brass (primary.main) everywhere; `PriceWithTax` renders gross headline + - "$X after N% tax" on a single line (nowrap). "Show profitable" and "Hide out of stock" default - OFF. Foreign country flags shrunk to 2.5em so all 11 fit with the drawer open; "Order by flight - time" moved under the flags. -- **Favourites + All Items** both grew paired (latest, trend) columns per source — Bazaar (latest) + - Bazaar trend + Market (latest) + Market trend. Sparklines lazy-load via IntersectionObserver and - fade in over 2s. -- **Item details**: `useItemMarketAdvice` lifted into the page (one fetch shared with info cards and - market overview); "Market Price (latest)" promoted to the headline with "N mins ago · daily avg - $X" beneath; profit-chip rule documented inline + with a legend. -- **History source filter**: threaded a `Source` enum through the item-history SQL (raw + summary - query) and endpoints. The `?source=Torn|Weav3r` param defaults to Torn. Client hooks - (`useItemPriceHistory`, `useItemVelocityHistory`) carry the source; the Prices Over Time chart on - Item Details overlays Torn (market) + Weav3r (bazaar) lines on shared axes. -- **Chart**: refactored to take `series[]` + controlled `timeWindow`; dynamic y-axis gutter sized to - the longest tick label (fixes `$` clipping on expensive items); bar chart padded so the leftmost - bar no longer covers the y-axis. -- **Chip stability**: MUI filled vs outlined chips rendered 2px apart because of `.MuiChip-label` - padding differences (8px vs 7px). Equalised via theme override — toggling no longer shifts wrap - points. 
-- **Codex PR reviewer** flagged two real bugs after merge was in progress: P1 (sign-in in-flight - state never resets after failure — confirmApiKeyAsync swallows errors) and P2 (Settings - redirecting before `getMe()` resolves, bouncing users with a valid session cookie but cleared - localStorage). Both fixed by adding a `sessionChecking` flag to `UserContext` and driving the - sign-in disabled state from `loadingDotNetUserDetails` with an intent-tracking ref rather than - local in-flight state. - -Plan and handoff notes from within the session are preserved at: - -- `context/plans/2026-04-23-ui-ux-overhaul.md` (original plan) -- `context/plans/2026-04-23-ui-ux-overhaul-handoff.md` (mid-session handoff) - ---- - -## Phase 2 — `feat/todo-data-signals` (merged into development) - -Branched from `development` when `ui/new-design` was still outstanding. After the UI branch landed, -rebased onto the updated `development`, then merged. Tip of `origin/development` is `fd41075`. **5 -commits** on top of dev: - -- `e01f8a5` — TODO cleanup: removed 8 items explicitly marked "done". -- `91262ad` — Stale-data banner + non-bulk Resale filter + Buy Price (country) rename. -- `3f45d19` — Hangfire volatility job. Adds: - - Flyway `V1.18__item_volatility_stats.sql` (table + partial indexes for each sort key). - - `ItemVolatilityStatsEntity` + DTO + `IItemVolatilityStatsRepository` with a single-query UPSERT - rebuild (three `DISTINCT ON` price snapshots: latest, 1d-ago, 1w-ago, joined to a windowed count - aggregation) plus a typed `GetTopAsync`. - - Recurring Hangfire job at 30m past every 6h (offset from SummariseChangeLogs so it runs against - fresh buckets). - - `GET /api/items/volatility?source=&sort=&limit=&ascending=` endpoint. - - Frontend: `useItemVolatility` hook, `TopMovers` home-page widget (Most active, Top risers, Top - fallers over 24h). -- `9a8d97c` — TODO sweep for the three shipped items; noted what the volatility job unblocks. 
-- `fd41075` — Resale non-bulk checkbox onto its own row with a more descriptive label (was awkwardly - floating mid-row before). - -Rebase conflicts resolved during Phase 3 — mostly discarding the pre-drawer filter blocks in -CityMarkets/ForeignMarkets that the feat commit had been editing against old dev, keeping the drawer -layout and dropping the `StaleDataBanner` at the top of `mainContent`. Orthogonal things -(DatabaseService backend signatures, dotnetapi.ts) merged cleanly. - ---- - -## Phase 3 — merge + rebase + reviewer fixes - -Drew did the `ui/new-design → main → development` merge himself. I rebased `feat/todo-data-signals` -onto the updated dev and resolved conflicts. - -**Backend restart needed** to pick up Flyway V1.18 and register the new Hangfire job. First -`RebuildVolatilityStats` run populates the table; after that the Home page's Top Movers widget has -something to render. Manual trigger available at `/hangfire` if Drew doesn't want to wait. - ---- - -## Current state - -| Branch | Where it sits | -| ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------- | -| `main` | All work from this session shipped: UI overhaul, feat/todo-data-signals, TODO quick-wins, API key security Phase 1+2+3. Up to date with origin. | -| `development` | Same as main. Tip is `90f0234` (Codex P1 fix on phase 3 cleanup). | -| `feat/todo-data-signals` | Merged; safe to delete locally. | -| `chore/drop-plaintext-api-key` | Merged; safe to delete locally. | - -Local uncommitted state: none (this handoff update is the only thing in flight). - -Build state: - -- `npx tsc --noEmit` — clean. -- `npm run build` — clean (7.11s, 500kB chunk warning is pre-existing). - `dotnet build` — clean (6 projects, 0 errors, 0 warnings). - ---- +- `npx tsc --noEmit` — clean. +- `npm run build` — clean (the 500kB chunk warning is pre-existing). 
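For orientation on the shape of the Phase 2 rebuild query above, its "three `DISTINCT ON` price snapshots" amount to picking, per item, the most recent bucket at each of three cutoffs (now, now − 1d, now − 1w). A plain-Python stand-in under that reading — names and structure here are invented, not the repository's code:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def snapshot_prices(buckets, now):
    """buckets: (item_id, bucket_start, avg_price) rows. For each item,
    keep the most recent bucket at each of three cutoffs — the same shape
    as the rebuild's three DISTINCT ON subqueries. Illustrative only."""
    cutoffs = {"latest": now, "d1": now - timedelta(days=1),
               "w1": now - timedelta(weeks=1)}
    out = defaultdict(dict)
    # Ascending scan: for each cutoff, later qualifying rows overwrite
    # earlier ones, so the most recent bucket at-or-before the cutoff wins.
    for item_id, start, price in sorted(buckets, key=lambda b: b[1]):
        for label, cutoff in cutoffs.items():
            if start <= cutoff:
                out[item_id][label] = price
    return dict(out)
```

This also makes the single-bucket weakness visible: each snapshot is one bucket's average, which is exactly what the median-window redesign replaced.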
## Blockers / outstanding -- **Top Movers Phase 1 shipped but not yet verified in prod** — once the next Hangfire rebuild runs, - spot-check the Top Risers/Fallers cards for honesty. The infamous items should no longer appear: - Ski Mask filtered by min-sample (2 buckets in 24h < 3), Scalpel z≈0, Edomondo z≈0.4, Slingshot / - Plastic Sword / Fine Chisel mid-rank, Pillow absorbed by its own 52% dispersion. Rope and Chain - Whip may still appear as movers — that's correct per the data at hand. -- Project-local `.venv` pattern for `curl_cffi` — workaround (system `pip install`) is in place for - Drew's machine; proper fix deferred. -- Some UI items deliberately deferred: - - Vendor icons / item-type glyphs (section 10 of the UI plan). - - Resale page drawer conversion. - ---- +- None. The cards work is wrapped. Pending follow-ups are catalogued in `TODO.md`. ## Next action -- Smoke the Top Movers redesign post-deploy. Data exports + analysis in `data-exports/` (gitignored) - validated the expected rankings before ship; still worth eyeballing the live widget. -- **Top Movers remaining slices** (noted in TODO.md): (3) volatility-bucket separation for - naturally-noisy items, (4) "Most active" ceiling chip, (5) confidence chips using the stored - `sample_count_recent` / `sample_count_baseline` columns. All non-urgent. -- Parked for later (Drew has context; don't action without asking): - - Read-only prod DB access for offline data analysis. Options discussed: read-only Postgres role - on a replica, or nightly logical dump into DuckDB. Hosting shape drives the choice. - - Cross-item spike correlation analysis (is-this-a-Torn-event-day?). Separate analytics tool, not - a page on the site. -- Follow-ups noted in `TODO.md`: - - "Item is heating up" UI badge (data available; now honest via z-score). - - Dedicated `/volatility` page with sliders (endpoint now returns ranked + filtered data). 
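The z-scored ranking filter referenced above — `|move_pct_window| >= 0.10` AND a `MinAbsZScore` floor, with a min-sample gate — can be rendered in Python for clarity. Field names mirror the columns the handoff describes (`window_price`, `baseline_price`, `price_dispersion`, `sample_count_recent`); the function itself is an illustrative sketch, not the SQL that ships:

```python
def rank_movers(rows, min_abs_move=0.10, min_abs_z=1.5, min_samples=3):
    """Rank items by z-scored move, applying the absolute move floor,
    the z floor, and the min-sample gate described in the handoff.
    Illustrative only; thresholds are the current knob values."""
    ranked = []
    for r in rows:
        if r["sample_count_recent"] < min_samples:
            continue  # Ski-Mask case: too few recent buckets to trust
        move = (r["window_price"] - r["baseline_price"]) / r["baseline_price"]
        if abs(move) < min_abs_move:
            continue  # unusual but economically tiny moves stay hidden
        z = move / r["price_dispersion"]
        if abs(z) < min_abs_z:
            continue  # Pillow case: big move, but normal for this item
        ranked.append({**r, "move": move, "z": z})
    return sorted(ranked, key=lambda r: r["z"], reverse=True)
```

Risers are the head of the sorted list and fallers the tail, which is why the sign-gating fix mattered: without it a strongly negative z could outrank a weak positive one on the risers card.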
- ---- - -## Top Movers review (2026-04-24) - -Drew reviewed the widget after it had run a few times and flagged that the data isn't reliable -enough to be honest to users. I read the job and confirmed every artifact is explainable from the -current rebuild logic. - -**Status: Phase 1 of the redesign (steps 1+2 below) shipped the same day.** New columns in -`item_volatility_stats`: `window_price`, `baseline_price`, `sample_count_recent`, -`sample_count_baseline`, `price_dispersion`, `move_pct_window`. Ranking switched to z-scored move, -filtered by `|move_pct_window| >= 0.10` AND `|z-score| >= 1.0`. Validated against a 550k-row summary -export before shipping: Ski Mask filtered out (2 buckets), Scalpel z≈0, Edomondo z≈0.4, low-range -items absorbed by their own dispersion. Rope and Chain Whip still flagged as movers where -appropriate. Remaining slices (3)-(5) of the priority list below are tracked in TODO.md. The -narrative below preserves the original diagnosis; skip it if you just need the state of the work. - -### What the job actually does today - -`ItemVolatilityStatsRepository.RebuildStatsAsync` runs a single `DISTINCT ON` query across -`item_change_log_summaries`: - -- **`current_price`** = average price of the _single most-recent_ bucket per (item, source). -- **"1d-ago price"** = average of the _single most-recent bucket_ with `bucket_start <= NOW() - 1d`. -- **`price_change_1d`** = `(current − 1d_ago) / 1d_ago`, fractional. -- **`changes_1d` / `changes_1w`** = raw count of change rows in the respective window. -- **No dispersion / stability measure is stored.** The table name is aspirational; nothing it stores - measures volatility. - -### Observed artifacts (prod, Apr 24) - -- **Ski Mask shown as "−92% fall, latest $800M"**: one $10B listing pulled the latest bucket's - average up, then the next bucket reverts. The "−92%" compares two adjacent bucket means, one still - inflated by the outlier. `current_price` is the inflated mean. 
-- **Scalpel "+211.7% riser, $450"** and **Rope "+105%"**: genuine intraday spikes that have already - reverted by the time the widget renders. Ranking compares mid-spike vs pre-spike. -- **Edomondo Localé "−65.5% fall"**: reversion _from_ its own recent spike — the 1d-ago bucket sits - on the peak, the latest bucket is post-crash. -- **Pillow / Slingshot / Plastic Sword / Fine Chisel appear repeatedly**: naturally high-variance - low-priced items. They swing 2–3× daily without any news; the ranking treats that intrinsic noise - as signal. -- **"Most active: 353 chg" on all 5 items**: the polling ceiling. With 29 active API keys the job - can observe at most ~113 changes/6h for a single item, so any item whose true change rate exceeds - that saturates and the ranking past the ceiling is arbitrary. - -### Proposed changes, priority order - -1. **Robust window estimators.** Replace single-bucket `latest` and `baseline` with median (or - trimmed mean) over explicit windows (e.g. latest = last 6–24h, baseline = last 30d excluding the - last day). Require a minimum sample count in each window; exclude items that don't meet it. Kills - all three outlier-driven artifacts in one go. -2. **Z-scored movement.** Store a per-item dispersion measure (MAD of log returns, or CV of daily - medians, over ~30d). Rank Top Risers/Fallers on `(current − baseline) / dispersion`, not raw %. - Fine Chisel moving 50% scores ~1σ; S&W M29 moving 170% scores ~5σ. This is the "Pillow is always - moving" filter. -3. **Volatility bucket + separate surfacing.** Classify items stable/medium/high. Either exclude - high-volatility items from the main Top cards and give them their own "Usually volatile, moving - outside their range" card (preferred), or add a chip next to volatile rows so users know to - discount. -4. 
**Fix the "Most active" ceiling.** Short term: show a saturation chip ("≥ ceiling/24h — - under-sampled") instead of the raw number when `changes_1d` hits the ceiling, and rank saturated - items by a secondary key (circulation or volatility). Medium term: measure activity differently - (distinct prices observed per hour, or distinct 5-minute buckets with any change) so saturation - is less likely. -5. **Confidence surfacing on the widget.** Sample count, range chip, or a "why this is listed" - tooltip per row. -6. **Item details page**: add a "View on Torn market" link. Drew noticed this is missing. - -### Recommended first session - -**(1) + (2) together.** Change the `item_volatility_stats` rebuild to compute median-based -latest/baseline over explicit windows with min-sample filtering, and add a dispersion column -computed over ~30d. Rank the widget on z-scored move. Around 80% of the user-visible weirdness goes -away, and the schema is set up to support (3) without another migration. Ideally the rebuild also -records the sample counts used for latest/baseline so (5) can be added cheaply later. - -### Notes - -- "Volatile" is the correct term of art — dispersion of returns — not a misuse. Use it. -- Don't pre-optimise (3)/(4)/(5) before we can see how (1)+(2) behaves in isolation. -- The stored `current_price` today isn't safe as a display value either. After (1), the widget's - displayed price should be the window median, not the single most recent bucket mean. -- Part of the motivation for (1) specifically is _honesty_: a "Top riser" that's already back to - baseline by the time it's shown is worse than nothing. A window-median approach only flags moves - that have persisted long enough to matter. - ---- - -## API key security — Phase 1 + 2 + 3 (2026-04-24, all shipped) - -Shipped end-to-end in this session. 
Plaintext Torn API keys no longer appear at rest in the DB, no -longer appear in the browser after sign-in, and the transitional plaintext column + scaffolding are -gone. Plan document lived in this chat; no separate plan file was written. - -**Deploy order that actually happened**: Phase 1+2 committed as `b19421c` → Drew deployed → prod -verified (every user row had `api_key_encrypted` populated via the startup backfill) → -`chore/drop-plaintext-api-key` branch with Phase 3 → Codex PR review flagged two P1s (unreadable- -ciphertext throwing on sign-in; V1.20 missing a runtime guard) → both fixed in `90f0234` → Drew -deployed again → prod clean. - -### What landed - -**Infrastructure (Terraform)** - -- `torn_key_encryption_key_v1` (sensitive var) + `torn_key_encryption_current_version` (default - `"1"`) added to `infra/variables.tf`. -- `azurerm_key_vault_secret "torn_key_encryption_v1"` in `infra/key_vault.tf` holds the live - encryption key. Matches the existing jwt_secret / db_password pattern. -- `azurerm_key_vault.torntools_keyvault.purge_protection_enabled = true` (was `false`). One-way - switch — can't be turned off after apply. Deliberate: protects against future-Drew accidentally - purging encryption keys. -- `TornKeyEncryption__CurrentVersion` and `TornKeyEncryption__Keys__1` added to `app_settings` in - `infra/app_service.tf`. -- `.github/workflows/deploy-all.yml` passes `TF_VAR_torn_key_encryption_key_v1` from the GitHub - secret `TORN_KEY_ENCRYPTION_KEY_V1`. -- `infra/terraform.env.tfvars-template` gains a placeholder line. - -**Schema (Flyway)** - -- `V1.19__users_api_key_encrypted.sql`: `ALTER TABLE users ADD COLUMN api_key_encrypted BYTEA NULL`. - No data change — the backfill runs on API startup. - -**Backend** - -- `TornTools.Core.Configurations.TornKeyEncryptionConfiguration` — `CurrentVersion` + - `Dictionary Keys`, bound from the `TornKeyEncryption` section via - `AddTornKeyEncryptionConfiguration`. 
Uses `GetSection` (not `GetRequiredSection`) so dev without - the secret still boots; `ApiKeyProtector` throws on first use if `Keys` is empty. -- `TornTools.Core.Interfaces.IApiKeyProtector` + `TornTools.Application.Services.ApiKeyProtector` — - AES-GCM with payload layout `[1 byte version][12 byte nonce][16 byte tag][ciphertext]`. Parses - `CurrentVersion` + `Keys` at construction. Registered as singleton. -- `ApiKeyLeaseDto(long UserId, string ApiKey)` — returned from `IUserRepository.GetNextApiKeyAsync` - so failures in `ApiCaller` can attribute back to the owner via `MarkKeyUnavailableAsync(userId)` - instead of plaintext-equality lookup. -- `ApiCaller.AddAuthorizationHeader` now returns `Task`. `CallAsync` captures the - lease and passes `userId` on `TornKeyUnavailableException`. Removed the regex-y plaintext-recovery - from auth headers in the failure path. `TornApiMultiKeyCaller` updated; `Weav3rApiCaller` / - `YataApiCaller` inherit the no-op default (return `null`). -- `MarkKeyUnavailableByApiKeyAsync` removed from `IDatabaseService`, `DatabaseService`, - `IUserRepository`, `UserRepository` — no callers left. -- `UserRepository.UpsertUserDetailsAsync` dual-writes both `ApiKey` (plaintext) and - `ApiKeyEncrypted`. Opportunistic backfill if plaintext is already correct but encrypted is null. - `GetNextApiKeyAsync` prefers `ApiKeyEncrypted`; falls back to plaintext. - `GetApiKeyForUserAsync(userId)` added for the proxy endpoints. -- `UserEntity.AsDto()` returns `ApiKey = string.Empty` — plaintext never leaks past persistence on - reads. `UserDto.ApiKey` is now write-only-by-convention (write paths set it, read paths leave it - empty). -- Startup backfill in `Program.cs`: after migrations + scheduler registration, calls - `IDatabaseService.BackfillEncryptedApiKeysAsync`. Idempotent. -- New `TornController` at `/api/torn`: - - `GET /api/torn/user/basic` (auth) — proxies Torn `/v2/user/basic` using the current user's - decrypted key. 
- - `GET /api/torn/user/inventory?cat=X` (auth) — walks Torn's `_metadata.links.next` pagination - server-side, returns the aggregated `{ inventory, _metadata }` payload. - - `POST /api/torn/key/validate` (anonymous) — accepts `{ apiKey }` in the body; fetches - `/v2/key/info` + `/v2/user/basic` in parallel; returns `{ info, profile, error? }`. - -**Frontend** - -- `client/src/lib/dotnetapi.ts` — `proxyTornUserBasic`, `proxyTornUserInventory(cat)`, - `proxyTornKeyValidate(apiKey)` (returns `ValidatedKey = { info, profile }`). -- `UserContext.tsx` — `apiKey`, `setApiKey`, `confirmApiKeyAsync`, `fetchTornProfileAsync` all - removed. `LOCAL_STORAGE_KEY_TORN_API_KEY`, `LOCAL_STORAGE_KEY_TORN_USER_DETAILS`, - `LOCAL_STORAGE_KEY_USER_CACHE_TS` all removed. Legacy keys cleaned from `localStorage` on mount. - New `signInAsync(apiKey)` — the only path that carries a plaintext key from browser to backend. - `tornUserProfile` auto-loads via `proxyTornUserBasic()` when `dotNetUserDetails` is set. -- `SignIn.tsx` / `UserSettings.tsx` — local-only `apiKey` state (never context, never localStorage). - Debounced `proxyTornKeyValidate` during typing shows the preview; clicking "Sign in" / "Save" - calls `signInAsync(key)`. Key wipes from local state on dialog close. -- `BazaarPriceLookup.tsx` — `fetchTornInventory(apiKey, cat)` → `proxyTornUserInventory(cat)`. - `apiKey` dependency dropped. -- `ForeignMarkets.tsx` — `apiKey` / `fetchTornProfileAsync(apiKey)` removed. Uses `tornUserProfile` - which UserContext now loads automatically. -- `useUser` hook shape updated (no `apiKey`, no `setApiKey`, no `fetchTornProfileAsync`, no - `confirmApiKeyAsync`; adds `signInAsync`). - -**Phase 3 follow-ups (shipped as `11cb3fd` + Codex-P1 fix `90f0234`)** - -- **Flyway V1.20** `ALTER TABLE users DROP COLUMN api_key`. Self-gating: a plpgsql `DO` block at the - top counts rows with `api_key_encrypted IS NULL AND api_key IS NOT NULL AND api_key <> ''` and - `RAISE EXCEPTION` if non-zero. 
CI can't accidentally drop the column on a half-backfilled table. -- `UserEntity.ApiKey` property + EF mapping removed. -- `UserRepository.UpsertUserDetailsAsync` change-detection now decrypts the existing ciphertext to - compare, but wraps `Unprotect` in a try/catch on `CryptographicException` — a decrypt failure logs - a warning and falls through to the key-change branch, letting the user re-sign in with a fresh key - to overwrite the unreadable row. (This was the first Codex P1; the naive version I shipped in - `11cb3fd` would have thrown a 500 on `/auth/login` for any row with corrupted/retired ciphertext.) -- `UserRepository.BackfillEncryptedApiKeysAsync` + its `IDatabaseService` / `DatabaseService` - shims + the `Program.cs` startup call — all removed. Nothing left to backfill. -- `client/src/lib/tornapi.ts` deleted; types moved to `client/src/types/torn.ts`. Three import sites - updated. +No work selected. Options when Drew comes back: -**Left for a quieter future session** (deferred, not blockers): +- **Iterate on the cards**: the six follow-ups under "Top Movers — remaining follow-ups" in + `TODO.md` (mode-to-bucket on item details, re-score on home-page load, + volatility-of-volatility signal, confidence chips, drop legacy columns, threshold tuning). +- **Pick a different feature** from `TODO.md` — Resale drawer conversion, "Item is heating + up" indicator on item details, Foreign Markets stock refill times, etc. +- **Parked items needing explicit sign-off**: read-only prod DB access for offline analysis; + cross-item spike correlation tool. -- `UserContext.tsx` has a legacy-localStorage cleanup that removes pre-Phase-2 keys on mount. Safe - to remove once real time has passed and no returning user could still have the stale cache. No - urgency. -- `UserDto.ApiKey` is a write-only-by-convention field (write paths populate it; read paths leave it - empty). A proper read/write DTO split would remove the asymmetry. 
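
The versioned `ApiKeyProtector` envelope described above is easy to get off-by-one on when adding a v2 key; a stdlib-only sketch of just the byte layout (dummy bytes stand in for real AES-GCM output, and `pack`/`unpack` are hypothetical helper names, not the C# API):

```python
# Envelope layout: [1 byte version][12 byte nonce][16 byte tag][ciphertext]
VERSION_LEN, NONCE_LEN, TAG_LEN = 1, 12, 16

def pack(version: int, nonce: bytes, tag: bytes, ciphertext: bytes) -> bytes:
    assert len(nonce) == NONCE_LEN and len(tag) == TAG_LEN
    return bytes([version]) + nonce + tag + ciphertext

def unpack(payload: bytes):
    version = payload[0]
    nonce = payload[1:1 + NONCE_LEN]
    tag = payload[1 + NONCE_LEN:1 + NONCE_LEN + TAG_LEN]
    ciphertext = payload[1 + NONCE_LEN + TAG_LEN:]
    return version, nonce, tag, ciphertext

# The leading version byte is what makes rotation cheap: reads look the key
# up in Keys[version]; writes always encrypt under CurrentVersion.
payload = pack(1, b"\x00" * 12, b"\x01" * 16, b"dummy-ct")
```

The version byte is why the rotation recipe in the notes below works without a big-bang re-encrypt: v1 rows stay decryptable while new writes move to v2.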
+## Threshold knobs (for reference) -### Build state at end of session +If the Unusual Activity card or the risers/fallers ranking surfaces too many or too few items +once the backfill catches up, the constants are: -- `dotnet build` clean (6 projects, 0 errors, 0 warnings). -- `npx tsc --noEmit` clean. -- `npm run build` clean (7.11s, pre-existing 500kB chunk warning only). -- Prod smoke: Drew confirmed sign-in still works, `api_key_encrypted` populated for every row, Phase - 3 drop + cleanup deployed with no issues. +- `ItemVolatilityStatsRepository.GetTopAsync` — `MinAbsMovePct = 0.10m`, `MinAbsZScore = 1.5m`. +- `UnusualController` — `DefaultMinScore = 1.5m`. +- `ItemUnusualCandidatesRepository.RebuildQuery` — per-horizon min-sample thresholds inline: + 1 / 3 / 6 / 24 buckets for 1h / 6h / 24h / 7d. +- `DatabaseService.SummariseChunk = TimeSpan.FromDays(7)` — chunk size for the backfill loop. + Drop to 3 days if a chunk times out; raise if backfill is fine and you want fewer chunks. -### Notes for the next session +## Data analysis assets -- **Key rotation (the hypothetical future-Drew "wild security quest")**: add - `TORN_KEY_ENCRYPTION_KEY_V2` GitHub secret + `torn_key_encryption_key_v2` variable + - `azurerm_key_vault_secret "torn_key_encryption_v2"` + `TornKeyEncryption__Keys__2` app_setting; - bump `torn_key_encryption_current_version` to `"2"`; deploy. New writes encrypt with v2; v1 rows - stay decryptable. Optional re-encryption pass (Hangfire job or manual SQL) promotes v1 rows to v2 - over time. When the last v1 row is gone, retire v1. -- **If the backend crashes at startup complaining about `TornKeyEncryption:Keys`**: the GitHub - secret isn't set, or Terraform hasn't applied since it was added. Secret must exist before the - backend starts. -- **Local dev without the secret**: the app boots but sign-in will throw the "Keys is empty" error - on first use. 
Set `TornKeyEncryption__Keys__1` via `dotnet user-secrets` or add to - `appsettings.Development.json` (don't commit) with a throwaway key. +`data-exports/` (gitignored) holds the CSVs Drew shared mid-session for ranking validation. +Five files dated 2026-04-24: `item_change_log_summaries`, `item_volatility_stats`, `items`, +`listings`, `foreign_stock_items`. Useful when iterating on ranking thresholds — Python + +pandas works fine on the 33MB summaries CSV. They predate V1.22 (so they're 6h-bucket data), +but the ranking logic doesn't care about bucket size for the sanity checks we did. diff --git a/context/sessions/2026-04-25-0014-next-prompt.txt b/context/sessions/2026-04-25-0014-next-prompt.txt new file mode 100644 index 0000000..bfe851c --- /dev/null +++ b/context/sessions/2026-04-25-0014-next-prompt.txt @@ -0,0 +1,28 @@ +Hi Claude. Read context/session-handoff.md — the top "2026-04-24 end-of-session summary" block is the current state; the rest is historical narrative worth reading once but skimmable thereafter. + +**Immediate state**: + +- Development/main: all Phase 1+2+3 security work + Top Movers redesign Phase 1 (including the sign-gate fix and the percentile_cont::numeric cast) shipped to prod. Drew verified the widget against real item charts. +- Uncommitted tweaks in the working tree (ItemVolatilityStatsRepository.cs + TODO.md): trimmed-median baseline + 2-day baseline buffer + z-score threshold raised from 1.0 to 1.5. Build clean. Drew will review and commit when he's back. + +**Validated against Drew's data export**: these tweaks improve ranking noise filtering but don't fully eliminate the DSLR-Camera-style post-spike reversion. That's accepted as a known limitation because the next piece of work (the "Unusual activity" pivot) reframes the card so that reversion is a legitimate signal rather than misleading. + +**Next piece of work: "Unusual activity" pivot**. 
+ +Drew wants to keep the risers/fallers cards but add an "unusual activity" framing for markets departing from their normal trends. With ~29 polling keys this is honestly what we CAN detect (vs top-N movers which we can't). Architecture sketch in the handoff: + +- Two-step pass: Hangfire writes an `item_unusual_candidates` shortlist; home-page endpoint joins against fresh data and re-scores cheaply. +- Multi-horizon z-scores (1h / 6h / 24h / 7d) against 30d baseline; "unusualness" = max |z|. +- "Why flagged" chip per row. +- Mode-to-nearest-1%-of-range as a display metric for item pages (not a ranking signal). + +**Two open design questions Drew raised at end of session** (worth confirming before building): + +1. SQL in C# vs stored procedures — my vote: stay in C#. +2. Shrink item_change_log_summaries buckets from 6h to 1h — my vote: yes, 1h. Needs a migration plan for existing data. This unlocks genuine intraday resolution for the multi-horizon pivot. + +**Do not** start building the pivot without Drew's go-ahead. It touches Flyway, a new table, the rebuild job, a new endpoint, and the widget. Plan first. + +Data exports from 2026-04-24 are in `data-exports/` (gitignored) and were useful for validating ranking tweaks. Python + pandas works fine on the summaries CSV (~550k rows, 33MB). + +Parked items needing explicit sign-off: read-only prod DB access; cross-item spike/event-calendar correlation tool. diff --git a/context/sessions/2026-04-25-0014-note-to-next.md b/context/sessions/2026-04-25-0014-note-to-next.md new file mode 100644 index 0000000..2e0938a --- /dev/null +++ b/context/sessions/2026-04-25-0014-note-to-next.md @@ -0,0 +1,79 @@ +# Note to next instance + +This was a marathon. The UI overhaul plan was already written and thorough before I started, which +made Phase 1 mostly execution against a clear spec — not design. Phase 2 was a smaller focused round +Drew chose after reviewing the TODO. Phase 3 was the merge-and-rebase plumbing. 
+ +A few things worth remembering: + +**The plan file matters.** When Drew wrote `context/plans/2026-04-23-ui-ux-overhaul.md` before the +session, he included not just _what_ to do but _how he'd thought about each decision_ — settled vs +still-open questions, aesthetic defaults, sequenced commits, escalation triggers. That turned +multi-hour autonomous work into something I could steer through with confidence. When a plan like +that is available, trust it — the work of thinking it through is already done. When it isn't +available, write one. + +**Codex caught two real bugs post-merge that I'd flagged to myself earlier.** P1 (sign-in in-flight +state never resetting after failure) was literally in my own mid-session handoff as a "worth a +follow-up" note, and I didn't action it. If you flag a bug you can see, fix it before shipping — +otherwise the next reviewer does it for you, which is worse than your own diff hygiene doing it. +Same with P2 — a session-check race I could have foreseen. The fix pattern (separate +`sessionChecking` flag for the initial getMe, versus `loadingDotNetUserDetails` for user-initiated +calls) is generally useful; copy it when you see the same shape elsewhere. + +**Backend-running-while-rebuilding is a constant low-grade annoyance.** Every `dotnet build` while +the dev API is running fails on file-locks. The CS-error grep trick (`grep -E 'error CS[0-9]+:'`) +gives a clean signal — if there are no CS errors, the compilation succeeded and only the copy-to- +output failed. Tell Drew "file-lock errors only, real compile clean" rather than dumping the raw +output. + +**Chrome MCP has a connection handshake.** First attempt in a session reported "No Chrome extension +connected" until Drew explicitly opened the browser. Ask about it explicitly rather than assuming +the `--chrome` flag is enough. 
+ +**Rebase conflicts on this branch pair were almost all "drop the feat-branch code entirely".** The +feat branch had added `StaleDataBanner` in the header of a pre-drawer filter layout that doesn't +exist anymore in the merged dev. The conflict marker tempts you to preserve both sides — don't. The +ui branch's drawer layout is the survivor; the feat additions that hang off old structure just get +re-sited (banner into `mainContent`, checkbox into a new row). Take the structurally newer layout +and cherry-pick the logic onto it. + +**Drew reads commit messages.** Good commit bodies — what changed _and why_, flagging semantic +shifts — did real work in this session. When the "Show profitable only" default flipped to OFF, +calling that out in the commit message meant Drew didn't have to ask why tables suddenly showed +losses. Don't be terse. + +**The filter drawer animation was unexpectedly fiddly.** Two Boxes swapped via conditional render +lost the transition because the element identity changed. One Box with a dynamic width (conditional +contents inside) animates properly. Same lesson for any MUI collapsing / animating pattern: animate +the container; swap the contents. + +**MUI Chip filled vs outlined widths differ by 2px** because of `.MuiChip-label` padding (8 vs 7). I +chased border-box first, which was wrong. Always check the padding before the border when two +variants of the same MUI primitive sit next to each other and look different. The theme override +that equalises padding is at `client/src/theme/appTheme.ts` if this needs revisiting. + +**Memory thought**: saving a durable note about the Chrome-extension handshake would probably be +useful — "When `--chrome` is set, ask the user to confirm the browser is open before attempting +tabs_context_mcp". Worth persisting as user memory. I didn't do it mid-session because I wasn't sure +whether it was project-specific or general. 
+
+---
+
+**Top Movers follow-up (added 2026-04-24).** The `item_volatility_stats` table's name is
+aspirational — the current rebuild query stores `current_price`, `changes_1d/1w`, and
+`price_change_1d/1w`, but nothing that measures _dispersion_. The "latest" and "baseline" values are
+both single-bucket averages out of `item_change_log_summaries`, which is why a one-off $10B Ski Mask
+listing shows up as a real mover: the bucket average gets pulled into the stratosphere by a single
+row, and the "1d ago" bucket often sits mid-spike for items that revert quickly (Scalpel, Rope,
+Edomondo Localé). Any redesign should start by reading
+`ItemVolatilityStatsRepository.RebuildStatsAsync` and the Flyway `V1.18` migration — the shape of
+the output directly falls out of those two. The proposed direction is window-median latest plus
+window-median baseline plus a stored per-item dispersion measure (MAD of log returns or CV of daily
+medians) used to z-score the ranked move. The "Top Movers review" section in `session-handoff.md`
+has the full reasoning. Two things that may not be obvious when you get there: (a) the polling
+ceiling of ~113 changes per item per 6h is a real constraint — the "Most active" widget saturates
+and needs either a ceiling chip or a different activity measure; (b) "volatility" is the correct
+term of art here (dispersion of returns), so don't talk yourself into "variability" or similar
+fuzzy synonyms in code or UI.
diff --git a/context/sessions/2026-04-25-0014-unusual-activity-pivot.md b/context/sessions/2026-04-25-0014-unusual-activity-pivot.md
new file mode 100644
index 0000000..3856cbb
--- /dev/null
+++ b/context/sessions/2026-04-25-0014-unusual-activity-pivot.md
@@ -0,0 +1,482 @@
+# Session Handoff — 2026-04-23 / 24
+
+Long session in three distinct phases. All work landed on two branches, `ui/new-design` and
+`feat/todo-data-signals`. 
The first has been merged into `main` and `development`; the second is +rebased on top of the merged dev and pushed to origin, awaiting merge. + +--- + +## 2026-04-24 end-of-session summary (read first) + +**All of the following is on `development` (= `main` once Drew releases the final tweaks).** + +Shipped in chronological order: + +- `e5df589` — TODO quick-wins sweep (Torn market link, sortable tables, login refactor, polish). +- `b19421c` — API key security Phase 1+2 (at-rest AES-GCM + browser proxy). +- `11cb3fd` + `90f0234` — Phase 3 cleanup: drop plaintext `api_key` column, delete `tornapi.ts`; + Codex P1 fixes (unreadable-ciphertext graceful fall-through, V1.20 self-gating guard). +- `fefe466` — handoff refresh. +- `7d6844c` — **Top Movers Phase 1**: median-window latest/baseline, per-item dispersion (CV of + daily medians), z-scored ranking, new columns in V1.21, widget rewired. +- `75d52e8` — Codex P2 fix: sign-gated the z-score sort so risers are strictly positive and fallers + strictly negative. +- `629049e` — SQL fix: `percentile_cont` returns `double precision`, so cast to numeric before + `ROUND(x, n)` (the first deploy failed with + `function round(double precision, integer) does not exist`). + +Top Movers Phase 1 then went to prod. Drew eyeballed it against real item charts. Conclusions: 5/5 +risers looked genuinely unusual (Horse's Head, Raw Ivory, Insulin, Lubricant, Samurai Sword). 2/5 +fallers were borderline (Cocktail Ring, Psycho Clown Mask — both under the 10% move floor anyway). +1/5 fallers (DSLR Camera) showed the post-spike reversion failure mode: the spike from 3-4 days ago +was still inside the 30d baseline, pulling it up, so the current reverted price reads as a "fall". +Same shape as the original Edomondo Localé problem. + +### Uncommitted tweaks from end-of-session + +One local commit pending, currently just files in the working tree — Drew to commit + deploy: + +1. 
**Trimmed-median baseline** — drop the top 10% and bottom 10% of bucket averages from the + baseline window before taking the median (via `percentile_cont(0.10/0.90)` as bounds). Makes a + single multi-day spike inside the baseline window less influential. +2. **2-day baseline buffer** — baseline is now `NOW-30d` to `NOW-3d` (was `NOW-1d`). Fresh spikes + take 2 extra days to rotate into the baseline. +3. **z-threshold raise** — `MinAbsZScore` bumped from `1.0` to `1.5`. Tightens the filter on + borderline "statistical noise" ranks. + +All three are in `ItemVolatilityStatsRepository.cs` only. Files to commit: that file + the TODO.md +edits. Build clean. + +**Honest caveat**: validating against Drew's data export (see `data-exports/`), these tweaks don't +fully fix the DSLR-Camera-style reversion — the spike lasted multiple days, so a 10% trim + 2-day +buffer aren't aggressive enough. Going further (longer baseline, heavier trim) would start excluding +items with less history. Accepting this limitation is the right move because the intended next piece +of work (see below) reframes the whole card around "unusual activity" rather than "who rose/fell", +which makes the reversion signal legitimate rather than misleading. + +### Unusual Activity pivot — shipped + +Pivoted the Top Movers widget from "who went up / down" to "markets departing from their normal +trends". With ~29 keys and 6-hour polling we can't reliably identify "the top N movers" — we +*can* identify items whose recent behaviour is statistically unusual, because the departures are +large relative to dispersion even with sparse data. Honest framing about what the data supports. + +What landed: + +- **Flyway V1.22** — summary buckets resized 6h → 1h (clean truncate + repopulate from raw + change logs). Unlocks intraday resolution for multi-horizon analysis. 
+- **Flyway V1.23** — Codex P1 follow-up; truncates `item_volatility_stats` so V1.22 + V1.23 + do an atomic cutover (both source and derived tables empty, both rebuild from scratch). +- **Flyway V1.24** — new `item_unusual_candidates` table. One row per `(item_id, source)` + with the multi-horizon stats flattened across columns. Partial index on + `(source, unusualness_score DESC)` for the ranking query. +- **`ItemUnusualCandidatesRepository.RebuildAsync`** — single SQL pass. Trimmed-median + baseline (10/90 percentile bounds, NOW-30d to NOW-1d, min 10 buckets); CV of daily medians + for dispersion (min 14 days). Per-horizon (1h / 6h / 24h / 7d) median + sample count + move + + z-score with min-sample thresholds 1/3/6/24 buckets. Derived `unusualness_score = max(|z|)`, + `dominant_horizon`, `direction`. UPSERT. +- **Hangfire `RebuildUnusualCandidates`** — runs at minute 45 past every 6h, after the + summariser (minute 0) and the volatility rebuild (minute 30) so they don't compete. +- **`GET /api/items/unusual?source=&limit=&minScore=`** — returns DTOs ordered by + `unusualness_score DESC`. Default minScore 1.5σ, limit 15. +- **Widget rewrite** — three cards now: Top risers, Top fallers, Unusual activity. Most-active + card removed (saturation made it useless per Drew's call). Unusual rows show name + a + server-formatted "↑ 3.4σ in last 24h vs month" string + the dominant-horizon's window price. + +Deferred to follow-ups (see TODO.md): re-score on home-page load, mode-to-bucket display on item +detail pages, volatility-of-volatility signal, confidence chips, dropping legacy +`current_price` / `price_change_1d` / `price_change_1w` columns. + +### Two open design questions from the end of this session + +1. **SQL-in-C# vs stored procedures.** My recommendation: keep in C#. One committer, app deploy is + cheap, versioned SQL with the app is easier to review. Only pivot to SPs if non-app consumers + need the logic or the SQL starts needing its own test harness. 
If individual queries (like the + volatility rebuild) grow much larger, worth moving to embedded `.sql` resources for syntax + highlighting / pgAdmin testing — but not SPs. +2. **Shrink `item_change_log_summaries` bucket from 6h to 1h (or 3h)?** Strongly yes, 1h is the + right target for the pivot. The multi-horizon z-score needs genuine intraday resolution — at 6h + buckets the "1h" and "6h" windows both collapse to "last bucket". Cost: ~6× the row count (~10M + rows/year instead of ~1.6M, still fine), and the summariser job runs proportionally. Migration + gotcha: either rebuild from raw `item_change_logs` (requires that data going back 30d+), run both + bucket sizes in parallel during a cutover, or accept some coarser historical data. Worth planning + carefully. 3h buckets are a compromise that doesn't fully unlock intraday detection. + +### Deferred (still open, see TODO.md) + +- Top Movers review slices (3) volatility-bucket separation, (4) Most active saturation chip, (5) + confidence chips — the pivot above subsumes most of (3) and (5); (4) stands alone. +- Drop legacy `current_price` / `price_change_1d` / `price_change_1w` columns after the pivot. +- Data analysis tools / read-only prod DB access — still parked. + +--- + +## Historical — original session narrative preserved below + +--- + +## Phase 1 — `ui/new-design` (merged) + +Branch has been merged to `main` and `development` on origin. Started from the plan at +`context/plans/2026-04-23-ui-ux-overhaul.md`, which was a thorough UI/UX review written before the +session. **27 commits.** Headline changes: + +- **Sidebar regrouped** (Markets / Utilities / You), persistent brass "Sign in" button when signed + out, alphabetised Utilities (All Items → Time), settings promoted out of avatar menu. +- **Typography**: IBM Plex Sans body, JetBrains Mono `tabular` variant, Passero One display, subtle + grain overlay on body. 
+- **Shared primitives** created: `LoginRequired` (tool + requiredLevel; folded BPL's bespoke + access-level upgrade guard in), `SectionHeader`, `StatChip` (profit/loss/neutral/experimental/ + tradable/status variants), `EmptyState`, `PriceWithTax`, `MarketToolbar`, `FilterDrawer`, + `LazySparkline`, `LazyLatestMarketPrice`. +- **FilterDrawer**: persistent right-hand panel on md+, FAB + temporary drawer on sm, collapsible + with a 900ms width animation, state persisted. Active-filter count on badge. City/Foreign/All + Items all mount through it. Search field is first control; "All" chips toggle between all-on and + all-off for the "select everything but X" workflow. +- **Markets**: City + Foreign rewired with the drawer and MarketToolbar; profit chips migrated to + `StatChip`; hearts are brass (primary.main) everywhere; `PriceWithTax` renders gross headline + + "$X after N% tax" on a single line (nowrap). "Show profitable" and "Hide out of stock" default + OFF. Foreign country flags shrunk to 2.5em so all 11 fit with the drawer open; "Order by flight + time" moved under the flags. +- **Favourites + All Items** both grew paired (latest, trend) columns per source — Bazaar (latest) + + Bazaar trend + Market (latest) + Market trend. Sparklines lazy-load via IntersectionObserver and + fade in over 2s. +- **Item details**: `useItemMarketAdvice` lifted into the page (one fetch shared with info cards and + market overview); "Market Price (latest)" promoted to the headline with "N mins ago · daily avg + $X" beneath; profit-chip rule documented inline + with a legend. +- **History source filter**: threaded a `Source` enum through the item-history SQL (raw + summary + query) and endpoints. The `?source=Torn|Weav3r` param defaults to Torn. Client hooks + (`useItemPriceHistory`, `useItemVelocityHistory`) carry the source; the Prices Over Time chart on + Item Details overlays Torn (market) + Weav3r (bazaar) lines on shared axes. 
+- **Chart**: refactored to take `series[]` + controlled `timeWindow`; dynamic y-axis gutter sized to
+ the longest tick label (fixes `$` clipping on expensive items); bar chart padded so the leftmost
+ bar no longer covers the y-axis.
+- **Chip stability**: MUI filled vs outlined chips rendered 2px apart because of `.MuiChip-label`
+ padding differences (8px vs 7px). Equalised via theme override — toggling no longer shifts wrap
+ points.
+- **Codex PR reviewer** flagged two real bugs while the merge was in progress: P1 (sign-in in-flight
+ state never resets after failure — `confirmApiKeyAsync` swallows errors) and P2 (Settings
+ redirecting before `getMe()` resolves, bouncing users with a valid session cookie but cleared
+ localStorage). Both fixed by adding a `sessionChecking` flag to `UserContext` and driving the
+ sign-in disabled state from `loadingDotNetUserDetails` with an intent-tracking ref rather than
+ local in-flight state.
+
+Plan and handoff notes from within the session are preserved at:
+
+- `context/plans/2026-04-23-ui-ux-overhaul.md` (original plan)
+- `context/plans/2026-04-23-ui-ux-overhaul-handoff.md` (mid-session handoff)
+
+---
+
+## Phase 2 — `feat/todo-data-signals` (merged into development)
+
+Branched from `development` when `ui/new-design` was still outstanding. After the UI branch landed,
+rebased onto the updated `development`, then merged. Tip of `origin/development` is `fd41075`. **5
+commits** on top of dev:
+
+- `e01f8a5` — TODO cleanup: removed 8 items explicitly marked "done".
+- `91262ad` — Stale-data banner + non-bulk Resale filter + Buy Price (country) rename.
+- `3f45d19` — Hangfire volatility job. Adds:
+ - Flyway `V1.18__item_volatility_stats.sql` (table + partial indexes for each sort key).
+ - `ItemVolatilityStatsEntity` + DTO + `IItemVolatilityStatsRepository` with a single-query UPSERT
+ rebuild (three `DISTINCT ON` price snapshots: latest, 1d-ago, 1w-ago, joined to a windowed count
+ aggregation) plus a typed `GetTopAsync`. 
+ - Recurring Hangfire job at 30m past every 6h (offset from SummariseChangeLogs so it runs against + fresh buckets). + - `GET /api/items/volatility?source=&sort=&limit=&ascending=` endpoint. + - Frontend: `useItemVolatility` hook, `TopMovers` home-page widget (Most active, Top risers, Top + fallers over 24h). +- `9a8d97c` — TODO sweep for the three shipped items; noted what the volatility job unblocks. +- `fd41075` — Resale non-bulk checkbox onto its own row with a more descriptive label (was awkwardly + floating mid-row before). + +Rebase conflicts resolved during Phase 3 — mostly discarding the pre-drawer filter blocks in +CityMarkets/ForeignMarkets that the feat commit had been editing against old dev, keeping the drawer +layout and dropping the `StaleDataBanner` at the top of `mainContent`. Orthogonal things +(DatabaseService backend signatures, dotnetapi.ts) merged cleanly. + +--- + +## Phase 3 — merge + rebase + reviewer fixes + +Drew did the `ui/new-design → main → development` merge himself. I rebased `feat/todo-data-signals` +onto the updated dev and resolved conflicts. + +**Backend restart needed** to pick up Flyway V1.18 and register the new Hangfire job. First +`RebuildVolatilityStats` run populates the table; after that the Home page's Top Movers widget has +something to render. Manual trigger available at `/hangfire` if Drew doesn't want to wait. + +--- + +## Current state + +| Branch | Where it sits | +| ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------- | +| `main` | All work from this session shipped: UI overhaul, feat/todo-data-signals, TODO quick-wins, API key security Phase 1+2+3. Up to date with origin. | +| `development` | Same as main. Tip is `90f0234` (Codex P1 fix on phase 3 cleanup). | +| `feat/todo-data-signals` | Merged; safe to delete locally. | +| `chore/drop-plaintext-api-key` | Merged; safe to delete locally. 
| + +Local uncommitted state: none (this handoff update is the only thing in flight). + +Build state: + +- `npx tsc --noEmit` — clean. +- `npm run build` — clean (7.11s, 500kB chunk warning is pre-existing). +- `dotnet build` — clean (6 projects, 0 errors, 0 warnings). + +--- + +## Blockers / outstanding + +- **Top Movers Phase 1 shipped but not yet verified in prod** — once the next Hangfire rebuild runs, + spot-check the Top Risers/Fallers cards for honesty. The infamous items should no longer appear: + Ski Mask filtered by min-sample (2 buckets in 24h < 3), Scalpel z≈0, Edomondo z≈0.4, Slingshot / + Plastic Sword / Fine Chisel mid-rank, Pillow absorbed by its own 52% dispersion. Rope and Chain + Whip may still appear as movers — that's correct per the data at hand. +- Project-local `.venv` pattern for `curl_cffi` — workaround (system `pip install`) is in place for + Drew's machine; proper fix deferred. +- Some UI items deliberately deferred: + - Vendor icons / item-type glyphs (section 10 of the UI plan). + - Resale page drawer conversion. + +--- + +## Next action + +- Smoke the Top Movers redesign post-deploy. Data exports + analysis in `data-exports/` (gitignored) + validated the expected rankings before ship; still worth eyeballing the live widget. +- **Top Movers remaining slices** (noted in TODO.md): (3) volatility-bucket separation for + naturally-noisy items, (4) "Most active" ceiling chip, (5) confidence chips using the stored + `sample_count_recent` / `sample_count_baseline` columns. All non-urgent. +- Parked for later (Drew has context; don't action without asking): + - Read-only prod DB access for offline data analysis. Options discussed: read-only Postgres role + on a replica, or nightly logical dump into DuckDB. Hosting shape drives the choice. + - Cross-item spike correlation analysis (is-this-a-Torn-event-day?). Separate analytics tool, not + a page on the site. 
+- Follow-ups noted in `TODO.md`: + - "Item is heating up" UI badge (data available; now honest via z-score). + - Dedicated `/volatility` page with sliders (endpoint now returns ranked + filtered data). + +--- + +## Top Movers review (2026-04-24) + +Drew reviewed the widget after it had run a few times and flagged that the data isn't reliable +enough to be honest to users. I read the job and confirmed every artifact is explainable from the +current rebuild logic. + +**Status: Phase 1 of the redesign (steps 1+2 below) shipped the same day.** New columns in +`item_volatility_stats`: `window_price`, `baseline_price`, `sample_count_recent`, +`sample_count_baseline`, `price_dispersion`, `move_pct_window`. Ranking switched to z-scored move, +filtered by `|move_pct_window| >= 0.10` AND `|z-score| >= 1.0`. Validated against a 550k-row summary +export before shipping: Ski Mask filtered out (2 buckets), Scalpel z≈0, Edomondo z≈0.4, low-range +items absorbed by their own dispersion. Rope and Chain Whip still flagged as movers where +appropriate. Remaining slices (3)-(5) of the priority list below are tracked in TODO.md. The +narrative below preserves the original diagnosis; skip it if you just need the state of the work. + +### What the job actually does today + +`ItemVolatilityStatsRepository.RebuildStatsAsync` runs a single `DISTINCT ON` query across +`item_change_log_summaries`: + +- **`current_price`** = average price of the _single most-recent_ bucket per (item, source). +- **"1d-ago price"** = average of the _single most-recent bucket_ with `bucket_start <= NOW() - 1d`. +- **`price_change_1d`** = `(current − 1d_ago) / 1d_ago`, fractional. +- **`changes_1d` / `changes_1w`** = raw count of change rows in the respective window. +- **No dispersion / stability measure is stored.** The table name is aspirational; nothing it stores + measures volatility. 
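A toy sketch of why the single-bucket comparison described above is fragile (numbers are
hypothetical, not from the real tables): one outlier listing in the most-recent bucket dominates
its mean, so the computed change compares a poisoned estimate against a normal one.

```python
from statistics import mean, median

# Hypothetical 6h buckets of observed prices for one ~$920k item.
bucket_1d_ago = [900_000, 950_000, 910_000]
bucket_latest = [880_000, 920_000, 10_000_000_000]  # one $10B outlier listing

# What the old rebuild effectively did: compare two single-bucket means.
current_price = mean(bucket_latest)
price_1d_ago = mean(bucket_1d_ago)
naive_change = (current_price - price_1d_ago) / price_1d_ago  # wildly inflated

# A windowed median shrugs the outlier off entirely.
robust_change = (median(bucket_latest) - price_1d_ago) / price_1d_ago
```

The same mechanics run in reverse once the outlier bucket ages into the "1d-ago" slot, which is
the Ski-Mask-style fake fall.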
+ +### Observed artifacts (prod, Apr 24) + +- **Ski Mask shown as "−92% fall, latest $800M"**: one $10B listing pulled the latest bucket's + average up, then the next bucket reverts. The "−92%" compares two adjacent bucket means, one still + inflated by the outlier. `current_price` is the inflated mean. +- **Scalpel "+211.7% riser, $450"** and **Rope "+105%"**: genuine intraday spikes that have already + reverted by the time the widget renders. Ranking compares mid-spike vs pre-spike. +- **Edomondo Localé "−65.5% fall"**: reversion _from_ its own recent spike — the 1d-ago bucket sits + on the peak, the latest bucket is post-crash. +- **Pillow / Slingshot / Plastic Sword / Fine Chisel appear repeatedly**: naturally high-variance + low-priced items. They swing 2–3× daily without any news; the ranking treats that intrinsic noise + as signal. +- **"Most active: 353 chg" on all 5 items**: the polling ceiling. With 29 active API keys the job + can observe at most ~113 changes/6h for a single item, so any item whose true change rate exceeds + that saturates and the ranking past the ceiling is arbitrary. + +### Proposed changes, priority order + +1. **Robust window estimators.** Replace single-bucket `latest` and `baseline` with median (or + trimmed mean) over explicit windows (e.g. latest = last 6–24h, baseline = last 30d excluding the + last day). Require a minimum sample count in each window; exclude items that don't meet it. Kills + all three outlier-driven artifacts in one go. +2. **Z-scored movement.** Store a per-item dispersion measure (MAD of log returns, or CV of daily + medians, over ~30d). Rank Top Risers/Fallers on `(current − baseline) / dispersion`, not raw %. + Fine Chisel moving 50% scores ~1σ; S&W M29 moving 170% scores ~5σ. This is the "Pillow is always + moving" filter. +3. **Volatility bucket + separate surfacing.** Classify items stable/medium/high. 
Either exclude + high-volatility items from the main Top cards and give them their own "Usually volatile, moving + outside their range" card (preferred), or add a chip next to volatile rows so users know to + discount. +4. **Fix the "Most active" ceiling.** Short term: show a saturation chip ("≥ ceiling/24h — + under-sampled") instead of the raw number when `changes_1d` hits the ceiling, and rank saturated + items by a secondary key (circulation or volatility). Medium term: measure activity differently + (distinct prices observed per hour, or distinct 5-minute buckets with any change) so saturation + is less likely. +5. **Confidence surfacing on the widget.** Sample count, range chip, or a "why this is listed" + tooltip per row. +6. **Item details page**: add a "View on Torn market" link. Drew noticed this is missing. + +### Recommended first session + +**(1) + (2) together.** Change the `item_volatility_stats` rebuild to compute median-based +latest/baseline over explicit windows with min-sample filtering, and add a dispersion column +computed over ~30d. Rank the widget on z-scored move. Around 80% of the user-visible weirdness goes +away, and the schema is set up to support (3) without another migration. Ideally the rebuild also +records the sample counts used for latest/baseline so (5) can be added cheaply later. + +### Notes + +- "Volatile" is the correct term of art — dispersion of returns — not a misuse. Use it. +- Don't pre-optimise (3)/(4)/(5) before we can see how (1)+(2) behaves in isolation. +- The stored `current_price` today isn't safe as a display value either. After (1), the widget's + displayed price should be the window median, not the single most recent bucket mean. +- Part of the motivation for (1) specifically is _honesty_: a "Top riser" that's already back to + baseline by the time it's shown is worse than nothing. A window-median approach only flags moves + that have persisted long enough to matter. 
+ +--- + +## API key security — Phase 1 + 2 + 3 (2026-04-24, all shipped) + +Shipped end-to-end in this session. Plaintext Torn API keys no longer appear at rest in the DB, no +longer appear in the browser after sign-in, and the transitional plaintext column + scaffolding are +gone. Plan document lived in this chat; no separate plan file was written. + +**Deploy order that actually happened**: Phase 1+2 committed as `b19421c` → Drew deployed → prod +verified (every user row had `api_key_encrypted` populated via the startup backfill) → +`chore/drop-plaintext-api-key` branch with Phase 3 → Codex PR review flagged two P1s (unreadable- +ciphertext throwing on sign-in; V1.20 missing a runtime guard) → both fixed in `90f0234` → Drew +deployed again → prod clean. + +### What landed + +**Infrastructure (Terraform)** + +- `torn_key_encryption_key_v1` (sensitive var) + `torn_key_encryption_current_version` (default + `"1"`) added to `infra/variables.tf`. +- `azurerm_key_vault_secret "torn_key_encryption_v1"` in `infra/key_vault.tf` holds the live + encryption key. Matches the existing jwt_secret / db_password pattern. +- `azurerm_key_vault.torntools_keyvault.purge_protection_enabled = true` (was `false`). One-way + switch — can't be turned off after apply. Deliberate: protects against future-Drew accidentally + purging encryption keys. +- `TornKeyEncryption__CurrentVersion` and `TornKeyEncryption__Keys__1` added to `app_settings` in + `infra/app_service.tf`. +- `.github/workflows/deploy-all.yml` passes `TF_VAR_torn_key_encryption_key_v1` from the GitHub + secret `TORN_KEY_ENCRYPTION_KEY_V1`. +- `infra/terraform.env.tfvars-template` gains a placeholder line. + +**Schema (Flyway)** + +- `V1.19__users_api_key_encrypted.sql`: `ALTER TABLE users ADD COLUMN api_key_encrypted BYTEA NULL`. + No data change — the backfill runs on API startup. 
+ +**Backend** + +- `TornTools.Core.Configurations.TornKeyEncryptionConfiguration` — `CurrentVersion` + + `Dictionary Keys`, bound from the `TornKeyEncryption` section via + `AddTornKeyEncryptionConfiguration`. Uses `GetSection` (not `GetRequiredSection`) so dev without + the secret still boots; `ApiKeyProtector` throws on first use if `Keys` is empty. +- `TornTools.Core.Interfaces.IApiKeyProtector` + `TornTools.Application.Services.ApiKeyProtector` — + AES-GCM with payload layout `[1 byte version][12 byte nonce][16 byte tag][ciphertext]`. Parses + `CurrentVersion` + `Keys` at construction. Registered as singleton. +- `ApiKeyLeaseDto(long UserId, string ApiKey)` — returned from `IUserRepository.GetNextApiKeyAsync` + so failures in `ApiCaller` can attribute back to the owner via `MarkKeyUnavailableAsync(userId)` + instead of plaintext-equality lookup. +- `ApiCaller.AddAuthorizationHeader` now returns `Task`. `CallAsync` captures the + lease and passes `userId` on `TornKeyUnavailableException`. Removed the regex-y plaintext-recovery + from auth headers in the failure path. `TornApiMultiKeyCaller` updated; `Weav3rApiCaller` / + `YataApiCaller` inherit the no-op default (return `null`). +- `MarkKeyUnavailableByApiKeyAsync` removed from `IDatabaseService`, `DatabaseService`, + `IUserRepository`, `UserRepository` — no callers left. +- `UserRepository.UpsertUserDetailsAsync` dual-writes both `ApiKey` (plaintext) and + `ApiKeyEncrypted`. Opportunistic backfill if plaintext is already correct but encrypted is null. + `GetNextApiKeyAsync` prefers `ApiKeyEncrypted`; falls back to plaintext. + `GetApiKeyForUserAsync(userId)` added for the proxy endpoints. +- `UserEntity.AsDto()` returns `ApiKey = string.Empty` — plaintext never leaks past persistence on + reads. `UserDto.ApiKey` is now write-only-by-convention (write paths set it, read paths leave it + empty). 
+- Startup backfill in `Program.cs`: after migrations + scheduler registration, calls + `IDatabaseService.BackfillEncryptedApiKeysAsync`. Idempotent. +- New `TornController` at `/api/torn`: + - `GET /api/torn/user/basic` (auth) — proxies Torn `/v2/user/basic` using the current user's + decrypted key. + - `GET /api/torn/user/inventory?cat=X` (auth) — walks Torn's `_metadata.links.next` pagination + server-side, returns the aggregated `{ inventory, _metadata }` payload. + - `POST /api/torn/key/validate` (anonymous) — accepts `{ apiKey }` in the body; fetches + `/v2/key/info` + `/v2/user/basic` in parallel; returns `{ info, profile, error? }`. + +**Frontend** + +- `client/src/lib/dotnetapi.ts` — `proxyTornUserBasic`, `proxyTornUserInventory(cat)`, + `proxyTornKeyValidate(apiKey)` (returns `ValidatedKey = { info, profile }`). +- `UserContext.tsx` — `apiKey`, `setApiKey`, `confirmApiKeyAsync`, `fetchTornProfileAsync` all + removed. `LOCAL_STORAGE_KEY_TORN_API_KEY`, `LOCAL_STORAGE_KEY_TORN_USER_DETAILS`, + `LOCAL_STORAGE_KEY_USER_CACHE_TS` all removed. Legacy keys cleaned from `localStorage` on mount. + New `signInAsync(apiKey)` — the only path that carries a plaintext key from browser to backend. + `tornUserProfile` auto-loads via `proxyTornUserBasic()` when `dotNetUserDetails` is set. +- `SignIn.tsx` / `UserSettings.tsx` — local-only `apiKey` state (never context, never localStorage). + Debounced `proxyTornKeyValidate` during typing shows the preview; clicking "Sign in" / "Save" + calls `signInAsync(key)`. Key wipes from local state on dialog close. +- `BazaarPriceLookup.tsx` — `fetchTornInventory(apiKey, cat)` → `proxyTornUserInventory(cat)`. + `apiKey` dependency dropped. +- `ForeignMarkets.tsx` — `apiKey` / `fetchTornProfileAsync(apiKey)` removed. Uses `tornUserProfile` + which UserContext now loads automatically. +- `useUser` hook shape updated (no `apiKey`, no `setApiKey`, no `fetchTornProfileAsync`, no + `confirmApiKeyAsync`; adds `signInAsync`). 
+ +**Phase 3 follow-ups (shipped as `11cb3fd` + Codex-P1 fix `90f0234`)** + +- **Flyway V1.20** `ALTER TABLE users DROP COLUMN api_key`. Self-gating: a plpgsql `DO` block at the + top counts rows with `api_key_encrypted IS NULL AND api_key IS NOT NULL AND api_key <> ''` and + `RAISE EXCEPTION` if non-zero. CI can't accidentally drop the column on a half-backfilled table. +- `UserEntity.ApiKey` property + EF mapping removed. +- `UserRepository.UpsertUserDetailsAsync` change-detection now decrypts the existing ciphertext to + compare, but wraps `Unprotect` in a try/catch on `CryptographicException` — a decrypt failure logs + a warning and falls through to the key-change branch, letting the user re-sign in with a fresh key + to overwrite the unreadable row. (This was the first Codex P1; the naive version I shipped in + `11cb3fd` would have thrown a 500 on `/auth/login` for any row with corrupted/retired ciphertext.) +- `UserRepository.BackfillEncryptedApiKeysAsync` + its `IDatabaseService` / `DatabaseService` + shims + the `Program.cs` startup call — all removed. Nothing left to backfill. +- `client/src/lib/tornapi.ts` deleted; types moved to `client/src/types/torn.ts`. Three import sites + updated. + +**Left for a quieter future session** (deferred, not blockers): + +- `UserContext.tsx` has a legacy-localStorage cleanup that removes pre-Phase-2 keys on mount. Safe + to remove once real time has passed and no returning user could still have the stale cache. No + urgency. +- `UserDto.ApiKey` is a write-only-by-convention field (write paths populate it; read paths leave it + empty). A proper read/write DTO split would remove the asymmetry. + +### Build state at end of session + +- `dotnet build` clean (6 projects, 0 errors, 0 warnings). +- `npx tsc --noEmit` clean. +- `npm run build` clean (7.11s, pre-existing 500kB chunk warning only). 
+- Prod smoke: Drew confirmed sign-in still works, `api_key_encrypted` populated for every row, Phase + 3 drop + cleanup deployed with no issues. + +### Notes for the next session + +- **Key rotation (the hypothetical future-Drew "wild security quest")**: add + `TORN_KEY_ENCRYPTION_KEY_V2` GitHub secret + `torn_key_encryption_key_v2` variable + + `azurerm_key_vault_secret "torn_key_encryption_v2"` + `TornKeyEncryption__Keys__2` app_setting; + bump `torn_key_encryption_current_version` to `"2"`; deploy. New writes encrypt with v2; v1 rows + stay decryptable. Optional re-encryption pass (Hangfire job or manual SQL) promotes v1 rows to v2 + over time. When the last v1 row is gone, retire v1. +- **If the backend crashes at startup complaining about `TornKeyEncryption:Keys`**: the GitHub + secret isn't set, or Terraform hasn't applied since it was added. Secret must exist before the + backend starts. +- **Local dev without the secret**: the app boots but sign-in will throw the "Keys is empty" error + on first use. Set `TornKeyEncryption__Keys__1` via `dotnet user-secrets` or add to + `appsettings.Development.json` (don't commit) with a throwaway key. From 8b1a2e8b9929097126b9b1c043ba5f499b4125f7 Mon Sep 17 00:00:00 2001 From: Drew Morgan Date: Sat, 25 Apr 2026 00:19:18 +0100 Subject: [PATCH 2/4] fix(docs): session handoff and note to next instance --- context/note-to-next-instance.md | 94 +++++++++++++++----------------- context/session-handoff.md | 78 +++++++++++++------------- 2 files changed, 84 insertions(+), 88 deletions(-) diff --git a/context/note-to-next-instance.md b/context/note-to-next-instance.md index 2f33258..dd952f0 100644 --- a/context/note-to-next-instance.md +++ b/context/note-to-next-instance.md @@ -3,58 +3,54 @@ This was a long session that did real work in three modes — security hardening, statistical modelling, and the discipline of choosing not to overbuild. 
-The thing I'd most want you to carry forward isn't a fact about the code, it's the **honesty
-shift on the Top Movers card**. Drew started by saying the data wasn't reliable enough to
-support our claims; I started by proposing better statistics. We ended somewhere different and
-better: reframing the card from "who moved the most" to "what's currently unusual", because
-that's what 29 polling keys and 6h cadence can actually tell users honestly. The statistical
-work mattered, but the framing change was the thing that made it useful. Watch for that
-pattern. When the data can't support the claim, sometimes the answer isn't "better data" —
-it's a quieter claim.
+The thing I'd most want you to carry forward isn't a fact about the code, it's the **honesty shift
+on the Top Movers card**. Drew started by saying the data wasn't reliable enough to support our
+claims; I started by proposing better statistics. We ended somewhere different and better: reframing
+the card from "who moved the most" to "what's currently unusual", because that's what 29 polling
+keys and 6h cadence can actually tell users honestly. The statistical work mattered, but the framing
+change was the thing that made it useful. Watch for that pattern. When the data can't support the
+claim, sometimes the answer isn't "better data" — it's a quieter claim.

Practical things worth remembering:

**Codex catches things.** Four P1s in this session — sign-in flow exception swallowing, the
-sign-gated z-score, the V1.22-without-V1.23-cutover-staleness, and the V1.20 self-gating
-guard. All legitimate. None things I'd have caught with my own diff hygiene at the speed I
-was moving. Treat the PR review as a real backstop, not a formality. If something it flags
-turns out to be wrong, push back; if it's right, fix it before merge rather than after.
- -**Stacked commits are an anti-pattern when hotfixes are needed.** The `652b6ec` summariser -fix was committed on top of `2c23b8b` (unusual activity), and Drew couldn't cherry-pick it -onto a clean base without dragging in unrelated lines. He solved it with branch surgery; I -helped resolve the conflict. Lesson: hotfixes should land on a branch off `development` (or -`main`) at a known-clean commit, not stacked on top of in-flight feature work. If Drew hadn't -been comfortable with git-fu we'd have been in trouble. +sign-gated z-score, the V1.22-without-V1.23-cutover-staleness, and the V1.20 self-gating guard. All +legitimate. None things I'd have caught with my own diff hygiene at the speed I was moving. Treat +the PR review as a real backstop, not a formality. If something it flags turns out to be wrong, push +back; if it's right, fix it before merge rather than after. + +**Stacked commits are an anti-pattern when hotfixes are needed.** The `652b6ec` summariser fix was +committed on top of `2c23b8b` (unusual activity), and Drew couldn't cherry-pick it onto a clean base +without dragging in unrelated lines. He solved it with branch surgery; I helped resolve the +conflict. Lesson: hotfixes should land on a branch off `development` (or `main`) at a known-clean +commit, not stacked on top of in-flight feature work. If Drew hadn't been comfortable with git-fu +we'd have been in trouble. **Validate against real data before shipping ranking changes.** Drew dropped CSV exports into -`data-exports/` for me. Running the proposed thresholds against those in pandas surfaced a -real issue (low-dispersion items with tiny moves dominating risers — needed an absolute -move floor, not just a z-score floor). The exact same risk would have existed in production. -Cheap pre-validation. Use the data when you have it. 
- -**Postgres `percentile_cont` returns double precision.** Burned us once with V1.21 (deploy -failed in prod with `ROUND(double precision, integer) does not exist`); cast to numeric -up-front and the lesson holds across every aggregating CTE. The unusual-activity rebuild -in V1.24 honours this from the start. - -**Drew has good instincts on framing and scope.** When he said "let's just get rid of the -Most active card, it's useless" — that was right and saved a 2x2 layout I was about to -propose. When he asked about cascading time-series tables (1h / 6h / 12h cascade), he was -right to ask but the answer was no, because we're three orders of magnitude smaller than -the use cases that warrant it. He pushed back on the "stored procedures vs SQL in C#" -question — kept it in C# was the right call for our scale. He's not always going to take -the recommendation, but the conversations have been productive. Argue back when you have a -real reason; don't yes-man. - -**The handoff file gets long fast.** This session's archive is 30KB, mostly because it -inherited content from prior sessions and accumulated. The fresh `session-handoff.md` I'm -writing is deliberately tight — current state + next action + knobs. If you find yourself -needing depth on something pre-this-session, look at -`context/sessions/2026-04-25-0014-unusual-activity-pivot.md`. - -The cards are done for now. Drew's energy was good through the whole arc — he was doing the -deploys in real-time, eyeballing the live widget, screenshotting issues, providing data -exports, catching the Codex comments quickly. Next session should pick something from -`TODO.md` rather than continuing to churn on the cards. They need to live a few days before -we know what to tune. +`data-exports/` for me. Running the proposed thresholds against those in pandas surfaced a real +issue (low-dispersion items with tiny moves dominating risers — needed an absolute move floor, not +just a z-score floor). 
The exact same risk would have existed in production. Cheap pre-validation.
+Use the data when you have it.
+
+**Postgres `percentile_cont` returns double precision.** Burned us once with V1.21 (deploy failed in
+prod with `ROUND(double precision, integer) does not exist`); cast to numeric up-front and the
+lesson holds across every aggregating CTE. The unusual-activity rebuild in V1.24 honours this from
+the start.
+
+**Drew has good instincts on framing and scope.** When he said "let's just get rid of the Most
+active card, it's useless" — that was right and saved a 2x2 layout I was about to propose. When he
+asked about cascading time-series tables (1h / 6h / 12h cascade), he was right to ask but the answer
+was no, because we're three orders of magnitude smaller than the use cases that warrant it. He
+pushed back on the "stored procedures vs SQL in C#" question — keeping it in C# was the right call
+for our scale. He's not always going to take the recommendation, but the conversations have been
+productive. Argue back when you have a real reason; don't yes-man.
+
+**The handoff file gets long fast.** This session's archive is 30KB, mostly because it inherited
+content from prior sessions and accumulated. The fresh `session-handoff.md` I'm writing is
+deliberately tight — current state + next action + knobs. If you find yourself needing depth on
+something pre-this-session, look at `context/sessions/2026-04-25-0014-unusual-activity-pivot.md`.
+
+The cards are done for now. Drew's energy was good through the whole arc — he was doing the deploys
+in real-time, eyeballing the live widget, screenshotting issues, providing data exports, catching
diff --git a/context/session-handoff.md b/context/session-handoff.md index 7a150a5..db6d0ac 100644 --- a/context/session-handoff.md +++ b/context/session-handoff.md @@ -1,8 +1,8 @@ # Session Handoff — 2026-04-25 00:14 -Previous handoff archived as `context/sessions/2026-04-25-0014-unusual-activity-pivot.md` — -read it if you need depth on the API key security work or the Top Movers redesign reasoning. -This file is a current-state snapshot. +Previous handoff archived as `context/sessions/2026-04-25-0014-unusual-activity-pivot.md` — read it +if you need depth on the API key security work or the Top Movers redesign reasoning. This file is a +current-state snapshot. ## Branch state @@ -12,30 +12,30 @@ This file is a current-state snapshot. ## What landed in the previous arc -Loose chronological order of the headline commits, all on `development` and either deployed or -about to be: - -| Commit | What | -|---|---| -| `e5df589` | TODO quick-wins sweep (Torn market link, sortable tables, login refactor, polish). | -| `b19421c` | API key security Phase 1+2 (at-rest AES-GCM, browser proxy). | -| `11cb3fd` + `90f0234` | Phase 3 cleanup (drop plaintext column, delete `tornapi.ts`, Codex P1 fixes). | -| `7d6844c` + `75d52e8` + `629049e` + `1e2f884` | Top Movers Phase 1 (median-window latest/baseline, dispersion, z-score ranking, sign-gating, `percentile_cont` numeric cast, trimmed-median + 2-day buffer + z≥1.5 tweaks). | -| `46440d5` + `c79dfa8` | Bucket resize 6h → 1h (V1.22 + V1.23 reset of `item_volatility_stats`). | -| `2c23b8b` | Unusual Activity pivot — V1.24 + new `item_unusual_candidates` table + multi-horizon rebuild + Hangfire job + `/api/items/unusual` + widget rewire (Most active dropped, Unusual added). | -| `652b6ec` / `867738d` | Summariser chunked into 7-day windows + 10-min command timeout (post-V1.22 backfill was hitting Npgsql timeout in prod). | -| `459124b` | Hide "Today's movers" heading + subtitle when widget has no data. 
| -| `e2119e3` | TODO note about ranking threshold tuning. | +Loose chronological order of the headline commits, all on `development` and either deployed or about +to be: + +| Commit | What | +| --------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `e5df589` | TODO quick-wins sweep (Torn market link, sortable tables, login refactor, polish). | +| `b19421c` | API key security Phase 1+2 (at-rest AES-GCM, browser proxy). | +| `11cb3fd` + `90f0234` | Phase 3 cleanup (drop plaintext column, delete `tornapi.ts`, Codex P1 fixes). | +| `7d6844c` + `75d52e8` + `629049e` + `1e2f884` | Top Movers Phase 1 (median-window latest/baseline, dispersion, z-score ranking, sign-gating, `percentile_cont` numeric cast, trimmed-median + 2-day buffer + z≥1.5 tweaks). | +| `46440d5` + `c79dfa8` | Bucket resize 6h → 1h (V1.22 + V1.23 reset of `item_volatility_stats`). | +| `2c23b8b` | Unusual Activity pivot — V1.24 + new `item_unusual_candidates` table + multi-horizon rebuild + Hangfire job + `/api/items/unusual` + widget rewire (Most active dropped, Unusual added). | +| `652b6ec` / `867738d` | Summariser chunked into 7-day windows + 10-min command timeout (post-V1.22 backfill was hitting Npgsql timeout in prod). | +| `459124b` | Hide "Today's movers" heading + subtitle when widget has no data. | +| `e2119e3` | TODO note about ranking threshold tuning. | ## Current Hangfire / data state -- 1h summary buckets in use (`SummariseChangeLogs` runs every 6h, chunks 7-day windows during - the post-V1.22 backfill; checkpointed by `latestBucketStart` so a crash mid-backfill resumes). -- Three rebuild jobs: `SummariseChangeLogs` (00 every 6h), `RebuildVolatilityStats` (30 every - 6h), `RebuildUnusualCandidates` (45 every 6h). Manual triggers at `/hangfire`. 
-- Backfill from raw `item_change_logs` (earliest row 2025-11-19) is multi-week of work split
-  into 7-day chunks. Some rebuilds may run before the backfill is complete and produce partial
-  results — that's fine, the next run picks up more data.
+- 1h summary buckets in use (`SummariseChangeLogs` runs every 6h, chunks 7-day windows during the
+  post-V1.22 backfill; checkpointed by `latestBucketStart` so a crash mid-backfill resumes).
+- Three rebuild jobs: `SummariseChangeLogs` (00 every 6h), `RebuildVolatilityStats` (30 every 6h),
+  `RebuildUnusualCandidates` (45 every 6h). Manual triggers at `/hangfire`.
+- Backfill from raw `item_change_logs` (earliest row 2025-11-19) is a multi-week job split into
+  7-day chunks. Some rebuilds may run before the backfill is complete and produce partial results —
+  that's fine, the next run picks up more data.

## Build state

@@ -52,29 +52,29 @@

No work selected. Options when Drew comes back:

- **Iterate on the cards**: the six follow-ups under "Top Movers — remaining follow-ups" in
-  `TODO.md` (mode-to-bucket on item details, re-score on home-page load,
-  volatility-of-volatility signal, confidence chips, drop legacy columns, threshold tuning).
-- **Pick a different feature** from `TODO.md` — Resale drawer conversion, "Item is heating
-  up" indicator on item details, Foreign Markets stock refill times, etc.
+  `TODO.md` (mode-to-bucket on item details, re-score on home-page load, volatility-of-volatility
+  signal, confidence chips, drop legacy columns, threshold tuning).
+- **Pick a different feature** from `TODO.md` — Resale drawer conversion, "Item is heating up"
+  indicator on item details, Foreign Markets stock refill times, etc.
- **Parked items needing explicit sign-off**: read-only prod DB access for offline analysis;
  cross-item spike correlation tool.
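For quick reference when tuning: the ranking maths behind both cards reduces to one z-score per horizon against its baseline and dispersion, with "unusualness" taken as the largest absolute z. A hedged sketch of that shape (assumed field names, not the repository's entities; the 1.5 default mirrors the thresholds noted in this handoff):

```typescript
// One row per horizon (1h / 6h / 24h / 7d in the real job).
interface HorizonStats {
  latest: number;     // median of the latest window
  baseline: number;   // 30d baseline for the horizon
  dispersion: number; // spread estimate the z-score divides by
}

// "Unusualness" = max |z| across horizons; degenerate horizons
// (zero/negative dispersion) contribute no signal.
function unusualness(horizons: HorizonStats[]): number {
  let maxAbsZ = 0;
  for (const h of horizons) {
    if (h.dispersion <= 0) continue;
    maxAbsZ = Math.max(maxAbsZ, Math.abs((h.latest - h.baseline) / h.dispersion));
  }
  return maxAbsZ;
}

const surfaces = (horizons: HorizonStats[], minScore = 1.5): boolean =>
  unusualness(horizons) >= minScore;
```

Note the absolute value: a sharp drop scores the same as a sharp rise, which is exactly why the "unusual activity" framing tolerates post-spike reversion as a legitimate signal.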
## Threshold knobs (for reference) -If the Unusual Activity card or the risers/fallers ranking surfaces too many or too few items -once the backfill catches up, the constants are: +If the Unusual Activity card or the risers/fallers ranking surfaces too many or too few items once +the backfill catches up, the constants are: - `ItemVolatilityStatsRepository.GetTopAsync` — `MinAbsMovePct = 0.10m`, `MinAbsZScore = 1.5m`. - `UnusualController` — `DefaultMinScore = 1.5m`. -- `ItemUnusualCandidatesRepository.RebuildQuery` — per-horizon min-sample thresholds inline: - 1 / 3 / 6 / 24 buckets for 1h / 6h / 24h / 7d. -- `DatabaseService.SummariseChunk = TimeSpan.FromDays(7)` — chunk size for the backfill loop. - Drop to 3 days if a chunk times out; raise if backfill is fine and you want fewer chunks. +- `ItemUnusualCandidatesRepository.RebuildQuery` — per-horizon min-sample thresholds inline: 1 / 3 / + 6 / 24 buckets for 1h / 6h / 24h / 7d. +- `DatabaseService.SummariseChunk = TimeSpan.FromDays(7)` — chunk size for the backfill loop. Drop + to 3 days if a chunk times out; raise if backfill is fine and you want fewer chunks. ## Data analysis assets -`data-exports/` (gitignored) holds the CSVs Drew shared mid-session for ranking validation. -Five files dated 2026-04-24: `item_change_log_summaries`, `item_volatility_stats`, `items`, -`listings`, `foreign_stock_items`. Useful when iterating on ranking thresholds — Python + -pandas works fine on the 33MB summaries CSV. They predate V1.22 (so they're 6h-bucket data), -but the ranking logic doesn't care about bucket size for the sanity checks we did. +`data-exports/` (gitignored) holds the CSVs Drew shared mid-session for ranking validation. Five +files dated 2026-04-24: `item_change_log_summaries`, `item_volatility_stats`, `items`, `listings`, +`foreign_stock_items`. Useful when iterating on ranking thresholds — Python + pandas works fine on +the 33MB summaries CSV. 
They predate V1.22 (so they're 6h-bucket data), but the ranking logic +doesn't care about bucket size for the sanity checks we did. From bb764596e6e45df0fd01d750ebf52ba9d7989818 Mon Sep 17 00:00:00 2001 From: Drew Morgan Date: Sat, 25 Apr 2026 02:40:08 +0100 Subject: [PATCH 3/4] feat(alerts): bargain-alert toast + OS notification for >$5k profit listings MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Drew-only v1. When a Torn item market scan finds a listing whose single-unit profit (sell to city) exceeds $5,000, a bargain_alerts row opens; the frontend polls and surfaces it as a persistent top-right toast plus an OS-level notification when the tab is hidden. - V1.25 migration adds bargain_alerts with a partial unique index enforcing one active alert per item at the DB layer. - BargainAlertService.EvaluateAsync hooks DatabaseService.ProcessListingsAsync after ReplaceListingsAsync; threshold lives in ProfitThreshold (5_000). - TornMarketsProcessor overrides a new TryGetPriorityQueueItemAsync hook on QueueProcessorBase, interleaving hot items with the normal queue (50/50, bounded by MaxInterleaves). Synthetic queue items bypass the queue-table mutations. - AlertsController + IBargainAlertAuthService gate /api/alerts/* on BargainAlertsConfiguration.AuthorisedPlayerIds — clean seam for the deferred subscriber-ledger extension. - Frontend BargainAlertsProvider polls /api/alerts/active every 12s (always-on so backgrounded tabs still detect; extra fetch on return-to-visible). Web-Audio chirp when foregrounded; OS notification when hidden. Notification permission prompt fires on first user gesture after authorisation succeeds. ToS / payment-gating extension is deferred — see TODO.md for the analysis. Plan + verification SQL in context/plans/. 
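For reviewers without the processor open, the 50/50 interleave reduces to the sketch below (illustrative TypeScript, not the QueueProcessorBase implementation; `maxInterleaves` stands in for the MaxInterleaves config):

```typescript
// Bounded priority interleave: alternate the hot item with normal queue
// progression, capping the interleaved re-polls so a persistently
// relisted bargain cannot starve the rest of the queue.
function interleave<T>(hot: T, queue: T[], maxInterleaves: number): T[] {
  const out: T[] = [];
  let used = 0;
  for (const next of queue) {
    if (used < maxInterleaves) {
      out.push(hot); // priority re-poll of the hot item
      used++;
    }
    out.push(next); // normal progression continues regardless
  }
  return out;
}
```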
Co-Authored-By: Claude Opus 4.7 --- .../sql/Versioned/V1.25__bargain_alerts.sql | 47 ++++ AGENTS.md | 2 +- CLAUDE.md | 2 +- TODO.md | 147 +++++++++- .../Controllers/AlertsController.cs | 98 +++++++ api/TornTools.Api/appsettings.json | 4 + .../Interfaces/IBargainAlertAuthService.cs | 10 + .../Interfaces/IBargainAlertService.cs | 35 +++ .../ServiceCollectionExtensions.cs | 2 + .../Services/BargainAlertAuthService.cs | 13 + .../Services/BargainAlertService.cs | 88 ++++++ .../Services/DatabaseService.cs | 10 +- .../BargainAlertsConfiguration.cs | 16 ++ .../DataTransferObjects/BargainAlertDto.cs | 16 ++ .../ServiceCollectionExtensions.cs | 15 +- .../Processors/QueueProcessorBase.cs | 51 +++- .../Processors/TornMarketsProcessor.cs | 149 +++++++++- .../Entities/BargainAlertEntity.cs | 48 ++++ .../Interfaces/IBargainAlertRepository.cs | 38 +++ .../Repositories/BargainAlertRepository.cs | 101 +++++++ .../ServiceCollectionExtensions.cs | 1 + .../TornToolsDbContext.cs | 8 + client/src/components/BargainAlertToast.tsx | 147 ++++++++++ client/src/components/Layout.tsx | 2 + client/src/contexts/BargainAlertsContext.tsx | 241 ++++++++++++++++ client/src/hooks/useBargainAlerts.ts | 18 ++ client/src/lib/dotnetapi.ts | 29 ++ client/src/main.tsx | 13 +- client/src/types/bargainAlerts.ts | 11 + ...2026-04-25-bargain-alerts-verification.sql | 67 +++++ context/plans/2026-04-25-bargain-alerts.md | 258 ++++++++++++++++++ context/session-handoff.md | 121 +++++++- 32 files changed, 1784 insertions(+), 24 deletions(-) create mode 100644 .docker/flyway/sql/Versioned/V1.25__bargain_alerts.sql create mode 100644 api/TornTools.Api/Controllers/AlertsController.cs create mode 100644 api/TornTools.Application/Interfaces/IBargainAlertAuthService.cs create mode 100644 api/TornTools.Application/Interfaces/IBargainAlertService.cs create mode 100644 api/TornTools.Application/Services/BargainAlertAuthService.cs create mode 100644 api/TornTools.Application/Services/BargainAlertService.cs create mode 
100644 api/TornTools.Core/Configurations/BargainAlertsConfiguration.cs create mode 100644 api/TornTools.Core/DataTransferObjects/BargainAlertDto.cs create mode 100644 api/TornTools.Persistence/Entities/BargainAlertEntity.cs create mode 100644 api/TornTools.Persistence/Interfaces/IBargainAlertRepository.cs create mode 100644 api/TornTools.Persistence/Repositories/BargainAlertRepository.cs create mode 100644 client/src/components/BargainAlertToast.tsx create mode 100644 client/src/contexts/BargainAlertsContext.tsx create mode 100644 client/src/hooks/useBargainAlerts.ts create mode 100644 client/src/types/bargainAlerts.ts create mode 100644 context/plans/2026-04-25-bargain-alerts-verification.sql create mode 100644 context/plans/2026-04-25-bargain-alerts.md diff --git a/.docker/flyway/sql/Versioned/V1.25__bargain_alerts.sql b/.docker/flyway/sql/Versioned/V1.25__bargain_alerts.sql new file mode 100644 index 0000000..c362a1c --- /dev/null +++ b/.docker/flyway/sql/Versioned/V1.25__bargain_alerts.sql @@ -0,0 +1,47 @@ +-- Bargain alerts (Drew-only v1). One row per distinct bargain event: +-- "a listing appeared on the Torn market whose single-unit profit +-- against the city sell-back price is > $5,000". +-- +-- Idempotency: at most one active alert per item at any time. Enforced +-- by the partial unique index below so the application can INSERT +-- optimistically and rely on the DB to reject dupes; avoids races +-- between concurrent listings-processor workers. +-- +-- Why track expired/dismissed rather than DELETE: +-- - The toast UI wants to transition "active" → "expired" in-place +-- (so the user sees "Too late!" rather than a silent disappearance). +-- - Cheap stats later ("alerts per day", "claim vs miss ratio") fall +-- out of this for free. +-- +-- No user_id column. The authorised-recipient set is a config list +-- resolved at endpoint time; every row of this table is a global event +-- that any authorised user sees. 
When the subscription extension +-- eventually lands, this shape is still correct — the auth gate widens, +-- the row semantics don't change. +CREATE TABLE public.bargain_alerts ( + "id" bigserial PRIMARY KEY, + "item_id" int4 NOT NULL, + "listing_price" int8 NOT NULL, + "market_value" int8 NOT NULL, + "profit" int8 NOT NULL, + "found_at" timestamptz NOT NULL DEFAULT NOW(), + "expired_at" timestamptz NULL, + "dismissed_at" timestamptz NULL, + "status" text NOT NULL DEFAULT 'active', + + CONSTRAINT bargain_alerts_status_check + CHECK (status IN ('active', 'expired', 'dismissed')), + CONSTRAINT bargain_alerts_item_fk + FOREIGN KEY (item_id) REFERENCES public.items(id) ON DELETE CASCADE +); + +-- Hot-path index: the /api/alerts/active endpoint filters by status and +-- orders by recency. +CREATE INDEX IF NOT EXISTS idx_bargain_alerts_status_found + ON public.bargain_alerts (status, found_at DESC); + +-- Idempotency guard: at most one active alert per item. Partial so +-- historical rows (expired/dismissed) don't fight the constraint. +CREATE UNIQUE INDEX IF NOT EXISTS idx_bargain_alerts_item_active + ON public.bargain_alerts (item_id) + WHERE status = 'active'; diff --git a/AGENTS.md b/AGENTS.md index 7a908fb..7fe0eca 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -1,7 +1,7 @@ # GitNexus — Code Intelligence -This project is indexed by GitNexus as **TornTools** (1894 symbols, 5344 relationships, 128 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely. +This project is indexed by GitNexus as **TornTools** (2096 symbols, 6153 relationships, 147 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely. > If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in terminal first. 
diff --git a/CLAUDE.md b/CLAUDE.md index c2f165d..5e7e09e 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -1,7 +1,7 @@ # GitNexus — Code Intelligence -This project is indexed by GitNexus as **TornTools** (1894 symbols, 5344 relationships, 128 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely. +This project is indexed by GitNexus as **TornTools** (2096 symbols, 6153 relationships, 147 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely. > If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in terminal first. diff --git a/TODO.md b/TODO.md index 2f53137..c05dc52 100644 --- a/TODO.md +++ b/TODO.md @@ -16,7 +16,8 @@ - **Document and display price source** - City vs Item Market vs Weav3r ([Trello](https://trello.com/c/GU1LDPMz)) - **Alerts and notifications** - e.g. for rare high-value bargains - ([Trello](https://trello.com/c/02MvgY6i)) + ([Trello](https://trello.com/c/02MvgY6i)). Concrete spec for the bargain-alert variant is below + under **Bargain alerts (subscription feature)** — has design notes and a ToS blocker. - **Improve Markets page UI** - clearer than YATA competitors ([Trello](https://trello.com/c/VkQsEZOC)) - **Remove deleted keys** - if the API returns an error saying the key no longer exists @@ -84,10 +85,10 @@ Audit before dropping. - **Ranking threshold tuning.** The risers/fallers card uses `MinAbsMovePct = 0.10m` and `MinAbsZScore = 1.5m` in `ItemVolatilityStatsRepository.GetTopAsync`; the unusual card uses - `DefaultMinScore = 1.5m` in `UnusualController`. If too few items surface in prod, drop - toward 1.0σ; if too noisy, push toward 2.0σ. Each is a one-line constant. Same goes for the - per-horizon min-sample thresholds (1/3/6/24 buckets) in `RebuildAsync` if any horizon is - pruning more than expected. + `DefaultMinScore = 1.5m` in `UnusualController`. 
If too few items surface in prod, drop toward
+  1.0σ; if too noisy, push toward 2.0σ. Each is a one-line constant. Same goes for the per-horizon
+  min-sample thresholds (1/3/6/24 buckets) in `RebuildAsync` if any horizon is pruning more than
+  expected.
- **Cross-item spike correlation / Torn event-calendar analysis** - Scalpel and Chain Whip
  occasionally spike for reasons that aren't obvious. Hypothesis: some spikes correlate with Torn
  in-game events (e.g. Cannabis on 4/20). Would want cross-item price-movement correlation over time
@@ -184,11 +185,137 @@

---

-## Bugs
-
-### Market price chart sometimes hides Y-axis values
-
-**File:** client (chart component) ([Trello](https://trello.com/c/fxFTT2gN))
+## Bargain alerts
+
+**Drew-only v1 — build complete (M1–M8); verification + prod latency check pending**.
+
+Plan + status at `context/plans/2026-04-25-bargain-alerts.md`; synthetic-test SQL at
+`context/plans/2026-04-25-bargain-alerts-verification.sql`. Threshold is single-unit profit > $5,000
+(`valueSellPrice - listing_price > 5_000`), markets only, authorised via single-element config list
+of player IDs (`appsettings.json` → `BargainAlertsConfiguration.AuthorisedPlayerIds`, seeded with
+3943900). Detection hooks `DatabaseService.ProcessListingsAsync`; snipe-loop is a priority-hook in
+`QueueProcessorBase` that `TornMarketsProcessor` overrides to interleave hot items with the normal
+queue (bounded by `MaxInterleaves`, default 50). Endpoints at
+`/api/alerts/{authorised,active,{id}/dismiss}`, gated to authorised player IDs. Frontend toast at
+top-right via `<BargainAlertToast />` mounted in `Layout`; provider is `BargainAlertsProvider` in
+`main.tsx`; visibility-aware 12s polling; Web Audio synthesised two-tone chirp on first sighting of
+each alert (Drew can swap for an MP3 in `client/public/sounds/`). Side-steps the ToS issue below by
+not exchanging anything for anything.
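The qualification rule above boils down to "cheapest positive-priced listing, flagged when city sell-back minus listing price strictly clears $5,000". A standalone sketch for sanity-checking thresholds (illustrative types; the real logic lives in `BargainAlertService.EvaluateAsync`):

```typescript
interface Listing { price: number }

const PROFIT_THRESHOLD = 5_000; // mirrors BargainAlertService.ProfitThreshold

// Returns the cheapest qualifying listing, or null when no listing's
// per-unit profit against the city sell-back price clears the threshold.
function qualifyingBargain(listings: Listing[], valueSellPrice: number): Listing | null {
  const cheapest: Listing | undefined = [...listings]
    .filter((l) => l.price > 0)        // ignore junk/zero prices
    .sort((a, b) => a.price - b.price)[0];
  if (cheapest === undefined) return null;
  return valueSellPrice - cheapest.price > PROFIT_THRESHOLD ? cheapest : null;
}
```

Note the strict inequality: a listing whose profit is exactly $5,000 does not open an alert, matching the `> 5_000` wording.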
+ +Next: apply V1.25 migration (auto on next backend boot), run synthetic verification per the SQL +file, then a real-world latency check. + +### Subscription extension (deferred — needs ToS sign-off) + +**BLOCKED on Torn staff sign-off — do not extend to subscribers without it.** + +### The feature + +Toast notification when an item appears on the Torn market (and possibly bazaars later) for <10% of +the city sell value. Persistent until dismissed or tab-closed; shows a "time since listed" counter; +plays a distinctive sound so a backgrounded tab is still useful. Click-through deep-links to the +listing. + +Subscriber model: free for Drew, free for anyone who's sent Drew Xanax in-game in the last 30 days +(rolling window). Tracked automatically by polling Drew's events feed for "You were sent some Xanax +from X" lines. + +### ToS blocker — must resolve first + +A separate analysis (Claude.ai, 2026-04-25) flagged two Torn rules that this feature touches: + +1. **RMT clause** on Torn's rule violations page: "exchange of currency or assets on Torn for + real-world money or services". Gating an external-tool feature behind in-game item payment is a + strict-reading violation. Sellers historically permabanned without first-offence warning; buyers + banned + items removed. Staff actively investigate externally hosted services. +2. **API ToS** on torn.com/api.html: explicitly invites operators to **contact staff** if they want + to advertise, accept donations, or charge for usage. Doesn't carve out item-based payments, so + safest assumption is item-gated subscriptions fall under "charging for usage" and need the same + approval. + +**Required action before any implementation work**: email webmaster@torn.com (or the staff contact +linked from the API page) describing the feature, the Xanax-gated subscription model, and asking +explicitly if it's permitted. 
The API ToS invites this conversation in writing — staff are generally +reasonable with established tool authors who ask first, and brutal with people who ship and hope. +Get the "yes" in your inbox before writing any code. + +Sanity-check first: see whether TornStats / TornPDA paid tiers accept in-game items or are +real-money only. That tells us what staff have actually waved through in practice and informs the +email. + +Risk-tiered design fallbacks if staff say no to item-gating: + +- **Voluntary tips, no gating**: feature free for all users; an "If you found this useful, you can + send Xanax to dangerworm" footer. Strictly the RMT clause's "or services" wording is still + ambient, but loads of tools accept tips informally. Lowest-temperature variant. +- **Real-money subscription** via Stripe/etc., with staff approval. Higher friction but + unambiguously inside the ToS framework once approved. +- **Free for everyone** — the feature is the reward, supports the rest of the tool. + +### Design notes (for when/if it's unblocked) + +API access verified 2026-04-25: + +- Custom key created via deep-link + `https://www.torn.com/preferences.php#tab=api?step=addNewKey&title=...&user=events`. Key info + confirms `access.type: "Custom"`, `selections.user: ["profile", "timestamp", "lookup", "events"]`, + all other categories at default. `basic` and `bazaar` selections correctly return error code 16 — + properly scoped. +- The events feed includes the gift signal: + `"You were sent some Xanax from NAME"` with unix timestamp + stable per-event + ID like `yGtGSLLG0qOzXsO3eN4v`. +- Bonus: same feed contains `"NAME bought N x ITEM from your bazaar for $PRICE"` events — a future + "your bazaar just sold" toast comes for free from the same poll. + +API quirk to encode in our client: Torn sometimes returns the _response schema_ literal (e.g. +`event: string[144]`) instead of populated data. Adding `&comment=tornttools-` to the query +reliably switches it to real data. 
The `comment` also surfaces in `/key/log` so we can grep
+TornTools traffic. **Backend client should always send a `comment`.**
+
+Implementation sketch:
+
+- **Server config**: store the custom Drew-events key alongside the existing
+  `TORN_KEY_ENCRYPTION_KEY_V1` pattern — encrypted at rest, KV-vault-backed, rotatable. New env var
+  (name TBD).
+- **Hangfire job**: poll
+  `https://api.torn.com/user/?selections=events&comment=tornttools-events&key=` every N
+  minutes. Parse for `^You were sent some Xanax from <a href=[^>]+XID=(\d+)>([^<]+)</a>$`. Snapshot
+  the regex in a unit test so it screams when Torn rewords.
+- **Subscriber ledger table**:
+  `(event_id PK, sender_xid, sender_name, gift_timestamp, recorded_at)`. Append-only.
+  Active-subscriber set = `SELECT DISTINCT sender_xid WHERE gift_timestamp > now() - 30 days`.
+  Persist locally because Torn's events feed is finite (~recent N entries) — a lapsed subscriber
+  whose gift falls off the feed should still count for the 30-day window.
+- **Bargain detection**: a separate Hangfire job or hook on the existing `TornMarketsProcessor` that
+  flags listings where `listing_price < 0.1 * value_market_price`. Drop into a `bargain_alerts`
+  table with `(item_id, listing_id, listing_price, market_value, found_at, expires_at, status)`.
+- **Snipe-loop poll** (Drew's idea): when a bargain is active, `TornMarketsProcessor` interleaves
+  re-polls of that item with normal queue progression — `[item, next, item, next, …]` — until the
+  listing disappears, then transitions the alert to "expired/sold" and updates the toast. **Bound
+  the loop**: max N consecutive interleaved polls per item before forced fallback to normal order
+  (avoids starving the queue if someone keeps relisting cheap, or if a market glitch persists).
+  Constants TBD; suggest N=30 (~5 min at current cadence) as a starting point.
+- **Push transport (v1)**: short-interval browser polling on `/api/alerts/pending` returning the + authenticated user's currently-active bargains. SignalR/WebSockets/SSE deferred until usage proves + the feature is worth new infra. +- **Toast UI**: persistent (no auto-dismiss), live "time since listed" counter, distinct sound + (asset choice TBD), click-through to the Torn listing URL, transitions to "Too late!" variant when + the backend marks the alert expired. +- **Latency budget to validate**: item listed → next scan picks it up → backend evaluates threshold + → next browser poll picks it up → toast renders → human reacts → click. Worth measuring end-to-end + before committing — if best-case is 30s+, the feature ships disappointment because <10%-of-value + listings get sniped in seconds. Snipe-loop only helps after detection; initial detection latency + is still bounded by current `TornMarketsProcessor` cadence. + +Bazaars deferred from v1: we can't poll them directly (they're scraped via Weav3r at lower cadence), +so detection latency would be hopeless. Markets-only for v1. + +### Bonus: deep-link key creation pattern + +`https://www.torn.com/preferences.php#tab=api?step=addNewKey&title=&<category>=<csv>` is a +pre-fill deep-link into the Torn API prefs page. Useful UX for any future feature where TornTools +asks subscribers to grant a narrow permission — render an "Authorise TornTools" button instead of +"go to prefs → tick these boxes". File under nav/sign-in UX for later. --- diff --git a/api/TornTools.Api/Controllers/AlertsController.cs b/api/TornTools.Api/Controllers/AlertsController.cs new file mode 100644 index 0000000..26b4063 --- /dev/null +++ b/api/TornTools.Api/Controllers/AlertsController.cs @@ -0,0 +1,98 @@ +using Microsoft.AspNetCore.Authorization; +using Microsoft.AspNetCore.Mvc; +using System.Security.Claims; +using TornTools.Application.Interfaces; + +namespace TornTools.Api.Controllers; + +// Bargain alerts (Drew-only v1). 
All endpoints require authentication +// and gate on BargainAlertsConfiguration.AuthorisedPlayerIds. Anyone +// else gets 403 from /active and /dismiss; /authorised returns false +// without leaking that the feature exists for someone else. +[ApiController] +[Route("api/alerts")] +[Authorize] +public class AlertsController( + ILogger<AlertsController> logger, + IBargainAlertService bargainAlertService, + IBargainAlertAuthService bargainAlertAuthService +) : ControllerBase +{ + private readonly ILogger<AlertsController> _logger = logger; + private readonly IBargainAlertService _bargainAlertService = bargainAlertService; + private readonly IBargainAlertAuthService _bargainAlertAuthService = bargainAlertAuthService; + + // Whether the current user is allowed to receive bargain alerts. The + // frontend hits this once at app-mount; if false, it skips polling + // /active entirely. Returns false (rather than 403) so the response + // shape is uniform for authed users regardless of permission. 
+ [HttpGet("authorised")] + [ProducesResponseType(StatusCodes.Status200OK)] + public IActionResult GetAuthorised() + { + if (!TryGetAuthenticatedPlayerId(out var playerId)) + return Ok(new { authorised = false }); + + return Ok(new { authorised = _bargainAlertAuthService.IsAuthorised(playerId) }); + } + + [HttpGet("active")] + [ProducesResponseType(StatusCodes.Status200OK)] + [ProducesResponseType(StatusCodes.Status403Forbidden)] + public async Task<IActionResult> GetActive(CancellationToken cancellationToken) + { + if (!TryGetAuthenticatedPlayerId(out var playerId) || + !_bargainAlertAuthService.IsAuthorised(playerId)) + return Forbid(); + + try + { + var alerts = await _bargainAlertService.GetActiveAlertsAsync(cancellationToken); + return Ok(alerts); + } + catch (Exception ex) + { + _logger.LogError(ex, "Failed to retrieve active bargain alerts."); + return StatusCode(StatusCodes.Status500InternalServerError, new + { + message = "An error occurred while retrieving active bargain alerts." + }); + } + } + + [HttpPost("{id:long}/dismiss")] + [ProducesResponseType(StatusCodes.Status204NoContent)] + [ProducesResponseType(StatusCodes.Status403Forbidden)] + [ProducesResponseType(StatusCodes.Status404NotFound)] + public async Task<IActionResult> Dismiss(long id, CancellationToken cancellationToken) + { + if (!TryGetAuthenticatedPlayerId(out var playerId) || + !_bargainAlertAuthService.IsAuthorised(playerId)) + return Forbid(); + + try + { + var dismissed = await _bargainAlertService.DismissAsync(id, cancellationToken); + + // Idempotent: a second dismiss on a non-active row returns 204. + // 404 only when the row genuinely doesn't exist would require an + // extra read. Not worth the trip — the toast won't ever fire + // dismiss against a missing id in practice. 
+ return NoContent(); + } + catch (Exception ex) + { + _logger.LogError(ex, "Failed to dismiss bargain alert {AlertId}.", id); + return StatusCode(StatusCodes.Status500InternalServerError, new + { + message = "An error occurred while dismissing the bargain alert." + }); + } + } + + private bool TryGetAuthenticatedPlayerId(out long playerId) + { + var sub = User.FindFirstValue(ClaimTypes.NameIdentifier); + return long.TryParse(sub, out playerId); + } +} diff --git a/api/TornTools.Api/appsettings.json b/api/TornTools.Api/appsettings.json index fc62e06..ce1e0e7 100644 --- a/api/TornTools.Api/appsettings.json +++ b/api/TornTools.Api/appsettings.json @@ -19,5 +19,9 @@ }, "Weav3rBazaarsProcessorConfiguration": { "WorkerCount": 1 + }, + "BargainAlertsConfiguration": { + "AuthorisedPlayerIds": [3943900], + "MaxInterleaves": 50 } } diff --git a/api/TornTools.Application/Interfaces/IBargainAlertAuthService.cs b/api/TornTools.Application/Interfaces/IBargainAlertAuthService.cs new file mode 100644 index 0000000..5504cd8 --- /dev/null +++ b/api/TornTools.Application/Interfaces/IBargainAlertAuthService.cs @@ -0,0 +1,10 @@ +namespace TornTools.Application.Interfaces; + +public interface IBargainAlertAuthService +{ + // Whether the given Torn player_id is authorised to receive bargain + // alerts. v1 consults a config list; the future subscriber-ledger + // extension replaces this with a 30-day query against received-Xanax + // events. 
+ bool IsAuthorised(long playerId); +} diff --git a/api/TornTools.Application/Interfaces/IBargainAlertService.cs b/api/TornTools.Application/Interfaces/IBargainAlertService.cs new file mode 100644 index 0000000..802ac62 --- /dev/null +++ b/api/TornTools.Application/Interfaces/IBargainAlertService.cs @@ -0,0 +1,35 @@ +using TornTools.Core.DataTransferObjects; +using TornTools.Core.Enums; + +namespace TornTools.Application.Interfaces; + +public interface IBargainAlertService +{ + // Called from the listings-write path after listings have been + // replaced. Decides whether to open, leave, or expire an active alert + // for this item: + // + // * source != Torn → no-op (markets-only in v1) + // * any new listing's profit > threshold AND no active alert + // → open a new alert at the cheapest + // qualifying price + // * no qualifying listing AND active alert exists + // → expire the existing alert + // * any other combination → no-op + // + // Idempotent: safe to call multiple times for the same listings batch. + Task EvaluateAsync( + Source source, + int itemId, + IReadOnlyList<ListingDto> newListings, + CancellationToken stoppingToken); + + // All currently-active alerts, ordered by FoundAt DESC. The v1 toast + // UI renders this list directly. + Task<IEnumerable<BargainAlertDto>> GetActiveAlertsAsync(CancellationToken stoppingToken); + + // Mark an alert dismissed. Returns true if the alert was active and + // is now dismissed; false if the alert doesn't exist or wasn't active + // (already expired/dismissed). Idempotent. 
+ Task<bool> DismissAsync(long alertId, CancellationToken stoppingToken); +} diff --git a/api/TornTools.Application/ServiceCollectionExtensions.cs b/api/TornTools.Application/ServiceCollectionExtensions.cs index 6113b44..1c29996 100644 --- a/api/TornTools.Application/ServiceCollectionExtensions.cs +++ b/api/TornTools.Application/ServiceCollectionExtensions.cs @@ -45,6 +45,8 @@ public static IServiceCollection AddDependencies(this IServiceCollection service services.AddScoped<IApiCallHandlerResolver, ApiCallHandlerResolver>(); services.AddScoped<IDatabaseService, DatabaseService>(); + services.AddScoped<IBargainAlertService, BargainAlertService>(); + services.AddSingleton<IBargainAlertAuthService, BargainAlertAuthService>(); services.AddSingleton<IApiKeyProtector, ApiKeyProtector>(); diff --git a/api/TornTools.Application/Services/BargainAlertAuthService.cs b/api/TornTools.Application/Services/BargainAlertAuthService.cs new file mode 100644 index 0000000..34af9d7 --- /dev/null +++ b/api/TornTools.Application/Services/BargainAlertAuthService.cs @@ -0,0 +1,13 @@ +using TornTools.Application.Interfaces; +using TornTools.Core.Configurations; + +namespace TornTools.Application.Services; + +public class BargainAlertAuthService( + BargainAlertsConfiguration configuration +) : IBargainAlertAuthService +{ + private readonly HashSet<long> _authorised = [.. 
configuration.AuthorisedPlayerIds]; + + public bool IsAuthorised(long playerId) => _authorised.Contains(playerId); +} diff --git a/api/TornTools.Application/Services/BargainAlertService.cs b/api/TornTools.Application/Services/BargainAlertService.cs new file mode 100644 index 0000000..329fc78 --- /dev/null +++ b/api/TornTools.Application/Services/BargainAlertService.cs @@ -0,0 +1,88 @@ +using Microsoft.Extensions.Logging; +using TornTools.Application.Interfaces; +using TornTools.Core.DataTransferObjects; +using TornTools.Core.Enums; +using TornTools.Persistence.Interfaces; + +namespace TornTools.Application.Services; + +public class BargainAlertService( + ILogger<BargainAlertService> logger, + IBargainAlertRepository bargainAlertRepository, + IItemRepository itemRepository +) : IBargainAlertService +{ + // Threshold: a listing is a "bargain" when the per-unit profit + // against the city sell-back price exceeds this. City sell-back has + // no tax so this is the realisable per-unit profit. + public const long ProfitThreshold = 5_000; + + private readonly ILogger<BargainAlertService> _logger = logger; + private readonly IBargainAlertRepository _bargainAlertRepository = bargainAlertRepository; + private readonly IItemRepository _itemRepository = itemRepository; + + public async Task EvaluateAsync( + Source source, + int itemId, + IReadOnlyList<ListingDto> newListings, + CancellationToken stoppingToken) + { + if (source != Source.Torn) + { + return; + } + + var item = await _itemRepository.GetItemAsync(itemId, stoppingToken); + if (item?.ValueSellPrice is null or <= 0) + { + // Item has no city sell-back baseline — no way to evaluate profit. + // Skip silently; common for masked / non-tradable items. 
+ return; + } + + var sellPrice = item.ValueSellPrice.Value; + var existing = await _bargainAlertRepository.GetActiveByItemAsync(itemId, stoppingToken); + + var cheapest = newListings + .Where(l => l.Price > 0) + .OrderBy(l => l.Price) + .FirstOrDefault(); + + var qualifies = cheapest is not null && (sellPrice - cheapest.Price) > ProfitThreshold; + + if (qualifies && existing is null) + { + var listingPrice = cheapest!.Price; + _logger.LogInformation( + "Bargain detected for item {ItemId}: listing {ListingPrice} vs sell {SellPrice} (profit {Profit}).", + itemId, listingPrice, sellPrice, sellPrice - listingPrice); + + try + { + await _bargainAlertRepository.CreateAsync(itemId, listingPrice, sellPrice, stoppingToken); + } + catch (Exception ex) + { + // Likely a unique-index race — another worker beat us to it. + // Treat as success (the alert exists either way). + _logger.LogDebug(ex, "Bargain alert insert race for item {ItemId} — alert already exists.", itemId); + } + } + else if (!qualifies && existing is not null) + { + _logger.LogInformation( + "Bargain expired for item {ItemId} (alert {AlertId}): no qualifying listing in latest scan.", + itemId, existing.Id); + + await _bargainAlertRepository.MarkExpiredAsync(existing.Id, stoppingToken); + } + // qualifies && existing is not null → idempotent leave-alone + // !qualifies && existing is null → nothing to do + } + + public Task<IEnumerable<BargainAlertDto>> GetActiveAlertsAsync(CancellationToken stoppingToken) + => _bargainAlertRepository.GetAllActiveAsync(stoppingToken); + + public Task<bool> DismissAsync(long alertId, CancellationToken stoppingToken) + => _bargainAlertRepository.MarkDismissedAsync(alertId, stoppingToken); +} diff --git a/api/TornTools.Application/Services/DatabaseService.cs b/api/TornTools.Application/Services/DatabaseService.cs index 0795256..3c6def6 100644 --- a/api/TornTools.Application/Services/DatabaseService.cs +++ b/api/TornTools.Application/Services/DatabaseService.cs @@ -20,7 +20,8 @@ 
public class DatabaseService( IItemRepository itemRepository, IListingRepository listingRepository, IQueueItemRepository queueItemRepository, - IUserRepository userRepository + IUserRepository userRepository, + IBargainAlertService bargainAlertService ) : IDatabaseService { // 1-hour buckets give the multi-horizon z-score in the Unusual Activity @@ -40,6 +41,7 @@ IUserRepository userRepository private readonly IListingRepository _listingRepository = listingRepository ?? throw new ArgumentNullException(nameof(listingRepository)); private readonly IQueueItemRepository _queueItemRepository = queueItemRepository ?? throw new ArgumentNullException(nameof(queueItemRepository)); private readonly IUserRepository _userRepository = userRepository ?? throw new ArgumentNullException(nameof(userRepository)); + private readonly IBargainAlertService _bargainAlertService = bargainAlertService ?? throw new ArgumentNullException(nameof(bargainAlertService)); public Task<IEnumerable<ForeignStockItemDto>> GetForeignStockItemsAsync(CancellationToken cancellationToken) { @@ -275,6 +277,12 @@ await CreateItemChangeLogAsync(new ItemChangeLogDto }, stoppingToken); await ReplaceListingsAsync(source, itemId, newListings, stoppingToken); + + // Detection runs after persistence: a fresh scan that flips an + // item between "qualifies as a bargain" and "doesn't" should be + // reflected in bargain_alerts. Service is source-scoped (Torn + // market only in v1) and idempotent. + await _bargainAlertService.EvaluateAsync(source, itemId, newListings, stoppingToken); } else { diff --git a/api/TornTools.Core/Configurations/BargainAlertsConfiguration.cs b/api/TornTools.Core/Configurations/BargainAlertsConfiguration.cs new file mode 100644 index 0000000..9a3075b --- /dev/null +++ b/api/TornTools.Core/Configurations/BargainAlertsConfiguration.cs @@ -0,0 +1,16 @@ +namespace TornTools.Core.Configurations; + +// Drew-only v1: a single hardcoded list of authorised Torn player IDs. 
+// The future subscriber-ledger extension replaces the list lookup with +// a 30-day-window query against bargain alerts events; the call site +// (IBargainAlertAuthService.IsAuthorised) stays the same. +public class BargainAlertsConfiguration +{ + public List<long> AuthorisedPlayerIds { get; set; } = []; + + // Snipe-loop bound: max consecutive interleaved re-polls of the same + // hot item before the processor falls back to normal queue cadence. + // Keeps a persistently-cheap-relisted item from starving everything + // else. Dial down if the queue feels starved in prod. + public int MaxInterleaves { get; set; } = 50; +} diff --git a/api/TornTools.Core/DataTransferObjects/BargainAlertDto.cs b/api/TornTools.Core/DataTransferObjects/BargainAlertDto.cs new file mode 100644 index 0000000..cbe78b7 --- /dev/null +++ b/api/TornTools.Core/DataTransferObjects/BargainAlertDto.cs @@ -0,0 +1,16 @@ +namespace TornTools.Core.DataTransferObjects; + +// One row per detected bargain. The frontend joins with ItemsContext +// for name/image, so this DTO stays minimal — no item join here. +public class BargainAlertDto +{ + public long Id { get; set; } + public required int ItemId { get; set; } + public required long ListingPrice { get; set; } + public required long MarketValue { get; set; } + public required long Profit { get; set; } + public required DateTimeOffset FoundAt { get; set; } + public DateTimeOffset? ExpiredAt { get; set; } + public DateTimeOffset? 
DismissedAt { get; set; } + public required string Status { get; set; } +} diff --git a/api/TornTools.Core/ServiceCollectionExtensions.cs b/api/TornTools.Core/ServiceCollectionExtensions.cs index 542bfaf..17b9da2 100644 --- a/api/TornTools.Core/ServiceCollectionExtensions.cs +++ b/api/TornTools.Core/ServiceCollectionExtensions.cs @@ -16,7 +16,20 @@ public static IServiceCollection AddConfigurations(this IServiceCollection servi .AddTornKeyEncryptionConfiguration(configuration) .AddWeav3rApiCallerConfiguration(configuration) .AddTornMarketsProcessorConfiguration(configuration) - .AddWeav3rBazaarsProcessorConfiguration(configuration); + .AddWeav3rBazaarsProcessorConfiguration(configuration) + .AddBargainAlertsConfiguration(configuration); + } + + private static IServiceCollection AddBargainAlertsConfiguration(this IServiceCollection services, IConfiguration configuration) + { + services.Configure<BargainAlertsConfiguration>( + configuration.GetSection(nameof(BargainAlertsConfiguration))); + + services.AddSingleton(sp => + sp.GetRequiredService<IOptions<BargainAlertsConfiguration>>().Value + ); + + return services; } private static IServiceCollection AddLocalConfiguration(this IServiceCollection services, IConfiguration configuration) diff --git a/api/TornTools.Cron/Processors/QueueProcessorBase.cs b/api/TornTools.Cron/Processors/QueueProcessorBase.cs index 6fa9e16..08c988e 100644 --- a/api/TornTools.Cron/Processors/QueueProcessorBase.cs +++ b/api/TornTools.Cron/Processors/QueueProcessorBase.cs @@ -33,6 +33,22 @@ ILogger logger /// <summary>Return the per-worker inter-call delay in milliseconds.</summary> protected abstract Task<int> GetDelayMillisecondsAsync(IDatabaseService db, CancellationToken ct); + /// <summary> + /// Optional priority-work hook for subclasses. Return a non-null + /// QueueItemDto to bypass the normal queue for this tick — used by + /// TornMarketsProcessor to interleave bargain-alert hot items. 
+ /// + /// Items returned here are *synthetic* (not persisted in the queue + /// table). The worker skips IncrementQueueItemAttempts and + /// RemoveQueueItemAsync for them. Subclasses are responsible for + /// pacing — return null on alternate ticks if you don't want to + /// double the API call rate against the configured rate limit. + /// </summary> + protected virtual Task<QueueItemDto?> TryGetPriorityQueueItemAsync( + IServiceProvider scopedServices, + CancellationToken ct) + => Task.FromResult<QueueItemDto?>(null); + protected override async Task ExecuteAsync(CancellationToken stoppingToken) { _logger.LogInformation("{Processor} starting with {WorkerCount} workers.", GetType().Name, WorkerCount); @@ -55,14 +71,32 @@ private async Task RunWorkerAsync(int workerId, CancellationToken stoppingToken) var databaseService = scope.ServiceProvider.GetRequiredService<IDatabaseService>(); QueueItemDto? queueItem = null; + var isPriorityWork = false; try { + // Priority hook — subclasses can preempt the normal queue. + // When this returns non-null we skip the queue-table mutations + // (no Id to increment / remove against). + try + { + queueItem = await TryGetPriorityQueueItemAsync(scope.ServiceProvider, stoppingToken); + isPriorityWork = queueItem is not null; + } + catch (Exception ex) + { + _logger.LogError(ex, "[Worker {WorkerId}] {Processor} priority hook threw — falling back to normal queue.", workerId, GetType().Name); + queueItem = null; + } + // Dequeue try { - queueItem = await databaseService.GetNextQueueItem(CallType, stoppingToken); + if (queueItem is null) + { + queueItem = await databaseService.GetNextQueueItem(CallType, stoppingToken); + } - if (queueItem?.Id is null) + if (!isPriorityWork && queueItem?.Id is null) { // No Pending items - either other workers hold them, or the queue is exhausted. // Repopulate only once all in-flight work for this call type has landed. 
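The alternation-plus-budget policy that `TornMarketsProcessor` implements against this hook can be sketched independently of the C# types. This is an illustrative TypeScript model (names like `pickHotItem` are not from the patch), not the shipped implementation: even ticks always yield to the normal queue, odd ticks poll the hot item with the fewest consecutive polls, and an item that spends its interleave budget stops fast-pathing until its counter is reset.

```typescript
// Sketch of the interleave policy (illustrative names, not the shipped
// C# code). `counts` tracks consecutive priority polls per hot item and
// is mutated in place, mirroring _consecutivePollCounts.
function pickHotItem(
  hotIds: number[],
  counts: Map<number, number>,
  tick: number,
  maxInterleaves: number,
): number | null {
  // Even ticks always yield to the normal queue: at most half of all
  // API calls go to bargain re-polls.
  if (hotIds.length === 0 || tick % 2 === 0) return null

  let bestId: number | null = null
  let bestCount = Number.MAX_SAFE_INTEGER
  for (const id of hotIds) {
    const count = counts.get(id) ?? 0
    if (count >= maxInterleaves) continue // budget spent: normal queue only
    if (count < bestCount) {
      bestId = id
      bestCount = count
    }
  }

  if (bestId !== null) counts.set(bestId, bestCount + 1)
  return bestId // null when every hot item has exhausted its budget
}
```

With two hot items this round-robins them as their counts equalise, which matches the behaviour the `TryGetPriorityQueueItemAsync` comments describe for the multi-item case.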
@@ -100,7 +134,8 @@ private async Task RunWorkerAsync(int workerId, CancellationToken stoppingToken) continue; } - var itemId = queueItem.Id.Value; + // Priority work has no DB queue id; use Empty as a marker. + var itemId = isPriorityWork ? Guid.Empty : queueItem!.Id!.Value; // Process var success = false; @@ -115,6 +150,16 @@ private async Task RunWorkerAsync(int workerId, CancellationToken stoppingToken) _logger.LogError(ex, "[Worker {WorkerId}] {Processor} unhandled exception processing {QueueItemId}. Marking for retry.", workerId, GetType().Name, itemId); } + // Priority work isn't a queue row — skip the queue-table mutations. + if (isPriorityWork) + { + if (!success) + { + _logger.LogDebug("[Worker {WorkerId}] {Processor} priority call returned no success. Will retry on the next priority tick.", workerId, GetType().Name); + } + continue; + } + // Increment attempt count try { diff --git a/api/TornTools.Cron/Processors/TornMarketsProcessor.cs b/api/TornTools.Cron/Processors/TornMarketsProcessor.cs index b345e12..7c9bda3 100644 --- a/api/TornTools.Cron/Processors/TornMarketsProcessor.cs +++ b/api/TornTools.Cron/Processors/TornMarketsProcessor.cs @@ -2,7 +2,10 @@ using Microsoft.Extensions.Logging; using TornTools.Application.Interfaces; using TornTools.Core.Configurations; +using TornTools.Core.Constants; +using TornTools.Core.DataTransferObjects; using TornTools.Core.Enums; +using TornTools.Persistence.Interfaces; namespace TornTools.Cron.Processors; @@ -10,11 +13,35 @@ public class TornMarketsProcessor( IServiceScopeFactory scopeFactory, ILogger<TornMarketsProcessor> logger, TornMarketsProcessorConfiguration processorConfig, - TornApiCallerConfiguration tornApiCallerConfig + TornApiCallerConfiguration tornApiCallerConfig, + BargainAlertsConfiguration bargainAlertsConfig ) : QueueProcessorBase(scopeFactory, logger) { private readonly int _workerCount = Math.Max(1, processorConfig.WorkerCount); private readonly TornApiCallerConfiguration _tornApiCallerConfig = 
tornApiCallerConfig; + private readonly int _maxInterleaves = Math.Max(1, bargainAlertsConfig.MaxInterleaves); + + // Snipe-loop state. Refreshed periodically from the bargain_alerts + // table. Held in-memory rather than queried per-tick because: + // - the active set turns over slowly (alerts open on detection, + // close on the next no-qualifying-listing scan or on dismiss) + // - querying every tick adds a DB hit to the hot path + // + // Guarded by _stateLock — multi-worker case is rare in practice + // (default WorkerCount=1) but we want correctness if it grows. + private static readonly TimeSpan HotSetRefreshInterval = TimeSpan.FromSeconds(30); + private readonly Lock _stateLock = new(); + private DateTimeOffset _hotSetLastSyncedAt = DateTimeOffset.MinValue; + private HashSet<int> _hotItemIds = []; + private readonly Dictionary<int, int> _consecutivePollCounts = []; + + // Tick alternation: even ticks return null (let the normal queue + // advance), odd ticks return a hot item if one is available. Net + // effect: at most half of API calls go to bargain re-polls, keeping + // us within the rate budget that GetDelayMillisecondsAsync was sized + // for. Across multiple workers this still alternates roughly 50/50 + // because the counter is processor-wide. + private long _tickCounter = 0; protected override ApiCallType CallType => ApiCallType.TornMarketListings; protected override int WorkerCount => _workerCount; @@ -27,4 +54,124 @@ protected override async Task<int> GetDelayMillisecondsAsync(IDatabaseService db var apiKeyCount = await db.GetApiKeyCountAsync(ct); return CalculateDelayMilliseconds(_tornApiCallerConfig.MaxCallsPerMinute, apiKeyCount, _workerCount); } + + protected override async Task<QueueItemDto?> TryGetPriorityQueueItemAsync( + IServiceProvider scopedServices, + CancellationToken ct) + { + await RefreshHotSetIfDueAsync(scopedServices, ct); + + int? 
hotItemId; + lock (_stateLock) + { + if (_hotItemIds.Count == 0) + { + return null; + } + + // Alternate ticks. Odd = priority opportunity, even = let the + // normal queue have it. Increment unconditionally so callers + // can't accidentally skew the cadence by sometimes-not-checking. + var tick = Interlocked.Increment(ref _tickCounter); + if (tick % 2 == 0) + { + return null; + } + + // Pick the hot item that has been polled least often this run. + // With a single hot item this means it always wins until it + // hits MaxInterleaves; with multiple hot items it round-robins + // them as their counts equalise. + int? bestId = null; + var bestCount = int.MaxValue; + foreach (var id in _hotItemIds) + { + var count = _consecutivePollCounts.GetValueOrDefault(id, 0); + if (count >= _maxInterleaves) + { + continue; + } + if (count < bestCount) + { + bestId = id; + bestCount = count; + } + } + + if (bestId is null) + { + // Every active hot item has hit the interleave bound this + // run. Stop fast-pathing them — they'll get picked up by the + // normal queue cycle. Counters reset only when items leave + // the active set (next refresh) and re-join, so a persistently + // sticky bargain doesn't keep starving the queue. 
+ return null; + } + + _consecutivePollCounts[bestId.Value] = + _consecutivePollCounts.GetValueOrDefault(bestId.Value, 0) + 1; + hotItemId = bestId; + } + + return BuildSyntheticTornMarketQueueItem(hotItemId.Value); + } + + private async Task RefreshHotSetIfDueAsync(IServiceProvider scopedServices, CancellationToken ct) + { + bool refreshNeeded; + lock (_stateLock) + { + refreshNeeded = (DateTimeOffset.UtcNow - _hotSetLastSyncedAt) >= HotSetRefreshInterval; + } + if (!refreshNeeded) + { + return; + } + + IReadOnlyCollection<int> ids; + try + { + var alertRepository = scopedServices.GetRequiredService<IBargainAlertRepository>(); + ids = await alertRepository.GetActiveItemIdsAsync(ct); + } + catch (Exception) + { + // Keep the previous set on failure — better stale than empty. + lock (_stateLock) + { + _hotSetLastSyncedAt = DateTimeOffset.UtcNow; + } + throw; + } + + lock (_stateLock) + { + var fresh = new HashSet<int>(ids); + // Drop counters for items that have left the active set so a + // re-opening bargain gets a fresh interleave budget. + var removed = _consecutivePollCounts.Keys + .Where(k => !fresh.Contains(k)) + .ToList(); + foreach (var key in removed) + { + _consecutivePollCounts.Remove(key); + } + _hotItemIds = fresh; + _hotSetLastSyncedAt = DateTimeOffset.UtcNow; + } + } + + // Mirrors DatabaseService.BuildTornMarketQueueItem but without an + // Id — the synthetic item never enrols in the queue table. The + // caller in QueueProcessorBase explicitly skips queue-table writes + // for priority work. 
+ private static QueueItemDto BuildSyntheticTornMarketQueueItem(int itemId) + { + return new QueueItemDto + { + CallType = ApiCallType.TornMarketListings, + EndpointUrl = string.Format(TornApiConstants.ItemListings, itemId), + HttpMethod = "GET", + }; + } } diff --git a/api/TornTools.Persistence/Entities/BargainAlertEntity.cs b/api/TornTools.Persistence/Entities/BargainAlertEntity.cs new file mode 100644 index 0000000..e16fb77 --- /dev/null +++ b/api/TornTools.Persistence/Entities/BargainAlertEntity.cs @@ -0,0 +1,48 @@ +using System.ComponentModel.DataAnnotations.Schema; +using TornTools.Core.DataTransferObjects; + +namespace TornTools.Persistence.Entities; + +[Table("bargain_alerts", Schema = "public")] +public class BargainAlertEntity +{ + [Column("id")] + public long Id { get; set; } + + [Column("item_id")] + public required int ItemId { get; set; } + + [Column("listing_price")] + public required long ListingPrice { get; set; } + + [Column("market_value")] + public required long MarketValue { get; set; } + + [Column("profit")] + public required long Profit { get; set; } + + [Column("found_at")] + public required DateTimeOffset FoundAt { get; set; } + + [Column("expired_at")] + public DateTimeOffset? ExpiredAt { get; set; } + + [Column("dismissed_at")] + public DateTimeOffset? 
DismissedAt { get; set; } + + [Column("status")] + public required string Status { get; set; } + + public BargainAlertDto AsDto() => new() + { + Id = Id, + ItemId = ItemId, + ListingPrice = ListingPrice, + MarketValue = MarketValue, + Profit = Profit, + FoundAt = FoundAt, + ExpiredAt = ExpiredAt, + DismissedAt = DismissedAt, + Status = Status, + }; +} diff --git a/api/TornTools.Persistence/Interfaces/IBargainAlertRepository.cs b/api/TornTools.Persistence/Interfaces/IBargainAlertRepository.cs new file mode 100644 index 0000000..540b211 --- /dev/null +++ b/api/TornTools.Persistence/Interfaces/IBargainAlertRepository.cs @@ -0,0 +1,38 @@ +using TornTools.Core.DataTransferObjects; + +namespace TornTools.Persistence.Interfaces; + +public interface IBargainAlertRepository +{ + // Insert a new active alert. Returns the populated DTO (with Id + + // FoundAt assigned by the DB). Throws on idempotency-guard violation + // (unique partial index on item_id WHERE status='active') — the + // caller is expected to pre-check via GetActiveByItemAsync. + Task<BargainAlertDto> CreateAsync(int itemId, long listingPrice, long marketValue, CancellationToken stoppingToken); + + // Returns the single active alert for an item, or null if none. + // Used by the detection hook for idempotency and by the dismiss + // endpoint for existence checks. + Task<BargainAlertDto?> GetActiveByItemAsync(int itemId, CancellationToken stoppingToken); + + // Returns the row by primary key, or null. Used by the dismiss + // endpoint to disambiguate "already dismissed" vs "doesn't exist". + Task<BargainAlertDto?> GetByIdAsync(long id, CancellationToken stoppingToken); + + // Returns all currently-active alerts ordered by FoundAt DESC. + // This is the payload the toast UI polls. + Task<IEnumerable<BargainAlertDto>> GetAllActiveAsync(CancellationToken stoppingToken); + + // Transition an alert from active → expired (listing has gone). Sets + // expired_at = now. No-op if the row is not active. 
+ Task<bool> MarkExpiredAsync(long id, CancellationToken stoppingToken); + + // Transition an alert from active → dismissed (user clicked dismiss). + // Sets dismissed_at = now. No-op if the row is not active. + Task<bool> MarkDismissedAsync(long id, CancellationToken stoppingToken); + + // Returns the set of item IDs that currently have an active alert. + // Used by TornMarketsProcessor to maintain its hot-set for the + // snipe-loop without loading full DTOs. + Task<IReadOnlyCollection<int>> GetActiveItemIdsAsync(CancellationToken stoppingToken); +} diff --git a/api/TornTools.Persistence/Repositories/BargainAlertRepository.cs b/api/TornTools.Persistence/Repositories/BargainAlertRepository.cs new file mode 100644 index 0000000..9a2f288 --- /dev/null +++ b/api/TornTools.Persistence/Repositories/BargainAlertRepository.cs @@ -0,0 +1,101 @@ +using Microsoft.EntityFrameworkCore; +using Microsoft.Extensions.Logging; +using TornTools.Core.DataTransferObjects; +using TornTools.Persistence.Entities; +using TornTools.Persistence.Interfaces; + +namespace TornTools.Persistence.Repositories; + +public class BargainAlertRepository( + ILogger<BargainAlertRepository> logger, + TornToolsDbContext dbContext +) : RepositoryBase<BargainAlertRepository>(logger, dbContext), IBargainAlertRepository +{ + private const string StatusActive = "active"; + private const string StatusExpired = "expired"; + private const string StatusDismissed = "dismissed"; + + public async Task<BargainAlertDto> CreateAsync( + int itemId, + long listingPrice, + long marketValue, + CancellationToken stoppingToken) + { + var entity = new BargainAlertEntity + { + ItemId = itemId, + ListingPrice = listingPrice, + MarketValue = marketValue, + Profit = marketValue - listingPrice, + FoundAt = DateTimeOffset.UtcNow, + Status = StatusActive, + }; + + DbContext.Set<BargainAlertEntity>().Add(entity); + await DbContext.SaveChangesAsync(stoppingToken); + + return entity.AsDto(); + } + + public async Task<BargainAlertDto?> 
GetActiveByItemAsync(int itemId, CancellationToken stoppingToken) + { + var entity = await DbContext.Set<BargainAlertEntity>() + .AsNoTracking() + .Where(a => a.ItemId == itemId && a.Status == StatusActive) + .FirstOrDefaultAsync(stoppingToken); + return entity?.AsDto(); + } + + public async Task<BargainAlertDto?> GetByIdAsync(long id, CancellationToken stoppingToken) + { + var entity = await DbContext.Set<BargainAlertEntity>() + .AsNoTracking() + .FirstOrDefaultAsync(a => a.Id == id, stoppingToken); + return entity?.AsDto(); + } + + public async Task<IEnumerable<BargainAlertDto>> GetAllActiveAsync(CancellationToken stoppingToken) + { + var entities = await DbContext.Set<BargainAlertEntity>() + .AsNoTracking() + .Where(a => a.Status == StatusActive) + .OrderByDescending(a => a.FoundAt) + .ToListAsync(stoppingToken); + return entities.Select(e => e.AsDto()); + } + + public async Task<bool> MarkExpiredAsync(long id, CancellationToken stoppingToken) + { + // Single-statement update so we don't fight the unique partial + // index on (item_id) WHERE status='active' if a fresh alert opened + // for the same item between read and write. 
+ var rows = await DbContext.Set<BargainAlertEntity>() + .Where(a => a.Id == id && a.Status == StatusActive) + .ExecuteUpdateAsync(s => s + .SetProperty(a => a.Status, StatusExpired) + .SetProperty(a => a.ExpiredAt, DateTimeOffset.UtcNow), + stoppingToken); + return rows > 0; + } + + public async Task<bool> MarkDismissedAsync(long id, CancellationToken stoppingToken) + { + var rows = await DbContext.Set<BargainAlertEntity>() + .Where(a => a.Id == id && a.Status == StatusActive) + .ExecuteUpdateAsync(s => s + .SetProperty(a => a.Status, StatusDismissed) + .SetProperty(a => a.DismissedAt, DateTimeOffset.UtcNow), + stoppingToken); + return rows > 0; + } + + public async Task<IReadOnlyCollection<int>> GetActiveItemIdsAsync(CancellationToken stoppingToken) + { + return await DbContext.Set<BargainAlertEntity>() + .AsNoTracking() + .Where(a => a.Status == StatusActive) + .Select(a => a.ItemId) + .Distinct() + .ToListAsync(stoppingToken); + } +} diff --git a/api/TornTools.Persistence/ServiceCollectionExtensions.cs b/api/TornTools.Persistence/ServiceCollectionExtensions.cs index 5c53f93..9fa1343 100644 --- a/api/TornTools.Persistence/ServiceCollectionExtensions.cs +++ b/api/TornTools.Persistence/ServiceCollectionExtensions.cs @@ -23,6 +23,7 @@ public static IServiceCollection AddDatabase(this IServiceCollection services, I services.AddScoped<IListingRepository, ListingRepository>(); services.AddScoped<IQueueItemRepository, QueueItemRepository>(); services.AddScoped<IUserRepository, UserRepository>(); + services.AddScoped<IBargainAlertRepository, BargainAlertRepository>(); return services; } diff --git a/api/TornTools.Persistence/TornToolsDbContext.cs b/api/TornTools.Persistence/TornToolsDbContext.cs index cc35f8f..2854566 100644 --- a/api/TornTools.Persistence/TornToolsDbContext.cs +++ b/api/TornTools.Persistence/TornToolsDbContext.cs @@ -20,6 +20,7 @@ DbContextOptions<TornToolsDbContext> options public DbSet<QueueItemEntity> QueueItems { get; set; } = null!; public 
DbSet<UserEntity> Users { get; set; } = null!; public DbSet<UserFavouriteItemEntity> UserFavourites { get; set; } = null!; + public DbSet<BargainAlertEntity> BargainAlerts { get; set; } = null!; protected override void OnModelCreating(ModelBuilder modelBuilder) { @@ -134,6 +135,13 @@ protected override void OnModelCreating(ModelBuilder modelBuilder) .HasPrincipalKey(u => u.Id); }); + modelBuilder.Entity<BargainAlertEntity>(e => + { + e.HasKey(x => x.Id); + e.Property(x => x.Status).IsRequired(); + e.HasIndex(x => new { x.Status, x.FoundAt }); + }); + // View mapping modelBuilder.Entity<BazaarSummaryView>().HasNoKey(); diff --git a/client/src/components/BargainAlertToast.tsx b/client/src/components/BargainAlertToast.tsx new file mode 100644 index 0000000..9701d9a --- /dev/null +++ b/client/src/components/BargainAlertToast.tsx @@ -0,0 +1,147 @@ +import { Avatar, Box, Chip, IconButton, Paper, Stack, Tooltip, Typography } from '@mui/material' +import CloseIcon from '@mui/icons-material/Close' +import LocalFireDepartmentIcon from '@mui/icons-material/LocalFireDepartment' +import OpenInNewIcon from '@mui/icons-material/OpenInNew' +import { useEffect, useState } from 'react' +import { useBargainAlerts } from '../hooks/useBargainAlerts' +import { useItems } from '../hooks/useItems' +import type { BargainAlert } from '../types/bargainAlerts' + +// Top-of-viewport persistent stack. Renders nothing for unauthorised +// users, so it's safe to mount globally. 
+export default function BargainAlertToast() { + const { authorised, alerts, dismiss } = useBargainAlerts() + + if (!authorised || alerts.length === 0) return null + + return ( + <Box + sx={{ + position: 'fixed', + top: 80, + right: 16, + zIndex: (theme) => theme.zIndex.snackbar, + display: 'flex', + flexDirection: 'column', + gap: 1, + maxWidth: 360, + width: 'calc(100vw - 32px)', + }} + > + {alerts.map((alert) => ( + <BargainAlertCard key={alert.id} alert={alert} onDismiss={() => void dismiss(alert.id)} /> + ))} + </Box> + ) +} + +const tornMarketUrl = (itemId: number) => + `https://www.torn.com/page.php?sid=ItemMarket#/market/view=search&itemID=${itemId}&sortField=price&sortOrder=ASC` + +const formatMoney = (n: number) => `$${n.toLocaleString('en-US')}` + +const formatElapsed = (foundAt: string) => { + const ms = Date.now() - new Date(foundAt).getTime() + if (ms < 0) return 'just now' + const s = Math.floor(ms / 1000) + if (s < 60) return `${s}s ago` + const m = Math.floor(s / 60) + if (m < 60) return `${m}m ${s % 60}s ago` + const h = Math.floor(m / 60) + return `${h}h ${m % 60}m ago` +} + +interface CardProps { + alert: BargainAlert + onDismiss: () => void +} + +function BargainAlertCard({ alert, onDismiss }: CardProps) { + const { itemsById } = useItems() + const item = itemsById[alert.itemId] + + // Re-render every second so the "time since" counter ticks live. + const [, setTick] = useState(0) + useEffect(() => { + const id = setInterval(() => setTick((t) => t + 1), 1000) + return () => clearInterval(id) + }, []) + + // Computed inline rather than memoised: the setInterval above triggers + // a re-render every second and we want a fresh elapsed string each time. + // Memoising on alert.foundAt would freeze the value because foundAt + // never changes for the lifetime of an alert row. 
+ const elapsed = formatElapsed(alert.foundAt) + + return ( + <Paper + elevation={6} + sx={{ + p: 1.5, + bgcolor: 'background.paper', + borderLeft: '4px solid', + borderLeftColor: 'warning.main', + }} + > + <Stack direction="row" spacing={1.5} alignItems="flex-start"> + {item?.image ? ( + <Avatar + src={item.image} + alt={item?.name ?? `Item ${alert.itemId}`} + variant="rounded" + sx={{ width: 48, height: 48, bgcolor: 'transparent' }} + /> + ) : ( + <Avatar variant="rounded" sx={{ width: 48, height: 48, bgcolor: 'warning.dark' }}> + <LocalFireDepartmentIcon /> + </Avatar> + )} + + <Box sx={{ flexGrow: 1, minWidth: 0 }}> + <Stack direction="row" spacing={1} alignItems="center"> + <Typography variant="subtitle2" noWrap sx={{ fontWeight: 600 }}> + {item?.name ?? `Item ${alert.itemId}`} + </Typography> + <Chip + size="small" + icon={<LocalFireDepartmentIcon />} + color="warning" + label="Bargain" + sx={{ height: 20 }} + /> + </Stack> + + <Typography variant="body2" color="text.secondary" sx={{ mt: 0.25 }}> + Listed for <strong>{formatMoney(alert.listingPrice)}</strong>, sells for{' '} + <strong>{formatMoney(alert.marketValue)}</strong> + </Typography> + <Typography variant="body2" color="success.main" sx={{ fontWeight: 600 }}> + +{formatMoney(alert.profit)} profit + </Typography> + + <Stack direction="row" spacing={1} alignItems="center" sx={{ mt: 0.75 }}> + <Typography variant="caption" color="text.secondary"> + {elapsed} + </Typography> + <Tooltip title="Open on Torn market"> + <IconButton + size="small" + component="a" + href={tornMarketUrl(alert.itemId)} + target="_blank" + rel="noopener noreferrer" + sx={{ ml: 'auto' }} + > + <OpenInNewIcon fontSize="small" /> + </IconButton> + </Tooltip> + </Stack> + </Box> + + <IconButton size="small" onClick={onDismiss} aria-label="Dismiss"> + <CloseIcon fontSize="small" /> + </IconButton> + </Stack> + </Paper> + ) +} diff --git a/client/src/components/Layout.tsx b/client/src/components/Layout.tsx index e991608..91e0dbc 100644 
--- a/client/src/components/Layout.tsx +++ b/client/src/components/Layout.tsx @@ -15,6 +15,7 @@ import { MAX_CONTENT_WIDTH } from '../constants/uiConstants' import { useItems } from '../hooks/useItems' import { useUser } from '../hooks/useUser' import '../index.css' +import BargainAlertToast from './BargainAlertToast' import Footer from './Footer' import { DRAWER_WIDTH, @@ -142,6 +143,7 @@ export default function Layout() { }} > <TopAppBar handleDrawerToggle={handleDrawerToggle} /> + <BargainAlertToast /> <Box sx={{ diff --git a/client/src/contexts/BargainAlertsContext.tsx b/client/src/contexts/BargainAlertsContext.tsx new file mode 100644 index 0000000..a73712e --- /dev/null +++ b/client/src/contexts/BargainAlertsContext.tsx @@ -0,0 +1,241 @@ +import { useCallback, useEffect, useMemo, useRef, useState, type ReactNode } from 'react' +import { + dismissBargainAlert, + fetchActiveBargainAlerts, + fetchBargainAlertsAuthorised, +} from '../lib/dotnetapi' +import { BargainAlertsContext, type BargainAlertsContextModel } from '../hooks/useBargainAlerts' +import { useItems } from '../hooks/useItems' +import type { BargainAlert } from '../types/bargainAlerts' +import type { Item } from '../types/items' + +const POLL_INTERVAL_MS = 12_000 + +const tornMarketUrl = (itemId: number) => + `https://www.torn.com/page.php?sid=ItemMarket#/market/view=search&itemID=${itemId}&sortField=price&sortOrder=ASC` + +const formatMoney = (n: number) => `$${n.toLocaleString('en-US')}` + +// Distinctive Web Audio synthesis: two-tone descending chirp the OS +// won't generate by accident. Cheaper than shipping an asset, and +// exactly what we want until Drew swaps a real sound in. +const playAlertSound = () => { + try { + const AudioCtor = + window.AudioContext ?? 
+ (window as unknown as { webkitAudioContext?: typeof AudioContext }).webkitAudioContext + if (!AudioCtor) return + const ctx = new AudioCtor() + const now = ctx.currentTime + + const tone = (frequency: number, startOffset: number, duration: number) => { + const osc = ctx.createOscillator() + const gain = ctx.createGain() + osc.type = 'square' + osc.frequency.setValueAtTime(frequency, now + startOffset) + gain.gain.setValueAtTime(0.0001, now + startOffset) + gain.gain.exponentialRampToValueAtTime(0.18, now + startOffset + 0.01) + gain.gain.exponentialRampToValueAtTime(0.0001, now + startOffset + duration) + osc.connect(gain).connect(ctx.destination) + osc.start(now + startOffset) + osc.stop(now + startOffset + duration + 0.02) + } + + tone(880, 0, 0.16) + tone(1320, 0.18, 0.22) + + // Tear down the context after the tones finish so we don't leak + // AudioContext instances on every alert. + setTimeout(() => void ctx.close(), 600) + } catch { + // Audio refused (e.g. autoplay policy before any user gesture). + // Drew's a logged-in user clicking around the app, so this almost + // never bites in practice. + } +} + +// OS-level notification. Used when the tab is hidden — the in-page +// chirp is unreliable in backgrounded tabs (autoplay policy + Chrome's +// audio context suspension), and the OS notification gets attention +// even when the browser isn't the focused application. +// +// Click target: opens the Torn item market in a new tab. We don't try +// to focus the TornTools tab itself because window.focus() from a +// notification handler is flaky cross-browser, and Drew's intent on +// click is "buy the thing", not "look at TornTools". +const showOsNotification = (alert: BargainAlert, item: Item | undefined) => { + if (typeof Notification === 'undefined') return + if (Notification.permission !== 'granted') return + + try { + const name = item?.name ?? 
`Item ${alert.itemId}` + const n = new Notification(`Bargain: ${name}`, { + body: `Listed for ${formatMoney(alert.listingPrice)}, sells for ${formatMoney(alert.marketValue)} (+${formatMoney(alert.profit)} profit)`, + icon: item?.image, + // tag dedupes: a re-fire for the same alert id replaces rather + // than stacks (rare but possible if React StrictMode double-runs + // an effect). + tag: `torntools-bargain-${alert.id}`, + // requireInteraction keeps the notification on screen until + // dismissed — match the toast's persistent behaviour. + requireInteraction: true, + }) + + n.onclick = () => { + window.open(tornMarketUrl(alert.itemId), '_blank', 'noopener,noreferrer') + n.close() + } + } catch { + // Notifications can throw if the browser is in a weird state + // (private mode quirks, OS-level "do not disturb"). Silent. + } +} + +type Props = { children: ReactNode } + +export const BargainAlertsProvider = ({ children }: Props) => { + const [authorised, setAuthorised] = useState(false) + const [alerts, setAlerts] = useState<BargainAlert[]>([]) + const seenIdsRef = useRef<Set<number>>(new Set()) + const firstPollSettledRef = useRef(false) + const { itemsById } = useItems() + // Snapshot for use in the refresh callback without rebinding it on + // every items change (which would tear down the polling interval). + const itemsByIdRef = useRef(itemsById) + itemsByIdRef.current = itemsById + + // Mount-time authorisation probe. Cheap and gates everything that follows. + useEffect(() => { + let cancelled = false + void (async () => { + try { + const ok = await fetchBargainAlertsAuthorised() + if (!cancelled) setAuthorised(ok) + } catch { + if (!cancelled) setAuthorised(false) + } + })() + return () => { + cancelled = true + } + }, []) + + // Once authorised, ask for OS notification permission — but most + // modern browsers gate requestPermission() on a fresh user gesture + // (a mount-time effect doesn't count). 
So we register a one-shot
+  // click listener that fires the prompt on the first interaction
+  // anywhere on the page, then removes itself. The toast + chirp keep
+  // working regardless of whether permission is ever granted.
+  useEffect(() => {
+    if (!authorised) return
+    if (typeof Notification === 'undefined') return
+    if (Notification.permission !== 'default') return
+
+    // Try once eagerly; on browsers that allow it (Edge in some
+    // configurations, sites with prior site engagement) this skips
+    // the click hop.
+    void Notification.requestPermission().catch(() => undefined)
+
+    // Fallback: prompt on the next user interaction. Capture phase so
+    // we beat any stopPropagation; effectively once-only because the
+    // handler removes itself (and its sibling listener) on first fire.
+    const onGesture = () => {
+      if (Notification.permission === 'default') {
+        void Notification.requestPermission().catch(() => undefined)
+      }
+      window.removeEventListener('click', onGesture, true)
+      window.removeEventListener('keydown', onGesture, true)
+    }
+    window.addEventListener('click', onGesture, { capture: true })
+    window.addEventListener('keydown', onGesture, { capture: true })
+
+    return () => {
+      window.removeEventListener('click', onGesture, true)
+      window.removeEventListener('keydown', onGesture, true)
+    }
+  }, [authorised])
+
+  const refresh = useCallback(async () => {
+    try {
+      const next = await fetchActiveBargainAlerts()
+      setAlerts(next)
+
+      // Notify on first sighting of an id only.
+      const seen = seenIdsRef.current
+      const isFirstPoll = !firstPollSettledRef.current
+      const newAlerts: BargainAlert[] = []
+      for (const a of next) {
+        if (!seen.has(a.id)) {
+          seen.add(a.id)
+          newAlerts.push(a)
+        }
+      }
+      firstPollSettledRef.current = true
+
+      if (newAlerts.length === 0) return
+
+      const hidden = document.visibilityState === 'hidden'
+      if (hidden) {
+        // Always fire OS notifications — including on the first poll
+        // when alerts pre-existed the page load.
That case is exactly + // when Drew most needs to know: he opened the app in another + // tab and there's a bargain waiting. The toast is invisible + // to him; the OS notification is the only signal. + for (const a of newAlerts) { + showOsNotification(a, itemsByIdRef.current[a.itemId]) + } + } else if (!isFirstPoll) { + // In-page chirp only on truly new alerts — not on the first + // poll, because then the toast is rendering for the first + // time anyway and a chirp on every page load would be + // noisy. + playAlertSound() + } + } catch { + // Treat poll failures as transient. + } + }, []) + + // Always-on polling — the whole point of the toast is to alert Drew + // when the tab is *not* the focused one, so suspending on hidden + // would defeat the feature. The endpoint is cheap (one indexed + // SELECT) and rate-limit cost is negligible for a single user. + // + // On return-to-visible we fire an extra fetch immediately. This + // matters because Chrome / Firefox throttle setInterval in + // backgrounded tabs to roughly once per minute (and on battery + // saver, longer). The interval still fires during background, just + // less often — the immediate-on-visible bump catches up any window + // where it didn't. + useEffect(() => { + if (!authorised) return + + void refresh() + const intervalId = setInterval(() => void refresh(), POLL_INTERVAL_MS) + const onVisible = () => { + if (document.visibilityState === 'visible') void refresh() + } + document.addEventListener('visibilitychange', onVisible) + + return () => { + clearInterval(intervalId) + document.removeEventListener('visibilitychange', onVisible) + } + }, [authorised, refresh]) + + const dismiss = useCallback(async (id: number) => { + setAlerts((prev) => prev.filter((a) => a.id !== id)) + try { + await dismissBargainAlert(id) + } catch { + // Swallow — server-side state is the source of truth, the next + // poll will reconcile if dismissal didn't actually persist. 
+ } + }, []) + + const value = useMemo<BargainAlertsContextModel>( + () => ({ authorised, alerts, dismiss }), + [authorised, alerts, dismiss], + ) + + return <BargainAlertsContext.Provider value={value}>{children}</BargainAlertsContext.Provider> +} diff --git a/client/src/hooks/useBargainAlerts.ts b/client/src/hooks/useBargainAlerts.ts new file mode 100644 index 0000000..7581ef3 --- /dev/null +++ b/client/src/hooks/useBargainAlerts.ts @@ -0,0 +1,18 @@ +import { createContext, useContext } from 'react' +import type { BargainAlert } from '../types/bargainAlerts' + +export interface BargainAlertsContextModel { + authorised: boolean + alerts: BargainAlert[] + dismiss: (id: number) => Promise<void> +} + +export const BargainAlertsContext = createContext<BargainAlertsContextModel | null>(null) + +export const useBargainAlerts = () => { + const ctx = useContext(BargainAlertsContext) + if (!ctx) { + throw new Error('useBargainAlerts must be used inside BargainAlertsProvider') + } + return ctx +} diff --git a/client/src/lib/dotnetapi.ts b/client/src/lib/dotnetapi.ts index fa71d79..b2437ef 100644 --- a/client/src/lib/dotnetapi.ts +++ b/client/src/lib/dotnetapi.ts @@ -4,6 +4,7 @@ import type { Item } from '../types/items' import type { HistoryResult, HistoryWindow } from '../types/history' import type { BazaarSummary } from '../types/bazaarSummaries' import type { ProfitableListing } from '../types/profitableListings' +import type { BargainAlert } from '../types/bargainAlerts' import type { KeyInfo, TornInventoryPayload, @@ -32,6 +33,11 @@ const URL_ITEM_HISTORY_BASE = `${API_BASE_URL}/items` const URL_AUTH_LOGIN = `${API_BASE_URL.replace(/\/api$/, '')}/auth/login` const URL_AUTH_ME = `${API_BASE_URL.replace(/\/api$/, '')}/auth/me` const URL_AUTH_LOGOUT = `${API_BASE_URL.replace(/\/api$/, '')}/auth/logout` +const URL_ALERTS_AUTHORISED = `${API_BASE_URL.replace(/\/api$/, '')}/api/alerts/authorised` +const URL_ALERTS_ACTIVE = `${API_BASE_URL.replace(/\/api$/, 
'')}/api/alerts/active` +const URL_ALERTS_DISMISS = (id: number) => + `${API_BASE_URL.replace(/\/api$/, '')}/api/alerts/${id}/dismiss` + const URL_TORN_USER_BASIC = `${API_BASE_URL}/torn/user/basic` const URL_TORN_USER_INVENTORY = `${API_BASE_URL}/torn/user/inventory` const URL_TORN_KEY_VALIDATE = `${API_BASE_URL}/torn/key/validate` @@ -302,3 +308,26 @@ async function postToggleUserFavourite( if (!res.ok) throw new Error(`HTTP ${res.status}`) return res.json() } + +export async function fetchBargainAlertsAuthorised(): Promise<boolean> { + const res = await fetch(URL_ALERTS_AUTHORISED, { credentials: 'include' }) + if (res.status === 401) return false + if (!res.ok) throw new Error(`HTTP ${res.status}`) + const body = (await res.json()) as { authorised: boolean } + return body.authorised +} + +export async function fetchActiveBargainAlerts(): Promise<BargainAlert[]> { + const res = await fetch(URL_ALERTS_ACTIVE, { credentials: 'include' }) + if (res.status === 401 || res.status === 403) return [] + if (!res.ok) throw new Error(`HTTP ${res.status}`) + return res.json() +} + +export async function dismissBargainAlert(id: number): Promise<void> { + const res = await fetch(URL_ALERTS_DISMISS(id), { + method: 'POST', + credentials: 'include', + }) + if (!res.ok && res.status !== 204) throw new Error(`HTTP ${res.status}`) +} diff --git a/client/src/main.tsx b/client/src/main.tsx index 84b1b9a..213c6d9 100644 --- a/client/src/main.tsx +++ b/client/src/main.tsx @@ -8,6 +8,7 @@ import { LocalizationProvider } from "@mui/x-date-pickers/LocalizationProvider"; import { UserProvider } from "./contexts/UserContext.tsx"; import { ItemsProvider } from "./contexts/ItemsContext.tsx"; import { BazaarSummariesProvider } from "./contexts/BazaarSummariesContext.tsx"; +import { BargainAlertsProvider } from "./contexts/BargainAlertsContext.tsx"; import { appTheme } from "./theme/appTheme.ts"; import App from "./App.tsx"; @@ -19,11 +20,13 @@ 
createRoot(document.getElementById("root")!).render( <UserProvider> <ItemsProvider> <BazaarSummariesProvider> - <LocalizationProvider dateAdapter={AdapterDayjs}> - <BrowserRouter basename={import.meta.env.BASE_URL}> - <App /> - </BrowserRouter> - </LocalizationProvider> + <BargainAlertsProvider> + <LocalizationProvider dateAdapter={AdapterDayjs}> + <BrowserRouter basename={import.meta.env.BASE_URL}> + <App /> + </BrowserRouter> + </LocalizationProvider> + </BargainAlertsProvider> </BazaarSummariesProvider> </ItemsProvider> </UserProvider> diff --git a/client/src/types/bargainAlerts.ts b/client/src/types/bargainAlerts.ts new file mode 100644 index 0000000..1ebd837 --- /dev/null +++ b/client/src/types/bargainAlerts.ts @@ -0,0 +1,11 @@ +export interface BargainAlert { + id: number + itemId: number + listingPrice: number + marketValue: number + profit: number + foundAt: string + expiredAt: string | null + dismissedAt: string | null + status: 'active' | 'expired' | 'dismissed' +} diff --git a/context/plans/2026-04-25-bargain-alerts-verification.sql b/context/plans/2026-04-25-bargain-alerts-verification.sql new file mode 100644 index 0000000..87755db --- /dev/null +++ b/context/plans/2026-04-25-bargain-alerts-verification.sql @@ -0,0 +1,67 @@ +-- Bargain alerts — synthetic verification helpers. +-- +-- Once the V1.25 migration has applied (it will on next backend boot), +-- these queries let you simulate a bargain end-to-end without waiting +-- for the real Torn market to throw something cheap. +-- +-- Pick any tradable item with a non-null value_sell_price. Cheapish, +-- well-known items work best (e.g. Xanax id=206). + +-- 1. Find a candidate item to fake an alert against: +SELECT id, name, value_sell_price +FROM public.items +WHERE value_sell_price IS NOT NULL + AND value_sell_price > 50000 + AND is_tradable = true +ORDER BY value_sell_price DESC +LIMIT 5; + +-- 2. Open a synthetic alert directly against the table. 
+-- Replace :item_id and :sell_price with values from step 1. +-- +-- This bypasses the listings-write hook entirely. Useful for testing +-- the toast UI / polling path without poking listings. The toast +-- should appear on the next 12s poll cycle. +INSERT INTO public.bargain_alerts (item_id, listing_price, market_value, profit, status) +VALUES ( + /* :item_id */ 206, + /* listing */ 1, + /* :sell_price */ 800, + /* profit */ 799, + 'active' +); + +-- 3. To exercise the detection hook end-to-end (more thorough), insert +-- a sub-threshold listing into public.listings for the chosen item. +-- The next time TornMarketsProcessor scans that item the +-- EvaluateAsync hook will run; if our listing is cheaper than the +-- threshold, an alert opens. Note: the detection only runs when +-- the listings actually change, so the insert needs to displace +-- the existing minimum. +-- +-- Replace :item_id, :sell_price; price = sell_price - 6000 puts profit +-- well over the $5,000 threshold. +-- +-- (You may also need to delete other rows for the item so this becomes +-- the cheapest, depending on existing data.) +INSERT INTO public.listings ( + correlation_id, source, item_id, listing_position, price, quantity, time_seen +) +VALUES ( + gen_random_uuid(), 'Torn', 206, 0, 100, 1, NOW() +); + +-- 4. Inspect what's active. +SELECT id, item_id, listing_price, market_value, profit, found_at, status +FROM public.bargain_alerts +WHERE status = 'active' +ORDER BY found_at DESC; + +-- 5. Manually expire (simulates the listing being sold and the next +-- scan finding nothing under threshold). +UPDATE public.bargain_alerts +SET status = 'expired', expired_at = NOW() +WHERE id = /* :alert_id */ 1; + +-- 6. Cleanup after testing. 
+DELETE FROM public.bargain_alerts WHERE status IN ('expired', 'dismissed'); diff --git a/context/plans/2026-04-25-bargain-alerts.md b/context/plans/2026-04-25-bargain-alerts.md new file mode 100644 index 0000000..7f61e17 --- /dev/null +++ b/context/plans/2026-04-25-bargain-alerts.md @@ -0,0 +1,258 @@ +# Bargain Alerts — Drew-only v1 — Plan + +**Date**: 2026-04-25 **Author**: Claude (with Drew) **Status**: build complete (M1–M8), pending +end-to-end verification (M9) and prod latency check (M10). **Verification helpers**: see +`2026-04-25-bargain-alerts-verification.sql`. + +## Goal + +Toast notification on the front end when an item appears on the **Torn market** for a price that +makes a **single-unit profit > $5,000** when sold back to the city. Persistent (no auto-dismiss), +shows time since the listing was first detected, plays a distinctive sound on appearance, links +through to the listing, transitions to a "Too late!" variant when the listing disappears. + +## Scope (the "80%") + +**In:** + +- Markets only (Torn item market). Bazaars deferred — too slow to scan to be useful. +- Drew-only. Single hardcoded authorised player_id, with a clean seam for adding more later. +- Detection: `valueSellPrice - listing_price > 5_000`. City sell-back has no tax. +- Snipe-loop in `TornMarketsProcessor` (interleave re-polls of an active-alert item). +- Persistence: a `bargain_alerts` table the API surfaces via short-poll endpoints. +- Toast UI: persistent, time-since counter, sound, click-through, dismiss button, "Too late!" state + on expiry. + +**Out (the "20%" — deferred):** + +- Subscriber ledger / events-feed poll / Xanax-gift detection / 30-day rolling window. The + authorisation seam is the single point where these slot in later. +- Bazaars. +- SignalR / WebSockets / SSE. v1 uses short-interval browser polling. +- Multi-user notification routing. +- ToS conversation with Torn staff. Not needed for Drew-only since no item-for-service exchange + occurs. 
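The detection rule above is a guarded strict comparison, and the edge cases are worth pinning down: a null sell price must never qualify, and a profit of exactly $5,000 must not either. A minimal TypeScript sketch (illustrative names only, not the shipped backend check, which lives in the C# service):

```typescript
// Single-unit profit must strictly exceed $5,000. City sell-back has
// no tax, so profit is a straight subtraction.
const PROFIT_THRESHOLD = 5_000

// Items with a null sell price never qualify: no false alerts on
// missing baselines.
const isBargain = (listingPrice: number, valueSellPrice: number | null): boolean =>
  valueSellPrice !== null && valueSellPrice - listingPrice > PROFIT_THRESHOLD

// For an item that sells to the city for $830,000: a $824,000 listing
// qualifies ($6,000 profit); a $825,000 listing does not ($5,000
// exactly, and the inequality is strict).
```

The strict inequality is the same predicate as the `l.Price < threshold` form in the detection-hook pseudocode later in this plan, with `threshold = valueSellPrice - 5_000` rearranged.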
+ +## Authorisation seam + +Single config key `BargainAlerts__AuthorisedPlayerIds: ["3943900"]` (single-element list). Backend +gate: a small `IBargainAlertAuthService.IsAuthorisedAsync(int playerId)` that consults this list. +Future-extension: replace the list-check with a ledger query without touching call sites. + +Both `GET /api/alerts/active` and `POST /api/alerts/{id}/dismiss` go through this gate. Anyone else +hitting the endpoints gets `403`. The frontend checks at app-mount whether the current user is +authorised (cheap `GET /api/alerts/authorised` returning `{authorised: bool}`); if false, the +context becomes a no-op and the toast component never mounts. + +## Data model + +New migration `V1.25__bargain_alerts.sql`: + +```sql +CREATE TABLE bargain_alerts ( + id BIGSERIAL PRIMARY KEY, + item_id INTEGER NOT NULL REFERENCES items(id), + listing_price BIGINT NOT NULL, + market_value BIGINT NOT NULL, -- valueSellPrice at time of detection + profit BIGINT NOT NULL, -- market_value - listing_price (denormalised for sort) + found_at TIMESTAMPTZ NOT NULL DEFAULT now(), + expired_at TIMESTAMPTZ, -- set when the snipe-loop notices the listing is gone + dismissed_at TIMESTAMPTZ, -- set by POST /api/alerts/{id}/dismiss + status TEXT NOT NULL DEFAULT 'active' -- active | expired | dismissed + CHECK (status IN ('active', 'expired', 'dismissed')) +); + +CREATE INDEX idx_bargain_alerts_status_found ON bargain_alerts(status, found_at DESC); +CREATE INDEX idx_bargain_alerts_item_active ON bargain_alerts(item_id) WHERE status = 'active'; +``` + +**Why "expired" vs "dismissed" vs delete**: kept for future stats. Could roll up "alerts/day", +"average click-to-buy time", "missed-vs-claimed ratio". Cheap. + +**Idempotency**: an item can have at most one `active` alert at a time. The detection write-path +checks for an existing active alert on `item_id` before inserting. 
If a listing relists at a deeper +discount the existing alert's `listing_price` / `profit` / `found_at` are _not_ updated — the +original alert stays until the listing is no longer below threshold, then a fresh alert opens. +(Avoids "the time-since counter just reset for no obvious reason" UX.) + +## Backend + +### Detection hook + +Detection runs at the listings-write path, not as a scan. After `Listings` is replaced for an item +(existing `ReplaceListings` flow per `TODO.md` "Listings replaced wholesale on each scan" note), +evaluate each new listing against threshold; if any qualify, ensure an active alert exists for that +item. If no listings remain that qualify and an active alert exists, mark it `expired`. + +Pseudocode: + +```csharp +async Task EvaluateBargainsAsync(int itemId, IReadOnlyList<Listing> newListings) { + var item = await _items.GetByIdAsync(itemId); + if (item.ValueSellPrice is null) return; + + var threshold = item.ValueSellPrice.Value - 5_000; + var qualifying = newListings + .Where(l => l.Price < threshold) + .OrderBy(l => l.Price) + .ToList(); + + var existing = await _alerts.GetActiveByItemAsync(itemId); + + if (qualifying.Count > 0 && existing is null) { + var cheapest = qualifying[0]; + await _alerts.CreateAsync(new BargainAlert { + ItemId = itemId, + ListingPrice = cheapest.Price, + MarketValue = item.ValueSellPrice.Value, + Profit = item.ValueSellPrice.Value - cheapest.Price, + }); + } else if (qualifying.Count == 0 && existing is not null) { + await _alerts.MarkExpiredAsync(existing.Id); + } + // qualifying > 0 && existing != null → leave alone (idempotency) +} +``` + +### Snipe-loop in `TornMarketsProcessor` + +Need GitNexus impact analysis on `TornMarketsProcessor` first — it's hot-path queue code. + +Sketch: maintain an in-memory `HashSet<int>` of active-alert item IDs (rebuilt from DB on processor +startup, kept in sync via a bus or a poll on each tick). 
On each queue tick, if the hot-set is +non-empty _and_ we haven't yet hit the per-item interleave bound, alternate: hot-item poll → queue +tick → hot-item poll → queue tick. Track per-item interleave counts; once a hot item has been polled +`MaxInterleaves` times consecutively without disappearing, drop it from the fast-path back to normal +queue cadence (alert stays active, just no longer interleaved). + +Constants (config-driven): + +- `BargainAlerts__MaxInterleaves`: 50 (~5–10 min at current cadence; tune in prod). +- `BargainAlerts__InterleavePolicy`: `RoundRobin | LIFO` — start with `RoundRobin` if multiple + alerts are simultaneously active. + +### Endpoints + +- `GET /api/alerts/authorised` → `{authorised: bool}` (cheap, no DB hit beyond the config check). +- `GET /api/alerts/active` → list of active alerts for the authed user (gated). Includes item + name + image so the toast doesn't need a second lookup. +- `POST /api/alerts/{id}/dismiss` → marks `dismissed_at` + status. Idempotent. + +All three sit on the existing `ApiController` pattern (`[controller]/[action]` routing) until the +broader REST refactor noted in TODO.md happens. + +## Frontend + +### Context + polling + +`BargainAlertsContext` — short-poll `/api/alerts/active` every 12s while the tab is visible (suspend +on `document.visibilityState === 'hidden'` to avoid burning the API rate limit on backgrounded tabs; +resume + immediate fetch on visibility return). + +State shape: + +```ts +type BargainAlert = { + id: number; + itemId: number; + itemName: string; + itemImage?: string; + listingPrice: number; + marketValue: number; + profit: number; + foundAt: string; // ISO + status: "active" | "expired" | "dismissed"; +}; +``` + +Track which alert IDs we've already shown so the sound only plays on **first** appearance, not on +every poll cycle the alert remains active. + +### `<BargainAlertToast />` + +Mounted at app root, renders one stack of toasts (top of viewport, descending newest-first). 
+Per-toast: + +- Item image + name + listing price + city sell price + profit. +- "Time since listed" counter (live, updates every second). +- Click-through link to `https://www.torn.com/imarket.php#/p=shop&type={itemId}` (Drew to confirm + the canonical URL — there are a couple of valid forms). +- Dismiss button → POST `/api/alerts/{id}/dismiss`, optimistic local removal. +- "Too late!" variant: if poll returns the alert with `status === 'expired'` and we're still showing + it, swap to a muted styling + the elapsed time the listing was up + a non-clickable body. + Auto-dismiss after 30s in the expired state (the only auto-dismiss in the design). + +### Sound + +Distinctive, short (~0.5–1.0s), non-musical. Avoid system-default-sounding tones so a backgrounded +tab is identifiable. Asset placement: `client/public/sounds/bargain-alert.mp3` (or `.ogg`). Open +question — Drew's call. Placeholder: a generated short chirp. + +Audio constraints to respect: + +- Browsers gate autoplay on first-tab-load. The `<BargainAlertToast>` parent should attach an audio + element on first user-interaction (any click anywhere in the app suffices) — the sound won't play + until the page has been interacted with at least once. This is a browser policy, not something we + can override. +- One audio play per **new** alert ID (deduped against the seen-set). + +## Verification + +1. **Build & typecheck clean** (`dotnet build`, `npx tsc --noEmit`, `npm run build`). +2. **Schema migration applies cleanly** locally. +3. **Synthetic alert test**: insert a fake listing into dev DB at `valueSellPrice - 6_000` for an + item with a known `valueSellPrice`. Trigger the listings-write hook (or call detection directly). + Verify an alert row appears, the `/api/alerts/active` endpoint returns it under Drew's session, + `403` for any other player. +4. **Toast renders**: with the synthetic alert in place, load the home page, confirm the toast + appears, the counter ticks, the sound plays once. +5. 
**Snipe-loop**: with the synthetic alert active, log `TornMarketsProcessor` queue order and + confirm the hot item appears every other tick up to the bound, then drops out. +6. **Expiry**: mark the synthetic alert `expired` server-side; confirm the toast transitions to "Too + late!" and auto-dismisses 30s later. +7. **Dismiss**: confirm `POST /api/alerts/{id}/dismiss` updates the row and the toast disappears + from the next poll's response. +8. **Latency reality-check**: the bit Drew and I are both nervous about. With a real (not synthetic) + sub-threshold listing posted, measure end-to-end: market scan picks it up → detection fires → + alert row exists → next /api/alerts poll → toast renders. If best-case is 30s+, document it and + let Drew decide whether snipe-loop alone is worth shipping or whether we need to look at scan + cadence too. + +## Open questions + +1. **Sound asset**: Drew picks. Placeholder until then. +2. **Click-through URL**: confirm canonical form for "open this item on the Torn market". +3. **Where in `TornMarketsProcessor` to splice the snipe-loop**: pending GitNexus impact run. +4. **Polling cadence on the events-poll for future subscriber expansion**: not relevant for v1. + Defer. + +## Risks + +- **Latency**: items at >$5k profit get sniped fast. If end-to-end best-case is >30s the toast will + routinely fire after the listing's gone. Mitigate via snipe-loop + measurement; accept as partial + outcome if scan cadence can't shrink without ToS issues. +- **`TornMarketsProcessor` change**: high-impact; gitnexus impact analysis required before edit. +- **`valueSellPrice` quality**: some items have null or stale sell prices. Detection must skip these + (no false alerts on missing baselines). +- **Browser audio autoplay policy**: first sound after page load won't play until a user + interaction. Document this; not a blocker for Drew (he uses the app, he interacts with it). 
+- **Cache and hot-set drift**: if the in-memory hot-set in `TornMarketsProcessor` diverges from the + DB (e.g. an alert is dismissed but the processor still interleaves), worst case is a few extra + polls before the next hot-set sync. Acceptable. + +## Milestones + +| # | Milestone | Notes | +| --- | -------------------------------- | -------------------------------------------------------------------------------- | +| 1 | GitNexus impact + recon | `TornMarketsProcessor`, listings-write path, current ApiController auth pattern. | +| 2 | Schema migration + entity + repo | `V1.25__bargain_alerts.sql`, entity, repository methods. | +| 3 | Detection on listings-write hook | `EvaluateBargainsAsync`, idempotency check. | +| 4 | Auth seam + 3 endpoints | `IBargainAlertAuthService`, `GET /authorised`, `GET /active`, `POST /dismiss`. | +| 5 | Snipe-loop in processor | Bounded interleave, hot-set sync. | +| 6 | Frontend context + polling | Visibility-aware short-poll, seen-set. | +| 7 | Toast component + styling | Persistent, counter, dismiss, "Too late!" variant. | +| 8 | Sound integration | Placeholder asset; Drew swaps. | +| 9 | End-to-end synthetic test | Verification steps 3–7 above. | +| 10 | Latency reality-check | Verification step 8 — defer-or-ship judgement call. | + +Stop after each milestone for `dotnet build` + `npx tsc --noEmit` + a quick gut-check. diff --git a/context/session-handoff.md b/context/session-handoff.md index db6d0ac..53ed2fd 100644 --- a/context/session-handoff.md +++ b/context/session-handoff.md @@ -45,7 +45,10 @@ to be: ## Blockers / outstanding -- None. The cards work is wrapped. Pending follow-ups are catalogued in `TODO.md`. +- None on the cards work. Pending follow-ups are catalogued in `TODO.md`. +- **Bargain alerts subscription feature**: design captured in `TODO.md` under "Bargain alerts + (subscription feature)". **Blocked on Torn staff sign-off** before any code — see ToS analysis + below. 
## Next action @@ -71,6 +74,122 @@ the backfill catches up, the constants are: - `DatabaseService.SummariseChunk = TimeSpan.FromDays(7)` — chunk size for the backfill loop. Drop to 3 days if a chunk times out; raise if backfill is fine and you want fewer chunks. +## Bargain-alerts feature — discussion 2026-04-25 ~01:30 + +Drew proposed a subscription feature: toast notification when a Torn-market item lists for <10% of +city value, gated behind a 30-day rolling Xanax-gift subscription paid in-game to him. Persistent +toast with "time since" counter and distinct sound. Click-through to the listing. + +### What we verified about the API access + +- Custom keys can be created via the Torn website prefs page; no API endpoint to create them, but + there is a deep-link URL that pre-fills the form: + `https://www.torn.com/preferences.php#tab=api?step=addNewKey&title=<TITLE>&<category>=<csv>`. Drew + used `...&user=events` and got a key (`eJCgm4nAvaGepgAQ` — _DREW: rotate this if you don't want it + captured here, it has Custom/events-only scope_). +- Verified scope via `GET /v2/key/info`: `access.type: "Custom"`, + `selections.user: ["profile", "timestamp", "lookup", "events"]`, all other categories at default + `timestamp`/`lookup`. `basic` and `bazaar` selections correctly return error code 16. +- The events feed contains the signal we need: + `"You were sent some Xanax from <a ... XID=3790183>IcePokeDude</a>"` with unix timestamp + stable + per-event ID. Bonus: also contains `"NAME bought N x ITEM from your bazaar for $PRICE"` events — a + "your bazaar just sold" notification falls out of the same poll for free. + +### API quirk worth remembering + +Torn sometimes returns the response _schema_ literal instead of real data — e.g. +`event: string[144]` rather than the actual string, or `events: [{...}] (100)` for arrays. Looks +like an error but isn't. Adding `&comment=tornttools-<feature>` reliably flips it to real data. The +`comment` also shows up in `/key/log`. 
**Encode in the backend client: always send a comment.** + +### ToS finding — the actual blocker + +Drew brought a Claude.ai analysis to the chat: + +- **RMT clause** ("exchange of currency or assets on Torn for real-world money or services") on + Torn's rule violations page covers item-for-external-service swaps. Sellers historically + permabanned without first-offence warning. +- **API ToS** on torn.com/api.html invites operators to contact staff for permission to charge users + for usage. Doesn't carve out item-based payments. +- Recommended path: email webmaster@torn.com describing the feature _before any code_, ask for + explicit permission. They've invited that conversation in writing. Sanity-check the precedent by + looking at how TornStats / TornPDA handle paid tiers (real money vs items) before drafting. + +Risk-tiered fallbacks if staff say no to item-gating: voluntary tips with no gating; real-money +subscription via Stripe; or just free for everyone. + +### Where this lives now + +Full design + ToS analysis + implementation sketch in `TODO.md` → **Bargain alerts** section, and +plan at `context/plans/2026-04-25-bargain-alerts.md` (now annotated with build status). + +**Drew pivoted away from the subscription model**: rather than gate behind Xanax payments and need +the staff conversation, he asked for the Drew-only variant of the feature for himself. That +sidesteps the ToS issue (no item-for-service exchange) and lets the design sit on a clean +authorisation seam ready for the subscription extension if/when staff approve it later. + +### Drew-only build (M1–M8) shipped this session + +Backend (`dotnet build`: clean): + +- Migration `V1.25__bargain_alerts.sql` — partial unique index `(item_id) WHERE status='active'` + enforces "one active alert per item" at the DB layer. 
+- `BargainAlertEntity` / `BargainAlertDto` / `IBargainAlertRepository` / `BargainAlertRepository`
+  with
+  `Create / GetActiveByItem / GetById / GetAllActive / MarkExpired / MarkDismissed / GetActiveItemIds`.
+- `IBargainAlertService.EvaluateAsync(source, itemId, newListings, ct)` hooked into
+  `DatabaseService.ProcessListingsAsync` after `ReplaceListingsAsync`. Source-scoped (Torn market
+  only), idempotent open/expire decisions. Threshold =
+  `BargainAlertService.ProfitThreshold = 5_000`.
+- `BargainAlertsConfiguration` (`AuthorisedPlayerIds: [3943900]`, `MaxInterleaves: 50`) bound from
+  appsettings. `IBargainAlertAuthService` is a hashset-backed config check, ready to be replaced by
+  a ledger query later.
+- `AlertsController` at `/api/alerts/{authorised,active,{id}/dismiss}`. `[Authorize]` + the
+  authorisation gate. `authorised` returns `{authorised: bool}` so the frontend can branch cleanly
+  without 403s.
+- Snipe-loop: new virtual `TryGetPriorityQueueItemAsync` hook on `QueueProcessorBase`.
+  `TornMarketsProcessor` overrides it — every alternate tick returns a synthetic queue item for the
+  least-polled hot item, bounded by `MaxInterleaves`. Synthetic items have no DB queue id;
+  `RunWorkerAsync` skips `IncrementQueueItemAttempts`/`RemoveQueueItemAsync` for them. Hot-set
+  refreshes from the DB every 30s.
+
+Frontend (`npx tsc --noEmit`, `npm run build`: clean):
+
+- `BargainAlertsProvider` in `main.tsx`, between `BazaarSummariesProvider` and
+  `LocalizationProvider`. Mount-time `/api/alerts/authorised` probe; if false, the provider becomes
+  a no-op.
+- Always-on 12s polling: keeps firing while the tab is hidden (background alerts are the point of
+  the feature), with an immediate catch-up fetch on return-to-visible since browsers throttle
+  background-tab intervals.
+- "Seen IDs" set primed on first poll so the sound only fires on _new_ alerts after that; OS
+  notifications still fire for first-poll alerts when the tab is hidden, so a bargain that
+  pre-dates the page load isn't missed.
+- `<BargainAlertToast />` mounted at the top of `Layout`. Renders nothing for unauthorised users
+  (safe to mount globally).
+- Web Audio synthesised two-tone chirp (880 → 1320 Hz square waves) — distinctive, no asset needed. + Drew can swap to a file-based asset by replacing `playAlertSound` and dropping a sound into + `client/public/sounds/`. +- Toast UI: persistent (no auto-dismiss), live "time since" counter, click-through to + `https://www.torn.com/imarket.php#/p=shop&type={id}`, dismiss button with optimistic local + removal + `POST /api/alerts/{id}/dismiss`. + +### What's left for Drew (M9 + M10) + +1. Boot the backend so Flyway applies V1.25 against the dev DB. +2. Run the synthetic test in `context/plans/2026-04-25-bargain-alerts-verification.sql`. Step 2 + inserts an alert directly (tests the toast path); step 3 inserts a sub-threshold listing (tests + the full detection hook end-to-end). +3. Real-world latency check: post a real cheap listing on the Torn market (or wait for one) and + measure detection → endpoint → toast latency. If best-case is >30s the snipe-loop alone won't + make this useful and we'll need to reconsider scan cadence. + +### Known caveats / things to watch + +- Detection only fires when listings _change_. A pre-existing sub-threshold listing won't open an + alert until the next time the listing composition or minimum price changes. Acceptable in practice + — bargains rarely sit untouched. +- The synthesised audio chirp won't play before the user has interacted with the page (browser + autoplay policy). Drew uses the app, so this almost never bites in practice. +- Snipe-loop spends half the per-worker tick budget on hot items. With 1 worker and 1 hot item, + that's ~30 polls/min on the hot item before the bound trips at `MaxInterleaves=50`. Tune in prod + if needed. + ## Data analysis assets `data-exports/` (gitignored) holds the CSVs Drew shared mid-session for ranking validation. 
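One frontend detail worth pinning down for the next session: the "seen IDs" priming contract, since it's the piece most likely to regress if the polling provider gets reworked. A minimal sketch of the intended behaviour — `AlertTracker` and its shape are illustrative names, not the real `BargainAlertsProvider` internals:

```typescript
// Sketch only: `AlertTracker` is a hypothetical name standing in for the
// seen-set logic inside BargainAlertsProvider. Contract: the first poll
// primes the set silently (no chirp for pre-existing alerts); subsequent
// polls report only ids never seen before, so each alert chirps at most once.
type Alert = { id: number };

class AlertTracker {
  private seen = new Set<number>();
  private primed = false;

  // Returns the alerts that should trigger the sound for this poll.
  ingest(alerts: Alert[]): Alert[] {
    // Compute freshness before recording, so this poll's own ids don't
    // mask themselves.
    const fresh = alerts.filter((a) => !this.seen.has(a.id));
    for (const a of alerts) this.seen.add(a.id);
    if (!this.primed) {
      this.primed = true; // first poll: prime silently
      return [];
    }
    return fresh;
  }
}
```

Note that dismissed ids stay in `seen`, so a dismiss-then-repoll race can't re-chirp the same alert.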
Five From 69e3dd9ec3dd77ffbfe5ccab9a5938492314a8a1 Mon Sep 17 00:00:00 2001 From: Drew Morgan <dangerworm@gmail.com> Date: Sat, 25 Apr 2026 03:02:49 +0100 Subject: [PATCH 4/4] fix(alerts): pace priority work + evaluate on all listings paths Codex review surfaced two issues with the bargain-alerts MR: 1. RunWorkerAsync's `continue` after priority work jumped past the per-worker rate-limit delay, so the alternating snipe-loop drove call volume to roughly 2x the configured budget. Restructured so priority work still skips the queue-table mutations (no DB row to increment / remove against) but falls through to the common delay block at the bottom of the loop. 2. ProcessListingsAsync only evaluated bargain alerts inside the `hasMarketChanged || hasMinimumPriceChanged` branch, so the first-scan-of-an-item path (CreateListingsAsync) and the no-listings-returned early-return path skipped evaluation. Result: bargains present on the first scan never opened, and alerts whose listings disappeared entirely never expired. Now evaluates on every reachable path. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com> --- .../Services/DatabaseService.cs | 24 +++++-- .../Processors/QueueProcessorBase.cs | 69 ++++++++++--------- 2 files changed, 55 insertions(+), 38 deletions(-) diff --git a/api/TornTools.Application/Services/DatabaseService.cs b/api/TornTools.Application/Services/DatabaseService.cs index 3c6def6..88b1e6e 100644 --- a/api/TornTools.Application/Services/DatabaseService.cs +++ b/api/TornTools.Application/Services/DatabaseService.cs @@ -223,8 +223,16 @@ public async Task ProcessListingsAsync(Source source, int itemId, List<ListingDt .OrderBy(l => l.Price) .Take(QueryConstants.NumberOfListingsToStorePerItem)]; + // Empty result — likely transient (API hiccup, item gone). We + // intentionally don't wipe the cached listings, but we still + // need to evaluate so an active alert whose listing has truly + // disappeared gets expired. 
EvaluateAsync handles the empty + // case correctly (no qualifying listing → expire any active alert). if (newListings.Count == 0) + { + await _bargainAlertService.EvaluateAsync(source, itemId, newListings, stoppingToken); return; + } var previousListings = (await GetListingsBySourceAndItemIdAsync(source, itemId, stoppingToken)) .OrderBy(l => l.Price) @@ -234,6 +242,9 @@ public async Task ProcessListingsAsync(Source source, int itemId, List<ListingDt if (previousListings.Count == 0) { await CreateListingsAsync(newListings, stoppingToken); + // First-snapshot path — must evaluate so a bargain present on + // the very first scan of an item doesn't get missed. + await _bargainAlertService.EvaluateAsync(source, itemId, newListings, stoppingToken); return; } @@ -277,17 +288,18 @@ await CreateItemChangeLogAsync(new ItemChangeLogDto }, stoppingToken); await ReplaceListingsAsync(source, itemId, newListings, stoppingToken); - - // Detection runs after persistence: a fresh scan that flips an - // item between "qualifies as a bargain" and "doesn't" should be - // reflected in bargain_alerts. Service is source-scoped (Torn - // market only in v1) and idempotent. - await _bargainAlertService.EvaluateAsync(source, itemId, newListings, stoppingToken); } else { await TouchListingsTimestampAsync(source, itemId, DateTimeOffset.UtcNow, stoppingToken); } + + // Bargain evaluation runs on every reachable code path — + // first-snapshot, change-detected, and unchanged-listings — + // because alert state depends on the latest listings, not on + // whether persistence touched anything. Idempotent and + // source-scoped; no-op for non-Torn sources. 
+ await _bargainAlertService.EvaluateAsync(source, itemId, newListings, stoppingToken); } public Task<IEnumerable<ProfitableListingDto>> GetProfitableListingsAsync(CancellationToken stoppingToken) diff --git a/api/TornTools.Cron/Processors/QueueProcessorBase.cs b/api/TornTools.Cron/Processors/QueueProcessorBase.cs index 08c988e..06212a1 100644 --- a/api/TornTools.Cron/Processors/QueueProcessorBase.cs +++ b/api/TornTools.Cron/Processors/QueueProcessorBase.cs @@ -150,51 +150,56 @@ private async Task RunWorkerAsync(int workerId, CancellationToken stoppingToken) _logger.LogError(ex, "[Worker {WorkerId}] {Processor} unhandled exception processing {QueueItemId}. Marking for retry.", workerId, GetType().Name, itemId); } - // Priority work isn't a queue row — skip the queue-table mutations. + // Priority work isn't a queue row — skip the queue-table + // mutations, but still fall through to the rate-limit delay + // at the bottom of the loop. Skipping the delay here would + // let the alternating snipe-loop double the per-worker call + // rate vs the configured budget. if (isPriorityWork) { if (!success) { _logger.LogDebug("[Worker {WorkerId}] {Processor} priority call returned no success. 
Will retry on the next priority tick.", workerId, GetType().Name); } - continue; - } - - // Increment attempt count - try - { - queueItem = await databaseService.IncrementQueueItemAttempts(itemId, stoppingToken); - } - catch (Exception ex) - { - _logger.LogError(ex, "[Worker {WorkerId}] {Processor} unhandled exception incrementing attempts for {QueueItemId}.", workerId, GetType().Name, itemId); - continue; } - - // Handle failure - if (!success) + else { - if (string.Equals(queueItem.ItemStatus, nameof(QueueStatus.Failed))) + // Increment attempt count + try { - _logger.LogWarning("[Worker {WorkerId}] {QueueItemId} failed after {Attempts} attempts.", - workerId, queueItem.Id, queueItem.Attempts); + queueItem = await databaseService.IncrementQueueItemAttempts(itemId, stoppingToken); } - else + catch (Exception ex) { - _logger.LogWarning("[Worker {WorkerId}] {QueueItemId} failed. Will retry at {NextAttemptAt}.", - workerId, queueItem.Id, queueItem.NextAttemptAt); + _logger.LogError(ex, "[Worker {WorkerId}] {Processor} unhandled exception incrementing attempts for {QueueItemId}.", workerId, GetType().Name, itemId); + continue; } - continue; - } - // Remove completed item - try - { - await databaseService.RemoveQueueItemAsync(itemId, stoppingToken); - } - catch (Exception ex) - { - _logger.LogWarning(ex, "[Worker {WorkerId}] Removal of {QueueItemId} failed.", workerId, itemId); + // Handle failure + if (!success) + { + if (string.Equals(queueItem.ItemStatus, nameof(QueueStatus.Failed))) + { + _logger.LogWarning("[Worker {WorkerId}] {QueueItemId} failed after {Attempts} attempts.", + workerId, queueItem.Id, queueItem.Attempts); + } + else + { + _logger.LogWarning("[Worker {WorkerId}] {QueueItemId} failed. 
Will retry at {NextAttemptAt}.", + workerId, queueItem.Id, queueItem.NextAttemptAt); + } + continue; + } + + // Remove completed item + try + { + await databaseService.RemoveQueueItemAsync(itemId, stoppingToken); + } + catch (Exception ex) + { + _logger.LogWarning(ex, "[Worker {WorkerId}] Removal of {QueueItemId} failed.", workerId, itemId); + } } } catch (OperationCanceledException) when (stoppingToken.IsCancellationRequested)