…kens

Adds CRISP_TOKEN_ID support using a deterministic SHA-256 hash of the user's ID. This ensures the same user always resumes the same Crisp conversation, even if cookies are cleared or they switch devices.

Previously, Crisp relied solely on browser cookies for session identity. When cookies were lost (cleared, incognito, iframe cookie partitioning in Safari/Firefox), a new anonymous session was created — leading to duplicate conversations for the same user.

Changes:
- New hook: useCrispTokenId — generates stable token from SHA-256(userId)
- crisp-proxy/page.tsx — sets CRISP_TOKEN_ID before Crisp script loads
- useCrispProxyUrl — passes token as URL param to proxy page
- SupportDrawer — wires up the token hook
- crisp.ts — clears CRISP_TOKEN_ID on logout/session reset
- global.d.ts — adds CRISP_TOKEN_ID and CRISP_WEBSITE_ID to Window type

Ref: https://docs.crisp.chat/guides/chatbox-sdks/web-sdk/session-continuity/
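The deterministic-token idea described above can be sketched as follows. This is a Node sketch using `node:crypto`; the real hook presumably runs in the browser (where Web Crypto's `crypto.subtle.digest` would be the equivalent), and the function name simply mirrors the commit description:

```typescript
import { createHash } from 'node:crypto'

// Derive a stable Crisp session token from a user ID: the same user always
// hashes to the same token, so Crisp can resume the same conversation even
// after cookies are cleared or the user switches devices.
function generateCrispToken(userId: string): string {
    return createHash('sha256').update(userId).digest('hex')
}
```

Because the token is a pure function of the user ID, no token storage is needed on the client: any device that knows the user ID derives the same value.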
- Move token setting from useEffect into inline <Script> tag so it's guaranteed to be set before Crisp's l.js loads
- Cache generated tokens in memory to prevent undefined→resolved state change that caused iframe reloads
- Use proper Window typing instead of (window as any) casts
- Revert first useEffect deps back to [] (CRISP_RUNTIME_CONFIG is static)
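The in-memory cache mentioned above, which avoids the undefined→resolved state change that forced iframe reloads, can be sketched as a module-level map that always hands back the same promise per user. Names here are illustrative, not the actual implementation:

```typescript
// Module-level cache: one token promise per user ID. Handing back the same
// promise object on every call means repeated renders never flip between
// undefined and a resolved value once generation has started, so downstream
// consumers (like the Crisp iframe URL) stay referentially stable.
const tokenCache = new Map<string, Promise<string>>()

// Stand-in for the real SHA-256 derivation described in the commit.
async function computeToken(userId: string): Promise<string> {
    return `token-for-${userId}`
}

function getCrispToken(userId: string): Promise<string> {
    let cached = tokenCache.get(userId)
    if (!cached) {
        cached = computeToken(userId)
        tokenCache.set(userId, cached)
    }
    return cached
}
```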
…k banners
Previously the QR payment flow showed "You're getting 10% cashback!" based on
the perk campaign's discountPercentage. But the actual amount is capped by the
user's points balance (dynamicCapFormula), so users often received far less than
10% — creating confusion and support tickets.
Now we show the actual dollar amount (amountSponsored) which already accounts
for all caps. Users see "Peanut's got you! $0.50 back" instead of a misleading
percentage.
Changes:
- Pre-claim banner: shows "$X.XX back" instead of "X% cashback"
- Post-claim banner: shows "$X.XX back" with invite CTA
- 100%+ perks still get special messaging ("We paid for this bill!")
- Fallback copy when amount isn't available yet
…ounts consistently

Remove unused percentage variable from pre-claim banner and align full-coverage detection logic between pre-claim and post-claim banners. Also update TransactionDetailsReceipt to use dollar-amount messaging instead of percentage-based, keeping perk messaging consistent across the app.
Small amounts (<$0.50) get factual tone + invite nudge instead of celebratory "Peanut's got you!" framing that feels patronizing for pocket change. Large amounts ($5+) get "your points are paying off" to reinforce the gamification loop.
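The tone tiers described above might be selected roughly like this; the thresholds come from the text, while the exact copy strings are illustrative:

```typescript
// Pick banner copy by sponsored amount: factual tone for pocket change
// (<$0.50), celebratory mid-range, and gamification framing for $5+.
function perkBannerCopy(amountSponsored: number): string {
    const dollars = `$${amountSponsored.toFixed(2)}`
    if (amountSponsored < 0.5) {
        return `${dollars} back. Invite a friend to earn more!`
    }
    if (amountSponsored >= 5) {
        return `${dollars} back! Your points are paying off!`
    }
    return `Peanut's got you! ${dollars} back`
}
```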
Adds session_merge: true to CRISP_RUNTIME_CONFIG. This tells Crisp to automatically merge messages from old cookie-based sessions into the new token-based session when a user first opens chat after deploy. Without this, existing users who still have Crisp cookies would get a fresh empty session. With session_merge, their conversation history carries over seamlessly. Ref: https://docs.crisp.chat/guides/chatbox-sdks/web-sdk/session-continuity/#how-to-merge-messages-from-anonymous-sessions-to-token-sessions
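As a sketch, the runtime config change amounts to one extra flag. Only `session_merge` is confirmed by the text; the surrounding object shape is an assumption:

```typescript
// CRISP_RUNTIME_CONFIG: session_merge tells Crisp to fold messages from the
// old cookie-based anonymous session into the new token-based session the
// first time the user opens chat after this deploy.
const CRISP_RUNTIME_CONFIG = {
    session_merge: true,
}
```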
Support agents can now click through to PostHog person page directly from Crisp sidebar to view session recordings and user events. Uses the same userId that PostHog identifies users with.
chore: prod release 134 fe
…0317-174844 Update content submodule (35 commits)
Two fixes: 1. Map backend sponsoredUsd → frontend amountSponsored on initial payment response. Previously amountSponsored was always undefined at pre-claim time due to field name mismatch, so the banner always hit the fallback copy. 2. Auto-claim perks under $0.50 — skip the hold-to-claim ceremony for small amounts. Backend already claims at payment time, so this is purely a UI shortcut that avoids making users hold a button for pocket change.
These perks are auto-claimed now and skip the pre-claim banner entirely, so the small-amount messaging branch was unreachable.
Prevents null from passing through — null < 0.5 is true in JS, which would cause unintended auto-claims.
fix: show dollar amounts instead of misleading percentages in cashback banners
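The `null < 0.5` quirk noted above is easy to demonstrate: relational operators coerce `null` to `0`, so an unguarded threshold check would auto-claim whenever the amount is missing. A minimal guard (function name illustrative):

```typescript
// null < 0.5 is true because null coerces to 0 in relational comparisons,
// so a missing amount must be rejected with a type check before comparing
// against the auto-claim threshold.
function shouldAutoClaim(amountSponsored: unknown): boolean {
    return typeof amountSponsored === 'number' && amountSponsored < 0.5
}
```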
…-sessions fix: prevent duplicate Crisp conversations with session continuity tokens
Reduces dependency on Infura/Alchemy (both near quota limits) by adding Chainstack nodes as the primary provider for three more chains. Chainstack was already primary for Ethereum and Arbitrum.
feat: add Chainstack RPCs for Polygon, Base, and BSC
…ok cooldown

- ZeroDev: Remove Polygon check (not used in prod), use eth_supportedEntryPoints instead of eth_chainId for bundler (mandatory ERC-4337 method)
- RPC: Critical vs non-critical chain distinction (Polygon down = degraded, not unhealthy). Added public fallback RPCs for Polygon. Parallel provider testing.
- Main orchestrator: 30-min Discord webhook cooldown to prevent notification spam. Use plain fetch instead of fetchWithSentry to stop health check errors polluting Sentry.
- Backend: Use /healthz endpoint instead of /users/username/hugo
Bundler and paymaster fetch calls had no timeout, unlike other health routes (backend 8s, RPC 5s). A hanging ZeroDev endpoint could stall the entire health check until the Vercel function timeout.
…-pick fix: cherry-pick health check improvements to main
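The hanging-fetch problem described above is typically bounded either with `fetch(url, { signal: AbortSignal.timeout(5000) })` or, for arbitrary promises, with a race-based wrapper like this sketch:

```typescript
// Bound any promise to a deadline. The timer is cleared on either outcome
// so a resolved probe doesn't leave a stray timeout keeping the process alive.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
    let timer: ReturnType<typeof setTimeout> | undefined
    const deadline = new Promise<never>((_, reject) => {
        timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
    })
    return Promise.race([promise, deadline]).finally(() => clearTimeout(timer))
}
```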
feat: add /presskit redirect to Notion press kit
The validate-links script had no knowledge of /stories/ routes. Since peanut-content PR #16 added story pages with internal links to /stories/{slug}, every content PR now fails CI. Adds /{locale}/stories and /{locale}/stories/{slug} patterns, derived from content/stories/ directory like other route types.
fix: add stories routes to link validator
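The route derivation can be sketched as a pure function over locales and slugs; the real script presumably reads slugs from `content/stories/`, and the names here are illustrative:

```typescript
// Build the set of valid /stories URLs for every locale: the index page plus
// one detail page per story slug, mirroring how other route types are derived.
function buildStoryPaths(locales: string[], storySlugs: string[]): Set<string> {
    const paths = new Set<string>()
    for (const locale of locales) {
        paths.add(`/${locale}/stories`)
        for (const slug of storySlugs) {
            paths.add(`/${locale}/stories/${slug}`)
        }
    }
    return paths
}
```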
…ks-app-routes fix: add app routes to link validator allowlist
Walkthrough

The PR updates redirect rules for press kit documentation, enhances link validation to include app routes and story collection paths, refactors QR payment perk messaging from percentage-based to dollar-amount calculations, upgrades health check endpoints to use plain fetch with backend-specific probes, improves RPC health monitoring with parallelization and chain criticality assessment, and adds Crisp session token support with updated user data integration linking.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ❌ 3 failed (1 warning, 2 inconclusive). Important: merge conflicts detected (Beta).
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/app/api/health/backend/route.ts (1)
Lines 24-35: ⚠️ Potential issue | 🟠 Major — This config guard is dead with the defaulted constant.
`PEANUT_API_URL` is already defaulted in `src/constants/general.consts.ts`, so this branch will never catch a missing env here. In an unconfigured staging/dev deployment, this route will probe the production backend instead of surfacing a local config issue. If this endpoint is meant to validate the current deployment, read the raw env vars here rather than the defaulted constant.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/api/health/backend/route.ts` around lines 24 - 35, The health route currently checks the defaulted constant PEANUT_API_URL (so the guard never triggers); change the check to read the raw environment variable instead (e.g. use process.env.PEANUT_API_URL) inside the backend health handler in route.ts so an absent/unset env is detected; update the conditional that returns the 500 JSON (the block referencing PEANUT_API_URL) to test the raw env var and keep the same response shape and NO_CACHE_HEADERS.
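The suggested fix, reading the raw env var instead of the defaulted constant, could look like this sketch; the return shape here is an assumption, not the route's actual response format:

```typescript
// Health probes should validate the deployment's own configuration: a
// defaulted constant can silently point an unconfigured environment at
// production, so check the raw env var before probing anything.
function resolveBackendUrl(): { url?: string; error?: string } {
    const raw = process.env.PEANUT_API_URL
    if (!raw) {
        return { error: 'PEANUT_API_URL is not configured for this deployment' }
    }
    return { url: raw }
}
```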
🧹 Nitpick comments (2)
scripts/validate-links.ts (1)
Lines 139-145: Consider whether the stories index page should be registered unconditionally.

The current logic only registers `/{locale}/stories` when `storySlugs.length > 0`. If content links to the stories listing page before any stories are added, validation will fail. This differs from the help pages pattern (lines 127-128) where the index is added unconditionally.

If intentional (i.e., the stories page shouldn't exist without stories), this is fine as-is. Otherwise, you may want to move `` paths.add(`/${locale}/stories`) `` outside the conditional.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/validate-links.ts` around lines 139 - 145, The code only adds the stories index path when storySlugs.length > 0, which can cause validation failures if other content links to /{locale}/stories before any stories exist; move the paths.add(`/${locale}/stories`) call out of the conditional so the index is always registered (leave the for loop for adding `/${locale}/stories/${slug}` inside the existing if block that iterates over storySlugs), referencing the existing symbols storySlugs, locale, paths.add to locate the change.

src/components/TransactionDetails/TransactionDetailsReceipt.tsx (1)
Lines 617-623: Add type guard for consistency with qr-pay page.

The check on line 618 (`amount !== undefined && amount !== null`) doesn't verify that `amount` is actually a number before calling `.toFixed(2)`. The companion qr-pay page uses `typeof amountSponsored === 'number'` (lines 1260, 1290) for the same purpose.

For defensive coding and consistency:

♻️ Proposed fix

```diff
-        // Always show actual dollar amount — never percentage (misleading due to dynamic caps)
-        if (amount !== undefined && amount !== null) {
+        // Always show actual dollar amount — never percentage (misleading due to dynamic caps)
+        if (typeof amount === 'number') {
             if (perk.isCapped && perk.campaignCapUsd) {
                 return `$${amount.toFixed(2)} cashback — campaign limit reached!`
             }
             return `You received $${amount.toFixed(2)} cashback!`
         }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/TransactionDetails/TransactionDetailsReceipt.tsx` around lines 617 - 623, The amount null/undefined check in TransactionDetailsReceipt (the block referencing amount, perk.isCapped and perk.campaignCapUsd) must also assert the type before calling amount.toFixed(2); replace the existing guard (amount !== undefined && amount !== null) with a numeric type check (e.g., typeof amount === 'number') so you only call toFixed on a number, mirroring the qr-pay page pattern (see amountSponsored checks); keep the existing perk.isCapped and perk.campaignCapUsd logic intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/api/health/route.ts`:
- Around line 36-45: The cooldown timestamp is being set before the Discord
webhook POST and without verifying the response; change the logic in the health
route so that lastNotificationTime is only updated after a successful POST
(check the fetch/axios response status is 2xx and no network error), and if the
request fails (non-2xx or throws) do not update lastNotificationTime and
log/handle the failure instead; update both places where lastNotificationTime is
set (the current cooldown check block and the second occurrence around the
webhook POST) to follow this pattern.
- Around line 228-230: The handler currently calls
sendDiscordNotification(responseData) and returns immediately when overallStatus
=== 'unhealthy', which can let the notification be terminated; update the branch
so you either await sendDiscordNotification(responseData) before returning the
NextResponse.json(responseData, { status: 500, headers: NO_CACHE_HEADERS }) or
schedule it with Next.js's after(() =>
sendDiscordNotification(responseData).catch(console.error)) so the alert is
reliably sent; ensure you keep the existing error handling (catch) if using
after() or add try/catch around the awaited call to log failures.
In `@src/app/api/health/rpc/route.ts`:
- Around line 119-130: The aggregate top-level health currently ignores chains
whose overallStatus === 'degraded', so update the aggregation logic to consider
a chain with zero healthy providers as failing: when computing the global status
from per-chain results (look at chainResults[chain.name], summary.healthy and
overallStatus), treat any chain with summary.healthy === 0 (or
chainResults[chain.name].overallStatus === 'unhealthy' after recalculation) as
making the global status 'unhealthy'; otherwise, if any chain has degraded
providers (summary.healthy > 0 but overallStatus === 'degraded' or
summary.degraded > 0), make the global status 'degraded'; only set global
'healthy' if all chains have summary.healthy > 0 and no degraded counts—apply
the same fix in both aggregation sites (the block around where overallStatus is
set and the similar block noted at lines 142-161).
- Around line 45-71: The route currently treats "no providers configured" by
relying on Infura/Alchemy env vars and returns 500 before the provider-probing
block; instead, change the check to inspect the actual rpcUrls mapping used
below: compute whether providers exist by checking rpcUrls for each chain in
chainsToTest (e.g., verify rpcUrls[chain.id] && rpcUrls[chain.id].length > 0)
rather than env vars, and only short-circuit with a 500 when all chains have
empty rpcUrls; this ensures the probing logic that iterates chainsToTest ->
chainRpcs (and assigns providerName, writes into chainResults, and uses
CRITICAL_CHAINS) runs in keyless environments that rely on Chainstack/publicnode
entries.
In `@src/app/api/health/zerodev/route.ts`:
- Around line 46-59: The health check runs the bundler and paymaster probes
sequentially with individual AbortSignal.timeout(5000) calls, which can exceed
the parent's 8s budget; change the logic to run the probes in parallel (use
Promise.all or Promise.allSettled) so both fetches for the bundler (BUNDLER_URL
call with method 'eth_supportedEntryPoints') and the paymaster probe execute
concurrently, or alternatively reduce each probe's timeout to a value that
guarantees their sum fits within the parent's remaining budget; ensure each
fetch still uses an AbortSignal and that error/timeout handling for the existing
bundlerResponse and paymasterResponse handling code paths is preserved.
- Around line 65-72: The current health check only inspects bundlerResponse.ok
and sets results.arbitrum.bundler to healthy; instead, after parsing bundlerData
(from bundlerResponse.json()), verify that bundlerData.error is absent before
marking healthy—if bundlerData.error exists, set results.arbitrum.bundler.status
to unhealthy or degraded, include bundlerData.error (and bundlerResponse.status
/ bundlerResponseTime) in the entry so the JSON-RPC error is recorded, and only
set status to 'healthy' and entryPoints from bundlerData.result when
bundlerData.error is falsy.
In `@src/constants/general.consts.ts`:
- Line 41: The hardcoded Chainstack RPC URLs in src/constants/general.consts.ts
should be moved to environment variables: replace each embedded Chainstack URL
string in the RPC endpoints array/constant with
process.env.NEXT_PUBLIC_CHAINSTACK_<NETWORK>_RPC (or similar names you choose)
and reference those env vars where the constant (the RPC endpoints array defined
in this file) is used; ensure you validate presence (throw or fallback) at
startup and document the new env var names, add provider-side restrictions
(origin/IP allowlisting) and rotate the exposed keys.
In `@src/hooks/useCrispTokenId.ts`:
- Around line 35-57: The hook useCrispTokenId currently allows a resolved
generateCrispToken(userId) promise to overwrite state after userId changes;
capture the current userId at effect start (e.g. const activeUser = userId) and
before calling tokenCache.set(...) or setTokenId(...) verify the still-active
user matches activeUser, or alternatively store tokens in tokenCache keyed to
their owner (value = { owner: userId, token }) and ignore stale completions if
owner !== current userId; update the then/catch paths in useCrispTokenId to
perform this check so late resolutions cannot set the token for a different
user.
- Around line 4-24: The current generateCrispToken function (and
CRISP_TOKEN_SALT) creates a deterministic client-side token using SHA-256, which
is insecure per Crisp guidance; instead remove or disable client-side generation
and fetch a backend-generated random UUID v4 (persisted per user) from your API
when initializing Crisp. Update useCrispTokenId (or any caller of
generateCrispToken) to call your server endpoint to retrieve the stored
CRISP_TOKEN_ID for the user (creating and storing one server-side if missing),
and ensure the frontend simply uses that returned UUID v4 for Crisp
initialization rather than deriving it locally.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ef4d3764-0122-46a4-8bc7-23b80c420f76
📒 Files selected for processing (18)
- redirects.json
- scripts/validate-links.ts
- src/app/(mobile-ui)/qr-pay/page.tsx
- src/app/api/health/backend/route.ts
- src/app/api/health/route.ts
- src/app/api/health/rpc/route.ts
- src/app/api/health/zerodev/route.ts
- src/app/crisp-proxy/page.tsx
- src/components/Global/SupportDrawer/index.tsx
- src/components/TransactionDetails/TransactionDetailsReceipt.tsx
- src/constants/general.consts.ts
- src/constants/support.ts
- src/content
- src/hooks/useCrispProxyUrl.ts
- src/hooks/useCrispTokenId.ts
- src/hooks/useCrispUserData.ts
- src/types/global.d.ts
- src/utils/crisp.ts
```ts
// Cooldown check — don't spam Discord
const now = Date.now()
if (now - lastNotificationTime < NOTIFICATION_COOLDOWN_MS) {
    console.log(
        `Discord notification skipped (cooldown). Last sent ${Math.round((now - lastNotificationTime) / 1000)}s ago.`
    )
    return
}
lastNotificationTime = now
```
Only consume the alert cooldown after Discord accepts the request.
lastNotificationTime is updated before the POST, and the webhook response is never checked. A transient 429/5xx or network failure will suppress every alert for the next 30 minutes even though nothing was delivered.
Minimal fix

```diff
-    lastNotificationTime = now
-
-    await fetch(webhookUrl, {
+    const response = await fetch(webhookUrl, {
         method: 'POST',
         headers: { 'Content-Type': 'application/json' },
         body: JSON.stringify({ content: message }),
     })
+    if (!response.ok) {
+        throw new Error(`Discord webhook returned ${response.status}`)
+    }
+    lastNotificationTime = now
```

Also applies to: 84-90
```diff
     if (overallStatus === 'unhealthy') {
         sendDiscordNotification(responseData).catch(console.error)
-        return NextResponse.json(responseData, {
-            status: 500,
-            headers: {
-                'Cache-Control': 'no-store, no-cache, must-revalidate, proxy-revalidate',
-                Pragma: 'no-cache',
-                Expires: '0',
-                'Surrogate-Control': 'no-store',
-            },
-        })
+        return NextResponse.json(responseData, { status: 500, headers: NO_CACHE_HEADERS })
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's check the file exists and read the relevant lines
cat -n src/app/api/health/route.ts | head -250 | tail -40
```

Repository: peanutprotocol/peanut-ui — Length of output: 1827

🏁 Script executed:

```shell
# Check for package.json to understand Next.js version
cat package.json | grep -A 5 '"next"'
```

Repository: peanutprotocol/peanut-ui — Length of output: 260

🏁 Script executed:

```shell
# Search for usage of after() or unstable_after() in the codebase
rg -i "after\(|unstable_after" --type ts --type tsx
```

Repository: peanutprotocol/peanut-ui — Length of output: 95

🏁 Script executed:

```shell
# Check if sendDiscordNotification is defined and how it's implemented
rg "sendDiscordNotification" -B 3 -A 10
```

Repository: peanutprotocol/peanut-ui — Length of output: 1834

🏁 Script executed:

```shell
# Search for after() without file type restrictions
rg "after\(" --max-count 10
```

Repository: peanutprotocol/peanut-ui — Length of output: 50

🏁 Script executed:

```shell
# Get more context around the problematic code
cat -n src/app/api/health/route.ts | head -30
```

Repository: peanutprotocol/peanut-ui — Length of output: 1370

🏁 Script executed:

```shell
# Check the full sendDiscordNotification function
cat -n src/app/api/health/route.ts | sed -n '1,100p'
```

Repository: peanutprotocol/peanut-ui — Length of output: 4683
Await or use after() for Discord notification to ensure incident alerts are sent.
The route returns immediately after calling sendDiscordNotification() without awaiting it. In this serverless function (force-dynamic), async work after the response is sent is best-effort and can be terminated before completing, causing incident alerts to be dropped entirely. Either await the notification before returning, or use after() (available in Next.js 16+) to schedule it after the response.
```diff
     // Test all chains in parallel for faster response
     await Promise.all(
         chainsToTest.map(async (chain) => {
             const chainRpcs = rpcUrls[chain.id] || []
             chainResults[chain.name] = {
                 chainId: chain.id,
                 critical: CRITICAL_CHAINS.has(chain.id),
                 providers: {},
                 overallStatus: 'unknown',
             }

-            for (let i = 0; i < chainRpcs.length; i++) {
-                const rpcUrl = chainRpcs[i]
-                const providerName = rpcUrl.includes('infura')
-                    ? 'infura'
-                    : rpcUrl.includes('alchemy')
-                      ? 'alchemy'
-                      : rpcUrl.includes('bnbchain')
-                        ? 'binance'
-                        : `provider_${i}`
-
-                const rpcTestStart = Date.now()
-
-                try {
-                    const response = await fetchWithSentry(rpcUrl, {
-                        method: 'POST',
-                        headers: {
-                            'Content-Type': 'application/json',
-                        },
-                        body: JSON.stringify({
-                            jsonrpc: '2.0',
-                            method: 'eth_blockNumber',
-                            params: [],
-                            id: 1,
-                        }),
+            // Test all providers for this chain in parallel
+            await Promise.all(
+                chainRpcs.map(async (rpcUrl, i) => {
+                    const providerName = rpcUrl.includes('infura')
+                        ? 'infura'
+                        : rpcUrl.includes('alchemy')
+                          ? 'alchemy'
+                          : rpcUrl.includes('chainstack')
+                            ? 'chainstack'
+                            : rpcUrl.includes('publicnode')
+                              ? 'publicnode'
+                              : rpcUrl.includes('ankr')
+                                ? 'ankr'
+                                : rpcUrl.includes('bnbchain')
+                                  ? 'binance'
+                                  : `provider_${i}`
```
The new fallback-provider logic is unreachable in keyless environments.
This route still returns 500 before reaching this block when both Infura/Alchemy env vars are absent, even though src/constants/general.consts.ts now provides Chainstack/public URLs for the chains under test. That makes the added provider probing here useless in environments that intentionally rely on those non-keyed RPCs. Please base the "no providers configured" path on the actual rpcUrls entries instead of only those two env vars.
```ts
// Determine chain overall status
const chainProviders = Object.values(chainResults[chain.name].providers) as any[]
const healthyCount = chainProviders.filter((p) => p.status === 'healthy').length
const degradedCount = chainProviders.filter((p) => p.status === 'degraded').length
const unhealthyCount = chainProviders.length - healthyCount - degradedCount

if (healthyCount > 0) {
    chainResults[chain.name].overallStatus = 'healthy'
} else if (degradedCount > 0) {
    chainResults[chain.name].overallStatus = 'degraded'
} else {
    chainResults[chain.name].overallStatus = 'unhealthy'
```
degraded chains are skipped by the final verdict.
A chain with zero healthy providers but one or more degraded providers is marked overallStatus: 'degraded' above, then ignored here because the aggregate only branches on 'unhealthy'. That means Ethereum/Arbitrum can end up with no healthy provider and still leave the endpoint green, and Polygon can fail into degraded without bubbling up to the top-level status. Base the aggregate on each chain's healthy count (or summary.healthy === 0) rather than only on 'unhealthy'.
Also applies to: 142-161
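The suggested aggregation (global unhealthy if any chain has zero healthy providers, global degraded if any chain is partially degraded) could look like this sketch; the types are simplified from the route's actual result shape:

```typescript
type ChainSummary = { healthy: number; degraded: number; total: number }

// Global verdict from per-chain provider counts: a chain with no healthy
// provider makes the whole endpoint unhealthy, while a degraded provider on
// an otherwise-working chain only degrades the global status.
function aggregateStatus(chains: ChainSummary[]): 'healthy' | 'degraded' | 'unhealthy' {
    if (chains.some((c) => c.healthy === 0)) return 'unhealthy'
    if (chains.some((c) => c.degraded > 0)) return 'degraded'
    return 'healthy'
}
```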
```ts
// Test Arbitrum bundler with eth_supportedEntryPoints (mandatory ERC-4337 method)
const bundlerTestStart = Date.now()
try {
    const bundlerResponse = await fetch(BUNDLER_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        signal: AbortSignal.timeout(5000),
        body: JSON.stringify({
            jsonrpc: '2.0',
            method: 'eth_supportedEntryPoints',
            params: [],
            id: 1,
        }),
    })
```
Sequential 5s probes can overrun the caller's 8s budget.
The bundler and paymaster checks each allow up to 5s, but they run one after the other. /api/health aborts each sub-check after 8s in src/app/api/health/route.ts Line 108, so two slow-but-successful upstream responses can still make the aggregate endpoint report zerodev as unhealthy. Run these probes in parallel or make the per-probe timeout fit inside the parent's budget.
Also applies to: 96-109
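A sketch of the parallel variant using `Promise.allSettled`, so one slow probe no longer serializes behind the other; the probe functions here are stand-ins for the real bundler/paymaster fetches:

```typescript
// Run both probes concurrently: allSettled ensures one failure doesn't hide
// the other probe's result, and total wall time is max(probe), not sum(probe).
async function probeBoth(
    probeBundler: () => Promise<unknown>,
    probePaymaster: () => Promise<unknown>
): Promise<{ bundler: 'healthy' | 'unhealthy'; paymaster: 'healthy' | 'unhealthy' }> {
    const [bundler, paymaster] = await Promise.allSettled([probeBundler(), probePaymaster()])
    return {
        bundler: bundler.status === 'fulfilled' ? 'healthy' : 'unhealthy',
        paymaster: paymaster.status === 'fulfilled' ? 'healthy' : 'unhealthy',
    }
}
```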
```ts
if (bundlerResponse.ok) {
    const bundlerData = await bundlerResponse.json()
    results.arbitrum.bundler = {
        status: 'healthy',
        responseTime: bundlerResponseTime,
        httpStatus: bundlerResponse.status,
        entryPoints: bundlerData?.result,
    }
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n src/app/api/health/zerodev/route.ts
```

Repository: peanutprotocol/peanut-ui — Length of output: 8299
Validate JSON-RPC response for errors before marking bundler healthy.
When the bundler returns HTTP 200 with a JSON-RPC error body (e.g., {"error": {"code": -32600, "message": "Invalid Request"}, "id": 1}), the code currently marks it as healthy because only bundlerResponse.ok is checked. This bypasses JSON-RPC-level validation and creates false positives for auth, config, or method failures. Check for bundlerData.error and treat it as unhealthy or degraded accordingly.
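The suggested check can be sketched as a small classifier over the parsed body (shape per JSON-RPC 2.0; the status names match the route's healthy/unhealthy convention, and this is an illustration rather than the route's actual code):

```typescript
type JsonRpcBody = { result?: unknown; error?: { code: number; message: string } }

// HTTP 200 alone doesn't mean the bundler works: JSON-RPC reports auth,
// config, and method failures as an `error` member inside a 200 body, so
// the error field must be inspected before trusting response.ok.
function classifyBundler(httpOk: boolean, body: JsonRpcBody): 'healthy' | 'unhealthy' {
    if (!httpOk) return 'unhealthy'
    if (body.error) return 'unhealthy'
    return 'healthy'
}
```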
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `src/app/api/health/zerodev/route.ts` around lines 65-72: the health check only inspects bundlerResponse.ok before setting results.arbitrum.bundler to healthy. After parsing bundlerData from bundlerResponse.json(), verify that bundlerData.error is absent before marking healthy; if bundlerData.error exists, set results.arbitrum.bundler.status to unhealthy or degraded and include bundlerData.error (plus bundlerResponse.status and bundlerResponseTime) in the entry so the JSON-RPC error is recorded. Only set status to 'healthy' and entryPoints from bundlerData.result when bundlerData.error is falsy.
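A minimal sketch of the suggested check (the helper name and shape are illustrative, not code from the repo):

```typescript
// Classify a bundler reply from both the HTTP status and the JSON-RPC body:
// HTTP 200 with a body-level `error` object must not count as healthy.
type BundlerHealth = { status: 'healthy' | 'unhealthy'; entryPoints?: unknown; error?: unknown }

function classifyBundlerResponse(httpOk: boolean, body: { result?: unknown; error?: unknown }): BundlerHealth {
    if (!httpOk) return { status: 'unhealthy' }
    if (body.error) return { status: 'unhealthy', error: body.error } // JSON-RPC-level failure
    return { status: 'healthy', entryPoints: body.result }
}

// HTTP 200 but JSON-RPC error → unhealthy
console.log(classifyBundlerResponse(true, { error: { code: -32600, message: 'Invalid Request' } }).status)
```

The existing route would call something like this after `await bundlerResponse.json()` instead of branching on `bundlerResponse.ok` alone.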
```ts
    'https://sepolia-rollup.arbitrum.io/rpc', // Official Arbitrum Sepolia
].filter(Boolean) as string[],
[polygon.id]: [
    'https://polygon-mainnet.core.chainstack.com/e8d733c7341e28d98e4cf66c61c42aa6', // Chainstack (primary)
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Verify whether project-specific RPC endpoints are hardcoded and how widespread the pattern is.
# 1) Find hardcoded Chainstack-style RPC URLs with path tokens.
rg -nP 'https://[a-z0-9.-]*chainstack\.com/[A-Za-z0-9]+' -g '!**/node_modules/**'
# 2) Check whether Chainstack RPCs are env-managed anywhere in the repo.
rg -nP 'CHAINSTACK|NEXT_PUBLIC_.*RPC|RPC_.*CHAINSTACK' -g '!**/node_modules/**'
# 3) Show the rpcUrls block context for quick review.
rg -n -A40 -B10 'export const rpcUrls' src/constants/general.consts.ts
```

Repository: peanutprotocol/peanut-ui
Move the Chainstack RPC endpoints to environment variables to mitigate quota and abuse risks.
Hardcoded Chainstack endpoints with embedded project identifiers (lines 20, 28, 41, 54, and 60) ship in the client-side constants. Unlike the Infura and Alchemy URLs, which read their keys from process.env.NEXT_PUBLIC_ALCHEMY_API_KEY and similar variables, these identifiers are fixed in the bundle and can be abused to exhaust quota and degrade app reliability.
Migrate to environment-managed values with provider-side restrictions (origin/IP allowlisting, rate limits), and rotate the keys if they are already exposed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `src/constants/general.consts.ts` at line 41: move the hardcoded Chainstack RPC URLs to environment variables. Replace each embedded Chainstack URL string in the RPC endpoints constant with process.env.NEXT_PUBLIC_CHAINSTACK_<NETWORK>_RPC (or similar names), validate presence at startup (throw or fall back), document the new env var names, add provider-side restrictions (origin/IP allowlisting), and rotate the exposed keys.
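One possible shape for the migration (the env var name and fallback URL are assumptions, not existing config in the repo):

```typescript
// Read the Chainstack endpoint from the environment; keep a public fallback
// so a missing variable degrades gracefully instead of shipping an empty list.
function buildRpcList(envUrl: string | undefined, publicFallback: string): string[] {
    return [envUrl, publicFallback].filter((u): u is string => Boolean(u))
}

const polygonRpcs = buildRpcList(
    process.env.NEXT_PUBLIC_CHAINSTACK_POLYGON_RPC, // hypothetical env var name
    'https://polygon-rpc.com' // public fallback endpoint
)
console.log(polygonRpcs)
```

Note that NEXT_PUBLIC_* values are still visible in the client bundle; the real protection comes from provider-side origin/IP allowlisting, with the env var mainly enabling rotation without a code change.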
```ts
const CRISP_TOKEN_SALT = 'peanut-crisp-session-v1'

/**
 * Generates a deterministic Crisp session token from a userId using SHA-256.
 * Formatted as UUID-like string for Crisp compatibility.
 *
 * @see https://docs.crisp.chat/guides/chatbox-sdks/web-sdk/session-continuity/
 */
async function generateCrispToken(userId: string): Promise<string> {
    const data = new TextEncoder().encode(`${CRISP_TOKEN_SALT}:${userId}`)
    const hashBuffer = await crypto.subtle.digest('SHA-256', data)
    const hashArray = Array.from(new Uint8Array(hashBuffer))
    const hashHex = hashArray.map((b) => b.toString(16).padStart(2, '0')).join('')

    return [
        hashHex.slice(0, 8),
        hashHex.slice(8, 12),
        hashHex.slice(12, 16),
        hashHex.slice(16, 20),
        hashHex.slice(20, 32),
    ].join('-')
```
🧩 Analysis chain
🌐 Web query:

> What does Crisp's official Session Continuity documentation recommend for generating CRISP_TOKEN_ID, and which token patterns does it explicitly call unsafe?

💡 Result:

Crisp's Session Continuity docs recommend that CRISP_TOKEN_ID be a backend-generated, database-stored per-user token, and specifically suggest generating it as a secure random UUID v4. [1]
It explicitly calls these token patterns unsafe: auto-incremented IDs (e.g. 1234), hashed emails (e.g. MD5, SHA-256), using the user's email as the token, and using the current time/timestamp as the token. [1]
Generate CRISP_TOKEN_ID on the backend using a secure random UUID v4, not in the browser.
Crisp's official Session Continuity documentation explicitly recommends that CRISP_TOKEN_ID be generated on the backend, stored in your database, and persisted per user. It specifically lists SHA-256-hashed identifiers (including hashed emails and user IDs) as unsafe patterns.
This implementation derives the token deterministically from userId plus a static client-side salt using SHA-256, making it reproducible if the underlying identifier is known. Replace this with a backend-generated random UUID v4 that you fetch from your server.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `src/hooks/useCrispTokenId.ts` around lines 4-24: the generateCrispToken function (and CRISP_TOKEN_SALT) creates a deterministic client-side token via SHA-256, which is unsafe per Crisp's guidance. Remove client-side generation and instead fetch a backend-generated random UUID v4 (persisted per user) from your API when initializing Crisp. Update useCrispTokenId (or any caller of generateCrispToken) to call a server endpoint that returns the stored CRISP_TOKEN_ID for the user (creating and storing one server-side if missing), so the frontend simply uses the returned UUID v4 rather than deriving a token locally.
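Server side, the recommended pattern could look like this sketch (the in-memory Map stands in for a real database table, and the function name is hypothetical):

```typescript
import { randomUUID } from 'node:crypto'

// Mint a secure random UUID v4 once per user and persist it; the client
// fetches this value instead of deriving anything from the userId.
const tokenStore = new Map<string, string>() // placeholder for a DB table

function getOrCreateCrispToken(userId: string): string {
    let token = tokenStore.get(userId)
    if (!token) {
        token = randomUUID() // random, not reproducible from the userId
        tokenStore.set(userId, token)
    }
    return token
}
```

The frontend would then fetch this value from an authenticated route (e.g. a hypothetical /api/crisp-token) and assign it to window.CRISP_TOKEN_ID before the Crisp script loads, keeping the cross-device continuity the PR is after without a guessable token.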
```ts
export function useCrispTokenId(): string | undefined {
    const { userId } = useAuth()
    const [tokenId, setTokenId] = useState<string | undefined>(userId ? tokenCache.get(userId) : undefined)

    useEffect(() => {
        if (!userId) {
            setTokenId(undefined)
            return
        }

        const cached = tokenCache.get(userId)
        if (cached) {
            setTokenId(cached)
            return
        }

        generateCrispToken(userId)
            .then((token) => {
                tokenCache.set(userId, token)
                setTokenId(token)
            })
            .catch(() => setTokenId(undefined))
    }, [userId])
```
Key the cached token to the current userId.
tokenId outlives the user it was derived from, and the pending generateCrispToken(userId) promise is never invalidated. On a fast logout/login or account switch, a late resolve/reject can overwrite the current state with the previous user's token, which then flows into SupportDrawer → useCrispProxyUrl() and can reopen the wrong Crisp session. Store the owner userId alongside the token, or ignore stale completions before calling setTokenId.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `src/hooks/useCrispTokenId.ts` around lines 35-57: the hook currently lets a resolved generateCrispToken(userId) promise overwrite state after userId changes. Capture the current userId at effect start (e.g. const activeUser = userId) and, before calling tokenCache.set(...) or setTokenId(...), verify the still-active user matches activeUser; alternatively, key tokenCache values to their owner (value = { owner: userId, token }) and ignore completions whose owner !== the current userId. Apply this check in both the then and catch paths so late resolutions cannot set a token for a different user.
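The stale-completion guard can be illustrated outside React (names here are hypothetical; inside the hook the equivalent is a cancelled flag flipped in the effect cleanup):

```typescript
// Track which userId the most recent request belongs to and drop any
// completion that resolves after a different user became active.
let activeUserId: string | undefined

async function loadToken(
    userId: string,
    generate: (id: string) => Promise<string>,
    setTokenId: (token?: string) => void
): Promise<void> {
    activeUserId = userId
    try {
        const token = await generate(userId)
        if (activeUserId !== userId) return // stale: user switched mid-flight
        setTokenId(token)
    } catch {
        if (activeUserId === userId) setTokenId(undefined) // guard the catch path too
    }
}
```

A fast logout/login then behaves correctly: the earlier user's slow promise resolves, sees that activeUserId has moved on, and is discarded instead of clobbering the new user's token.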