Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings. Use the following commands to manage reviews:
Walkthrough

The PR shifts E2E from host-based docker-compose orchestration to a containerized Playwright flow and an infra-first model. It removes legacy orchestration modules (DevEnvironmentManager, DockerCompose, DevGhostManager, TinybirdManager), centralizes environment constants, and introduces a mode-driven EnvironmentManager plus a per-worker GhostManager. CI and local scripts were added/updated (infra up/down, Playwright container runner, Tinybird sync, CI job preparers). The e2e Dockerfile and docker/dev-gateway/Caddyfile.build were modified for content/files asset hosting. The CI workflow, artifact collection, and failure-logging steps were adjusted for containerized execution.

🚥 Pre-merge checks: 2 passed, 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main   #26539      +/-   ##
==========================================
+ Coverage   73.12%   73.14%   +0.02%
==========================================
  Files        1528     1529       +1
  Lines      120135   120229      +94
  Branches    14495    14523      +28
==========================================
+ Hits        87845    87941      +96
- Misses      31269    31283      +14
+ Partials     1021     1005      -16
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Force-pushed from 3538b99 to eb4c6d2
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
e2e/helpers/environment/service-availability.ts (1)
Lines 71-85: ⚠️ Potential issue | 🟠 Major: Avoid stale Tinybird availability cache.

Once `tinybirdAvailable` is set, later calls won't re-check Docker. If Tinybird starts after an initial false (or stops after a true), the cache becomes wrong for the rest of the process, which can break infra-up/infra-down flows. Consider adding a refresh/reset mechanism or a TTL, and wiring it into the infra lifecycle.

🔧 Suggested approach (refresh + reset)

```diff
-export async function isTinybirdAvailable(): Promise<boolean> {
-    if (tinybirdAvailable !== null) {
+export async function isTinybirdAvailable(forceRefresh = false): Promise<boolean> {
+    if (!forceRefresh && tinybirdAvailable !== null) {
         return tinybirdAvailable;
     }
     const docker = new Docker();
     tinybirdAvailable = await isServiceAvailable(docker, TINYBIRD.LOCAL_HOST);
     return tinybirdAvailable;
 }
+
+export function resetTinybirdAvailabilityCache(): void {
+    tinybirdAvailable = null;
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/environment/service-availability.ts` around lines 71 - 85, The module-level cache tinybirdAvailable makes isTinybirdAvailable() return stale results; add a refresh mechanism such as a TTL or an explicit reset so callers can force re-checking Docker. Modify isTinybirdAvailable to accept an optional forceRefresh boolean (or implement internal expiry using a timestamp) and introduce a resetTinybirdAvailability() function that clears tinybirdAvailable (and any timestamp), then wire that reset into the infra lifecycle hooks (infra-up/infra-down) so the availability is re-evaluated after infrastructure changes.
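The TTL variant mentioned above can be factored into a small helper; a minimal sketch under the review's assumptions (`AvailabilityCache`, `checkFn`, and `ttlMs` are illustrative names, not from the codebase; the real `isTinybirdAvailable()` would pass its Docker probe as `checkFn`):

```typescript
// Minimal sketch of a TTL-backed availability cache with an explicit reset.
type AsyncCheck = () => Promise<boolean>;

class AvailabilityCache {
    private value: boolean | null = null;
    private checkedAt = 0;

    constructor(private checkFn: AsyncCheck, private ttlMs: number) {}

    async get(forceRefresh = false): Promise<boolean> {
        const expired = Date.now() - this.checkedAt > this.ttlMs;
        if (forceRefresh || expired || this.value === null) {
            this.value = await this.checkFn();
            this.checkedAt = Date.now();
        }
        return this.value;
    }

    // Wire this into infra-up/infra-down so availability is re-probed after changes.
    reset(): void {
        this.value = null;
        this.checkedAt = 0;
    }
}
```

Either the `forceRefresh` flag or the TTL alone would fix the staleness; combining them lets routine callers stay cheap while lifecycle hooks force a re-check.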
🧹 Nitpick comments (5)
e2e/helpers/environment/service-managers/ghost-manager.ts (2)
Lines 376-394: `cleanupAllContainers` silently ignores all errors.

The empty `catch` block at lines 391-393 swallows errors indiscriminately. If `listContainers` fails due to a Docker daemon issue, you'll get no feedback. Consider at least logging a debug message:

Suggested improvement

```diff
 } catch {
-    // Ignore - no containers to remove or removal failed
+    debug('cleanupAllContainers: no containers to remove or removal failed');
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/environment/service-managers/ghost-manager.ts` around lines 376 - 394, The cleanupAllContainers method currently swallows all errors in its empty catch; update cleanupAllContainers to catch errors and log them (at debug or warn level) rather than silently ignore, e.g., in the catch block call the appropriate logger with the caught error and context (reference symbols: cleanupAllContainers, this.docker.listContainers, this.docker.getContainer(...).remove) so failures to list or remove containers are visible while still allowing the method to continue gracefully.
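The same idea can be shown without a Docker dependency; a minimal sketch of a cleanup loop that logs failures instead of swallowing them (the `list`, `remove`, and `debug` parameters are illustrative stand-ins for `this.docker.listContainers` and `container.remove`):

```typescript
// Minimal sketch: log each failure at debug level while still continuing gracefully.
async function cleanupAll(
    list: () => Promise<string[]>,
    remove: (id: string) => Promise<void>,
    debug: (msg: string) => void
): Promise<void> {
    let ids: string[] = [];
    try {
        ids = await list();
    } catch (error) {
        // A listing failure is visible instead of vanishing in an empty catch
        debug(`cleanupAll: failed to list containers: ${error}`);
        return;
    }
    for (const id of ids) {
        try {
            await remove(id);
        } catch (error) {
            // Per-container failures don't abort the rest of the cleanup
            debug(`cleanupAll: failed to remove ${id}: ${error}`);
        }
    }
}
```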
Lines 128-149: `getOrCreateContainer` swallows all `inspect()` errors, not just "not found".

The `catch` block at line 142 assumes any error from `existing.inspect()` means the container doesn't exist. If the Docker daemon is unreachable or returns a non-404 error, this silently falls through to `create()`, which would also fail, but with a less informative error.

This is a minor concern since the `create()` call will surface the real problem, but for debuggability you could check the error code:

Suggested improvement

```diff
 } catch (error: unknown) {
-    // Doesn't exist - create new
-    debug(`Creating new container: ${name}`);
+    // 404 = doesn't exist; anything else is unexpected but we still try to create
+    const statusCode = (error as { statusCode?: number })?.statusCode;
+    if (statusCode && statusCode !== 404) {
+        debug(`Unexpected error inspecting container ${name} (status ${statusCode}), attempting create`);
+    } else {
+        debug(`Creating new container: ${name}`);
+    }
     const container = await create();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/environment/service-managers/ghost-manager.ts` around lines 128 - 149, The getOrCreateContainer function currently catches all errors from existing.inspect() and assumes the container doesn't exist; change the catch to inspect the thrown error (e.g., check error.statusCode === 404 or error.message includes 'No such container') and only proceed to call create() when the error indicates "not found"; for any other errors (Docker daemon unreachable, permission issues, etc.) rethrow or surface the original error (or log it with context) so callers of getOrCreateContainer and the create() path see the real failure; reference the existing.inspect() call and the create() and existing.start() flows when making the change.

e2e/helpers/environment/environment-manager.ts (1)

Lines 97-126: Consider narrowing the `config` type instead of using `unknown` + cast.

Line 112 casts `options.config` from `unknown` to `Record<string, string> | undefined`. Since the public API accepts `unknown`, callers get no compile-time feedback on what shape is expected. A narrower type would be clearer:

```diff
-    async perTestSetup(options: {config?: unknown} = {}): Promise<GhostInstance> {
+    async perTestSetup(options: {config?: Record<string, string>} = {}): Promise<GhostInstance> {
```

This eliminates the cast on line 112 and gives callers type safety.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/environment/environment-manager.ts` around lines 97 - 126, Change the perTestSetup signature to accept a narrower config type instead of unknown: update the parameter options in perTestSetup to use a concrete type like {config?: Record<string,string>} (or a more specific interface if needed) so callers get compile-time feedback; then remove the cast on line where extraConfig is assigned (currently casting options.config to Record<string,string> | undefined) and use options.config directly; update any callers that pass a different shape to conform to the new type or adjust the type to the correct expected shape for GhostInstance configuration (refer to perTestSetup and extraConfig/ghost.restartWithDatabase).

.github/workflows/ci.yml (1)
Lines 1303-1337: Good failure diagnostics; consider also dumping infra container logs.

The `dump_container_logs` helper is well-structured. It currently dumps Ghost worker and gateway logs, but on infra failures (e.g., MySQL crash), you might also want to check the compose-managed containers. Consider adding:

```diff
 dump_container_logs '^ghost-e2e-worker-' 'Ghost worker'
 dump_container_logs '^ghost-e2e-gateway-' 'E2E gateway'
+dump_container_logs 'ghost-dev-mysql' 'MySQL'
+dump_container_logs 'ghost-dev-redis' 'Redis'
```

This would help debug infrastructure-level failures that aren't visible from Ghost logs alone.

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml around lines 1303 - 1337, Add a third dump for compose-managed infra containers by reusing the existing dump_container_logs helper: call dump_container_logs with a pattern that matches the infra service container names (e.g., regex matching MySQL/Redis/other compose services such as 'ghost-e2e-.*(mysql|mariadb|redis|minio|broker)') and a label like 'E2E infra' right after the existing dump_container_logs calls and before the Stop E2E infra step so infra-level logs are captured on failure.

e2e/helpers/environment/service-managers/mysql-manager.ts (1)
Lines 30-40: Preserve original error context when setup fails.

Wrapping the error in a new `Error` drops the original stack/type. Re-throw the original error (or normalize to `Error`) and keep the contextual log.

Proposed tweak

```diff
 } catch (error) {
     logging.error('Failed to setup test database:', error);
-    throw new Error(`Failed to setup test database: ${error}`);
+    throw error instanceof Error ? error : new Error(`Failed to setup test database: ${String(error)}`);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/environment/service-managers/mysql-manager.ts` around lines 30 - 40, The catch block in the setup sequence (createDatabase, restoreDatabaseFromSnapshot, updateSiteUuid) currently throws a new Error which discards the original error's stack/type; instead, after logging via logging.error('Failed to setup test database:', error), re-throw the original caught error (or, if you must normalize, wrap with Object.assign(new Error(message), { cause: error } ) or use the error as the cause) so the original stack and type are preserved when failure propagates from the method that runs this try/catch.
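The `cause`-based normalization mentioned in the prompt can be sketched without the Ghost logging stack; a minimal illustration (the `wrapError` and `setupDatabase` names are illustrative, not from the codebase):

```typescript
// Minimal sketch: add context while keeping the original error reachable via `cause`.
function wrapError(message: string, original: unknown): Error {
    // Equivalent in effect to ES2022's `new Error(message, {cause})`,
    // written as a property assignment to stay compatible with older TS lib targets.
    const wrapped = new Error(message) as Error & {cause?: unknown};
    wrapped.cause = original;
    return wrapped;
}

// Example shape of the setup flow that runs createDatabase/restore/updateSiteUuid
async function setupDatabase(run: () => Promise<void>): Promise<void> {
    try {
        await run();
    } catch (error) {
        // Context is added, but the original stack/type survives on `cause`
        throw wrapError('Failed to setup test database', error);
    }
}
```

Callers (and log aggregators that understand `cause`) can then report both the contextual message and the underlying failure.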
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@e2e/helpers/environment/service-managers/ghost-manager.ts`:
- Around line 195-238: The buildEnv method constructs container ENV but
currently omits the Tinybird tracker token; after calling
this.loadTinybirdConfig() in buildEnv() check tinybirdConfig?.trackerToken and
push it into env as TINYBIRD_TRACKER_TOKEN so the container sees
process.env.TINYBIRD_TRACKER_TOKEN (refer to buildEnv, loadTinybirdConfig,
tinybirdConfig, and env).
In `@e2e/helpers/pages/public/public-page.ts`:
- Line 89: PublicPage.goto() calls waitForMemberAttributionReady()
unconditionally and invokes enableAnalyticsRequests() twice; update goto() so
that enableAnalyticsRequests() is only called when isAnalyticsProject() is true,
remove the redundant unconditional enableAnalyticsRequests() call, and wrap the
waitForMemberAttributionReady() invocation behind the same isAnalyticsProject()
check; also add an explicit, short timeout argument to
waitForMemberAttributionReady() (so the helper rejects quickly instead of
hanging) and apply the same guarded/timeout change to the other occurrences
noted around waitForMemberAttributionReady()/enableAnalyticsRequests() (lines
shown in the review).
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)
- .github/workflows/ci.yml
- docker/dev-gateway/Caddyfile.build
- e2e/Dockerfile.e2e
- e2e/README.md
- e2e/helpers/environment/constants.ts
- e2e/helpers/environment/dev-environment-manager.ts
- e2e/helpers/environment/docker-compose.ts
- e2e/helpers/environment/environment-factory.ts
- e2e/helpers/environment/environment-manager.ts
- e2e/helpers/environment/index.ts
- e2e/helpers/environment/service-availability.ts
- e2e/helpers/environment/service-managers/dev-ghost-manager.ts
- e2e/helpers/environment/service-managers/ghost-manager.ts
- e2e/helpers/environment/service-managers/index.ts
- e2e/helpers/environment/service-managers/mysql-manager.ts
- e2e/helpers/environment/service-managers/tinybird-manager.ts
- e2e/helpers/pages/public/public-page.ts
- e2e/package.json
- e2e/scripts/fetch-tinybird-config.mjs
💤 Files with no reviewable changes (6)
- e2e/helpers/environment/service-managers/index.ts
- e2e/helpers/environment/index.ts
- e2e/helpers/environment/service-managers/tinybird-manager.ts
- e2e/helpers/environment/service-managers/dev-ghost-manager.ts
- e2e/helpers/environment/dev-environment-manager.ts
- e2e/helpers/environment/docker-compose.ts
Relevant lines from the diff view:

```ts
    }
    await this.enableAnalyticsRequests();
    const result = await super.goto(url, options);
    await this.waitForMemberAttributionReady();
```
🧩 Analysis chain

(Collapsed tool output: a series of repository searches examined `e2e/helpers/pages/public/public-page.ts`, `ghost/core/core/frontend/src/member-attribution/member-attribution.js`, the e2e Playwright config, and the tests that use `PublicPage` in both the main and analytics projects, to verify when `ghost-history` is populated and how `waitForMemberAttributionReady()` behaves.)
waitForMemberAttributionReady() is called unconditionally in goto() and lacks an explicit timeout, risking silent test hangs.
The member-attribution script that populates ghost-history is only injected when both members_enabled and members_track_sources are true. However, waitForMemberAttributionReady() at line 89 runs for every PublicPage.goto() call regardless of configuration or project type. If called in a context where ghost-history is not populated (fixture without tracking sources enabled), the test will block silently until the global Playwright timeout fires (30–60 seconds).
The method's own comment (lines 105–106) scopes its purpose to "attribution-dependent assertions," which contradicts its unconditional placement.
Additionally, enableAnalyticsRequests() is called both unconditionally (line 87) and conditionally (lines 84–85), creating redundancy.
🛡️ Proposed fix

1. Add an explicit timeout so failures surface quickly:

```diff
-protected async waitForMemberAttributionReady(): Promise<void> {
+protected async waitForMemberAttributionReady(timeout = 5000): Promise<void> {
     // Test-only anti-pattern: we synchronize on async client bootstrap state
     // to keep attribution-dependent assertions deterministic in CI.
     await this.page.waitForFunction(() => {
         try {
             const raw = window.sessionStorage.getItem('ghost-history');
             if (!raw) {
                 return false;
             }
             const history = JSON.parse(raw);
             return Array.isArray(history) && history.length > 0;
         } catch {
             return false;
         }
-    });
+    }, {timeout});
 }
```

2. Guard the call behind the analytics project check to match the code comment's intent:

```diff
     const result = await super.goto(url, options);
-    await this.waitForMemberAttributionReady();
+    if (testInfo.project.name === 'analytics') {
+        await this.waitForMemberAttributionReady();
+    }
```

3. Remove the redundant unconditional call to `enableAnalyticsRequests()` at line 87.

Also applies to: lines 104–121, 124, 129
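The explicit-timeout idea in the first hunk generalizes beyond Playwright's `waitForFunction`; a minimal, framework-free sketch of a polling wait with a hard deadline (all names here are illustrative, not from the codebase):

```typescript
// Minimal sketch: poll a predicate until it returns true or a deadline passes.
// Mirrors the shape of page.waitForFunction(fn, {timeout}) without Playwright.
async function waitForCondition(
    predicate: () => boolean,
    timeoutMs: number,
    intervalMs = 50
): Promise<void> {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        if (predicate()) {
            return;
        }
        await new Promise(resolve => setTimeout(resolve, intervalMs));
    }
    // Reject quickly with a clear message instead of hanging until the
    // global test timeout fires 30-60 seconds later.
    throw new Error(`Condition not met within ${timeoutMs}ms`);
}
```

The key property is the same one the review asks for: a never-satisfied condition fails fast with an actionable error rather than stalling the whole test.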
ref https://linear.app/ghost/issue/BER-3363/unified-environment-manager Unified the E2E environment manager for dev and build modes.
ref https://linear.app/ghost/issue/BER-3363/unified-environment-manager Added Tinybird file config support and build gateway image handling.
ref https://linear.app/ghost/issue/BER-3363/unified-environment-manager Updated E2E scripts and docs for the infra-first build mode workflow.
ref https://linear.app/ghost/issue/BER-3363/unified-environment-manager Switched the CI E2E job to the unified manager on the host runner.
ref https://linear.app/ghost/issue/BER-3363/unified-environment-manager Stabilized member signup attribution tests before the CI runner migration.
ref https://linear.app/ghost/issue/BER-3363/unified-environment-manager Moved CI E2E execution into the official Playwright container. Co-authored-by: Troy Ciesco <tmciesco@gmail.com>
ref https://linear.app/ghost/issue/BER-3363/unified-environment-manager Stock caddy:2-alpine cannot parse the transform log encoder in Caddyfile.build, and shard failures now print worker/gateway logs for debugging.
ref https://linear.app/ghost/issue/BER-3363/unified-environment-manager Reduced stale dev-mode wording and derived only the E2E pieces that actually follow COMPOSE_PROJECT_NAME.
ref https://linear.app/ghost/issue/BER-3363/unified-environment-manager Moved CI shell logic into versioned scripts, parallelized build-mode runtime prep, and made infra startup sync/reset Tinybird state. Co-authored-by: Troy Ciesco <tmciesco@gmail.com>
ref https://linear.app/ghost/issue/BER-3363/unified-environment-manager Made Tinybird prep automatic for local tests, startup-only infra commands, and fail-fast sync/reset when Tinybird is running.
ref https://linear.app/ghost/issue/BER-3363/unified-environment-manager Collapsed CI prep into a single script, overlapped runtime preflight with builds, and switched the Playwright runner script to env-driven shard configuration.
Force-pushed 93c4d2b to 5121d7a (Compare)
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 6
♻️ Duplicate comments (2)
e2e/helpers/pages/public/public-page.ts (1)
88-120: ⚠️ Potential issue | 🟠 Major — Guard member-attribution waits and add an explicit timeout.

`waitForMemberAttributionReady()` now runs for every `goto()` and portal open. If `ghost-history` is never populated (e.g., non‑analytics projects or tracking disabled), this will wait until the Playwright timeout and slow tests significantly. Make it opt‑in (e.g., analytics project only) and pass a short timeout so failures surface quickly.

🛠 Suggested fix

```diff
 async goto(url?: string, options?: pageGotoOptions): Promise<null | Response> {
     const testInfo = test.info();
-    let pageHitPromise = null;
-    if (testInfo.project.name === 'analytics') {
+    const isAnalyticsProject = testInfo.project.name === 'analytics';
+    let pageHitPromise = null;
+    if (isAnalyticsProject) {
         pageHitPromise = this.pageHitRequestPromise();
     }
     await this.enableAnalyticsRequests();
     const result = await super.goto(url, options);
-    await this.waitForMemberAttributionReady();
+    if (isAnalyticsProject) {
+        await this.waitForMemberAttributionReady(5000);
+    }
     if (pageHitPromise) {
         await pageHitPromise;
     }
     return result;
 }

-protected async waitForMemberAttributionReady(): Promise<void> {
+protected async waitForMemberAttributionReady(timeout = 5000): Promise<void> {
     // Test-only anti-pattern: we synchronize on async client bootstrap state
     // to keep attribution-dependent assertions deterministic in CI.
     await this.page.waitForFunction(() => {
         try {
             const raw = window.sessionStorage.getItem('ghost-history');
@@
         } catch {
             return false;
         }
-    });
+    }, {timeout});
 }

 async openPortalViaSubscribeButton(): Promise<void> {
-    await this.waitForMemberAttributionReady();
+    if (test.info().project.name === 'analytics') {
+        await this.waitForMemberAttributionReady(5000);
+    }
     await this.portal.clickLinkAndWaitForPopup(this.subscribeLink);
 }

 async openPortalViaSignInLink(): Promise<void> {
-    await this.waitForMemberAttributionReady();
+    if (test.info().project.name === 'analytics') {
+        await this.waitForMemberAttributionReady(5000);
+    }
     await this.portal.clickLinkAndWaitForPopup(this.signInLink);
 }
```

Also applies to: 123-129
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/pages/public/public-page.ts` around lines 88 - 120, waitForMemberAttributionReady currently always blocks and can hang CI; make it opt‑in and add a short explicit timeout. Change callers (e.g., where goto/portal open invoke waitForMemberAttributionReady) to pass a flag or set a page-level flag (like this.options.enableAnalytics) so the wait only runs when analytics are expected, and in waitForMemberAttributionReady add an early return if analytics are disabled and pass a small timeout option (e.g., 1–2s) to page.waitForFunction so failures surface quickly; reference the waitForMemberAttributionReady method and related places that call it (and optionally pageHitRequestPromise) to locate affected code.e2e/helpers/environment/constants.ts (1)
66-87: Ensure BASE_GHOST_ENV hostnames match compose service/container names.

The fixed `ghost-dev-*` hostnames should line up with how infra containers are named/aliased; otherwise, Ghost won’t resolve DB/Redis/Mailpit. Please keep these aligned with whatever naming is confirmed for the MySQL container.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/environment/constants.ts` around lines 66 - 87, BASE_GHOST_ENV uses hardcoded hostnames that must match your compose service/container names; update the entries in the BASE_GHOST_ENV const so the hostnames exactly match the Docker Compose service names used for MySQL, Redis and Mailpit. Specifically change the values for database__connection__host, adapters__cache__Redis__host and mail__options__host inside BASE_GHOST_ENV to the confirmed service names (e.g., the actual mysql/redis/mailpit service aliases used in your compose file) and verify resolution in the test environment.
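The fail-fast behaviour both duplicate comments ask for can be sketched outside Playwright as a plain polling helper; `waitForCondition` and its defaults are illustrative stand-ins, not part of the e2e codebase. Playwright's own `waitForFunction` exposes the same idea through its `timeout` option.

```typescript
// Poll a predicate until it passes or a deadline elapses, then fail loudly
// instead of inheriting a long global timeout (the concern raised above).
async function waitForCondition(
    check: () => boolean,
    timeoutMs = 5000,
    pollMs = 50
): Promise<void> {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        if (check()) {
            return;
        }
        await new Promise<void>(resolve => setTimeout(resolve, pollMs));
    }
    throw new Error(`Condition not met within ${timeoutMs}ms`);
}
```

The point of the short deadline is that a misconfigured project (e.g., tracking disabled) produces a quick, descriptive failure rather than a silent 30-second hang per call.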
🧹 Nitpick comments (4)
e2e/helpers/environment/service-managers/ghost-manager.ts (4)
176-192: `restartWithDatabase` leaves the new container in an unverified state.

The method removes the old container, creates and starts a new one, but returns before that container is healthy. Any caller that interacts with Ghost immediately after this call will race against startup. Either call `waitForReady()` internally or document this contract explicitly in the JSDoc.

♻️ Proposed fix

```diff
 await this.ghostContainer.start();
+await this.waitForReady();
 debug('Ghost restarted with database:', databaseName);
```
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/environment/service-managers/ghost-manager.ts` around lines 176 - 192, restartWithDatabase currently removes the old container, creates and starts the replacement, but returns before the new Ghost instance is healthy; update restartWithDatabase to await the container becoming ready by calling this.waitForReady() (or appropriate readiness helper) after await this.ghostContainer.start(), ensuring callers won't race against startup; reference the restartWithDatabase method and the this.ghostContainer lifecycle (removeContainer/createGhostContainer/start) and add the wait call (or, if you prefer not to block, add clear JSDoc on restartWithDatabase stating it returns before readiness).
415-450: `container.logs().toString()` produces garbled output due to Docker's multiplexing frame headers.

For non-TTY containers, the returned buffer starts with 8-byte headers for each line. Each chunk of data gets prefixed with an 8-byte header: the first byte identifies the stream (stdin/stdout/stderr), bytes 1–3 are reserved, and bytes 4–7 contain the payload size. Calling `.toString()` directly on this buffer mixes those binary frame bytes into the log string at every log line. This affects all three log collection sites (lines 429, 435, 447), making the error output that reaches `logging.error` hard to read at exactly the moment it's most important.

Note: as of a recent docker-modem version change, `container.logs({ follow: false })` may return a string instead of a Buffer, but the frame headers are still embedded in the content when multiplexing is active.

A minimal demuxer strips the headers cleanly:

♻️ Proposed helper + usage

```typescript
private async getContainerLogs(container: Container, tail: number): Promise<string> {
    const raw = await container.logs({stdout: true, stderr: true, tail, follow: false});
    const buf = Buffer.isBuffer(raw) ? raw : Buffer.from(raw as string, 'binary');
    const lines: string[] = [];
    let offset = 0;
    while (offset + 8 <= buf.length) {
        const size = buf.readUInt32BE(offset + 4);
        lines.push(buf.slice(offset + 8, offset + 8 + size).toString('utf8'));
        offset += 8 + size;
    }
    return lines.join('');
}
```

Replace each inline `container.logs({...}).toString()` call in `waitForHealthy` with `await this.getContainerLogs(container, 100)`.
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/environment/service-managers/ghost-manager.ts` around lines 415 - 450, waitForHealthy is logging garbled output because container.logs() can return multiplexed frames; implement a helper method getContainerLogs(container: Container, tail: number): Promise<string> that calls container.logs({stdout:true, stderr:true, tail, follow:false}), normalizes the result to a Buffer (Buffer.isBuffer or Buffer.from(...,'binary')), then demuxes the Docker 8-byte header frames in a loop by reading the payload size with buf.readUInt32BE(offset + 4) and extracting buf.slice(offset + 8, offset + 8 + size).toString('utf8') into an array and joining it; replace each inline container.logs(...).toString() call in waitForHealthy with await this.getContainerLogs(container, 100) so logging.error receives cleaned log text.
176-192: `restartWithDatabase` starts the new container but does not wait for it to be healthy.

After `this.ghostContainer.start()`, the container is still booting. Any test that calls `restartWithDatabase` and then immediately interacts with Ghost will race against the health check. Callers have no way to know they are responsible for calling `waitForReady()` separately — this should be either documented in the JSDoc or handled internally.

♻️ Proposed fix

```diff
 await this.ghostContainer.start();
+await this.waitForReady();
 debug('Ghost restarted with database:', databaseName);
```
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/environment/service-managers/ghost-manager.ts` around lines 176 - 192, restartWithDatabase currently starts the new container but returns before Ghost is healthy; after calling this.ghostContainer.start() ensure the manager waits for the service to be ready by calling the existing readiness helper (e.g., this.waitForReady() or the manager's waitForGhostReady method) before returning so callers don't race on the health check; update restartWithDatabase to await that readiness call after start (using the same readiness helper used elsewhere in the manager) and keep existing removeContainer/createGhostContainer logic unchanged.
415-450: Use `container.modem.demuxStream` or parse the 8-byte Docker frame headers before converting logs to string.

Dockerode's `container.logs()` for non-TTY containers returns a multiplexed buffer where each frame begins with an 8-byte header (`[stream_type, 0, 0, 0, size(4 bytes)]`). Calling `.toString()` directly includes these binary bytes in the output, making error messages unreadable.

The codebase already uses the idiomatic fix in `mysql-manager.ts` — demultiplexing with `container.modem.demuxStream`. Apply the same pattern here, or extract a helper method to parse the frames manually (stripping the 8-byte header from each frame before concatenating).

♻️ Suggested helper

```typescript
private async getContainerLogs(container: Container, tail: number): Promise<string> {
    const logBuffer = await container.logs({stdout: true, stderr: true, tail, follow: false}) as Buffer;
    // Strip the 8-byte Docker multiplexing header from each frame
    const lines: string[] = [];
    let offset = 0;
    while (offset + 8 <= logBuffer.length) {
        const size = logBuffer.readUInt32BE(offset + 4);
        lines.push(logBuffer.slice(offset + 8, offset + 8 + size).toString('utf8'));
        offset += 8 + size;
    }
    return lines.join('');
}
```

Then replace the three `container.logs({...}).toString()` calls in `waitForHealthy` with `await this.getContainerLogs(container, 100)`.
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/environment/service-managers/ghost-manager.ts` around lines 415 - 450, The health-check is calling container.logs(...).toString() which returns Docker's multiplexed frames (8-byte headers) and produces unreadable output; update waitForHealthy to demultiplex logs like in mysql-manager.ts by either using container.modem.demuxStream or adding a helper (e.g., getContainerLogs) that strips the 8-byte Docker frame headers or reads frames and concatenates the payloads, then replace the three direct container.logs(...).toString() calls with await this.getContainerLogs(container, 100) (or use demuxStream into stdout/stderr buffers) so logged error messages are human-readable.
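The 8-byte frame layout those comments describe is easy to verify with a synthetic buffer. This standalone sketch (names are illustrative) implements the same parsing loop as the proposed helper, without the dockerode `Container` dependency:

```typescript
// Docker multiplexed log frame: [stream_type, 0, 0, 0, size:uint32be] + payload.
// Concatenate just the payloads so binary header bytes never reach the logs.
function demuxDockerLogs(buf: Buffer): string {
    const parts: string[] = [];
    let offset = 0;
    while (offset + 8 <= buf.length) {
        const size = buf.readUInt32BE(offset + 4);
        parts.push(buf.subarray(offset + 8, offset + 8 + size).toString('utf8'));
        offset += 8 + size;
    }
    return parts.join('');
}
```

Feeding it a buffer built from two frames (one stdout, one stderr) yields the clean concatenated text with no stray header bytes.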
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@e2e/helpers/environment/service-managers/ghost-manager.ts`:
- Around line 395-410: The teardown's cleanupAllContainers currently uses
Promise.all which aborts on the first rejection; change the removal step to use
Promise.allSettled so every this.docker.getContainer(c.Id).remove({force: true})
is attempted for all containers, then iterate the settled results to log any
failures (and optionally the container Id via c.Id) instead of letting the first
error short-circuit; keep the outer try/catch for listContainers errors and use
debug/processLogger for per-container rejection messages from the
Promise.allSettled results.
- Around line 395-410: The cleanupAllContainers function uses Promise.all which
aborts on the first rejection; change it to use Promise.allSettled so every
container removal is attempted: call Promise.allSettled on the array returned by
containers.map(c => this.docker.getContainer(c.Id).remove({force: true})), then
iterate the settled results to log or debug any rejected removals (include
container Id/context) while treating fulfilled ones as successful so the method
completes cleanup attempts for all containers; keep the existing try/catch for
the initial listContainers call.
- Around line 244-250: The loop that builds env from extraConfig silently skips
non-string values; update the logic in the block that iterates extraConfig (the
for..of over Object.entries(extraConfig) in GhostManager/GhostEnvOverrides
handling) to coerce scalar values (number, boolean) to strings before
env.push(`${key}=${value}`) and only skip non-primitive types
(objects/arrays/functions); when skipping, emit a debug/warn log including the
key and the ignored value type so callers know their config was not applied.
Ensure you reference the same symbols: extraConfig, env, and the entry handling
code so the coercion and conditional logging are applied in that loop.
- Around line 244-250: The loop handling extraConfig in ghost-manager.ts
currently skips non-string values and silently drops them; update the loop that
iterates over Object.entries(extraConfig) (the extraConfig -> env push logic) to
coerce scalar non-string values to strings using String(value) before pushing
(env.push(`${key}=${String(value)}`), and emit a warning via the existing logger
or console when a value is non-scalar (null/undefined/object/array) so callers
are informed that that entry was not applied; preserve current behavior for
already-string values.
In `@e2e/helpers/environment/service-managers/mysql-manager.ts`:
- Around line 25-27: The MySQLManager constructor hardcodes containerName
'ghost-dev-mysql' causing docker.getContainer(...) in the
MySQLManager.getContainer (used around the call site where docker.getContainer
is invoked) to fail for e2e where compose names the container differently; fix
by reading an override from the environment (e.g.,
process.env.MYSQL_CONTAINER_NAME) in the MySQLManager constructor (or accept it
via constructor arg) and fall back to the current default, and update any tests
or invocation to set MYSQL_CONTAINER_NAME to 'ghost-e2e-mysql' (or alternatively
add container_name: ghost-dev-mysql to the e2e/compose.yml if you prefer the
compose change).
In `@e2e/scripts/load-playwright-container-env.sh`:
- Around line 11-15: The script currently performs cd "$REPO_ROOT" and computes
PLAYWRIGHT_VERSION with node without checking errors; add explicit guards: after
cd "$REPO_ROOT" verify the exit status and if it failed print a clear error and
exit non‑zero, capture the output/error of the node -p call used to set
PLAYWRIGHT_VERSION and if node fails or returns an empty value print a
descriptive error and exit, and only construct PLAYWRIGHT_IMAGE once
PLAYWRIGHT_VERSION is validated; reference the cd "$REPO_ROOT" command and the
PLAYWRIGHT_VERSION and PLAYWRIGHT_IMAGE variable assignments when making these
checks.
---
Duplicate comments:
In `@e2e/helpers/environment/constants.ts`:
- Around line 66-87: BASE_GHOST_ENV uses hardcoded hostnames that must match
your compose service/container names; update the entries in the BASE_GHOST_ENV
const so the hostnames exactly match the Docker Compose service names used for
MySQL, Redis and Mailpit. Specifically change the values for
database__connection__host, adapters__cache__Redis__host and mail__options__host
inside BASE_GHOST_ENV to the confirmed service names (e.g., the actual
mysql/redis/mailpit service aliases used in your compose file) and verify
resolution in the test environment.
In `@e2e/helpers/pages/public/public-page.ts`:
- Around line 88-120: waitForMemberAttributionReady currently always blocks and
can hang CI; make it opt‑in and add a short explicit timeout. Change callers
(e.g., where goto/portal open invoke waitForMemberAttributionReady) to pass a
flag or set a page-level flag (like this.options.enableAnalytics) so the wait
only runs when analytics are expected, and in waitForMemberAttributionReady add
an early return if analytics are disabled and pass a small timeout option (e.g.,
1–2s) to page.waitForFunction so failures surface quickly; reference the
waitForMemberAttributionReady method and related places that call it (and
optionally pageHitRequestPromise) to locate affected code.
---
Nitpick comments:
In `@e2e/helpers/environment/service-managers/ghost-manager.ts`:
- Around line 176-192: restartWithDatabase currently removes the old container,
creates and starts the replacement, but returns before the new Ghost instance is
healthy; update restartWithDatabase to await the container becoming ready by
calling this.waitForReady() (or appropriate readiness helper) after await
this.ghostContainer.start(), ensuring callers won't race against startup;
reference the restartWithDatabase method and the this.ghostContainer lifecycle
(removeContainer/createGhostContainer/start) and add the wait call (or, if you
prefer not to block, add clear JSDoc on restartWithDatabase stating it returns
before readiness).
- Around line 415-450: waitForHealthy is logging garbled output because
container.logs() can return multiplexed frames; implement a helper method
getContainerLogs(container: Container, tail: number): Promise<string> that calls
container.logs({stdout:true, stderr:true, tail, follow:false}), normalizes the
result to a Buffer (Buffer.isBuffer or Buffer.from(...,'binary')), then demuxes
the Docker 8-byte header frames in a loop by reading the payload size with
buf.readUInt32BE(offset + 4) and extracting buf.slice(offset + 8, offset + 8 +
size).toString('utf8') into an array and joining it; replace each inline
container.logs(...).toString() call in waitForHealthy with await
this.getContainerLogs(container, 100) so logging.error receives cleaned log
text.
- Around line 176-192: restartWithDatabase currently starts the new container
but returns before Ghost is healthy; after calling this.ghostContainer.start()
ensure the manager waits for the service to be ready by calling the existing
readiness helper (e.g., this.waitForReady() or the manager's waitForGhostReady
method) before returning so callers don't race on the health check; update
restartWithDatabase to await that readiness call after start (using the same
readiness helper used elsewhere in the manager) and keep existing
removeContainer/createGhostContainer logic unchanged.
- Around line 415-450: The health-check is calling
container.logs(...).toString() which returns Docker's multiplexed frames (8-byte
headers) and produces unreadable output; update waitForHealthy to demultiplex
logs like in mysql-manager.ts by either using container.modem.demuxStream or
adding a helper (e.g., getContainerLogs) that strips the 8-byte Docker frame
headers or reads frames and concatenates the payloads, then replace the three
direct container.logs(...).toString() calls with await
this.getContainerLogs(container, 100) (or use demuxStream into stdout/stderr
buffers) so logged error messages are human-readable.
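The `Promise.allSettled` change requested above for `cleanupAllContainers` boils down to the following pattern; `removeAll` and its callback shape are hypothetical stand-ins for the dockerode calls, kept generic so the behaviour is easy to check:

```typescript
type RemovalResult = {id: string; error?: unknown};

// Attempt every removal and collect failures, instead of letting the first
// rejection short-circuit the whole cleanup (which is what Promise.all does).
async function removeAll(
    ids: string[],
    remove: (id: string) => Promise<void>
): Promise<RemovalResult[]> {
    const settled = await Promise.allSettled(ids.map(id => remove(id)));
    return settled.map((result, i) => (
        result.status === 'fulfilled'
            ? {id: ids[i]}
            : {id: ids[i], error: result.reason}
    ));
}
```

A caller can then log each per-container failure (as the prompt suggests via debug/processLogger) while still guaranteeing that every container saw a removal attempt.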
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (28)
- .github/workflows/ci.yml
- docker/dev-gateway/Caddyfile.build
- e2e/Dockerfile.e2e
- e2e/README.md
- e2e/helpers/environment/constants.ts
- e2e/helpers/environment/dev-environment-manager.ts
- e2e/helpers/environment/docker-compose.ts
- e2e/helpers/environment/environment-factory.ts
- e2e/helpers/environment/environment-manager.ts
- e2e/helpers/environment/index.ts
- e2e/helpers/environment/service-availability.ts
- e2e/helpers/environment/service-managers/dev-ghost-manager.ts
- e2e/helpers/environment/service-managers/ghost-manager.ts
- e2e/helpers/environment/service-managers/index.ts
- e2e/helpers/environment/service-managers/mysql-manager.ts
- e2e/helpers/environment/service-managers/tinybird-manager.ts
- e2e/helpers/pages/public/public-page.ts
- e2e/helpers/playwright/fixture.ts
- e2e/package.json
- e2e/scripts/dump-e2e-docker-logs.sh
- e2e/scripts/infra-down.sh
- e2e/scripts/infra-up.sh
- e2e/scripts/load-playwright-container-env.sh
- e2e/scripts/prepare-ci-e2e-build-mode.sh
- e2e/scripts/prepare-ci-e2e-job.sh
- e2e/scripts/run-playwright-container.sh
- e2e/scripts/run-playwright-host.sh
- e2e/scripts/sync-tinybird-state.mjs
💤 Files with no reviewable changes (6)
- e2e/helpers/environment/service-managers/tinybird-manager.ts
- e2e/helpers/environment/dev-environment-manager.ts
- e2e/helpers/environment/index.ts
- e2e/helpers/environment/service-managers/dev-ghost-manager.ts
- e2e/helpers/environment/service-managers/index.ts
- e2e/helpers/environment/docker-compose.ts
✅ Files skipped from review due to trivial changes (1)
- e2e/helpers/playwright/fixture.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- e2e/package.json
- docker/dev-gateway/Caddyfile.build
- e2e/helpers/environment/service-availability.ts
Force-pushed 9ab6b50 to 6ff988c (Compare)

Force-pushed 6ff988c to d271401 (Compare)
You have run out of free Bugbot PR reviews for this billing cycle. This will reset on March 3. To receive reviews on all of your PRs, visit the Cursor dashboard to activate Pro and start your 14-day free trial.
Actionable comments posted: 1
♻️ Duplicate comments (2)
e2e/helpers/pages/public/public-page.ts (2)
102-119: ⚠️ Potential issue | 🟠 Major — `waitForFunction` still has no explicit timeout.

The `waitForMemberAttributionReady()` helper was relocated from `goto()` (an improvement), but the timeout concern from the prior review is still unresolved. With no timeout argument, this call inherits the global Playwright timeout (~30s), causing a long silent hang whenever `ghost-history` is absent (e.g., `members_track_sources` disabled).

🛡️ Proposed fix

```diff
-protected async waitForMemberAttributionReady(): Promise<void> {
+protected async waitForMemberAttributionReady(timeout = 5000): Promise<void> {
     // Test-only anti-pattern: we synchronize on async client bootstrap state
     // to keep attribution-dependent assertions deterministic in CI.
     await this.page.waitForFunction(() => {
         try {
             const raw = window.sessionStorage.getItem('ghost-history');
             if (!raw) {
                 return false;
             }
             const history = JSON.parse(raw);
             return Array.isArray(history) && history.length > 0;
         } catch {
             return false;
         }
-    });
+    }, {timeout});
 }
```
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/pages/public/public-page.ts` around lines 102 - 119, waitForMemberAttributionReady() calls this.page.waitForFunction without an explicit timeout, which can inherit the global Playwright timeout and hang; update the call to this.page.waitForFunction in waitForMemberAttributionReady to pass a sensible explicit timeout (or make it configurable) so the helper fails fast when 'ghost-history' never appears (e.g., members_track_sources disabled), keeping the existing predicate logic and error handling intact.
126-129: ⚠️ Potential issue | 🟠 Major — `waitForMemberAttributionReady()` is semantically misplaced in `openPortalViaSignInLink()`.

Sign-in flows don't require attribution data (`ghost-history`) — that's only needed for signup/attribution tracking tests. Placing this guard here means any test that calls `openPortalViaSignInLink()` without `members_track_sources` enabled will hang until timeout.

`openPortalViaSubscribeButton()` (line 122) is a reasonable placement since subscribe flows are attribution-dependent. For sign-in, the guard should either be removed or made conditional.
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/pages/public/public-page.ts` around lines 126 - 129, The call to waitForMemberAttributionReady inside openPortalViaSignInLink should be removed or guarded because sign-in flows don't require attribution; update openPortalViaSignInLink to either call portal.clickLinkAndWaitForPopup(this.signInLink) directly or first check the members_track_sources feature flag before awaiting waitForMemberAttributionReady, keeping the attribution guard only in openPortalViaSubscribeButton where subscribe flows need it; locate the methods openPortalViaSignInLink, openPortalViaSubscribeButton, waitForMemberAttributionReady, portal.clickLinkAndWaitForPopup, and the signInLink symbol to apply the change.
🧹 Nitpick comments (1)
e2e/scripts/prepare-ci-e2e-build-mode.sh (1)
11-35: Clean up background jobs on early failure.

If one background task fails, the script exits while other pulls/infra-up may keep running. A simple EXIT trap keeps the job clean and avoids dangling processes.

♻️ Suggested guard for background jobs

```diff
 pids=()
 labels=()
+
+cleanup_bg() {
+    for pid in "${pids[@]:-}"; do
+        kill "$pid" 2>/dev/null || true
+    done
+}
+trap cleanup_bg EXIT
@@
 for i in "${!pids[@]}"; do
     if ! wait "${pids[$i]}"; then
         echo "[${labels[$i]}] failed" >&2
         exit 1
     fi
 done
+
+trap - EXIT
```
Verify each finding against the current code and only fix it if needed. In `@e2e/scripts/prepare-ci-e2e-build-mode.sh` around lines 11 - 35, The script spawns background jobs via run_bg (which records PIDs in the pids array) but doesn't clean them up on early exit; add an EXIT trap that iterates over pids and kills any still-running background processes and waits for them to avoid dangling jobs. Implement a trap handler (e.g., trap on EXIT and optionally INT/TERM) that uses the pids array to kill -TERM or kill -9 as fallback and then wait "${pids[@]}" before exiting; ensure the handler is registered near the top (before run_bg calls) and that the normal successful completion path still waits for pids as currently implemented.
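The trap pattern from the nitpick above can be exercised end to end with a minimal script; the `sleep` jobs here are illustrative stand-ins for the real image pulls and `infra-up.sh`:

```shell
#!/usr/bin/env bash
set -euo pipefail

pids=()

# On any exit path, kill background jobs that are still running so a failed
# task cannot leave infra startup half-finished in the background.
cleanup_bg() {
    for pid in "${pids[@]:-}"; do
        kill "$pid" 2>/dev/null || true
    done
}
trap cleanup_bg EXIT

# Illustrative background tasks standing in for pulls / infra-up.
sleep 0.2 &
pids+=("$!")
sleep 0.1 &
pids+=("$!")

failed=0
for pid in "${pids[@]}"; do
    wait "$pid" || failed=1
done

# Normal completion: clear the trap so EXIT does no extra work.
trap - EXIT
if [ "$failed" -eq 0 ]; then
    echo "all background jobs finished"
fi
```

Because the trap is registered before the first job is spawned, an `exit 1` anywhere after that point still reaps every recorded PID.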
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@e2e/helpers/environment/service-managers/ghost-manager.ts`:
- Around line 97-109: The error message in verifyBuildImageExists referencing a
non-existent script should be updated: replace the "Use a different image:
GHOST_E2E_IMAGE=<image> yarn test:build" suggestion with an actual, available
command (for example "GHOST_E2E_IMAGE=<image> yarn build:docker" or the correct
workspace command used elsewhere) so the thrown Error for BUILD_IMAGE gives
actionable instructions; update the message in the verifyBuildImageExists method
to remove or correct the yarn test:build reference and ensure BUILD_IMAGE and
the possible fix commands are accurate.
---
Duplicate comments:
In `@e2e/helpers/pages/public/public-page.ts`:
- Around line 102-119: waitForMemberAttributionReady() calls
this.page.waitForFunction without an explicit timeout, which can inherit the
global Playwright timeout and hang; update the call to this.page.waitForFunction
in waitForMemberAttributionReady to pass a sensible explicit timeout (or make it
configurable) so the helper fails fast when 'ghost-history' never appears (e.g.,
members_track_sources disabled), keeping the existing predicate logic and error
handling intact.
- Around line 126-129: The call to waitForMemberAttributionReady inside
openPortalViaSignInLink should be removed or guarded because sign-in flows don't
require attribution; update openPortalViaSignInLink to either call
portal.clickLinkAndWaitForPopup(this.signInLink) directly or first check the
members_track_sources feature flag before awaiting
waitForMemberAttributionReady, keeping the attribution guard only in
openPortalViaSubscribeButton where subscribe flows need it; locate the methods
openPortalViaSignInLink, openPortalViaSubscribeButton,
waitForMemberAttributionReady, portal.clickLinkAndWaitForPopup, and the
signInLink symbol to apply the change.
---
Nitpick comments:
In `@e2e/scripts/prepare-ci-e2e-build-mode.sh`:
- Around line 11-35: The script spawns background jobs via run_bg (which records
PIDs in the pids array) but doesn't clean them up on early exit; add an EXIT
trap that iterates over pids and kills any still-running background processes
and waits for them to avoid dangling jobs. Implement a trap handler (e.g., trap
on EXIT and optionally INT/TERM) that uses the pids array to kill -TERM or kill
-9 as fallback and then wait "${pids[@]}" before exiting; ensure the handler is
registered near the top (before run_bg calls) and that the normal successful
completion path still waits for pids as currently implemented.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)
- .github/workflows/ci.yml
- docker/dev-gateway/Caddyfile.build
- e2e/README.md
- e2e/helpers/environment/constants.ts
- e2e/helpers/environment/service-availability.ts
- e2e/helpers/environment/service-managers/ghost-manager.ts
- e2e/helpers/environment/service-managers/mysql-manager.ts
- e2e/helpers/pages/public/public-page.ts
- e2e/helpers/playwright/fixture.ts
- e2e/package.json
- e2e/scripts/dump-e2e-docker-logs.sh
- e2e/scripts/infra-down.sh
- e2e/scripts/infra-up.sh
- e2e/scripts/load-playwright-container-env.sh
- e2e/scripts/prepare-ci-e2e-build-mode.sh
- e2e/scripts/prepare-ci-e2e-job.sh
- e2e/scripts/run-playwright-container.sh
- e2e/scripts/run-playwright-host.sh
- e2e/scripts/sync-tinybird-state.mjs
🚧 Files skipped from review as they are similar to previous changes (10)
- e2e/helpers/playwright/fixture.ts
- e2e/scripts/prepare-ci-e2e-job.sh
- docker/dev-gateway/Caddyfile.build
- e2e/helpers/environment/service-availability.ts
- e2e/scripts/dump-e2e-docker-logs.sh
- e2e/scripts/run-playwright-host.sh
- e2e/scripts/run-playwright-container.sh
- e2e/scripts/sync-tinybird-state.mjs
- e2e/README.md
- e2e/scripts/infra-down.sh
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d271401e7b
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
ref https://linear.app/ghost/issue/BER-3363 Removed dead service-availability exports and the obsolete pretest skip branch after unifying the environment manager.
ref https://linear.app/ghost/issue/BER-3366/rework-e2e-fixtures This keeps the README wording focused and clarifies why the temporary CI wait remains in PublicPage.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b795f90fe6
ref https://linear.app/ghost/issue/BER-3363 Removed tb-cli from compose --wait targets in infra bootstrap because it is a one-shot setup job; analytics still pulls it in via depends_on and long-lived services remain the readiness gate.
🧹 Nitpick comments (2)
e2e/scripts/prepare-ci-e2e-build-mode.sh (2)
30-35: Orphaned background processes on early exit.

When one task fails, the script exits immediately without killing the remaining background processes. This can leave infra-up.sh partially running (e.g., Docker containers half-started). Adding a trap to clean up stale PIDs on exit would make this more robust.

🛡️ Proposed fix: add a cleanup trap

```diff
 pids=()
 labels=()
+
+cleanup() {
+    for pid in "${pids[@]}"; do
+        kill "$pid" 2>/dev/null || true
+    done
+}
+trap cleanup EXIT

 run_bg() {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/scripts/prepare-ci-e2e-build-mode.sh` around lines 30 - 35, The wait loop currently exits immediately on the first failed background job and leaves other PIDs in pids[] running; add a cleanup trap to kill any remaining background processes on early exit by defining a cleanup function (e.g., cleanup() ) that iterates over the pids[] array, checks if each PID is still running, and kills them (and optionally waits), then register it with trap 'cleanup' on EXIT (and optionally ERR and INT); update the script so the existing for loop that waits on pids[$i] remains but relies on the trap to ensure orphaned processes started by this script (like infra-up.sh/Docker) are terminated when the script exits early.
14-24: run_bg is duplicated in prepare-ci-e2e-job.sh.

This helper is identical to the one in e2e/scripts/prepare-ci-e2e-job.sh:20-30. Consider extracting it into a shared sourced file (e.g., e2e/scripts/_lib.sh) to keep the two scripts in sync.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/scripts/prepare-ci-e2e-build-mode.sh` around lines 14-24: the helper function run_bg (and its associated arrays labels and pids) is duplicated; extract run_bg into a shared sourced file named _lib.sh, declare and initialize the arrays there (e.g., declare -a labels pids) and keep the same implementation so callers are unchanged, then remove the duplicate function from the two scripts and add a source _lib.sh line at the top of each script so both use the single canonical run_bg definition.
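The extraction the reviewer suggests could look something like this sketch of a shared e2e/scripts/_lib.sh; the file name comes from the review, but the wait_all helper and exact structure are illustrative assumptions, not code from the PR:

```shell
# Hypothetical _lib.sh: shared background-task helpers sourced by both
# prepare-ci scripts. Names beyond run_bg/labels/pids are assumptions.
declare -a labels pids

run_bg() {
    # Start "$@" in the background and record its PID under a label.
    local label="$1"; shift
    labels+=("$label")
    "$@" &
    pids+=("$!")
}

wait_all() {
    # Wait on every recorded PID; report each labelled task that failed.
    local status=0
    for i in "${!pids[@]}"; do
        if ! wait "${pids[$i]}"; then
            echo "Task failed: ${labels[$i]}" >&2
            status=1
        fi
    done
    return "$status"
}
```

Each script would then replace its local copy with `source "$(dirname "$0")/_lib.sh"`, keeping the two in sync automatically.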
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- compose.dev.yaml
- docker/ghost-dev/Dockerfile
- e2e/README.md
- e2e/helpers/environment/constants.ts
- e2e/helpers/environment/index.ts
- e2e/helpers/environment/service-availability.ts
- e2e/helpers/environment/service-managers/ghost-manager.ts
- e2e/helpers/pages/public/public-page.ts
- e2e/package.json
- e2e/scripts/infra-up.sh
- e2e/scripts/prepare-ci-e2e-build-mode.sh
💤 Files with no reviewable changes (2)
- compose.dev.yaml
- e2e/helpers/environment/index.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- e2e/helpers/pages/public/public-page.ts
- e2e/scripts/infra-up.sh
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d1f6815c0d
```shell
-e CI=true \
-e TEST_WORKERS_COUNT="${TEST_WORKERS_COUNT:-1}" \
-e GHOST_E2E_MODE="${GHOST_E2E_MODE:-build}" \
-e GHOST_E2E_IMAGE="${GHOST_E2E_IMAGE:-ghost-e2e:local}" \
-e GHOST_E2E_GATEWAY_IMAGE="${GHOST_E2E_GATEWAY_IMAGE:-caddy:2-alpine}" \
```
Propagate COMPOSE_PROJECT_NAME into Playwright container
This docker run forwards GHOST_E2E_* variables but drops COMPOSE_PROJECT_NAME, even though the environment manager derives compose project names and shared volume names from that variable (e2e/helpers/environment/constants.ts). If infra was started with a non-default compose project, tests inside this container fall back to ghost-dev, so Tinybird detection and shared-config volume binding point at the wrong namespace. Pass COMPOSE_PROJECT_NAME through here so runtime and infra stay aligned.
ref https://linear.app/ghost/issue/BER-3363 Passed COMPOSE_PROJECT_NAME through dockerized Playwright runs so E2E runtime naming stays aligned with infra when using non-default compose project names.
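A minimal sketch of that pass-through, assuming the variable names quoted in the review (the build_env_flags helper and the ghost-dev default are illustrative, not code from the PR):

```shell
# Collect the environment values to forward into the Playwright container,
# including COMPOSE_PROJECT_NAME so names derived in constants.ts match the
# infra namespace. The "ghost-dev" fallback mirrors the review comment.
build_env_flags() {
    printf '%s\n' \
        "CI=true" \
        "COMPOSE_PROJECT_NAME=${COMPOSE_PROJECT_NAME:-ghost-dev}" \
        "GHOST_E2E_MODE=${GHOST_E2E_MODE:-build}" \
        "GHOST_E2E_IMAGE=${GHOST_E2E_IMAGE:-ghost-e2e:local}"
}
```

Each emitted line would become a `-e NAME=value` flag on the docker run invocation, so an infra stack started under a non-default compose project name stays aligned with the test runtime.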
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@e2e/scripts/run-playwright-container.sh`:
- Around line 4-11: The script currently only checks that SHARD_INDEX and
SHARD_TOTAL are set but not that they are valid integers and within bounds;
update the validation after SHARD_INDEX/SHARD_TOTAL are read to ensure
E2E_SHARD_INDEX and E2E_SHARD_TOTAL are numeric integers (use a regex like
'^[0-9]+$' or bash arithmetic checks), ensure SHARD_TOTAL is > 0, and ensure
SHARD_INDEX is >= 0 and less than SHARD_TOTAL (or <= SHARD_TOTAL-1); if any
check fails, print a clear error mentioning SHARD_INDEX/SHARD_TOTAL and exit
with non-zero status so Playwright won't attempt mis-sharded runs.
```shell
SHARD_INDEX="${E2E_SHARD_INDEX:-}"
SHARD_TOTAL="${E2E_SHARD_TOTAL:-}"
RETRIES="${E2E_RETRIES:-2}"

if [[ -z "$SHARD_INDEX" || -z "$SHARD_TOTAL" ]]; then
  echo "Missing E2E_SHARD_INDEX or E2E_SHARD_TOTAL environment variables" >&2
  exit 1
fi
```
Validate shard bounds to avoid mis-sharding failures.
Right now you only check for non-empty values. If E2E_SHARD_INDEX is 0, non-numeric, or greater than E2E_SHARD_TOTAL, Playwright fails with a noisy error. Consider validating numeric and bounds up-front.
Suggested fix

```diff
 SHARD_INDEX="${E2E_SHARD_INDEX:-}"
 SHARD_TOTAL="${E2E_SHARD_TOTAL:-}"
 RETRIES="${E2E_RETRIES:-2}"

-if [[ -z "$SHARD_INDEX" || -z "$SHARD_TOTAL" ]]; then
+if [[ -z "$SHARD_INDEX" || -z "$SHARD_TOTAL" ]]; then
   echo "Missing E2E_SHARD_INDEX or E2E_SHARD_TOTAL environment variables" >&2
   exit 1
 fi
+if ! [[ "$SHARD_INDEX" =~ ^[0-9]+$ && "$SHARD_TOTAL" =~ ^[0-9]+$ ]] || \
+   (( SHARD_INDEX < 1 || SHARD_TOTAL < 1 || SHARD_INDEX > SHARD_TOTAL )); then
+  echo "E2E_SHARD_INDEX must be a positive integer in [1, E2E_SHARD_TOTAL]" >&2
+  exit 1
+fi
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 0e6fbe48d4
```shell
if ! docker image inspect "$GHOST_DEV_IMAGE" >/dev/null 2>&1 || ! docker image inspect "$GATEWAY_IMAGE" >/dev/null 2>&1; then
    echo "Building missing dev images for E2E (${GHOST_DEV_IMAGE}, ${GATEWAY_IMAGE})..."
    docker compose -f compose.dev.yaml -f compose.dev.analytics.yaml build ghost-dev ghost-dev-gateway
```
Rebuild dev images instead of checking existence only
infra:up only rebuilds ghost-dev/ghost-dev-gateway when docker image inspect says the images are missing, but this commit moved critical public-app URL config out of compose.dev.yaml and into image ENV (docker/ghost-dev/Dockerfile). On machines with cached pre-change images, this branch will skip rebuild and start Ghost without the new runtime config, causing public-app E2E flows to use wrong asset URLs or fail until a manual rebuild is done.
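One hedged way to address this staleness concern is to compare a content hash baked into an image label against the current Dockerfile hash, instead of checking existence alone. The needs_rebuild helper and the e2e.dockerfile-sha label name below are assumptions for illustration, not code from this PR:

```shell
# Decide whether the cached image is stale. An empty image hash means the
# image (or its label) is missing entirely, so a rebuild is needed.
needs_rebuild() {
    local current_hash="$1" image_hash="$2"
    [[ -z "$image_hash" || "$image_hash" != "$current_hash" ]]
}

# Wiring sketch (assumed label name; not executed here):
#   current="$(sha256sum docker/ghost-dev/Dockerfile | cut -d' ' -f1)"
#   baked="$(docker image inspect \
#       --format '{{ index .Config.Labels "e2e.dockerfile-sha" }}' \
#       "$GHOST_DEV_IMAGE" 2>/dev/null || true)"
#   needs_rebuild "$current" "$baked" && \
#       docker compose -f compose.dev.yaml -f compose.dev.analytics.yaml \
#           build ghost-dev ghost-dev-gateway
```

This would require stamping the label at build time (e.g., via a --label flag), which is why simply always running docker compose build and relying on layer caching may be the simpler fix.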
rob-ghost left a comment
CI is 🟢 so I'm happy no matter what 👍🏻
I think there might be a couple of orphaned files:
- e2e/helpers/environment/service-managers/portal-manager.ts's only consumer was the docker-compose.ts removed in this PR.
- e2e/compose.yml doesn't seem to have any references after this PR.
Can be done as a fast-follow though! 🚀
Summary
This PR reworks E2E environment orchestration into a single unified flow for local and CI, while adding support for build-mode testing against configurable Ghost backend images (GHOST_E2E_IMAGE). The main goal is to remove drift between “dev” and “CI” behaviors and make backend-version coverage easier to maintain.
Main changes
- Unified EnvironmentManager plus a worker-scoped GhostManager.
- docker/dev-gateway/Caddyfile.build and e2e/Dockerfile.e2e to layer public app bundles into /content/files.
- New E2E scripts (infra:up/down, preflight/build prep, host/container Playwright runners, Tinybird state sync).
- Infra is started up front (yarn dev or yarn workspace @tryghost/e2e infra:up).
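The mode-driven backend selection described above might be sketched roughly as follows; GHOST_E2E_MODE, GHOST_E2E_IMAGE, and the ghost-e2e:local default appear in this PR's scripts, while the resolve_ghost_image helper and the dev-mode fallback name are illustrative assumptions:

```shell
# Pick which Ghost backend image the E2E run should target based on mode.
resolve_ghost_image() {
    if [[ "${GHOST_E2E_MODE:-build}" == "build" ]]; then
        # Build mode: test against a prebuilt (possibly pinned) Ghost image.
        echo "${GHOST_E2E_IMAGE:-ghost-e2e:local}"
    else
        # Dev mode: fall back to the locally built dev image (name assumed).
        echo "ghost-dev:local"
    fi
}
```

Centralizing the choice in one place like this is what lets local runs and CI share a single flow while still covering multiple backend versions by just overriding GHOST_E2E_IMAGE.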