Context for AI coding assistants (Claude Code, Codex, Cursor, etc.). Update as the project evolves.
Runway is a UI for transforming K-12 education data into the Ed-Fi standard, built on top of earthmover (data transformation) and lightbeam (Ed-Fi API loading).
- `app/` — Node.js monorepo (NX workspace): NestJS backend + React frontend
- `executor/` — Python job executor (earthmover + lightbeam)
- `cloudformation/` — AWS deployment templates
```
app/
├── api/             # NestJS backend
│   ├── src/         # Application source
│   │   └── database/  # Prisma schema + Postgrator migrations
│   └── integration/ # Integration tests + helpers
├── fe/              # React frontend
├── models/          # Shared TypeScript types
└── utils/           # Shared utilities
```
- Frontend: React 18, Chakra UI v2 (custom tokens: `blue.50`, `pink.100`, `gray.50`, `green.100`), TanStack Router + Query, react-hook-form
- Backend: NestJS, Prisma ORM, PostgreSQL, Passport.js (OIDC for UI auth), jose (JWT for external API auth)
- Build: NX monorepo, TypeScript throughout
- CI: GitHub Actions — `.github/workflows/app_ci_pipeline.yml`

All commands run from `app/`:
- Full suite — CI/local parity (spins up a Dockerized test DB in CI and local runs): `npm run api:test`
- Quick integration tests — local dev (starts the test DB if needed, leaves it running): `npm run api:test:integration:local`
- Typechecking: `npm run api:typecheck` and `npm run fe:typecheck`

Schema changes follow this workflow (all commands run from `app/`):
- Write a SQL migration file in `app/api/src/database/postgrator/migrations/`
- Present the SQL for review and get explicit user approval
- Run `npm run api:migrate-local-dev` to apply the migration to the local dev DB
- Run `npm run prisma:pull-and-generate` to introspect the DB and regenerate the Prisma schema + client
- Verify the Prisma schema diff contains all and only the expected changes
Do not edit `schema.prisma` directly — it is generated from the database via `prisma:pull-and-generate`. The SQL migration is the source of truth.
Migrations run automatically at the start of the integration test suite. If tests fail with schema errors, a missing or mismatched migration is the likely cause.
- App: Elastic Beanstalk (EC2 + ALB), frontend on S3 + CloudFront
- Executor: ECS Fargate (3 task sizes: small/medium/large)
- Database: RDS PostgreSQL (private subnet)
- Network: VPC with public + private subnets across 2 AZs
- CI/CD: CodePipeline + CodeBuild → Beanstalk deploy + ECR push
- Security: WAF on ALB, IAM scoped roles, Secrets Manager
- Monitoring: CloudWatch dashboards + alarms, EventBridge → Slack via Lambda
```mermaid
sequenceDiagram
    participant User as Browser
    participant App as App (NestJS)
    participant DB as PostgreSQL
    participant S3 as AWS S3
    participant ECS as AWS ECS Fargate
    participant Exec as Executor (Python)
    participant ODS as Ed-Fi ODS API
    User->>App: POST /jobs (create job)
    App->>DB: Create Job + JobFile records
    App->>S3: Generate presigned upload URL
    App-->>User: { jobId, uploadLocations[] }
    User->>S3: PUT file (direct upload via presigned URL)
    User->>App: PUT /jobs/{id}/start
    App->>DB: Create Run record
    App->>ECS: RunTask (Fargate) with INIT_TOKEN, INIT_JOB_URL, AWS creds
    ECS->>Exec: Container starts
    Exec->>App: GET /api/earthbeam/jobs/{runId} (init handshake)
    App-->>Exec: Auth token + job definition (files, ODS creds, bundle, callback URLs)
    Exec->>ODS: lightbeam fetch (student roster)
    Exec->>S3: Download input files
    Exec->>Exec: earthmover run (transform data)
    Exec->>ODS: lightbeam send (load to ODS)
    Exec->>S3: Upload output artifacts
    Exec->>App: POST /output-files (path + sentToOds → app lists S3, saves run_output_file_set)
    Exec->>App: POST /status, /error, /summary, /unmatched-ids
    Exec->>App: POST /status {action: done}
    App->>S3: List output files → create RunOutputFile records
    User->>App: GET /jobs/{id}/output-files/{name}
    App->>S3: Generate presigned download URL
```
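The browser's side of the sequence above can be sketched as follows. This is a minimal illustration, not actual project code: the endpoint paths come from the diagram, but the request/response field names beyond `jobId` and `uploadLocations`, and the injected `Http` interface, are assumptions for the sake of a runnable example.

```typescript
// Sketch of the browser-side job flow, with an injected HTTP client so the
// sequence can be exercised without a server. Field names beyond
// jobId/uploadLocations are illustrative assumptions.

type CreateJobResponse = {
  jobId: string;
  uploadLocations: { fileName: string; url: string }[]; // presigned PUT URLs
};

interface Http {
  post(path: string, body?: unknown): Promise<unknown>;
  put(path: string, body?: unknown): Promise<unknown>;
}

async function runJobFlow(http: Http, files: Map<string, string>): Promise<string> {
  // 1. Create the job; the app records Job + JobFile rows and returns presigned URLs.
  const { jobId, uploadLocations } = (await http.post("/jobs", {
    files: [...files.keys()],
  })) as CreateJobResponse;

  // 2. Upload each file directly to S3 via its presigned URL (bypassing the app).
  for (const loc of uploadLocations) {
    await http.put(loc.url, files.get(loc.fileName));
  }

  // 3. Start the run; the app creates a Run record and launches the Fargate task.
  await http.put(`/jobs/${jobId}/start`);
  return jobId;
}
```

The key property the diagram encodes is that file bytes never pass through the app: the app only hands out presigned URLs, and the browser PUTs directly to S3.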
| Service | Used By | Purpose |
|---|---|---|
| S3 | App + Executor | File storage — presigned upload/download URLs, executor artifact I/O |
| ECS Fargate | App | Launches executor container |
| STS | App | Generates scoped temporary credentials for executor S3 access |
| SSM Parameter Store | App | ECS cluster/subnet/task definition config |
| Secrets Manager | App | Database credentials, app config |
| EventBridge | App | Run-completion notifications (Slack, etc.) |
| ECR | CI/CD | Executor Docker image registry |
- `app/api/src/files/file.service.ts` — S3 presigned URL generation
- `app/api/src/earthbeam/executor/executor.aws.service.ts` — ECS task launch, STS assume role
- `app/api/src/event-emitter/event-emitter.service.ts` — EventBridge notifications
- `app/api/src/config/app-config.service.ts` — Secrets Manager + SSM reads
- `app/api/src/earthbeam/api/earthbeam-api.controller.ts` — HTTP callback endpoints the executor calls
- `app/api/src/earthbeam/api/earthbeam-api.service.ts` — Job payload assembly, run completion
- `executor/executor/executor.py` — Main executor: S3 operations, HTTP callbacks, earthmover/lightbeam invocation
- Init: GET `INIT_JOB_URL` with `INIT_TOKEN` → receives auth token + job URL
- Job fetch: GET job URL → full job definition (files, ODS creds, bundle, callback URLs)
- Bundle refresh: git fetch/checkout/pull the earthmover bundle
- Roster fetch: `lightbeam fetch` student roster from ODS, upload artifact to S3
- File download: Download user-uploaded input files from S3
- Transform: `earthmover run` (with encoding detection + retry)
- Load: `lightbeam send` to Ed-Fi ODS
- Report: POST summary, unmatched IDs, errors to app via callback URLs
- Output files: POST output file path + `sentToOds` flag to `/output-files` callback; app validates path, lists S3, saves `run_output_file_set`
- Done: POST status `{action: DONE, status: success|failure}`
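To make the handshake and final status callback concrete, here are illustrative TypeScript shapes for the payloads in the lifecycle above. The field names are assumptions inferred from this document, not the actual wire format.

```typescript
// Illustrative payload shapes for the executor lifecycle. Field names are
// assumptions, not the actual wire format.

type InitResponse = { authToken: string; jobUrl: string };

type JobDefinition = {
  files: { templateKey: string; s3Key: string }[];
  odsCredentials: { baseUrl: string; clientId: string; clientSecret: string };
  bundle: { repoUrl: string; ref: string };
  callbackUrls: { status: string; error: string; summary: string; outputFiles: string };
};

// Final status callback: POST /status {action: DONE, status: success|failure}
type DoneStatus = { action: "DONE"; status: "success" | "failure" };

function isDone(payload: { action: string }): payload is DoneStatus {
  return payload.action === "DONE";
}
```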
```
{partnerId}/{tenantCode}/{schoolYearId}/{jobId}/input/{templateKey}__{fileName}
{partnerId}/{tenantCode}/{schoolYearId}/{jobId}/output/{artifactFileName}
__rosters/{partnerId}/{tenantCode}/{schoolYearEndYear}/*
```
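A minimal sketch of the first two key patterns as helper functions, plus a plausible prefix check like the path validation the `/output-files` callback performs. The function names and `JobContext` shape are hypothetical; the real validation logic may differ.

```typescript
// Hypothetical helpers illustrating the S3 key layout documented above.
// Names and shapes are assumptions, not actual project code.

type JobContext = {
  partnerId: string;
  tenantCode: string;
  schoolYearId: string;
  jobId: string;
};

function inputKey(ctx: JobContext, templateKey: string, fileName: string): string {
  return `${ctx.partnerId}/${ctx.tenantCode}/${ctx.schoolYearId}/${ctx.jobId}/input/${templateKey}__${fileName}`;
}

function outputKey(ctx: JobContext, artifactFileName: string): string {
  return `${ctx.partnerId}/${ctx.tenantCode}/${ctx.schoolYearId}/${ctx.jobId}/output/${artifactFileName}`;
}

// A reported output path must stay inside the job's output prefix.
function isWithinOutputPrefix(ctx: JobContext, reportedPath: string): boolean {
  const prefix = `${ctx.partnerId}/${ctx.tenantCode}/${ctx.schoolYearId}/${ctx.jobId}/output/`;
  return reportedPath.startsWith(prefix) && !reportedPath.includes("..");
}
```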
- Commits: lowercase subject + body explaining the "why"
- API: NestJS controller → service → repository pattern
- Error handling: Services return result objects (`{ status: 'SUCCESS', data }` / `{ status: 'ERROR', code }`) for expected failure modes; unexpected errors throw. Controllers map error results to HTTP exceptions. Services should not import or throw HTTP exceptions.
- FE: Chakra UI v2 with custom design tokens; prefer inline readable code over extracted helpers for short logic
- Icons: `app/fe/src/assets/icons/`
- Data fetching and suspense:
  - Critical data → `useSuspenseQuery` in the component, `ensureQueryData` in an ancestor loader.
  - Optional data → `useQuery` with a fallback value. Optionally `prefetchQuery` in a loader to warm the cache.
  - Never `useSuspenseQuery` on a query only reached via `prefetchQuery` — a failed prefetch will re-suspend.
  - Scope prefetches to the route that needs them, not `__root`.
  - Always `await` or `return` a prefetch from the loader — fire-and-forget won't warm the cache in time. Pair `prefetchQuery` with a soft fallback (`data ?? default`) in the component, since errored prefetches leave the cache empty.
  - Per-major-route pending/error UI via TanStack Router's `pendingComponent` and `errorComponent` (set on the section's parent route, e.g., `/ods-configs`). Route-level pending covers all sub-routes; the top-level React `<Suspense>` in `app.tsx` is only a safety net for unexpected suspends.
- Documentation: When changing behavior described in nearby docs (README.md, AGENTS.md, code comments), update the docs in the same commit. When creating a commit, review changed files for references to documentation and flag any that may need updating.
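The error-handling convention above can be sketched as a discriminated union plus a controller-side mapping. This is an illustration under assumed names (`findOdsConfig`, `toHttpStatus`, the in-memory `db`), not actual project code; in NestJS the controller would throw `NotFoundException` etc. rather than return a status number.

```typescript
// Sketch of the service result convention: expected failures are values,
// not thrown HTTP exceptions. Names are illustrative.

type ServiceResult<T> =
  | { status: "SUCCESS"; data: T }
  | { status: "ERROR"; code: string };

// Service layer: returns a result object for an expected failure mode.
function findOdsConfig(
  id: string,
  db: Map<string, { id: string }>,
): ServiceResult<{ id: string }> {
  const row = db.get(id);
  return row ? { status: "SUCCESS", data: row } : { status: "ERROR", code: "NOT_FOUND" };
}

// Controller layer: maps error codes to HTTP statuses (shown as numbers here).
function toHttpStatus(code: string): number {
  switch (code) {
    case "NOT_FOUND":
      return 404;
    case "FORBIDDEN":
      return 403;
    default:
      return 500;
  }
}
```

The point of the convention is layering: services stay HTTP-agnostic, and only controllers decide how an error code surfaces to the client.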