## Summary
Backend API + admin console UI for generating realistic test questionnaire submissions with OpenAI-powered multilingual comments (Cebuano, Tagalog, English, code-switched).
## Motivation
Manually constructing CSV files with realistic submission data for questionnaire ingestion is too slow for rapid analytics testing. The team needs volume (up to ~50 submissions per course) with realistic, multilingual qualitative feedback to properly exercise analytics dashboards (sentiment analysis, topic modeling, etc.).
## Scope

### API Endpoints
- `GET /admin/generate-submissions/status` — lightweight pre-check (enrollment/submission counts)
- `POST /admin/generate-submissions/preview` — generate rows with OpenAI comments
- `POST /admin/generate-submissions/commit` — submit via `QuestionnaireService.submitQuestionnaire()` per row
- 4 new filter endpoints: faculty, courses, questionnaire-types, questionnaire-versions
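The status pre-check can be illustrated with a small sketch. This is not the actual DTO from the PR — `enrolledCount` and `existingSubmissions` are hypothetical field names — but it shows how a client could use the pre-check to cap generation at the ~50-submissions-per-course ceiling mentioned in the Motivation:

```typescript
// Hypothetical shape of the GET /admin/generate-submissions/status response.
// Field names are illustrative assumptions, not the real API contract.
interface GenerateStatus {
  enrolledCount: number;       // students enrolled in the selected course
  existingSubmissions: number; // submissions already present for this questionnaire
}

// How many new submissions can still be generated without exceeding the
// per-course cap (default ~50). Also bounded by enrollment, since each
// generated row is submitted on behalf of an enrolled student.
function remainingCapacity(status: GenerateStatus, cap = 50): number {
  const room = cap - status.existingSubmissions;
  return Math.max(0, Math.min(room, status.enrolledCount));
}
```

Checking this number before calling the preview endpoint is what lets the UI skip generation entirely (and spend no OpenAI tokens) when the course is already saturated.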
### Admin Console UI
- Two-track selection form (faculty+course, type+version) with cascading dropdowns
- Auto-fetched submission status warning before generation (saves OpenAI tokens)
- Color-coded preview table with tooltips
- Commit result dialog with success/partial/failure states
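The cascading behavior of the two-track form reduces to one rule: a child dropdown (courses, versions) only offers options belonging to the currently selected parent (faculty, type). A minimal sketch of that rule, with hypothetical types not taken from the actual UI code:

```typescript
// Illustrative option type; the real components likely carry more fields.
type Option = { id: string; label: string };

// Child options keyed by parent id (e.g. courses keyed by faculty id, or
// questionnaire versions keyed by questionnaire type). With no parent
// selected, the child dropdown stays empty/disabled.
function dependentOptions(
  byParent: Record<string, Option[]>,
  parentId: string | null,
): Option[] {
  return parentId ? (byParent[parentId] ?? []) : [];
}
```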
## Key Design Decisions
- Direct `submitQuestionnaire()` calls (bypasses ingestion pipeline to avoid DI scope issues)
- OpenAI `gpt-4o-mini` for comments with fallback on failure/timeout
- Per-student answer tendency for realistic distribution
- `@ArrayNotEmpty()` + `@ArrayMaxSize(200)` on commit rows
- Dangerous-key rejection for answers (`__proto__`, `constructor`, `prototype`)
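The dangerous-key rejection guards against prototype-pollution payloads in the free-form answers object. A minimal sketch of the idea — this is an illustration, not the actual validator from the PR:

```typescript
// Keys that could be abused for prototype pollution if an answers object
// were merged into another object without sanitization.
const DANGEROUS_KEYS = new Set(["__proto__", "constructor", "prototype"]);

// Returns true if the answers payload contains any own key that should be
// rejected. Note that JSON.parse('{"__proto__": ...}') produces an own
// "__proto__" property, which is exactly the case this check catches.
function hasDangerousKey(answers: Record<string, unknown>): boolean {
  return Object.keys(answers).some((key) => DANGEROUS_KEYS.has(key));
}
```

Rejecting the whole row (rather than silently dropping the key) makes the failure visible in the commit result dialog.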
## Tests
- 31 unit tests across 3 suites (CommentGeneratorService, AdminGenerateService, AdminFiltersService)
- All 757 project tests passing
## Tech Spec
`_bmad-output/implementation-artifacts/tech-spec-csv-test-submission-generator.md`