- Basic Node.js service entrypoint (`src/index.js`)
- Health check endpoint
- Logging utility with structured events
- Job abstraction (interface + in-memory stub)
- Simple artifact model with versioning fields
- File upload API (metadata only, stub storage)
- Enqueue `process_evidence` jobs
- Background Worker Implementation
- Job Lifecycle Events
- LLM provider interface
- OpenAI-compatible provider implementation
- Config via env vars
- Served Static Files (Express)
- Drag & Drop Upload Interface
- Job Status Polling & Visualization
- Download links for generated artifacts
- New: Delete functionality (Job + File + Artifact cleanup)
- Evidence processing worker
- Prompt Engineering for BPMN 2.0 (JSON graph schema)
- Deterministic JSON-to-XML compilation (`xmlbuilder2`)
- Automatic diagram layout (`bpmn-auto-layout`)
- Strict Namespace & Syntax (guaranteed by builder, not LLM)
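The real pipeline compiles via `xmlbuilder2`; the sketch below shows the core idea — the LLM emits a JSON graph, and deterministic code (here plain string assembly, no dependencies) emits namespaced BPMN XML. The `{ nodes, edges }` shape and element names are illustrative assumptions, not the project's actual schema.

```javascript
// Deterministic JSON-graph -> BPMN XML compilation (illustrative sketch).
// The builder, not the LLM, guarantees namespaces and syntax.
const esc = (s) => s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/"/g, '&quot;');

function compileBpmn(graph) {
  const nodeXml = graph.nodes
    .map((n) => `    <bpmn:${n.type} id="${esc(n.id)}" name="${esc(n.name)}" />`)
    .join('\n');
  const edgeXml = graph.edges
    .map((e) => `    <bpmn:sequenceFlow id="${esc(e.id)}" sourceRef="${esc(e.source)}" targetRef="${esc(e.target)}" />`)
    .join('\n');
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL">',
    '  <bpmn:process id="Process_1" isExecutable="false">',
    nodeXml,
    edgeXml,
    '  </bpmn:process>',
    '</bpmn:definitions>',
  ].join('\n');
}

const xml = compileBpmn({
  nodes: [
    { id: 'Start_1', type: 'startEvent', name: 'Start' },
    { id: 'Task_1', type: 'task', name: 'Review invoice' },
  ],
  edges: [{ id: 'Flow_1', source: 'Start_1', target: 'Task_1' }],
});
console.log(xml);
```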
- JSON-based File Store (`src/data/*.json`)
- Persistence for Jobs, Evidence, and Artifacts
- Crash recovery (Jobs survive server restarts)
- SIPOC Matrix Generation (Suppliers, Inputs, Process, Outputs, Customers)
- RACI Matrix Generation (Responsible, Accountable, Consulted, Informed)
- Narrative Documentation Generation (Markdown/HTML)
- UI view for text-based artifacts (Modal & Tables)
- Custom Artifact Naming & Parallel Generation
- Replace JSON FileStore with SQLite (better-sqlite3)
- Replace In-Memory/JSON Queue with Redis (BullMQ)
- Structured App-wide Error Handling
- Docker Containerization (App + Redis via Compose)
- Interactive BPMN Viewer: Integrate `bpmn-js` to view and edit diagrams directly in the browser.
- Rich Text Editor: Edit Narrative Documentation within the UI (Markdown WYSIWYG).
- Interactive Tables: Editable SIPOC and RACI grids.
- Validation Feedback: Real-time validation of edits against BPMN standards (via Modeler).
- Google GenAI Integration: Support for Gemini models via the `@google/genai` SDK.
- Anthropic Integration: Support for Claude models via `anthropic-sdk`.
- Provider Selection UI: Allow users to switch providers per-project or per-job.
- Model Selection UI: Allow users to choose specific models (e.g., gpt-5-nano, gemini-3.1-flash-lite-preview).
- User Authentication: Email/password with JWT (HTTP-only cookies).
- Multi-User Workspaces: Create workspaces and switch between them.
- User Data Isolation: Jobs and artifacts scoped per user and workspace (SQLite-backed).
- Workspace Management UI: Dropdown selector and creation modal in header.
- Role-Based Access: Viewer / Editor / Admin permissions. First registered user is admin.
- Admin Dashboard: User management (roles, status) and all-jobs overview with pagination.
- App Settings: Admin-only LLM configuration UI with encrypted API key storage (AES-256-CBC).
- User Settings: Profile management (name, password with complexity validation).
- Process Name Editing: Update process name after job creation.
- Custom Confirmation Modals: Styled modals replacing native `window.confirm()`.
- Standardized Header: Consistent header across all pages with user menu.
- Workspace sharing: Share workspaces with other users.
- Workspace invitations: Invite users to join workspaces.
- Member Management: List and remove members from workspaces.
- Workspace Dashboard: "My Workspaces" and "Shared Workspaces" views with job counts.
- Workspace Deletion: Owners can delete workspaces and related data.
- SIPOC and RACI export: Export SIPOC and RACI matrices to CSV or Excel.
- BPMN export: Export BPMN diagrams to BPMN or PNG.
- Narrative export: Export narrative documentation to DOCX or PDF.
- Refactor codebase: Removed unused debug scripts and temp files, cleaned up `.gitignore`.
- Refactor styles: Moved ~50 inline `style=` attributes from HTML files to `style.css` using proper CSS classes.
- Detect reusable code: Extracted shared header into `header.js` with dynamic injection; replaced native `alert()` with `showAlertModal()` from `modal-utils.js`.
- Code quality: Added JSDoc comments to backend services and API routes; fixed all ESLint errors.
- Add documentation: Updated `README.md`, `architecture.md`, `api_reference.md`, `user_guide.md`, and `ROADMAP.md` for public release.
- Package readiness: Updated `package.json` to v1.0.0, added repository metadata, updated `.env.example`.
- Add tests: Added unit tests (settingsService, workspaceService, bpmnBuilder) and integration tests (workspaces, jobs) — 59 tests, 0 failures.
- JSON-to-XML Pipeline: LLM outputs a structured JSON graph instead of raw XML; deterministic XML compilation via `xmlbuilder2`.
- Auto-Layout: `bpmn-auto-layout` generates `<bpmndi:BPMNDiagram>` with proper X/Y coordinates.
- JSON Response Mode: Providers enforce JSON output (OpenAI `response_format`, Google `responseMimeType`, Anthropic fence stripping).
- Zod Schema Validation: `src/schemas/bpmnSchema.js` — strict runtime validation with `.strict()` mode, typed enums, JSDoc type inference.
- Self-Healing Loop: Up to 3 retry attempts; feeds structured Zod error messages back to the LLM for correction.
- Dependencies: Added `xmlbuilder2`, `bpmn-auto-layout`, `zod`.
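The self-healing loop can be sketched as below. The real code validates with a strict Zod schema and awaits an actual provider call; to stay dependency-free, this sketch substitutes a plain validator (mimicking `.strict()`-style unknown-key rejection) and a canned two-reply "LLM". All names here are illustrative.

```javascript
// Self-healing validation loop sketch: up to 3 attempts, errors fed back.
function validateGraph(obj) {
  const errors = [];
  if (!Array.isArray(obj.nodes)) errors.push('nodes: expected array');
  if (!Array.isArray(obj.edges)) errors.push('edges: expected array');
  for (const key of Object.keys(obj)) {
    // Mimics Zod's .strict() mode: unknown keys are validation errors.
    if (!['nodes', 'edges'].includes(key)) errors.push(`${key}: unrecognized key`);
  }
  return errors;
}

function generateWithRepair(callLlm, maxAttempts = 3) {
  let feedback = '';
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const candidate = callLlm(feedback); // real code awaits an async provider call
    const errors = validateGraph(candidate);
    if (errors.length === 0) return { graph: candidate, attempts: attempt };
    // Structured error messages become the correction prompt for the next try.
    feedback = `Your previous JSON was invalid, fix these issues:\n${errors.join('\n')}`;
  }
  throw new Error('Validation failed after retries');
}

// Canned provider: first reply is malformed, second is corrected.
const replies = [{ nodes: [], edges: [], extra: true }, { nodes: [], edges: [] }];
const result = generateWithRepair(() => replies.shift());
console.log(result.attempts); // 2: first reply rejected, second accepted
```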
- Audio Ingestion: Support for uploading audio/video files (mp3, wav, mp4, etc.).
- Transcription Configuration: Configure a dedicated LLM provider and model for transcription (STT) in App Settings, kept separate from the main generation model's configuration.
- Transcription Processing: If an audio file is uploaded, automatically transcribe it to text using the configured model.
- Transcript Review: Review and edit transcripts before generating process artifacts.
- Transcript Export & Playback: Export transcripts to TXT and play back audio during review.
- Evidence Pipeline Integration: Use the transcribed text as process evidence to generate artifacts (BPMN, SIPOC, RACI).
- UI Enhancements: Add 'Ollama (Local)' to the LLM Provider options (`app-settings.html`). Implement dynamic UI toggling to hide the API Key field and display a Custom Base URL field when Ollama is selected.
- Provider Factory Update: Modify `src/llm/index.js` to securely intercept Ollama selections. Inject a dummy API key to ensure existing OpenAI keys are never accidentally leaked to the local network.
- SSRF Protection & API Key Relaxation: Update `OpenAIProvider` to relax the strict API key requirement only for verified local endpoints (e.g., `localhost`, `127.0.0.1`, `host.docker.internal`). Block external untrusted URLs to prevent Server-Side Request Forgery.
- Dynamic Model Loading: Verify and test that the existing `listModels()` implementation successfully fetches available local models via Ollama's OpenAI-compatible `/v1/models` endpoint.
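The SSRF guard above amounts to an allowlist check on the custom base URL before the API key requirement is relaxed. A minimal sketch, assuming the hostnames listed in that bullet; the function name is illustrative.

```javascript
// SSRF guard sketch: only known local endpoints may skip the API key.
const LOCAL_HOSTS = new Set(['localhost', '127.0.0.1', 'host.docker.internal']);

function isTrustedLocalEndpoint(baseUrl) {
  let url;
  try {
    url = new URL(baseUrl); // WHATWG URL parsing; throws on malformed input
  } catch {
    return false; // unparsable URLs are rejected outright
  }
  if (!['http:', 'https:'].includes(url.protocol)) return false;
  return LOCAL_HOSTS.has(url.hostname); // anything else is treated as external
}

console.log(isTrustedLocalEndpoint('http://localhost:11434/v1'));         // true
console.log(isTrustedLocalEndpoint('http://host.docker.internal:11434')); // true
console.log(isTrustedLocalEndpoint('https://evil.example.com/v1'));       // false
console.log(isTrustedLocalEndpoint('file:///etc/passwd'));                // false
```

Checking the parsed `url.hostname` rather than doing a substring match is the important design choice; substring checks are a classic SSRF-bypass vector (e.g. `http://localhost.evil.com`).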
- Infrastructure Update: Integrate the `ollama/ollama:latest` service into `docker-compose.yml`. Configure a persistent volume (`ollama_data`) to ensure downloaded models survive container restarts.
- Model Pull API: Create a new backend route (`/api/settings/llm/pull`) that accepts requests to download a curated list of supported models (e.g., Llama 3.2, Qwen 2.5, Mistral).
- Background Worker Integration: Implement a queue worker (`src/workers/modelWorker.js`) using the existing Redis/BullMQ architecture to process long-running model downloads via Ollama's native `/api/pull` endpoint.
- Model Manager UI: Build a "Local Model Manager" section in the App Settings page. Allow users to select recommended models and initiate downloads directly from the interface.
- Progress Tracking: Connect the background download worker to the frontend using persisted pull progress and settings-page polling to provide users with a real-time progress bar during model installation.
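The worker-to-frontend handoff above can be sketched as a per-model progress record that the download worker writes and a polling route reads. A plain `Map` stands in here for whatever persistence the app uses; function and route names are illustrative.

```javascript
// Pull-progress handoff sketch: worker writes, settings page polls.
const progressStore = new Map();

// Called by the worker as it streams status updates from Ollama's /api/pull,
// which reports completed/total byte counts per layer.
function recordPullProgress(model, completed, total) {
  const percent = total > 0 ? Math.round((completed / total) * 100) : 0;
  progressStore.set(model, { completed, total, percent, updatedAt: Date.now() });
}

// Backing logic for a polling route such as GET /api/settings/llm/pull/:model.
function getPullProgress(model) {
  return progressStore.get(model) ?? { completed: 0, total: 0, percent: 0 };
}

recordPullProgress('llama3.2', 1_500_000, 3_000_000);
console.log(getPullProgress('llama3.2').percent); // 50
```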
- Ollama is implemented as a generation provider.
- Local transcription through Ollama is not part of the current runtime and remains deferred until a supported local STT backend is integrated.