Project: ProcessPulse (AI-Assisted Writing Process Analyzer)
Last Updated: December 12, 2025
Status: ✅ ANALYZER WORKING + WRITER READY + DOCKER READY
License: Polyform Noncommercial 1.0.0 (Free for education)
GitHub: https://github.com/lafintiger/processpulse
ProcessPulse is an application for educators to assess student writing by analyzing both the final essay AND the complete AI collaboration history. The core philosophy is 80/20 assessment: 80% of the grade comes from the thinking process, 20% from the final product.
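The 80/20 philosophy reduces to a single weighted sum; a minimal sketch (the 80/20 split comes from the project, the function name and the 0-100 scales are assumptions):

```python
def final_grade(process_score: float, product_score: float) -> float:
    """80/20 weighting: the thinking process dominates the final product.

    Illustrative only -- both inputs assumed on a 0-100 scale.
    """
    return round(0.8 * process_score + 0.2 * product_score, 2)

# A student with a strong process but a weak final essay still does well:
print(final_grade(90, 50))  # 82.0
```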
Current State:
- Backend (FastAPI) fully functional with assessment pipeline + session storage
- Frontend (React) analyzer mode works
- Writer interface READY FOR TESTING with all core features
- Academic integrity tracking - paste detection, copy tracking, focus monitoring
- Export options - DOCX, TXT, HTML, JSON (for assessment)
- Perplexica web search - AI-powered research with source citations
- Docker deployment - One command to deploy everything (`docker-compose up -d`)
- Rich text editor with TipTap (bold, italic, underline, headings, lists, quotes, alignment)
- AI chat sidebar with streaming responses
- Right-click context menu for Edit with AI, Copy, Cut
- Find & Replace (Ctrl+F / Ctrl+H)
- Insert links via toolbar
- Auto-save to localStorage
- Export sessions to backend + local JSON
- Paste detection with character/word count (shows % pasted in stats bar)
- Copy tracking to detect potential external AI use
- Focus tracking - monitors when user leaves the app
- Session metrics - typed vs pasted ratio, AI acceptance rate
- FastAPI with SQLite database
- Writing session storage (`/api/sessions/save`, `/api/sessions/list`, `/api/sessions/{id}`)
- Assessment pipeline with RAG
- Rubric management
Process-Analyzer/
├── app/ # Python backend
│ ├── __init__.py
│ ├── config.py # Settings (Pydantic BaseSettings)
│ ├── api/
│ │ ├── main.py # FastAPI app + lifespan events
│ │ └── routes/
│ │ ├── health.py # /health, /api/status endpoints
│ │ ├── models.py # /api/models (Ollama model list)
│ │ ├── upload.py # /api/upload/essay, /api/upload/chat-history
│ │ ├── rubric.py # /api/rubric (get rubric)
│ │ ├── assessment.py # Assessment endpoints
│ │ └── sessions.py # NEW: Writing session storage
│ ├── db/
│ │ ├── database.py # SQLite + SQLAlchemy async setup
│ │ └── models.py # ORM models (includes WritingSession)
│ └── services/
│ ├── parsing/
│ ├── ollama/
│ ├── rag/
│ ├── rubric/
│ └── assessment/
│
├── frontend/ # React + Vite + TailwindCSS v4
│ ├── src/
│ │ ├── App.tsx # Main app - routes between Home/Writer/Analyzer
│ │ ├── index.css # TailwindCSS + custom styles
│ │ ├── types.ts # TypeScript interfaces
│ │ ├── components/
│ │ │ ├── Header.tsx
│ │ │ ├── StatusBar.tsx
│ │ │ ├── FileUpload.tsx
│ │ │ ├── AssessmentResults.tsx
│ │ │ ├── ChatViewer.tsx
│ │ │ └── writer/ # Writing interface components
│ │ │ ├── WriterPage.tsx
│ │ │ ├── Editor.tsx # TipTap editor with all features
│ │ │ ├── ChatSidebar.tsx
│ │ │ ├── InlineEditPopup.tsx
│ │ │ ├── SettingsPanel.tsx
│ │ │ └── index.ts
│ │ ├── lib/
│ │ │ └── ai-providers.ts # AI provider abstraction
│ │ └── stores/
│ │ └── writer-store.ts # Zustand state with metrics tracking
│ ├── package.json
│ └── vite.config.ts
│
├── data/
│ └── process_analyzer.db # SQLite database
│
├── RubricDocs/ # Assessment rubric documentation
├── Samples/ # Test files
├── requirements.txt
├── run.py
└── README.md
| Route | Method | Description |
|---|---|---|
| `/health` | GET | Simple health check |
| `/api/status` | GET | Full system status |
| `/api/models` | GET | List available Ollama models |
| `/api/upload/essay` | POST | Upload & parse essay file |
| `/api/upload/chat-history` | POST | Upload & parse chat history |
| `/api/rubric` | GET | Get full rubric structure |
| `/api/sessions/save` | POST | NEW: Save writing session |
| `/api/sessions/list` | GET | NEW: List all sessions |
| `/api/sessions/{id}` | GET | NEW: Get session details |
| `/api/sessions/{id}/export` | POST | NEW: Export for assessment |
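A sketch of what a `/api/sessions/save` request body could look like, using field names from the `WritingSession` model documented below — the authoritative request schema lives in `app/api/routes/sessions.py`, so treat this as illustrative:

```python
import json
import time
import uuid

# Illustrative payload; field names mirror the documented WritingSession
# model, but the real schema is defined in app/api/routes/sessions.py.
payload = {
    "session_id": str(uuid.uuid4()),
    "document_title": "Essay draft",
    "document_content": "<p>Final document HTML</p>",
    "word_count": 3,
    "session_start_time": int(time.time() * 1000),  # Unix ms
    "session_end_time": int(time.time() * 1000),
    "events_json": json.dumps([{"type": "session_start"}]),
    "chat_messages_json": json.dumps([]),
}

body = json.dumps(payload)
print(len(body) > 0)  # True
```

POST `body` to `http://localhost:8000/api/sessions/save` with `Content-Type: application/json`.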
```typescript
type EventType =
  | 'session_start'
  | 'session_end'
  | 'text_insert'     // Characters typed
  | 'text_delete'     // Characters deleted
  | 'text_paste'      // Pasted from clipboard (tracks length)
  | 'text_copy'       // Copied to clipboard (potential external AI)
  | 'text_cut'        // Cut to clipboard
  | 'text_select'     // Text selection
  | 'ai_request'      // Asked AI for help
  | 'ai_response'     // AI responded
  | 'ai_accept'       // Accepted AI suggestion
  | 'ai_reject'       // Rejected AI suggestion
  | 'ai_modify'       // Modified AI suggestion
  | 'document_save'
  | 'undo'
  | 'redo'
  | 'focus_lost'      // Window lost focus
  | 'focus_gained'    // Window regained focus

interface SessionMetrics {
  totalCharactersTyped: number    // Student's original work
  totalCharactersPasted: number   // External content pasted in
  totalCharactersCopied: number   // Content copied out (external AI?)
  aiRequestCount: number          // Times asked AI for help
  aiAcceptCount: number           // AI suggestions accepted
  aiRejectCount: number           // AI suggestions rejected
  focusLostCount: number          // Times switched away from app
  totalFocusLostDuration: number  // Time spent outside app (ms)
}
```

```python
class WritingSession(Base):
    id: str                   # UUID
    session_id: str           # Frontend session ID
    document_title: str
    document_content: str     # Final document HTML
    assignment_context: str   # Optional assignment prompt
    word_count: int
    session_start_time: int   # Unix ms
    session_end_time: int     # Unix ms
    events_json: str          # All captured events
    chat_messages_json: str   # AI chat history
    total_events: int
    ai_request_count: int
    ai_accept_count: int
    ai_reject_count: int
    text_insert_count: int
    text_delete_count: int
    ai_provider: str          # ollama, openai, anthropic
    ai_model: str
    status: str               # active, completed, exported
```

Backend:
```powershell
cd C:\Users\lafintiger\SynologyDrive\_aiprojects\__Dev\Process-Analyzer
.\venv\Scripts\Activate.ps1
python run.py
# Runs at http://localhost:8000
# API docs at http://localhost:8000/docs
```

Frontend:
```powershell
cd C:\Users\lafintiger\SynologyDrive\_aiprojects\__Dev\Process-Analyzer\frontend
npm run dev
# Runs at http://localhost:5175
```

Check available Ollama models:
```powershell
curl http://localhost:11434/api/tags
```

- Rich text formatting (bold, italic, underline)
- Headings (H1, H2, H3)
- Lists (bullet, numbered)
- Blockquotes
- Text alignment
- Insert links
- Find & Replace (Ctrl+F, Ctrl+H)
- Word/character count
- Auto-save
- Chat sidebar with streaming
- Right-click → Edit with AI
- Inline edit popup (Cmd+K)
- Multiple providers (Ollama, OpenAI, Claude)
- Provider status indicator
- All events timestamped (Unix ms)
- Paste detection with content length
- Copy tracking
- Focus/blur tracking
- Session metrics calculation
- Backend session storage
- Dark theme
- Context menu on right-click
- Stats bar (words, time, paste %, AI usage)
- Keyboard shortcuts hints
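The integrity metrics above can be derived by folding the captured event log. A hedged Python sketch (the real calculation lives in `writer-store.ts`; the event shape here is a simplified assumption):

```python
from collections import Counter

def summarize(events):
    """Fold a captured event log into headline integrity metrics.

    Each event is assumed to look like {"type": ..., "length": ...};
    the real store tracks more fields per event.
    """
    counts = Counter(e["type"] for e in events)
    typed = sum(e.get("length", 0) for e in events if e["type"] == "text_insert")
    pasted = sum(e.get("length", 0) for e in events if e["type"] == "text_paste")
    total = typed + pasted
    accepts, rejects = counts["ai_accept"], counts["ai_reject"]
    return {
        "pasted_pct": round(100 * pasted / total, 1) if total else 0.0,
        "ai_acceptance_rate": accepts / (accepts + rejects) if accepts + rejects else None,
        "focus_lost_count": counts["focus_lost"],
    }

events = [
    {"type": "text_insert", "length": 800},
    {"type": "text_paste", "length": 200},
    {"type": "ai_accept"}, {"type": "ai_accept"}, {"type": "ai_reject"},
    {"type": "focus_lost"},
]
print(summarize(events))
# pasted_pct 20.0, ai_acceptance_rate ~0.67, focus_lost_count 1
```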
| Feature | Status |
|---|---|
| Export to DOCX | ✅ Done |
| Export to TXT | ✅ Done |
| Export to HTML | ✅ Done |
| Browser spell check | ✅ Done |
| Keyboard shortcuts help modal | ✅ Done |
| Welcome onboarding modal | ✅ Done |
| Auto-save indicator | ✅ Done |
| Error boundary (crash recovery) | ✅ Done |
| Perplexica Web Search | ✅ Done |
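The TXT export amounts to stripping the editor's HTML down to plain text. An illustrative stdlib-only Python sketch (the real implementation is TypeScript in `frontend/src/lib/export-utils.ts`):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects text content, inserting newlines at block boundaries."""
    BLOCK = {"p", "h1", "h2", "h3", "li", "blockquote"}

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_endtag(self, tag):
        if tag in self.BLOCK:
            self.parts.append("\n")

    def handle_data(self, data):
        self.parts.append(data)

def html_to_txt(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "".join(parser.parts).strip()

print(html_to_txt("<h1>Title</h1><p>Body <b>text</b>.</p>"))
# Title
# Body text.
```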
| Feature | Priority | Effort |
|---|---|---|
| Focus mode (minimal UI) | Low | Easy |
| Version history/snapshots | Low | High |
| Mobile responsiveness | Low | Medium |
Default models:
- Analysis: `qwen3:latest` (default changed from `gpt-oss:latest`; see the JSON-mode note below)
- Embeddings: `bge-m3`

Configured in:
- Backend: `app/config.py`
- Frontend: `frontend/src/stores/writer-store.ts` → `defaultSettings`
Problem: `gpt-oss:latest` and some other models do NOT properly support Ollama's `format: "json"` mode. They return malformed JSON or empty responses, causing all assessment scores to be 0.
Solution: Always use models known to support JSON mode:
- ✅ `qwen3:latest` - Works well, fast (4.9GB)
- ✅ `huihui_ai/qwen3-abliterated:8b` - Works well, fast (4.7GB)
- ✅ `huihui_ai/qwen3-abliterated:32b` - Works well, slower but better quality (18.4GB)
- ❌ `gpt-oss:latest` - DOES NOT work with JSON mode
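Operationally, "supports JSON mode" means the raw reply must parse as a JSON object with the expected fields. A small Python helper along those lines (names are illustrative, not part of the codebase):

```python
import json

def usable_json_reply(raw: str, required=("score",)) -> bool:
    """Return True if a model reply parses as a JSON object containing
    the required keys. Models without proper format:"json" support tend
    to return empty strings or prose, which fails this check."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(obj, dict) and all(k in obj for k in required)

print(usable_json_reply('{"score": 75}'))        # True
print(usable_json_reply('Sure! The score is 75'))  # False  (prose)
print(usable_json_reply(''))                       # False  (empty reply)
```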
How to test a model's JSON support:
```powershell
$body = @{model="MODEL_NAME"; prompt="Generate JSON: {score: 75}"; format="json"; stream=$false} | ConvertTo-Json
Invoke-RestMethod -Uri "http://localhost:11434/api/generate" -Method Post -Body $body
```

Problem: After a long-running or failed request, Ollama can get stuck and stop responding to new requests.
Symptoms:
- API calls to Ollama hang indefinitely
- `/api/tags` endpoint times out
- Assessment starts but never progresses
Solution:
- Kill Ollama: `taskkill /f /im ollama.exe` or via Task Manager
- Restart Ollama: `ollama serve`
- Verify it's responding: `Invoke-RestMethod -Uri "http://localhost:11434/api/tags"`
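A quick way to detect a stuck Ollama from Python is to probe `/api/tags` with a hard timeout, so a hang turns into a fast failure (a sketch, not part of the codebase):

```python
import urllib.error
import urllib.request

def ollama_alive(base_url: str = "http://localhost:11434",
                 timeout: float = 5.0) -> bool:
    """Return True if Ollama answers /api/tags within `timeout` seconds.

    A stuck Ollama typically hangs forever; the timeout turns that
    into a False instead of a frozen caller."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print(ollama_alive())
```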
Problem: Chat messages scroll off the visible area or don't scroll at all.
Root Cause: Flexbox containers need explicit height constraints for overflow-y: auto to work.
Solution - Apply these CSS rules:
```css
/* Parent container must have a fixed/explicit height */
.chat-wrapper {
  height: 100%;     /* or calc(100vh - header) */
  overflow: hidden; /* Prevent parent from growing */
}

/* Scrollable container needs min-h-0 AND flex: 1 */
.messages-container {
  flex: 1;
  min-height: 0;    /* CRITICAL for flexbox scrolling */
  overflow-y: auto;
  /* If content still collapses, a fixed fallback such as
     min-height: 500px can replace the 0 above (note that a later
     min-height declaration would silently override the earlier one). */
}

/* Fixed elements (header, input) should not shrink */
.header, .input-area {
  flex-shrink: 0;
}
```

Problem: State gets stuck in invalid states (e.g., `providerStatus: 'checking'` forever) due to localStorage persistence.
Solution: When debugging connection issues:
- Open DevTools → Application → Local Storage
- Clear the `writer-store` key
- Refresh the page
Problem: The Analyzer didn't recognize session exports from the Writer.
Solution: Added a `PROCESSPULSE_SESSION` format to the chat parser that extracts exchanges from both the `chatMessages` and `events` arrays.
- Black screen after document creation
  - Cause: Variable shadowing (`document` vs global `document`)
  - Fix: Renamed to `writerDocument`, used `window.document`
- Text selection immediately clearing
  - Cause: Auto-updating state on selection change
  - Fix: Use right-click context menu instead
- Chat sidebar scrolling broken
  - Cause: Flexbox containers without proper height constraints
  - Fix: Added `min-h-0`, `flex-shrink-0`, fixed `minHeight` on messages container
- Resizable panel needed between editor and chat
  - Fix: Implemented draggable divider with `useState` for width, mouse event handlers
- Provider status stuck at 'checking'
  - Cause: Zustand localStorage persistence with stale state
  - Fix: Clear localStorage; also removed excessive console logging
- Unicode emoji errors
  - Cause: Windows console encoding
  - Fix: Removed all emojis from Python code
- Assessment returning all zeros
  - Cause: `gpt-oss:latest` model doesn't support JSON format mode
  - Fix: Changed default model to `qwen3:latest` (or `huihui_ai/qwen3-abliterated:32b`)
- Assessment endpoint was placeholder
  - Cause: Original endpoint just returned mock data
  - Fix: Implemented full assessment pipeline integration
- ProcessPulse session format not recognized
  - Cause: Chat parser didn't know about Writer's export format
  - Fix: Added `PROCESSPULSE_SESSION` format with parsing for `chatMessages` and `events`
- Perplexica CORS errors
  - Cause: Browser blocking cross-origin requests to localhost:3000
  - Fix: Created backend proxy at `/api/perplexica/`
- TailwindCSS v4 config
  - Fix: Use `@import "tailwindcss"` and `@tailwindcss/postcss`
- Vite proxy for API calls
  - Added proxy in `vite.config.ts` for `/api` to `http://localhost:8000`
- Added proxy in
Agent: Initial PRD Development Agent
Accomplished: Created comprehensive PRD, defined architecture
Agent: Development Agent
Accomplished:
- Set up complete backend (FastAPI, SQLite, parsers, RAG, assessment)
- Set up React frontend with Tailwind v4
- Built analyzer UI
- Started Phase 2: Writer interface
Agent: Development Agent
Accomplished:
- Fixed black screen bug in Writer
- Implemented right-click context menu
- Added paste/copy/focus tracking for academic integrity
- Added session metrics
- Created backend session storage API
- Added Find & Replace (Ctrl+F/H)
- Added link insertion
- Added writing stats bar
Agent: Development Agent
Accomplished:
- Added DOCX export using the `docx` library
- Added TXT export (plain text)
- Added HTML export (styled)
- Enabled browser spell check in editor
- Added keyboard shortcuts help modal (Ctrl+/)
- Added welcome onboarding modal for first-time users
- Added auto-save indicator with timestamp
- Added error boundary component for crash recovery
- Added Perplexica Web Search integration
- PerplexicaClient class for local AI-powered search
- SearchPanel component with multiple focus modes (Web, Academic, YouTube, Reddit, Wolfram)
- Results display with expandable sources and citations
- Insert search results directly to chat
- Tracks `web_search` events for process capture
- PROTOTYPE NOW READY FOR STUDENT TESTING
Key Files Added:
- `frontend/src/lib/export-utils.ts` - Export utilities
- `frontend/src/components/ErrorBoundary.tsx` - Error handling
- `frontend/src/components/writer/SearchPanel.tsx` - Perplexica search UI
Key Files Modified:
- `frontend/src/lib/ai-providers.ts` - Added PerplexicaClient class
- `frontend/src/stores/writer-store.ts` - Search state, actions, `web_search` event type
- `frontend/src/components/writer/WriterPage.tsx` - Export dropdown, shortcuts modal, welcome modal, search button
- `frontend/src/components/writer/Editor.tsx` - Spell check enabled
- `frontend/src/App.tsx` - Wrapped with ErrorBoundary
Agent: Development Agent
Accomplished:
- Fixed Perplexica CORS issue
  - Created backend proxy at `/api/perplexica/` to bypass browser CORS restrictions
  - Updated PerplexicaClient to use the backend proxy instead of direct calls
  - Perplexica web search now fully functional
- Added Polyform Noncommercial 1.0.0 License
  - Free for educators, students, educational institutions
  - Commercial use requires a separate license
- Complete Docker Deployment Setup
  - `Dockerfile` - Backend (FastAPI, multi-stage build, ~500MB)
  - `frontend/Dockerfile` - Frontend (React + nginx, ~50MB)
  - `docker-compose.yml` - Full orchestration with:
    - ProcessPulse frontend & backend
    - Ollama with auto model download
    - Perplexica + SearXNG for web search
  - `env.example` - Configuration template
  - `DEPLOYMENT.md` - Detailed deployment guide
  - Auto-downloads models on first run (~7.5GB total)
  - GPU support option (just uncomment in docker-compose)
  - External Ollama support for existing installations
Key Files Added:
- `Dockerfile` - Backend container
- `frontend/Dockerfile` - Frontend container
- `frontend/nginx.conf` - Production nginx config
- `docker-compose.yml` - Full stack orchestration
- `env.example` - Environment configuration
- `DEPLOYMENT.md` - Deployment documentation
- `LICENSE` - Polyform Noncommercial 1.0.0
- `.dockerignore` - Docker build exclusions
- `app/api/routes/perplexica.py` - Perplexica proxy endpoints
Disk Space Requirements (Docker):
| Component | Size |
|---|---|
| Docker images | ~2.5 GB |
| Chat model (llama3.1:8b) | ~4.7 GB |
| Embedding model | ~275 MB |
| Total | ~7.5 GB |
Agent: Development Agent
Accomplished:
- Fixed Analyzer Assessment Pipeline
  - Assessment endpoint was returning placeholder data - now runs the full pipeline
  - Added ProcessPulse session format parser (`PROCESSPULSE_SESSION`)
  - Session JSON exports from Writer now parse correctly
- Fixed Model JSON Support Issue
  - Discovered `gpt-oss:latest` doesn't support Ollama's `format: "json"` mode
  - Changed default model to `qwen3:latest` (configurable)
  - Documented which models work with JSON mode
- Fixed Chat Sidebar Scrolling
  - Root cause: Flexbox containers need `min-h-0` for scrolling to work
  - Added fixed `minHeight: 600px` to messages area
  - Made main page container `h-screen overflow-hidden`
- Added Resizable Chat Panel
  - Draggable divider between editor and chat
  - Persists width during session
- Added Markdown Export
  - New `exportToMarkdown()` function in `export-utils.ts`
  - Converts HTML to clean Markdown format
- Fixed AssessmentResults Undefined Errors
  - Added null checks for `processing_time_seconds`, `total_score`, `total_possible`
  - Added safety for `summary` and `criterion_assessments` arrays
- Documentation
  - Added comprehensive "Critical Lessons Learned" section
  - Documented all resolved issues with causes and fixes
  - Future agents won't repeat these mistakes
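For a feel of what `exportToMarkdown()` does, here is a deliberately tiny HTML→Markdown sketch in Python — illustrative only; the real converter is TypeScript in `export-utils.ts` and handles far more cases:

```python
import re

def html_to_markdown(html: str) -> str:
    """Very small HTML -> Markdown converter (illustrative only).

    Handles a handful of tags via ordered regex substitutions."""
    rules = [
        (r"<h1>(.*?)</h1>", r"# \1\n"),
        (r"<h2>(.*?)</h2>", r"## \1\n"),
        (r"<(?:b|strong)>(.*?)</(?:b|strong)>", r"**\1**"),
        (r"<(?:i|em)>(.*?)</(?:i|em)>", r"*\1*"),
        (r"<li>(.*?)</li>", r"- \1\n"),
        (r"<p>(.*?)</p>", r"\1\n"),
        (r"</?(?:ul|ol)>", ""),
    ]
    for pattern, repl in rules:
        html = re.sub(pattern, repl, html, flags=re.S)
    return html.strip()

print(html_to_markdown("<h1>Title</h1><p>Some <b>bold</b> text.</p>"))
# # Title
# Some **bold** text.
```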
Key Files Modified:
- `app/api/routes/assessment.py` - Full assessment implementation
- `app/services/parsing/chat_parser.py` - ProcessPulse session format
- `app/config.py` - Changed default model to qwen3:latest
- `frontend/src/App.tsx` - Assessment handling, progress display
- `frontend/src/components/writer/WriterPage.tsx` - Resizable panels, markdown export
- `frontend/src/components/writer/ChatSidebar.tsx` - Fixed scrolling
- `frontend/src/components/AssessmentResults.tsx` - Null safety checks
- `frontend/src/lib/export-utils.ts` - Added markdown export
Critical Learnings:
- Always test model JSON support before using for assessment
- Flexbox scrolling requires `min-h-0` on child containers
- Ollama can get stuck - restart it if requests hang
- Clear localStorage when Zustand state gets corrupted
Agent: Development Agent
Accomplished:
- Improved Assessment Prompt Strictness
  - Updated `SYSTEM_PROMPT` with explicit red flags for AI over-dependence
  - Added copy-paste delegation patterns that trigger automatic INADEQUATE scores
  - Added passive consumption patterns that cap scores at DEVELOPING
  - Created detailed scoring guidance with examples for each level
  - Updated summary prompt to ask "delegation vs collaboration?"
  - Updated authenticity prompt with additional delegation checks
  - Improved semantic search queries for each criterion
  - Result: Test submission scored 59/100 (down from 62/100) - more accurate for copy-paste usage
- Added PDF Export for Assessment Reports
  - Professional multi-page PDF reports using jsPDF + jspdf-autotable
  - Includes: overall score, summary paragraphs, key strengths, areas for growth
  - Detailed criterion breakdown with reasoning, evidence citations, feedback
  - Authenticity analysis with flags and positive indicators
  - Export button added to AssessmentResults component
  - Also added JSON data export option
- Changed Frontend Port to 5175
  - Updated `vite.config.ts` to use port 5175 (was 5173)
  - Avoids conflict with other projects
Agent: Development Agent
Accomplished:
- Made App Remotely Accessible for Student Testing
  - Changed backend host from `127.0.0.1` to `0.0.0.0` in `app/config.py`
  - Updated CORS to allow all origins (`allow_origins=["*"]`)
  - Frontend runs with the `--host` flag for LAN access
- Added Student Information to Document Creation
  - New "Student Information" section in New Document modal
  - Required: Student Name (must be filled to create document)
  - Optional: Student ID
  - Student badge appears in editor header showing name/ID
- Created Server-Side Submission Storage System
  - New `/api/submissions/` endpoints in `app/api/routes/submissions.py`
  - Submissions saved to `data/submissions/{student_name}/` directory
  - Each submission creates two files:
    - `{title}_{timestamp}.md` - Essay in Markdown with YAML frontmatter
    - `{title}_{timestamp}_session.json` - Full session data for assessment
  - Includes computed stats: AI requests, paste counts, characters typed, etc.
- Added "Submit" Button to Writer
  - Green "Submit" button in editor header
  - Shows submission result modal (success/failure)
  - Submits to the server automatically - no manual downloads needed
- Created Instructor Submissions Dashboard
  - New "Submissions" option on home page (3-column layout)
  - Lists all student submissions with metadata
  - Search/filter by student name, ID, or document title
  - Download individual MD or JSON files
  - Delete submissions
  - Accessible at home page → "Submissions"
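The two-file submission format described above can be sketched as follows; the exact frontmatter keys here are an assumption, and the server code in `app/api/routes/submissions.py` defines the real format:

```python
from datetime import datetime, timezone

def submission_markdown(student: str, title: str, essay_md: str, stats: dict) -> str:
    """Render an essay as Markdown with YAML frontmatter, in the spirit
    of the {title}_{timestamp}.md files described above.

    Key names are illustrative; the server defines the real schema."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    front = [
        "---",
        f"student: {student}",
        f"title: {title}",
        f"submitted: {ts}",
    ]
    front += [f"{k}: {v}" for k, v in stats.items()]  # computed stats
    front.append("---")
    return "\n".join(front) + "\n\n" + essay_md

doc = submission_markdown("Ada", "My Essay", "# My Essay\n...",
                          {"ai_requests": 3, "paste_count": 1})
print(doc.splitlines()[0])  # ---
```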
Key Files Added:
- `app/api/routes/submissions.py` - Submission storage API
- `frontend/src/components/SubmissionsDashboard.tsx` - Instructor dashboard
Key Files Modified:
- `app/config.py` - Changed host to `0.0.0.0` for remote access
- `app/api/main.py` - Added submissions router, CORS `*` origins
- `frontend/src/stores/writer-store.ts` - Added `StudentInfo`, `submitForAssessment()`
- `frontend/src/components/writer/WriterPage.tsx` - Student fields, submit button/modal
- `frontend/src/App.tsx` - Added submissions dashboard route
API Endpoints Added:
| Route | Method | Description |
|---|---|---|
| `/api/submissions/submit` | POST | Submit writing (saves MD + JSON to server) |
| `/api/submissions/list` | GET | List all submissions (filterable by student) |
| `/api/submissions/{id}` | GET | Get full submission details |
| `/api/submissions/{id}/download/{md\|json}` | GET | Download specific file |
| `/api/submissions/{id}` | DELETE | Delete a submission |
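A sketch of how a listing over the documented `data/submissions/{student_name}/` layout could work (the real endpoint is `/api/submissions/list`; this directory walk is illustrative):

```python
import tempfile
from pathlib import Path

def list_submissions(root: Path):
    """Yield (student, filename) for every session JSON under
    root/{student_name}/, mirroring the documented directory layout."""
    for student_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for f in sorted(student_dir.glob("*_session.json")):
            yield student_dir.name, f.name

# Demo against a throwaway directory tree
root = Path(tempfile.mkdtemp())
(root / "Ada").mkdir()
(root / "Ada" / "Essay_20250101_session.json").write_text("{}")
print(list(list_submissions(root)))
# [('Ada', 'Essay_20250101_session.json')]
```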
- Go to `http://{YOUR_IP}:5175` (get the IP from your instructor)
- Click "Writer"
- Click "New Document"
- Enter your name (required) and optional Student ID
- Enter document title and assignment context
- Click "Create"
- Start writing! Use AI chat sidebar for help
- Right-click on selected text for AI editing
- Click "Submit" when done - your work is automatically saved to the server!
- Start the server with `python run.py` (backend) and `npm run dev -- --host` (frontend)
- Give students the URL: `http://{YOUR_IP}:5175`
- Submissions are automatically saved to `data/submissions/`
- Go to http://localhost:5175 → "Submissions" to view all student work
- Download MD (essay) or JSON (session) files
- Use "Analyzer" to assess submissions with the full rubric
```powershell
# Windows
Get-NetIPAddress -AddressFamily IPv4 | Where-Object {$_.InterfaceAlias -notlike "*Loopback*"} | Select-Object IPAddress
```

```bash
# Mac/Linux
ip addr show | grep inet | grep -v 127.0.0.1
```

- ~~Connect Writer → Analyzer flow - Button to directly import session for assessment~~ (Use Submissions dashboard instead)
- Testing - Unit tests for backend, integration tests for frontend
- ~~Deployment - Docker setup, production config~~ ✅ DONE
- Mobile responsiveness - Make writer usable on tablets
- Test Docker deployment - Verify docker-compose works on a fresh machine
- Batch assessment - Multiple submissions at once
- ~~PDF export - Export essays as PDF~~ (Already have DOCX/MD/HTML export)
- Direct analyze from submissions - One-click to load a submission into the analyzer
```bash
git clone https://github.com/lafintiger/processpulse.git
cd processpulse
cp env.example .env
docker-compose up -d
# Access at http://localhost
```

This starts:
- Frontend (nginx + React)
- Backend (FastAPI)
- Ollama (Local AI)
- Perplexica (Web Search)
- SearXNG (Search Engine)
Edit `.env` to change:
- Ports (default: 80, 11434, 3000)
- AI models (default: `llama3.1:8b`, `nomic-embed-text`)
- Debug mode
See DEPLOYMENT.md for full instructions.
This document should be updated whenever significant progress is made.