The AI feedback system with rubric and RAG support has already been implemented in your application and is operational in both the backend and frontend.
Location: Backend/app/api/routes/student.py (lines 115-201)
Method: POST
Functionality:
- Saves student answer to database
- Automatically generates AI feedback for first attempt
- Uses AIFeedbackService with rubric configuration
- Retrieves RAG context from embedded documents
- Returns answer + feedback in single response
Request Format:
{
  "student_id": "STUDENT_12345",
  "question_id": "uuid-here",
  "module_id": "uuid-here",
  "document_id": "uuid-here",
  "answer": {
    "selected_option": "B"   // For MCQ
    // OR
    "text_response": "..."   // For text answers
  },
  "attempt": 1               // 1 or 2
}

Response Format (First Attempt):
{
  "success": true,
  "answer": {
    "id": "uuid",
    "student_id": "STUDENT_12345",
    "question_id": "uuid",
    "document_id": "uuid",
    "answer": {"selected_option": "B"},
    "attempt": 1,
    "submitted_at": "2025-01-15T10:30:00Z"
  },
  "feedback": {
    "feedback_id": "uuid",
    "question_id": "uuid",
    "is_correct": false,
    "correctness_score": 45.0,
    "explanation": "Your answer is partially correct. You identified...",
    "improvement_hint": "Review the section on...",
    "concept_explanation": "The key concept here is...",
    "confidence_level": "high",
    "selected_option": "B",
    "correct_option": "A",
    "available_options": {...},
    "used_rag": true,
    "rag_sources": ["Textbook.pdf", "Lecture_Notes.pdf"],
    "rag_context_summary": "Retrieved from: Textbook.pdf, Lecture_Notes.pdf",
    "feedback_type": "mcq",
    "attempt_number": 1,
    "model_used": "gpt-4",
    "generated_at": "2025-01-15T10:30:05Z"
  },
  "attempt_number": 1,
  "can_retry": true,
  "max_attempts": 2
}

Response Format (Second Attempt):
{
  "success": true,
  "answer": {...},
  "attempt_number": 2,
  "final_submission": true,
  "message": "Final answer submitted successfully"
}

Error Handling:
- If feedback generation fails, answer is still saved
- Returns success with feedback: null and error message
- Student experience is not interrupted
Code Flow:
# Backend/app/api/routes/student.py:116-201
@router.post("/submit-answer")
def submit_student_answer(answer_data: StudentAnswerCreate, db: Session = Depends(get_db)):
    # 1. Save/update answer
    existing_answer = get_student_answer(db, ...)
    if existing_answer:
        created_answer = update_student_answer(db, ...)
    else:
        created_answer = create_student_answer(db, answer_data)

    # 2. Generate feedback for first attempt only
    if answer_data.attempt == 1:
        try:
            question = get_question_by_id(db, str(answer_data.question_id))
            module_id = str(question.document.module_id)

            # Use AIFeedbackService (with rubric + RAG)
            feedback_service = AIFeedbackService()
            feedback = feedback_service.generate_instant_feedback(
                db=db,
                student_answer=created_answer,
                question_id=str(answer_data.question_id),
                module_id=module_id
            )

            # Return answer + feedback
            return {
                "success": True,
                "answer": {...},
                "feedback": feedback,
                "attempt_number": 1,
                "can_retry": not feedback.get("is_correct", False)
            }
        except Exception as e:
            # Graceful failure - answer still saved
            return {
                "success": True,
                "answer": created_answer,
                "feedback": None,
                "error": f"Feedback failed: {str(e)}"
            }

    # 3. Second attempt - no feedback
    else:
        return {
            "success": True,
            "answer": created_answer,
            "attempt_number": 2,
            "final_submission": True
        }

Location: Frontend/app/student/test/[moduleId]/page.js
Key Features:
const [feedback, setFeedback] = useState({}); // Store feedback for each question
const [showFeedback, setShowFeedback] = useState({}); // Track which feedback is shown
const [attempts, setAttempts] = useState({}); // Track attempt numbers

// For MCQ, save immediately and handle feedback
if (question.type === 'mcq') {
  const saveAnswer = async () => {
    setSaveStatus('saving');
    const response = await apiClient.post(`/api/student/submit-answer`, {
      student_id: moduleAccess.studentId,
      question_id: questionId,
      module_id: moduleId,
      answer: { selected_option: answer.trim() },
      attempt: currentAttempt
    });
    // Handle feedback for first attempt
    if (currentAttempt === 1 && response?.feedback) {
      setFeedback(prev => ({ ...prev, [questionId]: response.feedback }));
      setShowFeedback(prev => ({ ...prev, [questionId]: true }));
    }
    setSaveStatus('saved');
  };
  saveAnswer();
}

// For text answers, use debounced save
const timeoutId = setTimeout(async () => {
  setSaveStatus('saving');
  const response = await apiClient.post(`/api/student/submit-answer`, {
    student_id: moduleAccess.studentId,
    question_id: questionId,
    module_id: moduleId,
    answer: { text_response: answer.trim() },
    attempt: currentAttempt
  });
  // Handle feedback for first attempt
  if (currentAttempt === 1 && response?.feedback) {
    setFeedback(prev => ({ ...prev, [questionId]: response.feedback }));
    setShowFeedback(prev => ({ ...prev, [questionId]: true }));
  }
  setSaveStatus('saved');
}, 1200); // Debounced for text input

const handleSecondAttempt = async (questionId) => {
  // Set attempt to 2 for this question
  setAttempts(prev => ({ ...prev, [questionId]: 2 }));
  // Clear the feedback display
  setShowFeedback(prev => ({ ...prev, [questionId]: false }));
  // Clear current answer to allow fresh input
  setAnswers(prev => ({ ...prev, [questionId]: "" }));
  setSuccess("You can now try again! Enter your second attempt.");
};

// Question navigation buttons show visual status
{questions.map((q, index) => {
  const hasFeedback = feedback[q.id];
  const isCorrect = hasFeedback?.is_correct;
  const questionAttempt = attempts[q.id] || 1;
  return (
    <button className={`
      ${isCorrect ? 'border-green-500 bg-green-50' : ''}
      ${hasFeedback && !isCorrect ? 'border-orange-500 bg-orange-50' : ''}
    `}>
      {isCorrect ? <CheckCircle /> : hasFeedback && !isCorrect ? <XCircle /> : <Eye />}
      {index + 1}
      {questionAttempt > 1 && <span className="badge">{questionAttempt}</span>}
    </button>
  );
})}

{/* AI Feedback Display */}
{showFeedback[currentQ?.id] && feedback[currentQ?.id] && (
  <Card className="mt-4 border-l-4 border-l-blue-500">
    <CardHeader>
      <div className="flex items-center justify-between">
        <CardTitle className="flex items-center gap-2">
          <Brain className="w-5 h-5 text-blue-600" />
          AI Feedback - Attempt {attempts[currentQ?.id] || 1}
        </CardTitle>
        <div className="flex items-center gap-2">
          {feedback[currentQ?.id]?.is_correct ? (
            <Badge className="bg-green-100 text-green-800">
              <CheckCircle className="w-3 h-3 mr-1" />
              Correct
            </Badge>
          ) : (
            <Badge className="bg-orange-100 text-orange-800">
              <XCircle className="w-3 h-3 mr-1" />
              Incorrect
            </Badge>
          )}
          <Button variant="ghost" size="sm" onClick={() => dismissFeedback(currentQ?.id)}>
            <XCircle className="w-4 h-4" />
          </Button>
        </div>
      </div>
    </CardHeader>
    <CardContent>
      <div className="space-y-4">
        {/* Score Bar */}
        <div className="flex items-center gap-2">
          <span className="text-sm font-medium">Score:</span>
          <div className="flex-1 bg-gray-200 rounded-full h-2">
            <div
              className="bg-blue-600 h-2 rounded-full"
              style={{ width: `${feedback[currentQ?.id]?.correctness_score || 0}%` }}
            ></div>
          </div>
          <span className="text-sm font-medium">
            {feedback[currentQ?.id]?.correctness_score || 0}%
          </span>
        </div>
        {/* Explanation */}
        <div className="bg-blue-50 p-4 rounded-lg">
          <h4 className="font-medium text-blue-900 mb-2">Explanation:</h4>
          <p className="text-blue-800">{feedback[currentQ?.id]?.explanation}</p>
        </div>
        {/* Improvement Hint */}
        {feedback[currentQ?.id]?.improvement_hint && (
          <div className="bg-yellow-50 p-4 rounded-lg">
            <h4 className="font-medium text-yellow-900 mb-2 flex items-center gap-2">
              <Target className="w-4 h-4" />
              Improvement Suggestion:
            </h4>
            <p className="text-yellow-800">{feedback[currentQ?.id]?.improvement_hint}</p>
          </div>
        )}
        {/* Second Attempt Button */}
        {!feedback[currentQ?.id]?.is_correct && (attempts[currentQ?.id] || 1) === 1 && (
          <div className="flex justify-center pt-2">
            <Button
              onClick={() => handleSecondAttempt(currentQ?.id)}
              className="bg-orange-600 hover:bg-orange-700 text-white"
            >
              <ArrowRight className="w-4 h-4 mr-2" />
              Try Second Attempt
            </Button>
          </div>
        )}
        {/* Second Attempt Result */}
        {(attempts[currentQ?.id] || 1) === 2 && (
          <div className="bg-gray-50 p-4 rounded-lg">
            <p className="text-center font-medium">
              Second attempt completed. Final answer submitted.
            </p>
          </div>
        )}
      </div>
    </CardContent>
  </Card>
)}

- Navigate to test page: /student/test/{moduleId}
- View questions: Navigation sidebar shows all questions
- Answer a question:
- MCQ: Click option → Auto-saves immediately
- Text: Type answer → Auto-saves after 1.2 seconds
- Feedback appears automatically (first attempt only):
- Score bar shows percentage
- Explanation of correctness
- Improvement suggestions
- Concept explanations
- If incorrect:
- "Try Second Attempt" button appears
- Click button → Clear answer, enter attempt 2
- Submit → Final submission, no feedback
- Visual indicators:
- Green badge = Correct answer
- Orange badge = Incorrect (can retry)
- Blue eye = Answered, no feedback yet
- Gray = Unanswered
- Navigate between questions: Click numbered buttons
- Submit test: All answers saved, redirect to module
Student Test Page (Frontend)
↓
User selects MCQ option / Types text answer
↓
updateAnswer() function triggered
↓
POST /api/student/submit-answer
{
student_id, question_id, module_id,
answer: {selected_option: "B"},
attempt: 1
}
↓
Backend: student.py:submit_student_answer()
↓
Save answer to database
↓
If attempt == 1:
↓
Get module_id from question.document.module_id
↓
AIFeedbackService.generate_instant_feedback()
├─ Load rubric from modules.assignment_config
├─ Get RAG context from document embeddings
├─ Build dynamic prompt (prompt_builder.py)
├─ Call OpenAI GPT-4 API
└─ Parse JSON response
↓
Return response:
{
success: true,
answer: {...},
feedback: {
is_correct, score, explanation,
improvement_hint, rag_sources, ...
},
can_retry: true
}
↓
Frontend receives response
↓
setFeedback({[questionId]: response.feedback})
setShowFeedback({[questionId]: true})
↓
Feedback card appears with:
- Score visualization
- Explanation
- Hints
- Second attempt button (if incorrect)
↓
Student sees feedback and can retry or continue
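The service steps in the diagram can be sketched as plain Python. This is an illustrative reconstruction, not the actual service code: the function signature is simplified, and the OpenAI call is replaced by an injectable stub so the sketch runs offline.

```python
import json

def generate_feedback_sketch(rubric: dict, answer: dict,
                             rag_chunks: list, call_llm) -> dict:
    # 1. Fold rubric settings and retrieved course material into the prompt
    #    (mirrors the role of prompt_builder.py).
    context = "\n".join(rag_chunks) if rag_chunks else "No course material retrieved."
    prompt = (
        f"Tone: {rubric.get('tone', 'encouraging')}\n"
        f"Course material:\n{context}\n"
        f"Student answer: {json.dumps(answer)}\n"
        'Reply with JSON: {"is_correct": bool, "correctness_score": number, '
        '"explanation": string}'
    )
    # 2. Call the model (stubbed here; the real service calls the OpenAI API).
    raw = call_llm(prompt)
    # 3. Parse the JSON response and record whether RAG context was used.
    feedback = json.loads(raw)
    feedback["used_rag"] = bool(rag_chunks)
    return feedback

# Offline stub standing in for GPT-4.
def fake_llm(prompt: str) -> str:
    return json.dumps({"is_correct": False, "correctness_score": 45.0,
                       "explanation": "Partially correct."})

result = generate_feedback_sketch(
    rubric={"tone": "encouraging"},
    answer={"selected_option": "B"},
    rag_chunks=["Chapter 3: key concept..."],
    call_llm=fake_llm,
)
```

Injecting the LLM call as a parameter is also how the real pipeline can be unit-tested without paying API latency.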
# Backend/.env
OPENAI_API_KEY=sk-proj-... # Required for AI feedback
LLM_MODEL=gpt-4 # Model to use
DATABASE_URL=postgresql://...

Teachers configure via UI at: /dashboard/rubric?moduleId={id}
Stored in: modules.assignment_config.feedback_rubric
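For illustration, a stored rubric might look like the sketch below. The concepts (tone, criteria, custom instructions, similarity threshold) come from this document, but the exact key names are hypothetical, not the actual schema.

```python
# Hypothetical shape of modules.assignment_config.feedback_rubric.
# Key names are illustrative; only the concepts are documented.
feedback_rubric = {
    "tone": "encouraging",                      # voice of the explanations
    "criteria": ["accuracy", "completeness"],   # what the model grades on
    "custom_instructions": "Point students to the relevant textbook section.",
    "similarity_threshold": 0.7,                # minimum RAG match score
}
```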
// Frontend/lib/auth.js
const API_BASE_URL = process.env.NEXT_PUBLIC_API_URL || 'http://localhost:8000';

// If feedback generation fails
{
  "success": true,
  "answer": {...},    // Answer still saved!
  "feedback": null,
  "error": "Answer submitted but feedback failed: OpenAI API timeout"
}

// Visual error indicators
{saveStatus === 'error' && (
  <>
    <XCircle className="w-3 h-3 text-red-600" />
    <span className="text-red-600">Save failed</span>
  </>
)}
// Error alert banner
{error && (
  <div className="p-3 bg-red-50 border border-red-200 rounded-lg">
    <AlertCircle className="h-4 w-4" />
    <span>{error}</span>
  </div>
)}

- Answer save time: ~100-300ms
- Feedback generation: ~5-10 seconds (OpenAI API latency)
- Total user wait time: 5-10 seconds for complete feedback
- Auto-save debounce: 1.2 seconds for text answers
- MCQ feedback: Immediate on selection
- Debounced saves for text answers (prevent API spam)
- Immediate saves for MCQ selections (better UX)
- Graceful degradation (answer saves even if feedback fails)
- Visual loading indicators (spinner during save/feedback)
- Optimistic UI updates (instant local state update)
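The debounced-save pattern above translates to any language; here is a minimal Python sketch of the idea (the real page uses setTimeout in JavaScript, shown earlier), using a timer that resets on every keystroke so only the final value is saved.

```python
import threading
import time

class Debouncer:
    """Minimal debounce helper (illustrative, not part of the codebase)."""
    def __init__(self, delay, fn):
        self.delay = delay
        self.fn = fn
        self._timer = None

    def call(self, *args):
        # Cancel any pending save and restart the timer on each keystroke.
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.delay, self.fn, args)
        self._timer.start()

saves = []
d = Debouncer(0.05, saves.append)
for text in ["a", "ab", "abc"]:   # three rapid "keystrokes"
    d.call(text)
time.sleep(0.15)                  # wait past the debounce window
# Only the last value is saved once the input goes quiet.
```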
- Real-time AI feedback generation (GPT-4)
- Rubric-based customization (tone, criteria, instructions)
- RAG integration (course material context)
- Visual score indicators
- Detailed explanations and hints
- Second attempt functionality
- Auto-save (MCQ immediate, text debounced)
- Error handling and graceful failures
- Loading states and save status
- Question navigation with status badges
- Concept explanations
- Source citations from course materials
- Score progress bar
- Color-coded feedback (green = correct, orange = incorrect)
- Brain icon for AI feedback indicator
- Collapsible feedback cards
- Second attempt button
- Visual question navigation
- Save status indicator (saving/saved/error)
- Dark mode support
- Responsive design
- Start backend:
  cd Backend && uvicorn main:app --reload
- Start frontend:
  cd Frontend && npm run dev
- Join module with access code
- Navigate to test page
- Answer a question
- Observe:
- Auto-save indicator
- Feedback appears automatically
- Score bar animates
- Explanation displays
- Second attempt button (if incorrect)
cd Backend
source venv/bin/activate
python test_ai_feedback_integration.py

Expected: 6/6 tests passing
Check:
- OpenAI API key in .env
- Backend logs for errors
- Network tab in browser DevTools
- Database has rubric configured for module
Normal: 5-10 seconds is expected (OpenAI API latency)

Solutions:
- Use gpt-3.5-turbo for faster (but lower quality) feedback
- Implement background job processing
- Cache identical questions
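The caching idea could look like this hypothetical helper (not part of the codebase): key on the question and a normalized answer so identical submissions reuse the earlier result instead of paying the 5-10 second API latency again.

```python
# Hypothetical in-memory cache; not part of the codebase.
_feedback_cache = {}

def cached_feedback(question_id, answer_text, generate):
    # Normalize the answer so "B" and " b " share one cache entry.
    key = (question_id, answer_text.strip().lower())
    if key not in _feedback_cache:
        _feedback_cache[key] = generate(question_id, answer_text)
    return _feedback_cache[key]

calls = []
def fake_generate(question_id, answer_text):
    calls.append(question_id)   # track how often the "API" is hit
    return {"is_correct": True, "correctness_score": 100.0}

first = cached_feedback("q1", "B", fake_generate)
second = cached_feedback("q1", " b ", fake_generate)  # served from cache
```

A production version would also need an eviction policy and would have to skip the cache when the rubric changes.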
Check:
- Documents uploaded to module
- Documents processed and embedded (processing_status = "embedded")
- Embeddings exist in document_embeddings table
- Lower similarity threshold in rubric (0.5 instead of 0.7)
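To see why lowering the threshold helps, here is a toy cosine-similarity filter (illustrative only, not the actual retrieval code): at a 0.7 threshold only the close chunk is retrieved, while 0.5 also admits the looser match.

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

query = [1.0, 0.0]                                   # embedded question (toy 2-D vectors)
chunks = {"close": [0.9, 0.3], "loose": [0.6, 0.8]}  # embedded document chunks
sims = {name: cosine(query, vec) for name, vec in chunks.items()}

hits_strict = [n for n, s in sims.items() if s >= 0.7]   # threshold 0.7
hits_relaxed = [n for n, s in sims.items() if s >= 0.5]  # threshold 0.5
```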
These are optional enhancements not currently used by the frontend:
POST /api/student-answers/{answer_id}/feedback
Use case: Teacher wants to regenerate feedback, or student views feedback later
POST /api/student-answers/modules/{module_id}/feedback/batch
Use case: Teacher generates feedback for all students after test closes
POST /api/student-answers/?generate_feedback=true
An alternative to /api/student/submit-answer with the same functionality
The AI feedback system is fully operational and production-ready. It has been:
- ✅ Implemented in backend with robust error handling
- ✅ Integrated into the frontend with a polished UI
- ✅ Tested with real questions and answers
- ✅ Configured with rubric + RAG support
- ✅ Deployed and currently working in your application
No additional work needed for core functionality! The system works end-to-end from student answer submission to AI feedback display with second attempt support.