Description
🔍 Have You Searched Existing Issues?
- I have searched the existing issues to avoid duplicates
💡 Problem Description
Hi @Deepika14145
Currently, model evaluation metrics like precision, recall, F1-score, and accuracy are not available in a unified format.
This makes it harder for users or contributors to get a complete understanding of how each model performs across different metrics.
✅ Proposed Solution
Add a new /api/classification_report route in app.py that returns a structured JSON response containing precision, recall, F1-score, accuracy, and other relevant evaluation metrics (e.g., macro averages, weighted averages, support) for all models. This would unify all evaluation metrics into one consolidated classification report, giving users a clear, comprehensive view of overall model performance and reliability. A rough sketch of the route is below.
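A minimal sketch of what the route could look like, assuming the app uses Flask and scikit-learn; `MODELS`, `X_test`, and `y_test` are hypothetical placeholders for the repository's fitted models and held-out test split, not names taken from this codebase:

```python
from flask import Flask, jsonify
from sklearn.metrics import classification_report

app = Flask(__name__)

# Hypothetical placeholders: a dict of fitted estimators and a test split.
MODELS = {}           # e.g., {"logistic_regression": fitted_clf, ...}
X_test, y_test = None, None

@app.route("/api/classification_report")
def all_models_report():
    # Build one consolidated report: model name -> metrics dict.
    report = {}
    for name, model in MODELS.items():
        y_pred = model.predict(X_test)
        # output_dict=True returns per-class precision/recall/F1/support
        # plus accuracy, macro avg, and weighted avg entries, so the
        # whole thing is directly JSON-serializable.
        report[name] = classification_report(y_test, y_pred, output_dict=True)
    return jsonify(report)
```

Using `output_dict=True` means no manual metric assembly is needed, and the nested structure maps naturally onto the proposed JSON response.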
Expected Outcome:
- JSON output summarizing per-model metrics.
- Optional future enhancement: visualization in the dashboard (bar chart or table view).
Can I please be assigned to this issue?
🔄 Alternatives Considered
No response
🖼️ Screenshots or Diagrams (Optional)
No response
🙌 Contributor Checklist
- I have checked for similar feature requests
- I agree to follow this project's Code of Conduct
- I am a GSSOC'25 contributor
- I want to work on this issue