A sophisticated real-time video proctoring system that monitors candidates during online interviews using advanced AI technologies. The system detects focus levels, unauthorized objects, and generates comprehensive integrity reports.
- Real-Time Face Detection - Powered by MediaPipe's BlazeFace model
- Focus Tracking - Monitors candidate attention and alerts when focus is lost for >5 seconds
- Absence Detection - Flags when no face is detected for >10 seconds
- Multiple Face Detection - Identifies when more than one person is in frame
- Object Detection - Uses TensorFlow.js COCO-SSD to detect unauthorized items:
- Mobile phones
- Books and notes
- Extra electronic devices (keyboards, monitors, laptops)
- Other suspicious objects
- Live Event Logging - Real-time tracking of all violations with timestamps
- Integrity Scoring - Comprehensive scoring system (0-100) based on violations
- Detailed Reports - Exportable proctoring reports with complete session analytics
- Visual Indicators - Color-coded status overlays showing current monitoring state
- Clean Interface - Modern, intuitive design with professional aesthetics
- Real-Time Status - Visual feedback showing recording status and AI model loading state
- Session Management - Easy start/stop controls with candidate information tracking
- Downloadable Reports - Export detailed text reports for record-keeping
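As a hedged sketch of the object-detection feature above: COCO-SSD predictions carry a `class` label and a confidence `score`, which can be filtered against a watch-list. The names and threshold here are illustrative, not the app's actual code:

```typescript
// Illustrative only: filter COCO-SSD-style predictions against a watch-list.
interface Prediction {
  class: string; // COCO label, e.g. 'cell phone'
  score: number; // confidence in [0, 1]
}

const WATCH_LIST = ['cell phone', 'book', 'laptop', 'keyboard'];

function flagSuspicious(predictions: Prediction[], minScore = 0.6): Prediction[] {
  return predictions.filter(
    (p) => p.score >= minScore && WATCH_LIST.includes(p.class)
  );
}

// A person is fine; a confidently detected phone is flagged; a
// low-confidence book sighting is ignored.
const flagged = flagSuspicious([
  { class: 'person', score: 0.95 },
  { class: 'cell phone', score: 0.82 },
  { class: 'book', score: 0.4 },
]);
console.log(flagged.map((p) => p.class)); // ['cell phone']
```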
- React 18.3.1 - Modern component-based UI
- TypeScript 5.5.3 - Type-safe development
- Vite 5.4.2 - Lightning-fast build tool
- MediaPipe Tasks Vision - Face detection and tracking
- TensorFlow.js 4.22.0 - In-browser machine learning
- COCO-SSD Model - Object detection
- Tailwind CSS 3.4.1 - Utility-first CSS framework
- Lucide React - Beautiful icon system
- Supabase - Database and authentication (integration-ready)
Before running this project, ensure you have:
- Node.js (v18 or higher)
- npm or yarn
- Webcam access for video monitoring
- Modern browser with WebRTC support (Chrome, Firefox, Edge)
- Clone the repository

  ```bash
  git clone https://github.com/yourusername/video-proctoring-system.git
  cd video-proctoring-system
  ```

- Install dependencies

  ```bash
  npm install
  ```

- Start the development server

  ```bash
  npm run dev
  ```

- Open your browser and navigate to http://localhost:5173
- Enter Candidate Information
  - Input the candidate's name on the start screen
  - Click "Start Interview Session"
- Grant Camera Permissions
  - Allow browser access to your webcam when prompted
  - Wait for the AI models to load (indicated by a loading spinner)
- Monitor the Session
  - The system starts monitoring automatically once recording begins
  - Real-time events appear in the Event Log panel
  - Status indicators show the current monitoring state
- End the Session
  - Click "Stop Session" to end monitoring
  - View the comprehensive proctoring report
  - Download the report for record-keeping
The system uses a deduction-based scoring system starting from 100:
| Violation Type | Deduction | Description |
|---|---|---|
| Focus Lost | -2 points | Looking away for >5 seconds |
| No Face | -5 points | Face not visible for >10 seconds |
| Multiple Faces | -10 points | More than one person in frame |
| Suspicious Object | -15 points | Unauthorized items detected |
Score Interpretation:
- 90-100: Excellent - High integrity
- 80-89: Good - Minor violations
- 70-79: Fair - Moderate concerns
- 60-69: Poor - Significant violations
- <60: Very Poor - Multiple serious violations
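For example, a session with three focus-lost events and one suspicious-object event scores 100 - (3 × 2) - 15 = 79, landing in the "Fair" band. A minimal sketch of that arithmetic (names are illustrative, not the app's actual code):

```typescript
// Illustrative re-statement of the deduction table above.
const DEDUCTIONS = {
  focusLost: 2,
  noFace: 5,
  multipleFaces: 10,
  suspiciousObject: 15,
};

function scoreSession(counts: Record<keyof typeof DEDUCTIONS, number>): number {
  let score = 100;
  for (const key of Object.keys(DEDUCTIONS) as (keyof typeof DEDUCTIONS)[]) {
    score -= counts[key] * DEDUCTIONS[key];
  }
  return Math.max(0, score); // floor at 0
}

// Three focus losses and one phone sighting: 100 - 6 - 15 = 79 ("Fair").
console.log(scoreSession({ focusLost: 3, noFace: 0, multipleFaces: 0, suspiciousObject: 1 })); // 79
```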
```typescript
// Focus is checked once per second: after 5 s without a visible face the
// event is logged as lost focus, after 10 s as absence.
if (noFaceDetected) {
  timer++;
  if (timer >= 10) {
    logEvent('no_face', { severity: 'high' });
  } else if (timer >= 5) {
    logEvent('focus_lost', { severity: 'medium' });
  }
} else {
  timer = 0; // reset once the face is visible again
}
```

- Face detection: Every 1 second
- Object detection: Every 3 seconds (to optimize performance)
- Duplicate prevention: 5-second cooldown per object type
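The duplicate-prevention cooldown can be sketched as follows (function and variable names are hypothetical, not the app's actual code):

```typescript
// Hypothetical sketch of the 5-second per-object-type cooldown described above.
const OBJECT_COOLDOWN_MS = 5000;
const lastLoggedAt = new Map<string, number>();

function shouldLogObject(label: string, nowMs: number): boolean {
  const last = lastLoggedAt.get(label);
  if (last !== undefined && nowMs - last < OBJECT_COOLDOWN_MS) {
    return false; // same object type seen again within the cooldown window
  }
  lastLoggedAt.set(label, nowMs);
  return true;
}

// A phone seen at t=0 s logs; seen again at t=3 s it is suppressed;
// at t=6 s the cooldown has elapsed and it logs again.
console.log(shouldLogObject('cell phone', 0));    // true
console.log(shouldLogObject('cell phone', 3000)); // false
console.log(shouldLogObject('cell phone', 6000)); // true
```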
```
video-proctoring-system/
├── src/
│   ├── components/
│   │   ├── InterviewSession.tsx   # Main session container
│   │   ├── VideoMonitor.tsx       # Video feed with AI detection
│   │   ├── EventLog.tsx           # Real-time event display
│   │   └── ProctoringReport.tsx   # Detailed report view
│   ├── hooks/
│   │   ├── useFaceDetection.ts    # MediaPipe face detection
│   │   └── useObjectDetection.ts  # TensorFlow object detection
│   ├── utils/
│   │   └── detectionLogic.ts      # Scoring & event utilities
│   ├── types.ts                   # TypeScript interfaces
│   ├── App.tsx                    # Root component
│   └── main.tsx                   # Entry point
├── public/                        # Static assets
├── dist/                          # Production build
└── package.json                   # Dependencies
```
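`src/types.ts` holds the shared interfaces. A hedged guess at the event shape (field names here are illustrative; check the file for the real definitions):

```typescript
// Illustrative sketch only; the real interfaces live in src/types.ts.
type ViolationType = 'focus_lost' | 'no_face' | 'multiple_faces' | 'suspicious_object';

interface ProctoringEvent {
  type: ViolationType;
  severity: 'low' | 'medium' | 'high';
  timestamp: number;   // Unix epoch, milliseconds
  description: string; // human-readable note for the Event Log panel
}

const example: ProctoringEvent = {
  type: 'suspicious_object',
  severity: 'high',
  timestamp: Date.now(),
  description: 'cell phone detected in frame',
};
console.log(example.type); // 'suspicious_object'
```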
The system uses custom headers for SharedArrayBuffer support:
```typescript
// vite.config.ts
server: {
  headers: {
    'Cross-Origin-Embedder-Policy': 'require-corp',
    'Cross-Origin-Opener-Policy': 'same-origin',
  },
},
```

Adjust detection thresholds in `src/components/VideoMonitor.tsx`:
```typescript
const NO_FACE_THRESHOLD = 10;       // seconds
const FOCUS_LOST_THRESHOLD = 5;     // seconds
const OBJECT_CHECK_INTERVAL = 3000; // milliseconds
```

To change which objects are flagged, edit `src/utils/detectionLogic.ts`:
```typescript
export const SUSPICIOUS_OBJECTS = [
  'cell phone',
  'book',
  'laptop',
  'keyboard',
  // Add your objects here
];
```

To adjust the penalties, update `calculateIntegrityScore` in `src/utils/detectionLogic.ts`:
```typescript
export const calculateIntegrityScore = (stats: DetectionStats): number => {
  let score = 100;
  score -= stats.focusLostCount * 2;         // Adjust penalty
  score -= stats.noFaceCount * 5;            // Adjust penalty
  score -= stats.multipleFacesCount * 10;    // Adjust penalty
  score -= stats.suspiciousObjectCount * 15; // Adjust penalty
  return Math.max(0, Math.min(100, score));
};
```

Build for production:

```bash
npm run build
```

The optimized production build will be in the `dist/` directory.
Preview the production build, type-check, and lint:

```bash
npm run preview
npm run typecheck
npm run lint
```

- Local Processing: All AI computations happen in the browser
- No Cloud Uploads: Video streams are not uploaded to external servers
- Session Storage: Data is stored locally in browser storage
- Camera Access: Used only during active sessions
- Data Persistence: Ready for Supabase integration for secure storage
- Eye closure/drowsiness detection
- Audio detection for background voices
- Real-time alerts for interviewers
- Multi-language support
- Advanced analytics dashboard
- Video recording with playback
- Automated report email delivery
- Integration with popular interview platforms
The system is ready for Supabase integration:
```typescript
// Example schema structure
interface Session {
  id: string;
  candidate_name: string;
  start_time: string;   // ISO-8601 timestamp
  end_time: string;     // ISO-8601 timestamp
  integrity_score: number;
  events: ProctoringEvent[];
}
```

Camera not working:

- Ensure camera permissions are granted in browser settings
- Check if another application is using the camera
- Try a different browser (Chrome recommended)
- Verify camera is properly connected
AI models not loading:

- Check your internet connection (models load from a CDN)
- Clear browser cache
- Disable ad blockers that might block CDN requests
- Check browser console for specific errors
Performance issues:

- Close unnecessary browser tabs
- Ensure adequate system resources (RAM, CPU)
- Reduce the video resolution in `VideoMonitor.tsx`: `video: { width: 640, height: 480 }`
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
- MediaPipe - Google's ML solutions for face detection
- TensorFlow.js - Browser-based machine learning
- COCO-SSD - Pre-trained object detection model
- React Community - For the amazing ecosystem
Note: This system is designed for educational and professional interview monitoring. Always ensure compliance with local privacy laws and obtain proper consent from candidates before use.