# MMSpace

MMSpace is a mentor-mentee management platform for institutes and training programs. It combines role-based workflows (admin, mentor, mentee, guardian), real-time communication, attendance and leave tracking, grievance handling, and an AI-powered placement predictor.
- What This Repository Contains
- Core Features
- Architecture
- Tech Stack
- Repository Structure
- Prerequisites
- Local Setup
- Seeding Demo Data
- Important Functional Flows
- Health and Debugging
- Deployment (Production)
- Docker (Local Alternative)
- Known Issues and Backlog
- Testing Checklist
- Contributing
## What This Repository Contains

- React + Vite frontend (`client`)
- Node.js + Express backend (`server`)
- Python FastAPI ML microservice (`ml_service`) for placement prediction
- MongoDB data layer
- Socket.IO real-time messaging and notifications
## Core Features

- Role-based authentication and authorization
- Admin user management (enable/disable, update, delete)
- Mentor-mentee assignment management by admin
- Mentor profile editing (email, phone, qualifications, citations/publications)
- Real-time chat using Socket.IO
- Group messaging with proper mentee delivery
- Individual mentor-mentee chat
- Announcement feed with comment support
- Attendance tracking and attendance management views
- Leave request workflow (submit, review, approve/reject)
- Grievance workflow (submit, review, resolve/reject)
- Admin analytics dashboard and system overview
- CSV bulk upload for student onboarding
- CSV validation, create/update behavior, and failure reporting
- Dedicated FastAPI microservice for inference
- TensorFlow/Keras ANN model + scaler metadata
- Node backend proxy endpoint: `POST /api/placement/predict`
- Frontend predictor UI with result insights
## Architecture

- Frontend talks to the Node backend API
- Node backend handles auth, business logic, and DB operations
- Socket.IO provides real-time events for chat and notifications
- Node backend forwards placement requests to the ML service (`ML_SERVICE_URL`)
- ML service loads model artifacts at startup: `ml_service/models/placement_ann.keras` and `ml_service/models/scaler.pkl`
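The forwarding step can be sketched as follows. This is an illustrative Python sketch only: the real proxy lives in the Node backend, the helper name is hypothetical, and the `/predict` path on the ML service side is an assumption.

```python
import os

def ml_predict_url(env=os.environ):
    """Resolve the ML service prediction URL from the environment.

    Falls back to the local default from the setup section when
    ML_SERVICE_URL is not set. The "/predict" path is an assumption
    made for illustration.
    """
    base = env.get("ML_SERVICE_URL", "http://localhost:8000").rstrip("/")
    return f"{base}/predict"
```

The Node backend performs the equivalent lookup before proxying `POST /api/placement/predict` to the ML service.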
## Tech Stack

- Frontend: React 18, Vite, Tailwind CSS, React Router, Axios
- Backend: Node.js, Express, Mongoose, JWT, Socket.IO
- ML Service: FastAPI, Uvicorn, TensorFlow/Keras, scikit-learn, pandas, NumPy, mRMR
- Database: MongoDB (local or Atlas)
- Deployment: Render (server), Vercel (client)
## Repository Structure

```text
MMSpace/
  client/         # React frontend
  server/         # Express backend
  ml_service/     # FastAPI ML microservice
    models/       # placement_ann.keras, scaler.pkl
    dataset/      # source/synthetic ML data
  render.yaml     # Render blueprint config
```
## Prerequisites

- Node.js 18+
- npm 8+
- Python 3.10+ (recommended for TensorFlow compatibility)
- MongoDB (Atlas or local)
## Local Setup

1. Clone the repository and install dependencies:

```bash
git clone <repository-url>
cd MMSpace
npm install
cd server && npm install
cd ../client && npm install
cd ..
```

2. Create `server/.env` (or copy from `server/.env.example`) with values like:

```env
NODE_ENV=development
PORT=5000
MONGODB_URI=mongodb://localhost:27017/mmspace
JWT_SECRET=your-secret-key
CLIENT_URL=http://localhost:5173
CORS_ORIGIN=http://localhost:5173
ML_SERVICE_URL=http://localhost:8000
```

3. Create `client/.env`:

```env
VITE_API_URL=http://localhost:5000
```

4. Set up the ML service environment:

```bash
cd ml_service
python3.10 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
pip install fastapi uvicorn pandas numpy scikit-learn tensorflow mrmr-selection openpyxl xlrd requests
cd ..
```

5. Run the services.

Terminal 1 (ML service):

```bash
cd ml_service
source .venv/bin/activate
python app.py
```

Terminal 2 (web app: server + client together):

```bash
cd MMSpace
npm run dev
```

Local URLs:

- Frontend: http://localhost:5173
- Backend: http://localhost:5000
- ML service: http://localhost:8000
## Seeding Demo Data

The backend includes a seed script with demo users.

```bash
cd server
npm run seed
```

Default demo credentials:

- Admin: `admin@example.com` / `password123`
- Mentor: `mentor@example.com` / `password123`
- Mentee: `mentee@example.com` / `password123`
## Important Functional Flows

### CSV Bulk Upload (Student Onboarding)

- Endpoint: `POST /api/csv/upload-students`
- Template: `GET /api/csv/template`
- Required columns: `rollNo`, `studentEmail`, `studentPhone`
- Optional columns: `fullName`, `parentsPhone`, `parentsEmail`, `mentorEmail`, `class`, `section`
- Default generated password for new student accounts: `{rollNo}@123`
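The rules above can be sketched in a few lines. This is an illustrative Python sketch of the documented behavior, not the backend's actual implementation (which lives in the Node server); the helper names are hypothetical.

```python
# Required CSV columns per the upload contract above.
REQUIRED_COLUMNS = {"rollNo", "studentEmail", "studentPhone"}

def missing_columns(row: dict) -> set:
    """Return required columns that are absent or empty in a parsed CSV row."""
    return {c for c in REQUIRED_COLUMNS if not row.get(c)}

def default_password(roll_no: str) -> str:
    """Default generated password for new student accounts: {rollNo}@123."""
    return f"{roll_no}@123"
```

For example, a row with roll number `21CS042` and no password column would be created with the password `21CS042@123`.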
### Grievance Workflow

- Submit grievance: `POST /api/grievances`
- Mentee grievances: `GET /api/grievances/mentee`
- Mentor grievances: `GET /api/grievances/mentor`
- Admin grievances: `GET /api/grievances/admin`
- Review/resolve/reject endpoints for mentors and admins
### Mentor Profile Editing

- Endpoint: `PUT /api/mentors/profile`
- Editable fields include `email`, `phone`, `qualifications`, `citations`
### Placement Prediction

- Backend endpoint: `POST /api/placement/predict`
- Required payload fields: `DSA_Skill`, `GP`, `Internships`, `Active_Backlogs`, `Tenth_Marks`, `Twelfth_Marks`
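A request body with the six required fields can be sketched as below. This is an illustrative sketch; the example values are made up, and the value types/ranges each field expects should be confirmed against the ML service's schema.

```python
def build_placement_payload(dsa_skill, gp, internships, active_backlogs,
                            tenth_marks, twelfth_marks) -> dict:
    """Assemble the JSON body for POST /api/placement/predict.

    Keys match the required payload fields listed above; example
    values and types are assumptions for illustration.
    """
    return {
        "DSA_Skill": dsa_skill,
        "GP": gp,
        "Internships": internships,
        "Active_Backlogs": active_backlogs,
        "Tenth_Marks": tenth_marks,
        "Twelfth_Marks": twelfth_marks,
    }
```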
## Health and Debugging

- Health check: `GET /api/health`
- Additional health route: `GET /health`
- DB connection test:

```bash
cd server
npm run test-db
```

## Deployment (Production)

- Backend + ML: one Render web service (`deploy/Dockerfile.render`)
- Client: Vercel
- Database: MongoDB Atlas
Server environment (Render):

```env
NODE_ENV=production
PORT=5000
MONGODB_URI=<mongodb-uri>
JWT_SECRET=<secure-secret>
CLIENT_URL=https://your-app.vercel.app
CORS_ORIGIN=https://your-app.vercel.app
```

Client environment (Vercel):

```env
VITE_API_URL=https://your-server.onrender.com
```

Deployment notes:

- On Vercel, set the project root directory to `client`
- Keep Vite rewrite support for SPA routes (`client/vercel.json`)
- Ensure server CORS values exactly match the deployed frontend origin(s)
- The Render deploy uses one web service with `deploy/Dockerfile.render` (Node + ML in one container)
- `render.yaml` must stay at the repository root (Render blueprint discovery)
- `.dockerignore` should stay at the repository root (it controls the root Docker context for the Render build)
## Docker (Local Alternative)

A new user can run MMSpace with Docker on Linux, macOS, and Windows (Docker Desktop + WSL2 recommended on Windows).

```bash
git clone <repository-url>
cd MMSpace/mmspace-docker
docker compose down --remove-orphans
docker compose up -d --build
docker compose ps
```

Open in a browser: http://localhost:3000

Useful checks:

```bash
docker compose logs -f server
docker compose logs -f ml-service
curl http://localhost:5001/api/health
```

Stop containers:

```bash
docker compose down
```

Full reset (also triggers the first-time seed again):

```bash
docker compose down -v
docker compose up -d --build
```

Docker server startup runs `seed:if-empty`, which executes `scripts/seed.js` only when the database has no users.

Hot reload is not available in this Docker setup by default:

- This stack runs production-style containers, so code edits on the host do not hot-reload automatically.
- After code changes, rebuild and restart:

```bash
cd mmspace-docker
docker compose up -d --build
```

For instant hot reload while developing, use the non-Docker local dev flow (`npm run dev` for the web app + `python app.py` for the ML service).
## Known Issues and Backlog

The previous `issues.md` has been normalized into this list:
- Further admin dashboard hardening for complete mentor/mentee lifecycle
- Ensure mentor assignment remains strictly admin-controlled
- Attendance UX refinements for group-based detailed views
- Dashboard card improvements around leave/complaint indicators
- Like/comment consistency edge cases
- Group deletion modal UX polish
- Leave cancel flow refinements
- Batch group operations improvements
## Testing Checklist

- Login/logout for all roles
- Admin CRUD and mentor assignment flows
- Group messaging and individual chat behavior
- Announcement creation and comment updates
- Leave submission and filtered state views
- Grievance submission and review lifecycle
- CSV upload happy path and validation errors
- Placement prediction end-to-end (client -> server -> ml_service)
## Contributing

Please see CONTRIBUTING.md.