Repository files navigation

😊😒 Mood Detection CNN


A professional deep learning project that detects facial mood expressions (Happy vs Sad) from images using a Convolutional Neural Network (CNN). Features real-time prediction via camera or file upload with an intuitive GUI.

✨ Features

  • 🎯 High-Accuracy Classification - 80%+ accuracy on binary mood detection (Happy/Sad)
  • 📸 Real-Time Camera Detection - Live video feed with instant mood prediction
  • 📁 File Upload Support - Predict mood from image files (JPG, PNG, BMP)
  • 🖥️ Modern GUI - Clean dark/light theme interface with CustomTkinter
  • 🚀 GPU Optimized - Automatic GPU detection and memory management
  • 📊 Data Augmentation - Prevents overfitting on small datasets
  • 🔧 Easy Training - Simple notebook-based training pipeline
  • 📈 TensorBoard Logging - Monitor training metrics in real-time

πŸ› οΈ Technology Stack

| Technology    | Version | Purpose                           |
| ------------- | ------- | --------------------------------- |
| Python        | 3.10+   | Core programming language         |
| TensorFlow    | 2.16+   | Deep learning framework           |
| Keras         | 3.0+    | Neural network API                |
| OpenCV        | 4.8+    | Image processing & camera capture |
| CustomTkinter | 5.0+    | Modern GUI framework              |
| Matplotlib    | 3.7+    | Training visualization            |
| Pillow        | 10.0+   | Image display                     |

πŸ“ Project Structure

Image-Classification/
β”œβ”€β”€ README.md                        # Project documentation
β”œβ”€β”€ LICENSE                          # MIT License
β”œβ”€β”€ .gitignore                       # Git ignore patterns
β”œβ”€β”€ requirements.txt                 # Python dependencies
β”œβ”€β”€ pyproject.toml                   # Packaging metadata
β”œβ”€β”€ Dockerfile                       # Docker image definition
β”‚
β”œβ”€β”€ src/                             # Core package
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ cli.py                       # CLI: train / predict
β”‚   β”œβ”€β”€ train.py                     # Training pipeline (scriptable)
β”‚   β”œβ”€β”€ inference.py                 # Inference utilities (MoodPredictor)
β”‚   └── gui/                         # GUI package
β”‚       └── app.py                   # GUI application implementation
β”‚
β”œβ”€β”€ notebooks/                       # Notebooks for experiments
β”‚   └── train.ipynb                  # Training notebook
β”‚
β”œβ”€β”€ models/                          # Trained models
β”‚   └── mood.h5                      # Trained binary classifier
β”‚
β”œβ”€β”€ mood/                            # Dataset
β”‚   β”œβ”€β”€ happy/                        # happy images
β”‚   └── sad/                          # sad images
β”‚
└── logs/                          # TensorBoard logs

🚀 Quick Start

Prerequisites

  • Python 3.10+
  • pip package manager
  • (Optional) NVIDIA GPU

Dataset & model availability ⚠️

The raw dataset (mood/) and pre-trained model weights (models/mood.h5) are not included in the public repository for size or privacy reasons. If these directories or files are not present in your clone and you need access, please contact the repository owner at 3bsalam0@gmail.com or open a GitHub issue to request access. You can also recreate the dataset locally by placing images in mood/happy/ and mood/sad/ and training a model with python -m src.cli train.

Installation

# 1. Clone repository
git clone https://github.com/3bsalam-1/Image-Classification.git
cd Image-Classification

# 2. Create virtual environment
python -m venv venv
venv\Scripts\activate  # Windows
source venv/bin/activate  # macOS/Linux

# 3. Install dependencies
pip install -r requirements.txt

# 4. (Optional) Enable GPU support
pip install tensorflow[and-cuda]

📖 Usage

Training the Model

Option A: Jupyter Notebook (Recommended)

jupyter notebook notebooks/train.ipynb

Execute all cells for interactive training with visualizations and experiments.

Option B: CLI / Python Script

# Using the CLI (recommended):
python -m src.cli train --epochs 50

# Or run the training script directly:
python src/train.py

Expected Output:

Dataset loaded: 9 batches
Starting training...
Epoch 1/50 - loss: 0.689, accuracy: 0.562, val_loss: 0.683, val_accuracy: 0.583
...
=== Test Set Results ===
Precision: 0.85
Recall: 0.78
Accuracy: 0.81
Loss: 0.4921
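The precision, recall, and accuracy reported above all come from thresholding the sigmoid output at 0.5, the same cutoff the GUI and Python API use. A minimal NumPy sketch with made-up labels (0 = happy, 1 = sad), purely to show how the numbers are derived, not the project's actual test set:

```python
import numpy as np

# Illustrative sigmoid outputs and ground-truth labels (0 = happy, 1 = sad)
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_prob = np.array([0.2, 0.6, 0.9, 0.4, 0.8, 0.1, 0.7, 0.3])
y_pred = (y_prob > 0.5).astype(int)   # same 0.5 threshold as the GUI/API

tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
accuracy = np.mean(y_pred == y_true)
print(precision, recall, accuracy)  # 0.75 0.75 0.75
```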

Real-Time Prediction GUI

# Run the GUI locally (module entrypoint)
python -m src.gui.app

# Alternative: run the file directly (also supported)
python src/gui/app.py

Requirements: the GUI depends on customtkinter, which is already included in requirements.txt; if it is missing from your environment, install it with pip install customtkinter.

Features:

  • 📸 Live camera detection
  • 📁 Image file selection
  • 😊 Happy / 😒 Sad classification
  • Dark/Light theme support

Python API

from tensorflow.keras.models import load_model
import cv2
import numpy as np

# Load the trained binary classifier
model = load_model('models/mood.h5')

# Read and preprocess the image to match training: 256x256, scaled to [0, 1].
# OpenCV loads BGR; Keras-style pipelines usually train on RGB, so convert
# (skip this step if your model was trained on BGR input).
image = cv2.imread('image.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image_resized = cv2.resize(image, (256, 256)) / 255.0

# Sigmoid output: values near 0 mean Happy (class 0), near 1 mean Sad (class 1)
prediction = model.predict(np.expand_dims(image_resized, 0))[0][0]

mood = 'Happy' if prediction <= 0.5 else 'Sad'
confidence = (1 - prediction) if prediction <= 0.5 else prediction
print(f"Mood: {mood}, Confidence: {confidence:.2%}")

🤖 Model Architecture

Network Design

Input: 256×256 RGB Image
       ↓
[Data Augmentation]
RandomFlip(0.5) + Rotation(0.15) + Zoom(0.15)
       ↓
[Conv Block 1] 32 filters → MaxPool → Dropout(0.25)
[Conv Block 2] 64 filters → MaxPool → Dropout(0.25)
[Conv Block 3] 128 filters → MaxPool → Dropout(0.25)
       ↓
[Dense 1] 512 units → Dropout(0.4)
[Dense 2] 256 units → Dropout(0.4)
[Dense 3] 128 units → Dropout(0.3)
       ↓
[Output] 1 unit → Sigmoid
       ↓
Probability: [0, 1]
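The diagram can be sketched as a Keras model. Kernel sizes (3×3), ReLU activations, the horizontal flip direction, and the Flatten layer are assumptions not stated in the diagram; the authoritative definition lives in src/train.py and notebooks/train.ipynb.

```python
# Sketch of the architecture above (assumed details noted in comments).
import tensorflow as tf
from tensorflow.keras import layers, regularizers

l2 = regularizers.l2(1e-4)  # L2(0.0001) from the hyperparameter table

model = tf.keras.Sequential([
    layers.Input(shape=(256, 256, 3)),
    # Data augmentation (active only during training)
    layers.RandomFlip("horizontal"),          # assumed flip direction
    layers.RandomRotation(0.15),
    layers.RandomZoom(0.15),
    # Three conv blocks: 32 -> 64 -> 128 filters (3x3 kernels assumed)
    layers.Conv2D(32, 3, activation="relu", kernel_regularizer=l2),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Conv2D(64, 3, activation="relu", kernel_regularizer=l2),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Conv2D(128, 3, activation="relu", kernel_regularizer=l2),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    # Classifier head
    layers.Flatten(),
    layers.Dense(512, activation="relu", kernel_regularizer=l2),
    layers.Dropout(0.4),
    layers.Dense(256, activation="relu", kernel_regularizer=l2),
    layers.Dropout(0.4),
    layers.Dense(128, activation="relu", kernel_regularizer=l2),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),    # near 0 = Happy, near 1 = Sad
])
```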

Key Hyperparameters

| Parameter           | Value              | Reason                     |
| ------------------- | ------------------ | -------------------------- |
| Loss                | BinaryCrossentropy | Binary classification      |
| Activation (Output) | Sigmoid            | Probability in [0, 1]      |
| Optimizer           | Adam (lr=0.0005)   | Stable convergence         |
| Regularization      | L2(0.0001)         | Prevents overfitting       |
| Data Augmentation   | Yes                | Small dataset (440 images) |
| Class Weights       | {0: 1.1, 1: 0.9}   | Balance classes            |
| Early Stopping      | Patience=7         | Avoid overfitting          |
| Max Epochs          | 50                 | With early stopping        |
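A sketch of how these hyperparameters map onto Keras calls. The one-layer model below is only a stand-in for the project's CNN, and the monitor / restore_best_weights settings are assumptions; see src/train.py for the real pipeline.

```python
import tensorflow as tf

# Stand-in model; the project's actual CNN is defined in src/train.py
model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=["accuracy"],
)

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",          # assumption: stop on validation loss
    patience=7,
    restore_best_weights=True,   # assumption
)

# model.fit(train_ds, validation_data=val_ds, epochs=50,
#           class_weight={0: 1.1, 1: 0.9},  # upweights the smaller happy class
#           callbacks=[early_stop])
```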

📊 Model Performance

Test Set Metrics

| Metric     | Value  |
| ---------- | ------ |
| Accuracy   | 81.0%  |
| Precision  | 0.85   |
| Recall     | 0.78   |
| Loss (BCE) | 0.4921 |

Dataset Info

| Aspect                 | Value     |
| ---------------------- | --------- |
| Total Images           | 440       |
| Happy                  | 214 (49%) |
| Sad                    | 226 (51%) |
| Balance Ratio          | 1.06:1 ✅ |
| Image Size             | 256×256   |
| Corruption             | 0% ✅     |
| Split (train/val/test) | 70/20/10  |
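The 70/20/10 split can be reproduced on any batched tf.data pipeline with take/skip; the 10-batch Dataset.range stand-in below is purely illustrative.

```python
# Sketch: a 70/20/10 train/val/test split using tf.data take/skip.
import tensorflow as tf

ds = tf.data.Dataset.range(10)    # stand-in for a dataset of 10 batches
n = int(ds.cardinality().numpy())

train_n = int(n * 0.7)            # 7 batches
val_n = int(n * 0.2)              # 2 batches

train = ds.take(train_n)
val = ds.skip(train_n).take(val_n)
test = ds.skip(train_n + val_n)   # remaining 1 batch
```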

πŸ› Troubleshooting

No GPU Detected

pip install tensorflow[and-cuda]

Note: tensorflow[and-cuda] is the supported GPU install path for TensorFlow 2.16+; the legacy conda tensorflow-gpu package only covers much older releases and is not recommended here.
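Beyond installing GPU wheels, it helps to verify what TensorFlow actually sees. A small check that also enables the on-demand memory growth mentioned under Features (assumes TensorFlow is installed):

```python
# Verify GPU visibility and enable memory growth, so TensorFlow allocates
# GPU memory on demand instead of grabbing it all upfront.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

print(f"GPUs detected: {len(gpus)}" if gpus else "No GPU detected; running on CPU")
```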

Module Not Found

pip install -r requirements.txt

Model File Missing

# Train first using the CLI or script
python -m src.cli train
# or
python src/train.py

GUI Window Won't Open

pip install customtkinter --upgrade

Camera Permission Issues

  • Grant camera access in OS settings
  • Try different USB port for external camera

📚 Adding New Images

  1. Place happy images in mood/happy/
  2. Place sad images in mood/sad/
  3. Retrain: python -m src.cli train

Supported formats: JPG, PNG, BMP
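Since the dataset report above tracks corruption, it can pay to screen new images before retraining. A sketch using Pillow; find_bad_images is a hypothetical helper for illustration, not part of src/.

```python
# Hypothetical helper (not part of src/): screen a directory tree for files
# that are not readable images in a supported format.
from pathlib import Path
from PIL import Image

VALID_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

def find_bad_images(root):
    """Return file paths under root that fail a cheap image integrity check."""
    root = Path(root)
    if not root.exists():
        return []
    bad = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        if path.suffix.lower() not in VALID_EXTS:
            bad.append(path)    # unsupported extension
            continue
        try:
            with Image.open(path) as img:
                img.verify()    # header/structure check without a full decode
        except Exception:
            bad.append(path)    # unreadable or corrupted image
    return bad

if __name__ == "__main__":
    for p in find_bad_images("mood"):
        print("suspect file:", p)
```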

πŸ“ GPU Performance

Component CPU GPU
Training Time/Epoch 1-5 min 5-15 sec
Speedup 1x 10-50x βœ…

📄 License

MIT License - see LICENSE file for details.

👤 Author

Ahmed Abdulsalam

πŸ™ Acknowledgments


⭐ Star this repo if you find it helpful!

Made with ❤️ by Ahmed Abdulsalam
