
Multiclass Emotion Recognition from EEG Signals

Project Overview

This project implements a robust system for multiclass emotion recognition using EEG signals. It utilizes the GAMEEMO dataset to classify four distinct emotional states: Boring, Calm, Scary, and Funny.

The core of this project is a dual-branch parallel hybrid approach that fuses the strengths of machine learning (Random Forest) and deep learning (CNN). The method balances the interpretability of hand-crafted spectral features with the representational power of learned deep features, reaching a recognition accuracy of approximately 90%.

Features

  • Dataset: GAMEEMO (28 subjects, 14 channels, 128 Hz sampling rate).
  • Preprocessing Pipeline:
    • Artifact removal and channel normalization.
    • Bandpass filtering (1-45 Hz) and notch filtering (50/60 Hz).
    • Epoch cleaning (handling NaNs and flat channels).
  • Feature Extraction:
    • Power Spectral Density (PSD) analysis.
    • Band-power feature extraction across Delta, Theta, Alpha, Beta, and Gamma bands.
  • Modeling:
    • Random Forest (RF): Trained on extracted spectral features for interpretability.
    • Convolutional Neural Network (CNN): Trained on raw EEG signals for complex pattern recognition.
    • Hybrid Fusion: A weighted voting mechanism combining RF and CNN predictions.
  • Analysis:
    • Feature importance ranking.
    • Band-specific accuracy analysis.
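
The filtering and band-power steps above can be sketched as follows. This is a minimal SciPy-based illustration (the notebook itself works with MNE objects); the `preprocess` and `band_powers` helpers, the 4-second epoch length, and the 50 Hz notch target are assumptions for the sketch, not the notebook's exact code:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, welch

FS = 128  # GAMEEMO sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def preprocess(eeg):
    """Bandpass 1-45 Hz plus a 50 Hz notch, applied per channel."""
    b, a = butter(4, [1, 45], btype="band", fs=FS)
    eeg = filtfilt(b, a, eeg, axis=-1)
    bn, an = iirnotch(50, Q=30, fs=FS)
    return filtfilt(bn, an, eeg, axis=-1)

def band_powers(eeg):
    """Map a (n_channels, n_samples) epoch to a flat band-power vector."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))  # mean power per band
    return np.column_stack(feats).ravel()  # channel-major: 14 ch x 5 bands

rng = np.random.default_rng(0)
epoch = rng.standard_normal((14, FS * 4))  # one synthetic 4 s, 14-channel epoch
x = band_powers(preprocess(epoch))
print(x.shape)  # 70 features, matching the RF branch's input dimension
```

Stacking one such 70-dimensional vector per epoch yields the feature matrix consumed by the Random Forest branch.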

Prerequisites

Ensure you have the following Python libraries installed:

pip install mne scipy matplotlib numpy pandas scikit-learn kagglehub
# + Deep Learning framework (TensorFlow/Keras or PyTorch depending on environment)

Structure

  • NDS_PRJ_cleaned.ipynb: The main Jupyter Notebook containing the end-to-end pipeline: data loading, preprocessing, feature extraction, model training, and evaluation.
  • nds (5).pdf: Project proposal and requirements document.
  • fix_notebook.py: Utility script for notebook maintenance.

Implementation Details

  1. Data Acquisition: The project automatically downloads the GAMEEMO dataset using kagglehub.
  2. Processing: EEG data is converted to MNE RawArray objects and preprocessed to remove noise.
  3. Classification:
    • Branch 1 (ML): Extracts 70 spectral features (14 channels × 5 bands) and trains a Random Forest classifier (n_estimators=300).
    • Branch 2 (DL): Processes raw signal segments using a CNN.
    • Fusion: The final prediction is a weighted average of the probabilities from both models.
  4. Results: The fusion model typically outperforms individual models, with the best observed weight distribution around 0.8 (RF) / 0.2 (CNN).
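
The fusion step can be sketched as a weighted average of class probabilities. The `fuse` helper and the toy probability rows below are illustrative assumptions; in the notebook the real inputs would come from the Random Forest's `predict_proba` output and the CNN's softmax output:

```python
import numpy as np

def fuse(rf_proba, cnn_proba, w_rf=0.8):
    """Weighted soft voting: blend per-class probabilities from both branches."""
    return w_rf * rf_proba + (1 - w_rf) * cnn_proba

# Toy per-epoch probabilities over the four classes
# (Boring, Calm, Scary, Funny) -- stand-ins for real model outputs.
rf_p  = np.array([[0.6, 0.2, 0.1, 0.1]])
cnn_p = np.array([[0.2, 0.5, 0.2, 0.1]])

fused = fuse(rf_p, cnn_p)          # 0.8 * RF + 0.2 * CNN, as reported above
pred = fused.argmax(axis=1)        # final class index per epoch
print(fused, pred)
```

Because both inputs are valid probability distributions, the weighted average is one too, so `argmax` over the fused row gives the final label directly.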

Usage

  1. Open NDS_PRJ_cleaned.ipynb in Jupyter Notebook or Google Colab.
  2. Run the cells sequentially.
  3. The notebook will:
    • Download the dataset.
    • Preprocess the EEG data.
    • Train the models.
    • Display accuracy metrics, confusion matrices, and feature importance plots.
