Jolycky/DeepFashion-Image-Classification-CNN

Image Classification System

A deep learning image classification system using transfer learning with MobileNetV2, targeting 95%+ accuracy on multi-class image classification tasks.


🎯 Overview

This project implements an end-to-end image classification pipeline using TensorFlow/Keras with Transfer Learning. The system leverages MobileNetV2 pre-trained on ImageNet for feature extraction and includes fine-tuning capabilities to achieve high accuracy on custom datasets.

Key Highlights:

  • Transfer Learning with MobileNetV2
  • Enhanced data preprocessing and augmentation
  • Two-phase training (feature extraction + fine-tuning)
  • Multiple export formats (H5, TFLite, TensorFlow.js)
  • Comprehensive evaluation metrics

✨ Features

  • Transfer Learning: Uses MobileNetV2 pre-trained weights for efficient training
  • Data Augmentation: Advanced augmentation techniques to prevent overfitting
  • Fine-Tuning: Optional fine-tuning phase for maximum accuracy
  • Multi-Format Export:
    • Keras H5 format
    • TensorFlow Lite (mobile deployment)
    • TensorFlow.js (web deployment)
  • Comprehensive Metrics: Accuracy, confusion matrix, classification report
  • Easy Inference: Simple API for making predictions on new images
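The inference API itself is not shown in this README; a minimal single-image prediction sketch (the helper name `predict_image`, the file paths, and the class-name list are illustrative, not from the repository):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

def predict_image(model, image, class_names):
    """Classify one HxWx3 image array; returns (label, confidence)."""
    x = preprocess_input(image.astype("float32"))     # MobileNetV2 scaling to [-1, 1]
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    idx = int(np.argmax(probs))
    return class_names[idx], float(probs[idx])

# Typical use (paths are illustrative):
# model = tf.keras.models.load_model("final_model.h5")
# img = tf.keras.utils.load_img("photo.jpg", target_size=(224, 224))
# label, conf = predict_image(model, np.array(img), ["class1", "class2"])
```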

📊 Dataset

The dataset should be organized in the following structure:

datasets/
├── train_organized/
│   ├── class1/
│   │   ├── image1.jpg
│   │   ├── image2.jpg
│   │   └── ...
│   ├── class2/
│   └── ...
├── val_organized/
│   ├── class1/
│   ├── class2/
│   └── ...
└── test_organized/
    ├── class1/
    ├── class2/
    └── ...

Dataset Split:

  • Training: ~70-80%
  • Validation: ~10-15%
  • Testing: ~10-15%
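Keras can infer the class labels directly from this folder layout. A minimal loading sketch under that assumption (the helper name `make_generator` is illustrative; the image size and batch size match the configuration shown later in this README):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

def make_generator(directory, batch_size=32, shuffle=True):
    """Yield (images, one-hot labels) batches; class names come from subfolders."""
    datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
    return datagen.flow_from_directory(
        directory,
        target_size=(224, 224),
        batch_size=batch_size,
        class_mode="categorical",
        shuffle=shuffle,
    )

# train_generator = make_generator("datasets/train_organized")
# val_generator   = make_generator("datasets/val_organized", shuffle=False)
```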

πŸ—οΈ Model Architecture

The model uses MobileNetV2 as the backbone with a custom classification head:

Input (224x224x3)
    ↓
MobileNetV2 Base (frozen initially)
    ↓
GlobalAveragePooling2D
    ↓
Dense(1024, relu)
    ↓
Dropout(0.5)
    ↓
Dense(num_classes, softmax)

Training Strategy:

  1. Phase 1: Train only the classification head (base frozen)
  2. Phase 2: Fine-tune the entire model with low learning rate
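The architecture diagram above can be sketched in Keras as follows (layer sizes are taken from the diagram; the function name and compile settings are illustrative, and the `weights` parameter is exposed so the ImageNet download can be skipped during quick checks):

```python
import tensorflow as tf

def build_model(num_classes, img_size=224, weights="imagenet"):
    """MobileNetV2 backbone (frozen) plus the custom head from the diagram."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=(img_size, img_size, 3),
        include_top=False,
        weights=weights,
    )
    base.trainable = False  # Phase 1: train only the classification head
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1024, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```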

🚀 Installation

Prerequisites

  • Python 3.8+
  • pip

Setup

  1. Clone the repository

    git clone https://github.com/yourusername/Klasifikasi-Gambar.git
    cd Klasifikasi-Gambar
  2. Create virtual environment

    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  3. Install dependencies

    pip install -r submission/requirements.txt

💻 Usage

Quick Start with Jupyter Notebook

  1. Start Jupyter

    jupyter notebook
  2. Open the notebook

    • Navigate to submission/notebook.ipynb
    • Run cells sequentially

Training from Scratch

The notebook includes all steps:

  1. Data loading and preprocessing
  2. Model building
  3. Training (feature extraction)
  4. Fine-tuning
  5. Evaluation
  6. Model export

🎓 Training

Configuration

Key hyperparameters (can be modified in the notebook):

IMG_HEIGHT = 224
IMG_WIDTH = 224
BATCH_SIZE = 32
EPOCHS = 20  # Feature extraction phase
FINE_TUNE_EPOCHS = 10  # Fine-tuning phase

Data Preprocessing

  • Training: MobileNetV2 preprocessing + augmentation

    • Rotation: ±30°
    • Width/Height shift: 20%
    • Shear: 20%
    • Zoom: 20%
    • Horizontal flip
  • Validation/Test: MobileNetV2 preprocessing only
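These settings map directly onto Keras `ImageDataGenerator` arguments; a sketch matching the values listed above:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

# Training: MobileNetV2 preprocessing plus the augmentations listed above
train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    rotation_range=30,        # ±30°
    width_shift_range=0.2,    # 20% horizontal shift
    height_shift_range=0.2,   # 20% vertical shift
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
)

# Validation/Test: preprocessing only, no augmentation
val_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
```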

Training Process

# Phase 1: Feature Extraction (base frozen)
callbacks = [  # representative callback settings
    EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True),
    ModelCheckpoint('best_model.h5', save_best_only=True),
    ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=3),
]
history = model.fit(
    train_generator,
    epochs=EPOCHS,
    validation_data=validation_generator,
    callbacks=callbacks
)

# Phase 2: Fine-Tuning (unfreeze the base, recompile with a low learning rate)
model.layers[0].trainable = True  # the MobileNetV2 base
model.compile(optimizer=Adam(learning_rate=1e-5), ...)  # 'lr' is deprecated
history_finetune = model.fit(...)

📈 Evaluation

The model is evaluated using:

  • Accuracy: Overall classification accuracy
  • Confusion Matrix: Visual representation of predictions
  • Classification Report: Precision, recall, F1-score per class
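These metrics come from scikit-learn; a minimal sketch (the helper name `report` and the way predictions are obtained are illustrative):

```python
import numpy as np
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

def report(y_true, y_pred, class_names):
    """Print accuracy, confusion matrix, and per-class precision/recall/F1."""
    acc = accuracy_score(y_true, y_pred)
    print(f"Test Accuracy: {acc:.2%}")
    print(confusion_matrix(y_true, y_pred))
    print(classification_report(y_true, y_pred, target_names=class_names))
    return acc

# With a Keras generator, predictions are typically obtained as:
# y_pred = np.argmax(model.predict(test_generator), axis=1)
# y_true = test_generator.classes
```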

Example output:

Test Accuracy: 95.23%

Classification Report:
              precision    recall  f1-score   support
     class1       0.96      0.94      0.95       150
     class2       0.93      0.95      0.94       140
     ...

📦 Model Export

Keras H5 Format

model.save('final_model.h5')

TensorFlow Lite (Mobile)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
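After conversion, the `.tflite` model runs through `tf.lite.Interpreter` rather than Keras; a minimal inference sketch (the helper name `run_tflite` is illustrative):

```python
import numpy as np
import tensorflow as tf

def run_tflite(model_bytes, batch):
    """Run one inference pass on TFLite model bytes; returns the output array."""
    interpreter = tf.lite.Interpreter(model_content=model_bytes)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], batch.astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

# probs = run_tflite(open("model.tflite", "rb").read(),
#                    preprocessed_images)  # e.g. shape (1, 224, 224, 3)
```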

TensorFlow.js (Web)

tensorflowjs_converter --input_format=keras final_model.h5 tfjs_model

📊 Results

| Metric              | Value  |
|---------------------|--------|
| Training Accuracy   | ~98%   |
| Validation Accuracy | ~95%   |
| Test Accuracy       | 95%+   |
| Model Size (H5)     | ~14 MB |
| Model Size (TFLite) | ~9 MB  |

Note: Results may vary depending on the dataset and training configuration.

πŸ“ Project Structure

Klasifikasi-Gambar/
├── .venv/                      # Virtual environment
├── submission/
│   ├── datasets/               # Dataset directory (not in git)
│   │   ├── train_organized/
│   │   ├── val_organized/
│   │   └── test_organized/
│   ├── notebook.ipynb          # Main training notebook
│   ├── requirements.txt        # Python dependencies
│   ├── best_model.h5           # Best model checkpoint
│   ├── final_model.h5          # Final trained model
│   ├── model.tflite            # TFLite model
│   ├── tfjs_model/             # TensorFlow.js model
│   └── PRD.md                  # Product requirements
├── .gitignore
└── README.md

πŸ› οΈ Technologies Used

  • TensorFlow/Keras: Deep learning framework
  • MobileNetV2: Pre-trained CNN architecture
  • NumPy: Numerical computing
  • Matplotlib/Seaborn: Visualization
  • scikit-learn: Metrics and evaluation
  • Jupyter: Interactive development

🤝 Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

👤 Author

Your Name

πŸ™ Acknowledgments

  • MobileNetV2 architecture by Google
  • TensorFlow/Keras team
  • ImageNet dataset for pre-trained weights

📞 Support

For questions or issues, please open an issue on the repository.

Note: Make sure to replace placeholder information (author details, repository URL) with your actual information before publishing.
