
camvid-segmentation

Semantic segmentation on the CamVid dataset using deep learning models implemented in PyTorch. The project trains and evaluates a segmentation model that classifies each pixel of urban driving scenes.
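CamVid distributes its per-pixel labels as color-coded images, so a first preprocessing step is mapping each RGB color to a class index. A minimal NumPy sketch of that mapping (the palette entries below are illustrative examples, not necessarily the colors or class order this notebook uses):

```python
import numpy as np

# Hypothetical subset of the CamVid color palette: RGB color -> class index.
PALETTE = {
    (128, 128, 128): 0,  # sky
    (128, 64, 128): 1,   # road
    (64, 64, 0): 2,      # pedestrian
}

def rgb_label_to_indices(label_rgb: np.ndarray) -> np.ndarray:
    """Map an (H, W, 3) RGB label image to an (H, W) array of class indices.

    Pixels whose color is not in the palette get index 255 ("ignore").
    """
    h, w, _ = label_rgb.shape
    indices = np.full((h, w), 255, dtype=np.uint8)
    for color, cls in PALETTE.items():
        mask = np.all(label_rgb == np.array(color, dtype=np.uint8), axis=-1)
        indices[mask] = cls
    return indices
```

The resulting index map is what a cross-entropy loss expects as its target; the 255 "ignore" value can be passed to the loss as `ignore_index`.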

πŸ“ Features

  • Preprocessing and loading of CamVid images and labels
  • Deep learning model (e.g., U-Net) for segmentation
  • Training and validation pipeline
  • Visualization of model predictions
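The U-Net mentioned above is an encoder-decoder with skip connections. As a rough sketch of the idea (a deliberately tiny, single-stage version for illustration; the notebook's actual architecture may differ in depth, widths, and class count):

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A minimal U-Net-style encoder-decoder with one skip connection."""

    def __init__(self, in_ch: int = 3, num_classes: int = 12):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        e = self.enc(x)                         # (N, 16, H, W)
        m = self.mid(self.down(e))              # (N, 32, H/2, W/2)
        u = self.up(m)                          # (N, 16, H, W)
        d = self.dec(torch.cat([u, e], dim=1))  # skip connection
        return self.head(d)                     # per-pixel class logits
```

The output has one logit per class per pixel, so it plugs directly into `nn.CrossEntropyLoss` against an (N, H, W) index map.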

🧰 Requirements

Before running the notebook, set up a Python environment and install the dependencies listed in requirements.txt.

πŸ”Ή Option 1: Using venv (Standard Virtual Environment)

# Create a virtual environment
python -m venv venv

# Activate the environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate

# Upgrade pip (recommended)
pip install --upgrade pip

# Install the required packages
pip install -r requirements.txt

πŸ”Ή Option 2: Using conda (Anaconda/Miniconda)

# 1. Create a conda env
conda create -n camvid-env python=3.12

# 2. Activate it
conda activate camvid-env

# 3. Ensure pip is available
conda install pip

# 4. Install dependencies
pip install -r requirements.txt

πŸš€ How to Run

Clone the repo

git clone https://github.com/ehsankhani/camvid-segmentation
cd camvid-segmentation

Install dependencies

Use either the venv or conda instructions above.

Launch Jupyter Notebook

jupyter notebook camvid-segmentation.ipynb

Follow the notebook to:

  • Load & preprocess images
  • Train the segmentation model
  • Evaluate & visualize predictions
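Segmentation quality is usually reported as mean intersection-over-union (mIoU); whether this notebook uses exactly that metric is an assumption, but the computation itself is a short NumPy routine:

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean IoU over the classes that appear in pred or target."""
    ious = []
    for cls in range(num_classes):
        inter = np.logical_and(pred == cls, target == cls).sum()
        union = np.logical_or(pred == cls, target == cls).sum()
        if union > 0:  # skip classes absent from both arrays
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
```

`pred` and `target` are integer class-index arrays of the same shape, e.g. the argmax over the model's logits versus the ground-truth label map.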

πŸ“‚ Dataset

Download the CamVid dataset from the official site and place it under a folder named data/ (or adjust the paths in the notebook).

Link to the dataset: CamVid dataset

CamVid-main/
β”œβ”€β”€ CamVidColor11/
β”œβ”€β”€ CamVidGray/
β”œβ”€β”€ CamVid_Label/
β”œβ”€β”€ CamVid_RGB/
β”œβ”€β”€ SegNetanno/
β”œβ”€β”€ .gitignore
β”œβ”€β”€ best_model.pth
β”œβ”€β”€ camvid_data.py
β”œβ”€β”€ camvid_test.txt
β”œβ”€β”€ camvid_train.txt
β”œβ”€β”€ camvid_trainval.txt
β”œβ”€β”€ camvid_val.txt
β”œβ”€β”€ LICENSE
β”œβ”€β”€ camvid-segmentation.ipynb
β”œβ”€β”€ README.md
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ segmentation_examples.png
└── training_metrics.png
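The camvid_train.txt / camvid_val.txt / camvid_test.txt files above are the dataset split lists. Their exact format in this repo is an assumption; a common convention is one "image_path label_path" pair per line, which can be parsed like so:

```python
from pathlib import Path

def read_split(split_file: str) -> list[tuple[str, str]]:
    """Parse a split file with one 'image_path label_path' pair per line."""
    pairs = []
    for line in Path(split_file).read_text().splitlines():
        line = line.strip()
        if not line:  # skip blank lines
            continue
        img, lbl = line.split()[:2]
        pairs.append((img, lbl))
    return pairs
```

The returned path pairs are what a Dataset class would iterate over, loading the image and its label map per index.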
