This repository contains the full analysis pipeline for the BLNK project, enabling automated extraction of eye-related features from dual-eye video recordings.
The pipeline is designed to be:
- modular
- reproducible
- interactive (via Jupyter Notebook)
- compatible with downstream MATLAB and Python workflows
It builds on the PyLids framework and extends it with custom preprocessing, video handling, and analysis logic.
The BLNK pipeline processes raw dual-eye videos and produces structured feature outputs for each eye.
For each input video, the pipeline:
- Splits the video into left-eye and right-eye streams
- Crops the eye region using an elliptical mask
- Applies preprocessing:
- brightness thresholding
- contrast / gamma / brightness adjustments
- padding to a consistent size
- Runs eye-feature extraction via PyLids
- Saves results to MATLAB-compatible `.mat` files
Optional visualization outputs can also be generated for debugging and parameter tuning.
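As a rough illustration, the per-frame preprocessing steps above can be sketched in plain NumPy. The function names, the side-by-side frame layout, and all parameter values are assumptions for this example; the actual implementation lives in `blnk_analysis_pipeline.py`:

```python
import numpy as np

def split_dual_eye(frame):
    """Split a side-by-side dual-eye frame into left/right halves (layout assumed)."""
    h, w = frame.shape[:2]
    return frame[:, : w // 2], frame[:, w // 2 :]

def elliptical_crop(img, center, axes):
    """Zero out pixels outside an ellipse given by center (cx, cy) and axes (ax, ay)."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[:h, :w]
    (cx, cy), (ax, ay) = center, axes
    inside = ((xx - cx) / ax) ** 2 + ((yy - cy) / ay) ** 2 <= 1.0
    return np.where(inside, img, 0)

def adjust_gamma(img, gamma=1.0):
    """Gamma-correct a uint8 image."""
    return (255.0 * (img / 255.0) ** gamma).astype(np.uint8)

def pad_to_size(img, target_hw):
    """Zero-pad an image up to a consistent (height, width)."""
    out = np.zeros(target_hw, dtype=img.dtype)
    out[: img.shape[0], : img.shape[1]] = img
    return out

# Fake grayscale dual-eye frame standing in for a real video frame.
frame = np.random.randint(0, 256, (120, 320), dtype=np.uint8)
left, right = split_dual_eye(frame)
left_eye = pad_to_size(adjust_gamma(elliptical_crop(left, (80, 60), (70, 50)), 0.8), (128, 192))
print(left.shape, right.shape, left_eye.shape)
```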
```
blnkAnalysis/
│
├── blnk_analysis_pipeline.ipynb   # Main interactive pipeline (entry point)
├── blnk_analysis_pipeline.py      # Core processing + preprocessing logic
├── subject_settings/              # Per-subject parameter configurations
├── pylids/                        # Submodule dependency (feature extraction backend)
└── README.md
```
The `subject_settings/` directory stores predefined parameter configurations for individual subjects.
Each subject may require slightly different preprocessing due to:
- differences in lighting conditions
- camera positioning
- eye appearance variability
- recording-specific artifacts
These settings typically include:
- crop ellipse parameters
- threshold values
- brightness / contrast / gamma adjustments
- subject-specific preprocessing overrides
Rather than manually tuning parameters each time, this system allows:
- consistent preprocessing across sessions for the same subject
- reproducibility of results
- easier batch processing
- When analyzing a known subject → load their saved settings
- When analyzing a new subject → tune parameters and save a new configuration
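The load-or-tune workflow above can be sketched as follows. JSON storage and these helper names are illustrative assumptions; check `subject_settings/` for the actual file format used by the pipeline:

```python
import json
from pathlib import Path

SETTINGS_DIR = Path("subject_settings")

def save_subject_settings(subject_id, params):
    """Persist a subject's preprocessing parameters for reuse across sessions."""
    SETTINGS_DIR.mkdir(exist_ok=True)
    (SETTINGS_DIR / f"{subject_id}.json").write_text(json.dumps(params, indent=2))

def load_subject_settings(subject_id):
    """Return saved parameters for a known subject, or None for a new one."""
    path = SETTINGS_DIR / f"{subject_id}.json"
    return json.loads(path.read_text()) if path.exists() else None

save_subject_settings("subj01", {"gamma": 0.9, "whiteness_threshold": 240})
print(load_subject_settings("subj01"))
```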
```
git clone --recurse-submodules git@github.com:zkelly1/blnkAnalysis.git
cd blnkAnalysis
```

Note: The `--recurse-submodules` flag is required to properly clone the `pylids` dependency.
Follow the setup instructions inside the `pylids/` directory.
This involves:
- creating a Python/Conda environment
- installing required dependencies
The pipeline is designed to be used through the Jupyter Notebook:
`blnk_analysis_pipeline.ipynb`
You can run the pipeline using either Jupyter Notebook or VS Code.
To use the native Jupyter notebook interface, run:

```
jupyter notebook
```

Then open `blnk_analysis_pipeline.ipynb`.
You can open and run the pipeline directly in VS Code using its built-in notebook interface:
- Open the repository folder in VS Code
- Navigate to and open `blnk_analysis_pipeline.ipynb`
- In the top-right corner, select the correct Python kernel
  - This should be the environment where PyLids and all dependencies are installed (titled `pylids`)
- Run cells interactively using:
- ▶ Run Cell buttons, or
- Shift + Enter
Important: If the notebook does not run correctly, the most common issue is selecting the wrong Python kernel.
Tip: VS Code provides a smoother development experience with variable inspection, inline outputs, and easier debugging compared to the classic Jupyter interface.
- Verify kernel
- Install missing dependencies (if needed)
- Import custom modules
Provide:
- a single video path, OR
- a directory of videos, OR
- a list of filepaths
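All three input forms can be normalized to a single list of video paths. A minimal sketch, where the helper name and the set of accepted extensions are assumptions:

```python
from pathlib import Path

VIDEO_EXTS = (".mp4", ".avi", ".mov")  # assumed supported extensions

def collect_videos(source):
    """Accept a single path, a directory, or a list of paths; return a list of files."""
    if isinstance(source, (list, tuple)):
        return [Path(p) for p in source]
    source = Path(source)
    if source.is_dir():
        # Keep only recognized video files, in a deterministic order.
        return sorted(p for p in source.iterdir() if p.suffix.lower() in VIDEO_EXTS)
    return [source]
```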
Choose a location where processed results will be saved.
If the subject has an existing configuration in `subject_settings/`, load it to ensure consistent preprocessing.
Otherwise, manually define parameters.
Key parameters include:
| Parameter | Description |
|---|---|
| `crop_ellipse` | Defines the region of interest for the eye |
| `target_size` | Output frame size after padding |
| `whiteness_threshold` | Removes bright artifacts |
| `contrast`, `gamma`, `brightness` | Image enhancement |
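Put together, a manually defined configuration might look like the dictionary below. All values are illustrative placeholders, not recommended defaults; tune them against the verification step:

```python
# Hypothetical per-subject settings; every number here is a placeholder.
params = {
    "crop_ellipse": {"center": (160, 120), "axes": (140, 100)},  # pixels
    "target_size": (240, 320),       # (height, width) after padding
    "whiteness_threshold": 240,      # suppress pixels brighter than this
    "contrast": 1.1,
    "gamma": 0.9,
    "brightness": 10,
}
print(sorted(params))
```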
Run the verification step to visually confirm:
- correct eye isolation
- proper cropping
- no loss of important features
- reasonable brightness/contrast
Execute the pipeline to process all videos and generate outputs.
For each processed video, the pipeline produces `.mat` files containing:
- extracted eye features
- metadata
- preprocessing parameters
Optional visualization outputs may also be generated.
Example:

```
video_left_eyeFeatures.mat
video_right_eyeFeatures.mat
```
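The outputs can be read back in Python with SciPy. The field names below are placeholders; the real variable names depend on what the pipeline writes:

```python
import numpy as np
from scipy.io import savemat, loadmat

# Write a stand-in file with hypothetical fields, then read it back.
savemat("video_left_eyeFeatures.mat", {"eyeFeatures": np.zeros((10, 2)), "gamma": 0.9})
data = loadmat("video_left_eyeFeatures.mat")
print(sorted(k for k in data if not k.startswith("__")))  # user variables only
```

In MATLAB, the same file loads with `load('video_left_eyeFeatures.mat')`.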
If a CUDA-enabled GPU is available, PyTorch can accelerate parts of the pipeline.
Check availability:
```python
import torch
print(torch.cuda.is_available())
```

You can also run:

```
nvidia-smi
```

- Always verify preprocessing parameters before batch runs
- Use subject-specific settings when available
- Ensure CUDA is installed
- Verify correct PyTorch version
- Check that the `pylids` submodule is initialized
- Confirm correct Python environment
- Confirm installation of the extra libraries this project uses on top of PyLids
- Revisit preprocessing parameters
- Adjust ellipse and thresholds
This repository combines:
- a reusable Python processing pipeline
- an interactive notebook interface
- integration with PyLids for feature extraction