This repository presents a face locking system built on top of a previously implemented face recognition pipeline using ArcFace (ONNX) and 5-point facial alignment.
While the project includes a complete face recognition workflow, the primary focus of this repository is the Face Locking feature, which extends recognition into identity-aware behavior tracking over time.
The system recognizes a selected enrolled identity, locks onto that face, tracks it consistently across frames, detects simple facial actions, and records an action history.
- Real-time camera capture with FPS measurement
- Face detection using Haar Cascades
- 5-point facial landmark extraction
- Geometric face alignment
- Face embedding extraction (ArcFace-style, ONNX-ready)
- Enrollment of multiple identities
- Cosine similarity–based recognition
- Manual selection of one enrolled identity to lock onto
- Automatic locking when the selected face is recognized
- Stable tracking across frames
- Tolerance to brief recognition failures
- Explicit release of lock only after sustained face loss
- Action detection while locked
- Persistent action history recording
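Recognition compares a query embedding against each enrolled identity by cosine similarity. A minimal sketch of that matching step (the `db` layout and the `0.35` acceptance threshold are illustrative assumptions, not values taken from `config.py`):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(query: np.ndarray, db: dict, threshold: float = 0.35):
    """Return (best_name, score) if the best match clears the threshold,
    else (None, score). `db` maps identity name -> stored embedding."""
    best_name, best_score = None, -1.0
    for name, emb in db.items():
        score = cosine_similarity(query, emb)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        return best_name, best_score
    return None, best_score
```

In practice each identity may store several embeddings from enrollment; taking the maximum similarity over them is a common extension.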
```
face-recognition-5pt/
├── models/
├── data/
│   ├── db/            # Stored face embeddings
│   └── enroll/        # Enrollment images
├── src/
│   ├── enroll.py      # Identity enrollment
│   ├── recognize.py   # Face recognition pipeline
│   ├── facelock.py    # Face locking and behavior tracking
│   ├── haar_5pt.py    # Face detection + 5-point landmarks
│   ├── embed.py       # ArcFace ONNX embedding
│   ├── align.py       # Geometric alignment
│   └── config.py      # System configuration
├── requirements.txt
└── README.md
```
- The user enrolls one or more identities.
- The system prompts the user to manually select one identity to lock.
- When the selected face appears and is confidently recognized:
- The system locks onto that identity.
- The lock status is clearly displayed.
- While locked:
- The same face is tracked across frames.
- Other faces are ignored.
- Brief recognition failures are tolerated.
- The lock is released only if the face disappears for a defined duration.
This behavior ensures stable identity tracking rather than frame-by-frame recognition.
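The lock behavior above can be sketched as a small state machine: a recognition hit (re)arms the lock, misses are counted, and the lock releases only after the miss count exceeds a grace window. The `max_missed_frames` value here is an assumed placeholder, not the value used in `facelock.py`:

```python
class FaceLock:
    """Minimal lock state machine: tolerate brief misses, release after sustained loss."""

    def __init__(self, max_missed_frames: int = 30):
        self.max_missed_frames = max_missed_frames  # e.g. ~1 s of loss at 30 FPS
        self.locked = False
        self.missed = 0

    def update(self, target_recognized: bool) -> bool:
        """Call once per frame; returns whether the lock is currently held."""
        if target_recognized:
            self.locked = True      # acquire or refresh the lock
            self.missed = 0
        elif self.locked:
            self.missed += 1        # tolerate brief recognition failures
            if self.missed > self.max_missed_frames:
                self.locked = False # sustained face loss -> explicit release
                self.missed = 0
        return self.locked
```

This is why the system tracks a persistent subject rather than re-deciding identity from scratch on every frame.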
The system detects and records the following actions using explainable geometric logic:
- Face moved left – horizontal displacement of the face center
- Face moved right – horizontal displacement of the face center
- Eye blink – Eye Aspect Ratio (EAR) thresholding
- Smile or laugh – relative mouth width increase
Perfect accuracy is not required; the goal is clear, interpretable detection logic.
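Two of these detectors can be sketched directly: the standard 6-point Eye Aspect Ratio formula for blinks, and a frame-width-relative displacement test for left/right movement. The landmark ordering, the `0.03` displacement fraction, and the image-coordinate sense of "left" are illustrative assumptions:

```python
import math

def eye_aspect_ratio(eye):
    """EAR from 6 (x, y) eye landmarks ordered around the eye:
    (vertical1 + vertical2) / (2 * horizontal). Low EAR => eye closed."""
    v1 = math.dist(eye[1], eye[5])
    v2 = math.dist(eye[2], eye[4])
    h = math.dist(eye[0], eye[3])
    return (v1 + v2) / (2.0 * h)

def detect_horizontal_move(prev_cx, cx, frame_w, frac=0.03):
    """Report a move when the face center shifts by more than `frac`
    of the frame width between samples (image coordinates)."""
    dx = cx - prev_cx
    if dx < -frac * frame_w:
        return "face_moved_left"
    if dx > frac * frame_w:
        return "face_moved_right"
    return None
```

A blink is then flagged when EAR drops below a threshold (commonly around 0.2) for a few consecutive frames; a smile/laugh check compares mouth width against a neutral baseline in the same spirit.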
While the face is locked, all detected actions are written to a history file.
```
<face>_history_<timestamp>.txt
```

Example:

```
chrispin_history_20260129112059.txt
```
Each line in the file contains:
- timestamp (Unix time)
- action type
- brief description
Example:

```
1706523341.52 eye_blink       eye blink
1706523343.10 face_moved_left face moved left
```
History files are stored automatically and persist after program exit.
```
git clone https://github.com/Mchiir/FaceLocking.git
cd FaceLocking
python -m venv .venv
.venv\Scripts\activate
python -m pip install --upgrade pip
pip install -r requirements.txt
```

Main dependencies:
- opencv-python
- numpy
- onnxruntime (the ArcFace ONNX embedding model must also be downloaded separately)
- mediapipe
- scipy
Enroll identities first:

```
python -m src.enroll
```

Then start the face locking system:

```
python -m src.facelock
```

The system will prompt for the identity name to lock onto.
This project moves beyond face recognition into identity-aware behavior tracking. It demonstrates how a recognized face can become a persistent subject whose actions are observed, interpreted, and recorded over time using explainable computer vision logic.