MediapipePy

MediaPipe face landmarker, detector, and recognition experiments. All output is JSON on stdout.

Installation

Clone the repository, then run the install script to set up the virtual environment, dependencies, and models (run once):

git clone https://github.com/NeaByteLab/MediapipePy.git
cd MediapipePy
chmod +x install.sh run.sh
./install.sh

Put a face image (e.g. .jpg or .png) in images/ if you don’t have one yet.

Quick Start

The default task is face_landmarker: it returns 478 landmarks per face (x, y, z) as JSON. You can also use face_detector to get bounding boxes per face. If you don’t pass an image path, ./run.sh uses the first .jpg or .png in images/. All output is JSON to stdout so you can pipe it or use it in scripts.

./run.sh
./run.sh images/photo.jpg

Example output (face landmarks):

{"timestampInMilliseconds": 0, "faceLandmarks": [[{"x": 0.55, "y": 0.45, "z": -0.05}, ...]]}
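Because everything is plain JSON on stdout, the output can be consumed from any language. As a minimal sketch (assuming only the `faceLandmarks` schema shown above; the payload here is a hand-written example, not real model output), here is how one might count faces and landmarks in Python:

```python
import json

# Example payload matching the face_landmarker output shape shown above
payload = '{"timestampInMilliseconds": 0, "faceLandmarks": [[{"x": 0.55, "y": 0.45, "z": -0.05}]]}'

result = json.loads(payload)
for i, face in enumerate(result["faceLandmarks"]):
    # Each face is a list of {x, y, z} points in normalized image coordinates
    print(f"face {i}: {len(face)} landmarks, first point x={face[0]['x']}")
```

In practice you would pipe the script's output instead of hard-coding it, e.g. `./run.sh images/photo.jpg | .venv/bin/python parse.py` (where `parse.py` reads the JSON from stdin).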

Other options — pass task and/or image:

# face_landmarker on specific image
./run.sh images/your_image.jpg

# face_detector (first image in images/ or explicit path)
./run.sh face_detector
./run.sh face_detector images/your_image.jpg

# one face → encoding (landmark)
./run.sh face_decode images/photo.jpg

# same person? (landmark, default threshold 0.9)
./run.sh compare images/face_a.jpg images/face_b.jpg

# one face → encoding (Image Embedder)
./run.sh face_decode_embed images/photo.jpg

# same person? (Image Embedder; try --threshold 0.6)
./run.sh compare_embed images/face_a.jpg images/face_b.jpg

Experiment Results (compare, threshold 0.9)

Same person (landmark) — typically match:

| Pair        | Similarity | Match |
|-------------|------------|-------|
| elon1–elon2 | 0.994      | ✅    |
| elon1–elon3 | 0.948      | ✅    |
| jeff1–jeff2 | 0.970      | ✅    |
| jeff1–jeff3 | 0.920      | ✅    |
| jeff2–jeff3 | 0.942      | ✅    |
| mark1–mark2 | 0.924      | ✅    |
| mark2–mark3 | 0.972      | ✅    |

Same person but no match (different pose/angle): mark1–mark3 0.854 ❌

Cross-person (landmark): with threshold 0.9 there are many false positives, since the landmark encoding captures face geometry rather than identity. Use --threshold 0.98 for stricter same-person checks.

compare_embed (Image Embedder, threshold 0.6): same person ~0.5–0.67, different ~0.3–0.5; use --threshold 0.6 to separate.
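The compare tasks boil down to a similarity score between two encoding vectors checked against a threshold. A minimal sketch of that check using cosine similarity (the vectors and helper names here are illustrative, not the repo's actual encoding):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of magnitudes; 1.0 = same direction
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_person(enc_a, enc_b, threshold=0.9):
    # Declare a match when similarity clears the chosen threshold
    return cosine_similarity(enc_a, enc_b) >= threshold

# Illustrative vectors only -- real encodings are much longer
a = [0.10, 0.90, 0.40]
b = [0.12, 0.88, 0.41]
print(same_person(a, b))  # near-identical vectors clear the 0.9 threshold
```

This also shows why the threshold matters: landmark-geometry encodings of different people can still point in similar directions, which is why the stricter 0.98 threshold (or the embedder with 0.6) separates pairs better.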

Folders

  • models/ — filled by install.sh (see models/README.md)
  • images/ — put test images here (see images/README.md)

Manual (Tasks / Custom Model)

.venv/bin/python run.py --task face_landmarker --image images/foo.jpg
.venv/bin/python run.py --task face_detector --image images/foo.jpg --model models/face_detector.tflite
.venv/bin/python run.py --task face_decode --image images/foo.jpg
.venv/bin/python run.py --task compare --image images/a.jpg images/b.jpg --threshold 0.9
.venv/bin/python run.py --task face_decode_embed --image images/foo.jpg
.venv/bin/python run.py --task compare_embed --image images/a.jpg images/b.jpg --threshold 0.6

License

This project is licensed under the MIT license. See the LICENSE file for details.