A browser-based app that uses your webcam, hand tracking, and a custom-trained gesture model to trigger a Naruto shadow clone effect in real time.
⚠️ Important:
- The trained hand sign dataset and model files are not included in this repository. I encourage you to use `trainer.html` to train the model yourself! See the instructions below or in this video.
- Browser: Chrome recommended (may glitch on Safari).
- Webcam: A webcam is required.
- Yes, it’s an ML project in JS and not Python :P hehe
- I am not an MLE or Data Scientist. My goal was to make it exist, not to make it optimal (that part can be done by you!).
- I used a simple neural network in my version (see `trainer.js`). For those who wish to get better performance, I encourage you to mess around with the model's topology or even try new models entirely.
| File | Description |
|---|---|
| `index.html` | Main app: webcam feed with clone jutsu |
| `script.js` | Clone rendering, gesture detection, smoke effects |
| `styles.css` | Styling for the main page |
| `trainer.html` | UI for recording hand sign samples and training the model |
| `trainer.js` | Training logic and model definition: captures hand landmarks and exports the model |
| `trainer.css` | Styling for the trainer page |
| `assets/` | Smoke sprites and overlay button images |
- MediaPipe Holistic tracks your hand landmarks through the webcam
- MediaPipe Selfie Segmentation isolates your body from the background
- A TensorFlow.js neural network model (trained by you) recognizes a specific two-hand gesture
- When the gesture is detected with high confidence, shadow clones spawn with smoke effects
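To make that pipeline concrete, here is a minimal sketch of how per-frame detection could work. The Holistic result fields (`leftHandLandmarks`, `rightHandLandmarks`) are MediaPipe's actual API; the helper names and feature layout are illustrative assumptions, not the repo's exact code:

```js
// Sketch: flatten both hands' 21 landmarks (x, y, z each) into a single
// 126-value feature vector for the classifier.
function landmarksToFeatures(results) {
  const features = [];
  for (const hand of [results.leftHandLandmarks, results.rightHandLandmarks]) {
    if (!hand) return null; // the two-hand sign needs both hands visible
    for (const lm of hand) features.push(lm.x, lm.y, lm.z);
  }
  return features;
}

async function classifyFrame(model, results) {
  const features = landmarksToFeatures(results);
  if (!features) return 0;
  const input = tf.tensor2d([features]); // shape [1, 126]
  const output = model.predict(input);
  const confidence = (await output.data())[0];
  input.dispose(); // free tensor memory each frame
  output.dispose();
  return confidence; // clones spawn only above the confidence threshold
}
```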
- Ensure you have Node.js installed, as we will be using `npx` to serve the project locally (required since the app uses your webcam, which browsers won't allow from a `file://` page).
- From the root of the repo, run `npx serve -p <CHOOSE A PORT>` to start a local server (e.g., `npx serve -p 3000` will serve the project at `http://localhost:3000/` if port 3000 is available). The following steps assume port 3000 is used.
- To navigate to each HTML file/page, append its name to the base URL:
  - trainer.html → `http://localhost:3000/trainer`
  - index.html → `http://localhost:3000/index` (or just `http://localhost:3000`)
- In Chrome, navigate to the trainer page (e.g., `http://localhost:3000/trainer`)
- Record samples of your chosen hand sign (both hands visible)
- Record negative samples (random hand positions, edge cases)
- Click train → this generates `gesture-model.json` and `gesture-model.weights.bin`
- Place the exported files in the project root (same folder as the main app); they load back in as sketched below
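For reference, a model exported this way loads back with TensorFlow.js's standard layers-model loader. A minimal sketch, assuming the files sit in the project root and are fetched from the same local server:

```js
// Loads gesture-model.json, which in turn references
// gesture-model.weights.bin from the same directory.
const model = await tf.loadLayersModel('gesture-model.json');
```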
- In Chrome, navigate to the home/index page (e.g., `http://localhost:3000/`)
- Allow camera access
- Perform your trained hand sign and your clones will appear!
In `script.js` you can tweak:
- Clone positions, sizes, and delay times in the `customClones` array
- Confidence threshold in the `predictGesture` function (default: `0.999`); see the illustrative sketch below
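The exact entry shape of `customClones` lives in `script.js`; the field names below are hypothetical stand-ins to show what a tweak might look like, not the repo's actual schema:

```js
// Hypothetical field names -- check the real customClones entries in
// script.js before editing.
const customClones = [
  { x: 0.25, y: 0.60, scale: 0.9, delayMs: 150 }, // clone to the left
  { x: 0.75, y: 0.60, scale: 0.9, delayMs: 300 }, // clone to the right
];

// Raising the threshold cuts false positives; lowering it triggers the
// jutsu more eagerly. The repo's default is 0.999.
const CONFIDENCE_THRESHOLD = 0.999;
```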
In `trainer.js` you can tweak the model topology and training process. The current implementation was sufficient for my purposes, but the model can definitely be optimized.
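If you want a concrete starting point for that experimentation, a small dense classifier in TensorFlow.js might look like the sketch below. The layer sizes, dropout rate, and optimizer settings are illustrative assumptions, not the actual topology in `trainer.js`:

```js
// Illustrative topology only -- see trainer.js for the real definition.
// Assumes a 126-value input (2 hands x 21 landmarks x 3 coordinates).
function buildModel(inputSize = 126) {
  const model = tf.sequential();
  model.add(tf.layers.dense({ inputShape: [inputSize], units: 64, activation: 'relu' }));
  model.add(tf.layers.dropout({ rate: 0.2 })); // guards against overfitting small datasets
  model.add(tf.layers.dense({ units: 32, activation: 'relu' }));
  model.add(tf.layers.dense({ units: 1, activation: 'sigmoid' })); // P(hand sign)
  model.compile({
    optimizer: tf.train.adam(0.001),
    loss: 'binaryCrossentropy',
    metrics: ['accuracy'],
  });
  return model;
}

// Saving with the downloads:// scheme produces exactly the two files the
// main app expects: gesture-model.json and gesture-model.weights.bin.
// await model.save('downloads://gesture-model');
```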
All loaded via the jsdelivr CDN, no installation required:
- TensorFlow.js
- MediaPipe Holistic
- MediaPipe Selfie Segmentation
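In the HTML pages this typically amounts to script tags like the following (the paths are the standard jsDelivr npm builds; the exact tags and any version pinning in this repo may differ):

```html
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@mediapipe/holistic/holistic.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@mediapipe/selfie_segmentation/selfie_segmentation.js"></script>
```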
