Developed by Sadekul Islam (Lì Ào)
This is a cutting-edge Human-Computer Interaction (HCI) system designed to bridge the gap between physical gestures and digital workspaces. No mouse, no keyboard—just natural interaction.
- Neural Gesture Tracking: Real-time hand tracking using Google MediaPipe.
- Asset Beaming: Instant image/file transfer from mobile to PC via custom JSON chunking over Firebase.
- AI Canvas: Direct air-drawing and sketching with fine-grained fingertip precision.
- Multimodal Feedback: Integrated Speech-to-Text (STT) and voice command recognition.
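The gesture-mapping internals are private, but the core idea of turning MediaPipe Hands output into a discrete gesture can be sketched. MediaPipe Hands reports 21 landmarks per hand in normalized [0, 1] image coordinates, where index 4 is the thumb tip and index 8 is the index-finger tip; a "pinch" can be detected from their distance. The threshold below is a hypothetical tuning value, not taken from this project:

```typescript
// One landmark as reported by MediaPipe Hands (normalized coordinates).
interface Landmark {
  x: number;
  y: number;
  z: number;
}

// Hypothetical tuning value; real projects calibrate this per camera setup.
const PINCH_THRESHOLD = 0.05;

// Returns true when the thumb tip (landmark 4) and index fingertip
// (landmark 8) are close enough to count as a pinch gesture.
function isPinching(landmarks: Landmark[]): boolean {
  const thumb = landmarks[4];
  const index = landmarks[8];
  const distance = Math.hypot(
    thumb.x - index.x,
    thumb.y - index.y,
    thumb.z - index.z,
  );
  return distance < PINCH_THRESHOLD;
}
```

A pinch like this is a natural trigger for "grab" or "draw" actions on the canvas, since it maps cleanly to a mouse-down/mouse-up pair.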
- Frontend: React.js / Vite
- Vision Engine: MediaPipe Hands
- Backend & Sync: Firebase Firestore (Custom Data Streaming Logic)
- Styling: Tailwind CSS / Framer Motion
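The exact "Asset Beaming" wire format is private, but the general shape of JSON chunking over Firestore can be sketched. Firestore caps each document at roughly 1 MiB, so a large base64 payload must be split into ordered chunk documents and reassembled by sequence number on the receiving side. All names below (`Chunk`, `chunkPayload`, `reassemble`, `CHUNK_SIZE`) are illustrative assumptions, not the project's actual API:

```typescript
// One chunk document as it might be written to a Firestore collection.
interface Chunk {
  transferId: string; // groups chunks belonging to one transfer
  seq: number;        // position of this chunk within the transfer
  total: number;      // total chunk count, so the receiver knows when it's done
  data: string;       // slice of the base64-encoded payload
}

// Hypothetical size in characters; keeps each document well under
// Firestore's ~1 MiB document limit.
const CHUNK_SIZE = 200_000;

// Split a base64 payload into ordered chunks.
function chunkPayload(transferId: string, base64: string): Chunk[] {
  const total = Math.ceil(base64.length / CHUNK_SIZE);
  return Array.from({ length: total }, (_, seq) => ({
    transferId,
    seq,
    total,
    data: base64.slice(seq * CHUNK_SIZE, (seq + 1) * CHUNK_SIZE),
  }));
}

// Rebuild the payload on the PC side; chunks may arrive out of order,
// so sort by sequence number before joining.
function reassemble(chunks: Chunk[]): string {
  return [...chunks].sort((a, b) => a.seq - b.seq).map((c) => c.data).join("");
}
```

In practice the receiver would listen for chunk documents with a Firestore snapshot listener and call the reassembly step once all `total` chunks for a `transferId` have arrived.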
Copyright © 2026 Sadekul Islam (Lì Ào).
This project is licensed under the MIT License. The data-chunking and gesture-mapping algorithms are the original work of the author.
Note: This project is currently under active development. Some core features are private and will be released upon completion.