Open source local LLM infrastructure for Node.js, web apps, React Native, and native apps.
hilum-labs builds local-first AI runtimes, WebGPU inference cores, JavaScript SDKs, React Native integrations, and native C/C++ model execution layers for running large language models on-device without cloud APIs.
- `local-llm`: local LLM runtime for Node.js and server-side JavaScript (see the usage sketch below).
- `local-llm-web`: web JavaScript SDK for local LLM inference with WebGPU.
- `local-llm-rn`: React Native package for on-device LLM inference on iOS and Android.
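As a rough illustration of what a server-side integration could look like, here is a minimal sketch. It is hypothetical: `loadModel`, `generate`, and their options are illustrative assumptions, not the published `local-llm` API.

```ts
// Hypothetical sketch only; the actual local-llm API may differ.
// All names below (loadModel, generate, modelPath) are illustrative assumptions.
import { loadModel } from "local-llm"; // assumed entry point

async function main() {
  // Load a local model file from disk (path is illustrative).
  const model = await loadModel({ modelPath: "./models/example-model.gguf" });

  // Run a completion entirely on-device; no cloud API is involved.
  const result = await model.generate({
    prompt: "Summarize the benefits of on-device inference.",
    maxTokens: 128,
  });

  console.log(result.text);
}

main().catch(console.error);
```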
- Local LLM runtimes for JavaScript, mobile, web, and native environments.
- On-device inference systems focused on privacy, low latency, and practical deployment.
- WebGPU infrastructure for client-side LLM execution (see the feature-detection sketch after this list).
- Cross-platform SDKs that keep API design consistent across runtimes.
- Open source components for local AI, offline AI, and edge inference workflows.
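Client-side execution depends on WebGPU being available, so a web SDK typically has to feature-detect before loading a model. The check below uses only the standard WebGPU API (`navigator.gpu.requestAdapter`) and assumes WebGPU type definitions (e.g. the `@webgpu/types` package) are installed; the fallback branch is an assumption about how an application might respond, not part of any hilum-labs SDK.

```ts
// Feature-detect WebGPU before attempting client-side LLM inference.
// Uses only the standard WebGPU API; no hilum-labs SDK calls are shown.
async function supportsWebGpu(): Promise<boolean> {
  // navigator.gpu is undefined in browsers without WebGPU support.
  if (!("gpu" in navigator)) return false;

  // requestAdapter() resolves to null when no suitable GPU is available.
  const adapter = await navigator.gpu.requestAdapter();
  return adapter !== null;
}

supportsWebGpu().then((ok) => {
  // An application would branch here: run locally, or degrade gracefully.
  console.log(ok ? "WebGPU available: local inference possible" : "No WebGPU: fall back");
});
```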
| Repository | Focus |
|---|---|
| local-llm | Node.js local LLM package |
| local-llm-web | Web SDK for WebGPU LLM inference |
| local-llm-web-core | WebGPU runtime core |
| local-llm-rn | React Native local AI package |
| local-llm-js-core | Shared JavaScript runtime primitives |
| hilum-local-llm-engine | Native C/C++ inference engine |
- Expand the `local-llm` package family across Node.js, web, and React Native.
- Build a dedicated WebGPU runtime core and web SDK for local LLM inference.
- Keep local AI APIs consistent across JavaScript and native runtimes (see the interface sketch after this list).
- Ship open source infrastructure for web AI, on-device inference, and private model execution.
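One common way to keep APIs consistent across runtimes is a shared contract that each platform package implements. The interface below is a hypothetical sketch of that pattern; every name in it is an assumption, not the actual `local-llm-js-core` types.

```ts
// Hypothetical sketch of a shared cross-runtime contract; the real
// local-llm-js-core types may differ. All names here are assumptions.

interface GenerateOptions {
  prompt: string;
  maxTokens?: number;
  temperature?: number;
}

interface GenerateResult {
  text: string;
  tokensGenerated: number;
}

// Each platform package (Node.js, web/WebGPU, React Native) would export
// a backend implementing this one interface, so application code written
// against it ports across runtimes unchanged.
interface LocalLlmBackend {
  loadModel(modelPathOrUrl: string): Promise<void>;
  generate(options: GenerateOptions): Promise<GenerateResult>;
  dispose(): void;
}
```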
Questions, feedback, hiring, or partnership inquiries: info@hilumlabs.com