# Getting Started
- Prerequisites
- Installation Options
- First Run Setup
- Project Tooling Stack
- Troubleshooting Common Issues
## Prerequisites

Before installing Oxide Lab, ensure your system meets the following requirements:

### System Requirements
- Operating System: Windows 10/11 (primary supported platform)
- RAM: Minimum 4 GB (8 GB or more recommended for larger models)
- Storage: Sufficient space for model files (typically 3-15 GB depending on model size)
- Processor: Modern CPU with SSE4.1 support (Intel Core i5 or equivalent recommended)
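The storage figures above follow directly from parameter count and quantization width. As a rough sketch (the bits-per-weight values are approximations, not exact GGUF file sizes, and the function name is illustrative):

```rust
/// Rough size estimate for a quantized GGUF file, in gigabytes.
/// `params` is the parameter count; `bits_per_weight` is the effective
/// quantization width (roughly 4.5-5 for Q4_K_M, about 8.5 for Q8_0).
fn estimated_gguf_size_gb(params: u64, bits_per_weight: f64) -> f64 {
    params as f64 * bits_per_weight / 8.0 / 1e9
}

fn main() {
    // A 7B model at ~4.5 bits per weight lands near 4 GB,
    // consistent with the 3-15 GB range quoted above.
    println!("{:.1} GB", estimated_gguf_size_gb(7_000_000_000, 4.5));
}
```

Smaller models such as Qwen3 0.6B come out well under 1 GB at the same quantization, which is why they are a good first test on limited hardware.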
### Software Dependencies
- Rust: Required for building from source. Install via `rustup`.
- Node.js: Version 18 or higher. Required for frontend development and building.
- CUDA Drivers (optional): For GPU acceleration on NVIDIA GPUs. Install CUDA 11.7 or higher from NVIDIA's website.
### Model Requirements
- GGUF Format Models: Oxide Lab currently supports Qwen3 models in GGUF format
- Tokenizer File: A `tokenizer.json` file that matches your model is required
The application is designed to work with both CPU and GPU inference. While GPU acceleration (via CUDA) is supported and recommended for better performance, the application will fall back to CPU mode on systems without compatible NVIDIA hardware.
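The fallback described above can be sketched with a stdlib-only stand-in. In the real backend, candle exposes a helper of this shape (`Device::cuda_if_available`); the enum and function here are hypothetical stand-ins for illustration:

```rust
// Hypothetical mirror of candle's device selection, for illustration only.
#[derive(Debug, PartialEq)]
enum Device {
    Cuda(usize), // GPU ordinal
    Cpu,
}

/// Try to initialize CUDA; fall back to CPU when no compatible GPU is found.
fn select_device(cuda_available: bool) -> Device {
    if cuda_available {
        Device::Cuda(0)
    } else {
        Device::Cpu
    }
}

fn main() {
    // On a machine without an NVIDIA GPU the probe fails, so inference
    // transparently runs on the CPU instead of erroring out.
    println!("selected: {:?}", select_device(false));
}
```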
**Section sources**
- README.md
- src-tauri/Cargo.toml
## Installation Options

### Pre-built Binary

For most users, the easiest way to get started with Oxide Lab is to download the pre-built binary:
- Visit the GitHub Releases page for Oxide Lab
- Download the latest Windows installer (.msi or .exe) for your system architecture
- Run the installer and follow the on-screen instructions
- Launch Oxide Lab from the Start menu or desktop shortcut
The pre-built binary includes all necessary dependencies and is ready to use immediately after installation.
### Building from Source

For developers or users who want to customize the application, you can build Oxide Lab from source:

**Step 1: Clone the Repository**

```bash
git clone https://github.com/oxide-lab/oxide-lab.git
cd oxide-lab
```

**Step 2: Install Dependencies**

```bash
npm install
```

**Step 3: Build the Application**

```bash
cargo tauri build
```

This command will:

- Build the frontend using Vite
- Compile the Rust backend with Tauri
- Create a bundled application in the `src-tauri/target/release/bundle/` directory
**Alternative Build Commands**

- For development: `npm run tauri dev`
- For production build: `npm run build && cargo tauri build`
The build process automatically handles both the frontend and backend compilation, creating a standalone executable that can be distributed.
**Updated** Added new Tauri development and build scripts for CPU and CUDA configurations.
**Section sources**
- package.json
- vite.config.js
- src-tauri/Cargo.toml
- tauri.conf.json
- package.json - Added in commit 240401a
## First Run Setup

When launching Oxide Lab for the first time, you'll need to configure your model source. The application supports two primary methods:

### Local GGUF Models

- Click "Select Model File" in the application interface
- Navigate to your local GGUF model file (e.g., `qwen3-7b.Q4_K_M.gguf`)
- Click "Select Tokenizer" and choose the corresponding `tokenizer.json` file
- Configure inference parameters as needed
- Click "Load" to initialize the model
### Hugging Face Hub Models
- Use the built-in model browser to search for Qwen3 models
- Select a model from the search results
- The application will automatically download the model and tokenizer
- Wait for the download to complete
- The model will be loaded automatically upon download completion
After loading your model, you can adjust the following settings:
### Generation Parameters
- Temperature: Controls randomness (0.1-0.3 for accuracy, 0.7-1.0 for creativity)
- Top-K/Top-P: Fine-tunes token selection during generation
- Min-P: Minimum probability threshold for token selection
- Repeat Penalty: Reduces repetition in generated text
- Context Length: Adjust based on available system resources
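To see how temperature and Min-P interact, here is an illustrative stdlib-only sketch. The function names are hypothetical (the real sampler lives in the Rust backend), and Min-P is shown in its common form: keep tokens whose probability is at least `min_p` times the top token's probability:

```rust
/// Convert logits to probabilities at a given temperature.
/// Lower temperature sharpens the distribution; higher flattens it.
fn softmax(logits: &[f32], temperature: f32) -> Vec<f32> {
    let scaled: Vec<f32> = logits.iter().map(|l| l / temperature).collect();
    let max = scaled.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = scaled.iter().map(|l| (l - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

/// Min-P rule: keep tokens whose probability is at least `min_p`
/// times the top token's probability; return their indices.
fn min_p_filter(probs: &[f32], min_p: f32) -> Vec<usize> {
    let top = probs.iter().cloned().fold(0.0f32, f32::max);
    probs
        .iter()
        .enumerate()
        .filter(|&(_, &p)| p >= min_p * top)
        .map(|(i, _)| i)
        .collect()
}

fn main() {
    let logits = [2.0, 1.0, 0.1, -1.0];
    // Low temperature concentrates mass on the top token (accuracy)...
    let sharp = softmax(&logits, 0.2);
    // ...high temperature spreads it out (creativity).
    let flat = softmax(&logits, 1.0);
    assert!(sharp[0] > flat[0]);
    // Min-P then drops candidates far below the leader.
    println!("kept {} of {} tokens", min_p_filter(&flat, 0.3).len(), logits.len());
}
```

This is why the guidance above pairs low temperature (0.1-0.3) with factual tasks: the sampled token is almost always the most probable one.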
### Interface Settings
- Enable "Thinking Mode" for step-by-step reasoning
- Adjust window size and appearance preferences
- Configure keyboard shortcuts for common actions
To load your first model successfully:
- Ensure you have a compatible Qwen3 model in GGUF format
- Verify the model file and tokenizer are in the same directory
- Start with smaller models (e.g., Qwen3 0.6B or 1.7B) if you have limited system resources
- Monitor the loading progress indicators in the UI
- Once loaded, test with a simple prompt like "Hello, how are you?"
The application includes intelligent memory management that optimizes performance based on your available system resources.
**Section sources**
- README.md
- src/lib/services/huggingface.ts
## Project Tooling Stack

Oxide Lab leverages a modern technology stack that combines Rust's performance with web technologies for an intuitive user interface.
### Tauri

Tauri provides the bridge between the frontend and backend, enabling a native desktop application with web technologies.

**Key Configuration (tauri.conf.json)**
```json
{
  "build": {
    "beforeDevCommand": "npm run dev",
    "devUrl": "http://localhost:1420",
    "beforeBuildCommand": "npm run build",
    "frontendDist": "../build"
  },
  "app": {
    "windows": [
      {
        "title": "Oxide Lab",
        "decorations": false,
        "width": 1280,
        "height": 720,
        "minWidth": 640,
        "minHeight": 360
      }
    ]
  }
}
```

This configuration sets up:

- Development server on port 1420
- Frontend build process integration
- Window properties and constraints
- Security settings (CSP disabled for local operation)
**Section sources**
- tauri.conf.json
### Svelte

Svelte provides a reactive, component-based approach to building the user interface.

**Svelte Configuration (svelte.config.js)**
```js
import adapter from "@sveltejs/adapter-static";
import { vitePreprocess } from "@sveltejs/vite-plugin-svelte";

const config = {
  preprocess: vitePreprocess(),
  kit: {
    adapter: adapter({
      fallback: "index.html",
    }),
  },
};

export default config;
```

The configuration enables:

- Static site generation for Tauri integration
- SPA (Single Page Application) mode with `index.html` fallback
- Vite preprocessing for optimized builds
### Vite

Vite provides a fast development server and optimized production builds.

**Vite Configuration (vite.config.js)**
```js
import { defineConfig } from "vite";
import { sveltekit } from "@sveltejs/kit/vite";

export default defineConfig(async () => ({
  plugins: [sveltekit()],
  clearScreen: false,
  server: {
    port: 1420,
    strictPort: false,
    watch: {
      ignored: ["**/src-tauri/**"],
    },
  },
}));
```

Key features:

- Development server on port 1420 (matches the Tauri configuration)
- Integration with SvelteKit
- Watch mode with the `src-tauri` directory ignored
- Optimized build pipeline
### Cargo

Cargo manages the Rust dependencies and build process for the backend.

**Cargo Configuration (Cargo.toml)**
```toml
[dependencies]
tauri = { version = "2", features = [] }
candle = { package = "candle-core", git = "https://github.com/huggingface/candle.git", default-features = false }
candle-nn = { git = "https://github.com/huggingface/candle.git", default-features = false }
hf-hub = "0.4"

[features]
default = ["cuda"]
cuda = [
  "candle/cuda",
  "candle-nn/cuda",
  "candle-transformers/cuda",
]
```

The configuration includes:

- Tauri for desktop integration
- Candle framework for machine learning operations
- Hugging Face Hub integration for model downloading
- Optional CUDA support enabled by default
**Updated** Added new npm scripts for Tauri development and build processes with CPU and CUDA configurations.

**Diagram sources**
- package.json - Added in commit 240401a
- src-tauri/Cargo.toml - Updated in commit 240401a
```mermaid
graph TD
    A["Frontend (Svelte/Vite)"] -->|API Calls| B["Tauri Bridge"]
    B --> C["Rust Backend (Cargo)"]
    C --> D["Candle ML Framework"]
    D --> E["CPU/GPU (CUDA)"]
    D --> F["Hugging Face Hub"]
    C --> G["Local File System"]
    B --> H["Native Desktop Features"]
    style A fill:#4CAF50,stroke:#388E3C
    style B fill:#2196F3,stroke:#1976D2
    style C fill:#9C27B0,stroke:#7B1FA2
    style D fill:#FF9800,stroke:#F57C00
    style E fill:#607D8B,stroke:#455A64
    style F fill:#00BCD4,stroke:#0097A7
    style G fill:#795548,stroke:#5D4037
    style H fill:#673AB7,stroke:#512DA8
```
**Diagram sources**
- [package.json](file://package.json#L1-L46)
- [src-tauri/Cargo.toml](file://src-tauri/Cargo.toml#L1-L54)
- [tauri.conf.json](file://src-tauri/tauri.conf.json#L1-L52)
## Troubleshooting Common Issues
### Build Failures
**Missing Dependencies**
- Ensure Rust is installed: `rustup --version`
- Verify Node.js is installed: `node --version`
- Install build tools: `npm install` in project root
**CUDA Build Issues**
- Verify CUDA toolkit is installed: `nvcc --version`
- Ensure CUDA version is compatible (11.7+)
- Set environment variables:

```bash
export CUDA_HOME=/usr/local/cuda
export PATH=$CUDA_HOME/bin:$PATH
```
**Common Error Messages**
- "Cannot find crate": Run `cargo clean` and rebuild
- "Linking errors": Ensure compatible Rust and Node.js versions
- "Missing symbols": Update submodules with `git submodule update --init --recursive`
### Missing Dependencies
**Frontend Dependencies**

```bash
npm cache clean --force
rm -rf node_modules package-lock.json
npm install
```
**Rust Dependencies**

```bash
cargo clean
cargo update
cargo build
```
### Runtime Issues
**Model Loading Problems**
- Verify model file integrity
- Ensure tokenizer.json matches the model
- Check file permissions
- Start with smaller models to test functionality
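The first two checks above can be automated before any weights are loaded. A hypothetical pre-flight helper using only the standard library (the real application's validation may differ):

```rust
use std::path::Path;

/// Cheap sanity checks before loading: the model must be a .gguf file,
/// and a tokenizer.json should sit next to it in the same directory.
fn preflight(model_path: &Path) -> Result<(), String> {
    if model_path.extension().and_then(|e| e.to_str()) != Some("gguf") {
        return Err("model is not a .gguf file".into());
    }
    let tokenizer = model_path.with_file_name("tokenizer.json");
    if !tokenizer.exists() {
        return Err(format!("missing {}", tokenizer.display()));
    }
    Ok(())
}

fn main() {
    // A wrong extension is caught before any memory is allocated for weights.
    assert!(preflight(Path::new("model.safetensors")).is_err());
    println!("preflight checks wired up");
}
```

Failing fast like this turns a confusing mid-load crash into an actionable error message pointing at the missing or mismatched file.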
**Performance Issues**
- Reduce context length for better performance
- Close other memory-intensive applications
- Ensure adequate system cooling for sustained performance
- Consider upgrading to a model size appropriate for your hardware
**General Tips**
- Keep the application updated to the latest version
- Monitor system resources during operation
- Use the built-in "Thinking Mode" to understand AI reasoning
- Report persistent issues on the GitHub repository
**Updated** Added new build scripts for CPU and CUDA configurations that require specific troubleshooting guidance.
**Section sources**
- [README.md](file://README.md#L1-L167)
- [src-tauri/Cargo.toml](file://src-tauri/Cargo.toml#L37-L49)
- [example/candle-book/src/error_manage.md](file://example/candle-book/src/error_manage.md#L30-L51)
- [package.json](file://package.json#L14-L17) - *Added in commit 240401a*
**Referenced Files in This Document**
- [README.md](file://README.md)
- [package.json](file://package.json)
- [vite.config.js](file://vite.config.js)
- [svelte.config.js](file://svelte.config.js)
- [src-tauri/Cargo.toml](file://src-tauri/Cargo.toml)
- [src-tauri/tauri.conf.json](file://src-tauri/tauri.conf.json)
- [src-tauri/src/main.rs](file://src-tauri/src/main.rs)
- [src-tauri/src/lib.rs](file://src-tauri/src/lib.rs)
- [src/lib/services/huggingface.ts](file://src/lib/services/huggingface.ts)