A comprehensive Rust application for interacting with Ollama AI models, featuring both local and remote server connectivity, image analysis capabilities, and real-time streaming responses with performance metrics.
- Multi-Connection Support: Connect to both local and remote Ollama servers
- Interactive Menu System: Easy-to-use command-line interface
- Image Analysis: Analyze images using vision models (llava)
- Real-time Streaming: Stream responses with live performance metrics
- Performance Monitoring: Track tokens per second, response times, and throughput
- Flexible Configuration: Environment-based configuration with .env support
- Command Line Interface: Direct command execution with arguments
- Server Fallback: Automatic fallback from remote to local connections
- Rust (latest stable version)
- Ollama installed and running
- Vision Model (optional, for image analysis):
ollama pull llava
graph TB
%% Main Entry
CLI[CLI main.rs<br/>--prompt, --test, --local, --image]
%% Core Modules Group
subgraph Modules[Core Modules]
Remote[Remote<br/>connecttoollama.rs]
Local[Local<br/>connectlocally.rs]
Vision[Vision<br/>imagedescriber.rs]
end
%% Dependencies Group
subgraph Config[Configuration]
ENV[.env file]
Images[./images/]
end
%% Servers Group
subgraph Servers[Ollama Servers :11434]
RemoteServer[Remote Server]
LocalServer[Local Server]
end
%% Connections
CLI --> Modules
Config --> Modules
Modules --> Servers
OllamaLib[ollama-rs] --> Modules
Vision -.->|Fallback| LocalServer
%% Styling
style CLI stroke:#000,stroke-width:2px
style Remote stroke:#000,stroke-width:2px
style Local stroke:#000,stroke-width:2px
style Vision stroke:#000,stroke-width:2px
style ENV stroke:#000,stroke-width:2px
style Images stroke:#000,stroke-width:2px
style RemoteServer stroke:#000,stroke-width:2px
style LocalServer stroke:#000,stroke-width:2px
style OllamaLib stroke:#000,stroke-width:2px
style Modules stroke:#000,stroke-width:2px
style Config stroke:#000,stroke-width:2px
style Servers stroke:#000,stroke-width:2px
- Clone the repository:
  git clone https://github.com/Not-Buddy/Rust-AI-Ollama.git
  cd Rust-AI-Ollama
- Set up environment variables:
  cp .envexample .env
  Edit .env with your configuration:
  server_ip=your.server.ip.address
  model=llama3.2
  vision_model=llava
- Create an images directory:
  mkdir images
  Add your images (jpg, jpeg, png, gif, bmp, webp) to this directory for analysis.
- Build the application:
  cargo build --release
Launch the interactive menu:
cargo run

Menu Options:
- Generate Response (Remote Server) - Connect to configured remote server
- Generate Response (Local) - Use local Ollama instance
- Test Server Connection - Test remote server connectivity
- Test Local Connection - Test local Ollama connectivity
- View Configuration - Display current settings
- Analyze Image - AI-powered image analysis
- Exit - Close application
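These menu options map onto the command-line flags shown below. Here is a minimal sketch of how such flags could be declared with clap's derive API (the struct and field names are illustrative, not the project's actual definitions, and clap's derive feature is assumed to be enabled):

```rust
use clap::Parser;

/// Flags mirroring the examples below (names are illustrative).
#[derive(Parser, Debug)]
#[command(about = "Rust client for Ollama")]
struct Cli {
    /// Send a prompt directly, skipping the interactive menu
    #[arg(long)]
    prompt: Option<String>,

    /// Use the local Ollama instance instead of the remote server
    #[arg(long)]
    local: bool,

    /// Test server connections and exit
    #[arg(long)]
    test: bool,

    /// Image file in ./images/ to analyze with the vision model
    #[arg(long)]
    image: Option<String>,
}

fn main() {
    let cli = Cli::parse();
    if cli.test {
        println!("Testing connections...");
    } else if let Some(prompt) = cli.prompt {
        println!("Prompt: {prompt} (local = {})", cli.local);
    }
}
```

With a parser like this, cargo run -- --local --prompt "..." selects the local path without opening the menu.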
Direct text generation:
cargo run -- --prompt "Explain quantum computing"
Use local instance:
cargo run -- --local --prompt "What is Rust programming?"
Test connections:
cargo run -- --test
Analyze specific image:
cargo run -- --image photo.jpg

Rust-AI-Ollama/
├── src/
│   ├── main.rs              # Main application and menu system
│   ├── connecttoollama.rs   # Remote server connection logic
│   ├── connectlocally.rs    # Local Ollama connection logic
│   └── imagedescriber.rs    # Image analysis functionality
├── images/                  # Directory for image analysis
├── .env                     # Environment configuration
├── .envexample              # Example environment file
├── Cargo.toml               # Dependencies and project metadata
└── README.md                # This file
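Given this layout, main.rs presumably declares the other source files as modules and dispatches to them; a rough sketch (the dispatch targets are placeholders, not the project's actual functions):

```rust
// src/main.rs — module declarations matching the files above
// (the dispatch notes are illustrative; the real functions may differ)
mod connecttoollama; // remote server connection logic
mod connectlocally;  // local Ollama connection logic
mod imagedescriber;  // image analysis functionality

fn main() {
    // Parse flags or show the interactive menu, then dispatch:
    //   remote prompt  -> connecttoollama
    //   local prompt   -> connectlocally
    //   image analysis -> imagedescriber
}
```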
The .env file supports the following variables:
# Remote server configuration
server_ip=192.168.1.100 # Your Ollama server IP
model=llama3.2 # Default text model
vision_model=llava # Model for image analysis

Supported image formats:
- JPEG/JPG
- PNG
- GIF
- BMP
- WebP
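These settings are typically loaded once at startup. A minimal sketch using the dotenv crate with the variable names shown above (the fallback defaults are assumptions, not necessarily what the application uses):

```rust
use std::env;

/// Runtime settings read from .env (key names match the file above).
struct Settings {
    server_ip: String,
    model: String,
    vision_model: String,
}

fn load_settings() -> Settings {
    // Load .env from the working directory, if present.
    dotenv::dotenv().ok();

    Settings {
        server_ip: env::var("server_ip").unwrap_or_else(|_| "127.0.0.1".into()),
        model: env::var("model").unwrap_or_else(|_| "llama3.2".into()),
        vision_model: env::var("vision_model").unwrap_or_else(|_| "llava".into()),
    }
}

fn main() {
    let s = load_settings();
    println!("server: {}, model: {}, vision model: {}", s.server_ip, s.model, s.vision_model);
}
```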
The application provides detailed performance analytics (the calculation is sketched after this list):
- Total Response Time: End-to-end request duration
- Tokens Generated: Number of tokens in response
- Tokens per Second: Real-time throughput measurement
- Server Metrics: Ollama-reported evaluation times and speeds
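The server-side figures come from the eval_count and eval_duration fields Ollama reports with its final response (eval_duration is in nanoseconds); the arithmetic is straightforward, as in this sketch:

```rust
use std::time::Instant;

/// Tokens per second from Ollama's reported eval metrics
/// (eval_duration is reported in nanoseconds).
fn tokens_per_second(eval_count: u64, eval_duration_ns: u64) -> f64 {
    if eval_duration_ns == 0 {
        return 0.0;
    }
    eval_count as f64 / (eval_duration_ns as f64 / 1_000_000_000.0)
}

fn main() {
    let start = Instant::now();
    // ... send the request and collect the streamed response here ...
    let total = start.elapsed(); // Total Response Time (end-to-end)

    // Example values as a server might report them:
    let eval_count = 128;                 // Tokens Generated
    let eval_duration_ns = 2_400_000_000; // server-side generation time (ns)

    println!("total response time: {:.2?}", total);
    println!("tokens generated:    {eval_count}");
    println!("tokens per second:   {:.1}", tokens_per_second(eval_count, eval_duration_ns));
}
```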
Key dependencies include:
[dependencies]
ollama-rs = "0.3.2" # Ollama API client
tokio = "1.0" # Async runtime
tokio-stream = "0.1" # Stream utilities
clap = "4.0" # Command line parsing
dotenv = "0.15" # Environment variables
base64 = "0.22" # Image encodingThe application follows this connection priority:
- Remote Server (if configured in .env)
- Local Fallback (automatic if remote fails; see the sketch after this list)
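A sketch of that priority using ollama-rs; the constructor and request types are assumed from the 0.3 API and may differ slightly between versions:

```rust
use ollama_rs::{generation::completion::request::GenerationRequest, Ollama};

/// Try the configured remote server first; fall back to localhost on failure.
async fn generate_with_fallback(
    server_ip: &str,
    model: &str,
    prompt: &str,
) -> Result<String, Box<dyn std::error::Error>> {
    let remote = Ollama::new(format!("http://{server_ip}"), 11434);

    match remote
        .generate(GenerationRequest::new(model.to_string(), prompt.to_string()))
        .await
    {
        Ok(resp) => Ok(resp.response),
        Err(err) => {
            eprintln!("remote server failed ({err}); falling back to local Ollama");
            let local = Ollama::default(); // http://localhost:11434
            let resp = local
                .generate(GenerationRequest::new(model.to_string(), prompt.to_string()))
                .await?;
            Ok(resp.response)
        }
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let answer = generate_with_fallback("192.168.1.100", "llama3.2", "Say hello").await?;
    println!("{answer}");
    Ok(())
}
```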
- Place images in the ./images/ directory (see the sketch after this list)
- Select "Analyze Image" from the menu or use --image filename
- Choose an image from the numbered list
- Enter a custom prompt or use the default
- View the AI analysis with performance metrics
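A sketch of the first two steps, listing supported files from ./images/ and base64-encoding the chosen one for the vision request (the helper name is illustrative; the request itself is omitted):

```rust
use base64::{engine::general_purpose::STANDARD, Engine as _};
use std::{fs, path::PathBuf};

const SUPPORTED: &[&str] = &["jpg", "jpeg", "png", "gif", "bmp", "webp"];

/// List supported image files in ./images/ for the numbered menu.
fn list_images() -> std::io::Result<Vec<PathBuf>> {
    let mut images: Vec<PathBuf> = fs::read_dir("./images")?
        .filter_map(|entry| entry.ok().map(|e| e.path()))
        .filter(|path| {
            path.extension()
                .and_then(|ext| ext.to_str())
                .map(|ext| SUPPORTED.contains(&ext.to_lowercase().as_str()))
                .unwrap_or(false)
        })
        .collect();
    images.sort();
    Ok(images)
}

fn main() -> std::io::Result<()> {
    let images = list_images()?;
    for (i, path) in images.iter().enumerate() {
        println!("{}. {}", i + 1, path.display());
    }

    // The chosen image is base64-encoded before being attached to the
    // vision-model request (only the encoding step is shown here).
    if let Some(first) = images.first() {
        let bytes = fs::read(first)?;
        let encoded = STANDARD.encode(&bytes);
        println!("encoded {} bytes into {} base64 characters", bytes.len(), encoded.len());
    }
    Ok(())
}
```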
# Use custom prompt for image analysis
cargo run -- --image nature.jpg
# Then enter: "Identify all the animals in this image"

- Fork the repository
- Create a feature branch: git checkout -b feature-name
- Make your changes and commit: git commit -m 'Add feature'
- Push to the branch: git push origin feature-name
- Submit a pull request
This project is open source. See the repository for license details.
Connection Refused:
# Check if Ollama is running
ollama serve
# Test connection
curl http://localhost:11434

Missing Models:
# Pull required models
ollama pull llama3.2
ollama pull llava

Environment Variables:
- Ensure the .env file exists and contains valid configuration
- Check that server_ip is accessible from your network
- Use local instance for faster response times
- Configure appropriate models for your hardware
- Monitor token generation rates to optimize performance
For issues or questions:
- Open an issue on GitHub
- Check existing issues for solutions
- Review the troubleshooting section
Built with ❤️ in Rust | Powered by Ollama