Rust AI Ollama Client

A comprehensive Rust application for interacting with Ollama AI models, featuring both local and remote server connectivity, image analysis capabilities, and real-time streaming responses with performance metrics.

πŸš€ Features

  • Multi-Connection Support: Connect to both local and remote Ollama servers
  • Interactive Menu System: Easy-to-use command-line interface
  • Image Analysis: Analyze images using vision models (llava)
  • Real-time Streaming: Stream responses with live performance metrics
  • Performance Monitoring: Track tokens per second, response times, and throughput
  • Flexible Configuration: Environment-based configuration with .env support
  • Command Line Interface: Direct command execution with arguments
  • Server Fallback: Automatic fallback from remote to local connections

πŸ“‹ Prerequisites

  • Rust (latest stable version)
  • Ollama installed and running
  • Vision Model (optional, for image analysis): ollama pull llava

Project Architecture

```mermaid
graph TB
    %% Main Entry
    CLI[CLI main.rs<br/>--prompt, --test, --local, --image]
    
    %% Core Modules Group
    subgraph Modules[Core Modules]
        Remote[Remote<br/>connecttoollama.rs]
        Local[Local<br/>connectlocally.rs]
        Vision[Vision<br/>imagedescriber.rs]
    end
    
    %% Dependencies Group
    subgraph Config[Configuration]
        ENV[.env file]
        Images[./images/]
    end
    
    %% Servers Group
    subgraph Servers[Ollama Servers :11434]
        RemoteServer[Remote Server]
        LocalServer[Local Server]
    end
    
    %% Connections
    CLI --> Modules
    Config --> Modules
    Modules --> Servers
    OllamaLib[ollama-rs] --> Modules
    
    Vision -.->|Fallback| LocalServer
    
    %% Styling
    style CLI stroke:#000,stroke-width:2px
    style Remote stroke:#000,stroke-width:2px
    style Local stroke:#000,stroke-width:2px
    style Vision stroke:#000,stroke-width:2px
    style ENV stroke:#000,stroke-width:2px
    style Images stroke:#000,stroke-width:2px
    style RemoteServer stroke:#000,stroke-width:2px
    style LocalServer stroke:#000,stroke-width:2px
    style OllamaLib stroke:#000,stroke-width:2px
    style Modules stroke:#000,stroke-width:2px
    style Config stroke:#000,stroke-width:2px
    style Servers stroke:#000,stroke-width:2px
```

πŸ› οΈ Installation

  1. Clone the repository:

     ```bash
     git clone https://github.com/Not-Buddy/Rust-AI-Ollama.git
     cd Rust-AI-Ollama
     ```

  2. Set up environment variables:

     ```bash
     cp .envexample .env
     ```

     Edit .env with your configuration:

     ```
     server_ip=your.server.ip.address
     model=llama3.2
     vision_model=llava
     ```

  3. Create the images directory:

     ```bash
     mkdir images
     ```

     Add your images (jpg, jpeg, png, gif, bmp, webp) to this directory for analysis.

  4. Build the application:

     ```bash
     cargo build --release
     ```

πŸš€ Usage

Interactive Menu

Launch the interactive menu:

```bash
cargo run
```

Menu Options:

  1. Generate Response (Remote Server) - Connect to configured remote server
  2. Generate Response (Local) - Use local Ollama instance
  3. Test Server Connection - Test remote server connectivity
  4. Test Local Connection - Test local Ollama connectivity
  5. View Configuration - Display current settings
  6. Analyze Image - AI-powered image analysis
  7. Exit - Close application
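
Internally the menu is just a read-and-match loop over these choices. A minimal sketch of that dispatch (the handler bodies here are placeholders, not the project's actual functions):

```rust
use std::io::{self, Write};

fn main() {
    loop {
        println!("1) Remote  2) Local  3) Test remote  4) Test local  5) Config  6) Image  7) Exit");
        print!("Select an option: ");
        io::stdout().flush().unwrap();

        let mut choice = String::new();
        io::stdin().read_line(&mut choice).unwrap();

        // Dispatch on the numeric selection; invalid input re-prompts.
        match choice.trim() {
            "1" => println!("-> generate via remote server"),
            "2" => println!("-> generate via local Ollama"),
            "3" => println!("-> test remote connection"),
            "4" => println!("-> test local connection"),
            "5" => println!("-> print current configuration"),
            "6" => println!("-> analyze an image"),
            "7" => break,
            _ => println!("Invalid choice, try again."),
        }
    }
}
```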

Command Line Interface

Direct text generation:

```bash
cargo run -- --prompt "Explain quantum computing"
```

Use local instance:

```bash
cargo run -- --local --prompt "What is Rust programming?"
```

Test connections:

```bash
cargo run -- --test
```

Analyze specific image:

```bash
cargo run -- --image photo.jpg
```
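
These flags map directly onto a clap definition. A minimal sketch using clap 4's derive API (requires the `derive` feature; the struct and field names are assumptions mirroring the documented flags, not necessarily the project's own code):

```rust
use clap::Parser;

/// Rust AI Ollama client (hypothetical flag layout mirroring the examples above).
#[derive(Parser, Debug)]
#[command(name = "rust-ai-ollama")]
struct Args {
    /// Prompt to send to the model
    #[arg(long)]
    prompt: Option<String>,

    /// Use the local Ollama instance instead of the remote server
    #[arg(long)]
    local: bool,

    /// Test server connections and exit
    #[arg(long)]
    test: bool,

    /// Image file (relative to ./images/) to analyze
    #[arg(long)]
    image: Option<String>,
}

fn main() {
    let args = Args::parse();
    println!("{args:?}");
}
```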

πŸ“ Project Structure

```
Rust-AI-Ollama/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ main.rs              # Main application and menu system
β”‚   β”œβ”€β”€ connecttoollama.rs   # Remote server connection logic
β”‚   β”œβ”€β”€ connectlocally.rs    # Local Ollama connection logic
β”‚   └── imagedescriber.rs    # Image analysis functionality
β”œβ”€β”€ images/                  # Directory for image analysis
β”œβ”€β”€ .env                     # Environment configuration
β”œβ”€β”€ .envexample              # Example environment file
β”œβ”€β”€ Cargo.toml               # Dependencies and project metadata
└── README.md                # This file
```

βš™οΈ Configuration

Environment Variables

The .env file supports the following variables:

```
# Remote server configuration
server_ip=192.168.1.100          # Your Ollama server IP
model=llama3.2                   # Default text model
vision_model=llava               # Model for image analysis
```
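
A minimal sketch of loading these values with the `dotenv` crate (the fallback defaults are illustrative assumptions):

```rust
use std::env;

fn main() {
    // Load .env from the working directory, if present; a missing file is ignored.
    dotenv::dotenv().ok();

    let server_ip = env::var("server_ip").unwrap_or_else(|_| "127.0.0.1".to_string());
    let model = env::var("model").unwrap_or_else(|_| "llama3.2".to_string());
    let vision_model = env::var("vision_model").unwrap_or_else(|_| "llava".to_string());

    println!("server: {server_ip}, model: {model}, vision: {vision_model}");
}
```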

Supported Image Formats

  • JPEG/JPG
  • PNG
  • GIF
  • BMP
  • WebP
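
Selecting an image amounts to filtering ./images/ by these extensions; a minimal, self-contained sketch:

```rust
use std::fs;
use std::path::PathBuf;

/// Return paths in ./images/ whose extension matches a supported image format.
fn list_images() -> std::io::Result<Vec<PathBuf>> {
    const SUPPORTED: &[&str] = &["jpg", "jpeg", "png", "gif", "bmp", "webp"];

    let mut images: Vec<PathBuf> = fs::read_dir("./images")?
        .filter_map(|entry| entry.ok().map(|e| e.path()))
        .filter(|path| {
            path.extension()
                .and_then(|ext| ext.to_str())
                .map(|ext| SUPPORTED.contains(&ext.to_lowercase().as_str()))
                .unwrap_or(false)
        })
        .collect();
    images.sort();
    Ok(images)
}

fn main() -> std::io::Result<()> {
    // Print a numbered list, matching the menu's image-selection step.
    for (i, path) in list_images()?.iter().enumerate() {
        println!("{}. {}", i + 1, path.display());
    }
    Ok(())
}
```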

πŸ“Š Performance Metrics

The application provides detailed performance analytics:

  • Total Response Time: End-to-end request duration
  • Tokens Generated: Number of tokens in response
  • Tokens per Second: Real-time throughput measurement
  • Server Metrics: Ollama-reported evaluation times and speeds
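
Client-side, the throughput numbers reduce to wall-clock time over a token count. A minimal standalone sketch of that calculation (the fixed chunks stand in for a real response stream, and the word count is only a rough proxy for tokens; the server's reported eval counters are the authoritative figures):

```rust
use std::time::Instant;

fn main() {
    let start = Instant::now();

    // Stand-in for the streaming loop: the real client appends each chunk
    // from the Ollama response stream and prints it as it arrives.
    let mut response = String::new();
    for chunk in ["Rust ", "is ", "a ", "systems ", "programming ", "language."] {
        response.push_str(chunk);
        std::thread::sleep(std::time::Duration::from_millis(50));
    }

    let elapsed = start.elapsed().as_secs_f64();
    // Whitespace-split word count as a rough client-side token estimate.
    let tokens = response.split_whitespace().count();

    println!(
        "total time: {elapsed:.3}s | tokens: {tokens} | tok/s: {:.1}",
        tokens as f64 / elapsed
    );
}
```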

πŸ”§ Dependencies

Key dependencies include:

```toml
[dependencies]
ollama-rs = "0.3.2"           # Ollama API client
tokio = "1.0"                 # Async runtime
tokio-stream = "0.1"          # Stream utilities
clap = "4.0"                  # Command line parsing
dotenv = "0.15"               # Environment variables
base64 = "0.22"               # Image encoding
```

πŸš€ Advanced Usage

Server Priority

The application follows this connection priority:

  1. Remote Server (if configured in .env)
  2. Local Fallback (automatic if remote fails)
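
A minimal sketch of that remote-then-local fallback, assuming ollama-rs's `Ollama::new(host, port)` and `Ollama::default()` constructors (the latter targets localhost:11434); error handling is simplified relative to the real modules:

```rust
use ollama_rs::generation::completion::request::GenerationRequest;
use ollama_rs::Ollama;

#[tokio::main]
async fn main() {
    let model = "llama3.2".to_string();
    let prompt = "Say hello.".to_string();

    // Try the remote server configured in .env first.
    let remote = Ollama::new("http://192.168.1.100".to_string(), 11434);
    let request = GenerationRequest::new(model.clone(), prompt.clone());

    let response = match remote.generate(request).await {
        Ok(r) => r,
        Err(e) => {
            // Remote failed; fall back to the local instance.
            eprintln!("remote failed ({e}), falling back to local");
            let local = Ollama::default();
            local
                .generate(GenerationRequest::new(model, prompt))
                .await
                .expect("local Ollama is not reachable either")
        }
    };

    println!("{}", response.response);
}
```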

Image Analysis Workflow

  1. Place images in the ./images/ directory
  2. Select "Analyze Image" from menu or use --image filename
  3. Choose image from numbered list
  4. Enter custom prompt or use default
  5. View AI analysis with performance metrics

Custom Prompts for Images

```bash
# Use custom prompt for image analysis
cargo run -- --image nature.jpg
# Then enter: "Identify all the animals in this image"
```
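
Under the hood, image analysis means base64-encoding the file and attaching it to a generation request against the vision model. A minimal sketch using the `base64` crate together with ollama-rs's `Image::from_base64` and `add_image` builder (assumed from its API; the actual imagedescriber.rs may differ):

```rust
use base64::{engine::general_purpose::STANDARD, Engine as _};
use ollama_rs::generation::completion::request::GenerationRequest;
use ollama_rs::generation::images::Image;
use ollama_rs::Ollama;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Read and base64-encode the image (base64 0.22 engine API).
    let bytes = std::fs::read("./images/nature.jpg")?;
    let encoded = STANDARD.encode(&bytes);

    // Attach the image to a request against the vision model.
    let request = GenerationRequest::new(
        "llava".to_string(),
        "Identify all the animals in this image".to_string(),
    )
    .add_image(Image::from_base64(&encoded));

    let response = Ollama::default().generate(request).await?;
    println!("{}", response.response);
    Ok(())
}
```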

🀝 Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature-name
  3. Make your changes and commit: git commit -m 'Add feature'
  4. Push to branch: git push origin feature-name
  5. Submit a pull request

πŸ“ License

This project is open source. See the repository for license details.

πŸ†˜ Troubleshooting

Common Issues

Connection Refused:

```bash
# Check if Ollama is running
ollama serve

# Test connection
curl http://localhost:11434
```

Missing Models:

```bash
# Pull required models
ollama pull llama3.2
ollama pull llava
```

Environment Variables:

  • Ensure .env file exists and contains valid configuration
  • Check that server_ip is accessible from your network

Performance Tips

  • Use local instance for faster response times
  • Configure appropriate models for your hardware
  • Monitor token generation rates to optimize performance

πŸ“ž Support

For issues or questions:

  • Open an issue on GitHub
  • Check existing issues for solutions
  • Review the troubleshooting section

Built with ❀️ in Rust | Powered by Ollama
