C.O.R.A - Setup Guide

  ================================================================
    ____   ___   ____      _
   / ___| / _ \ |  _ \    / \
  | |    | | | || |_) |  / _ \
  | |___ | |_| ||  _ <  / ___ \
   \____| \___/ |_| \_\/_/   \_\

  Cognitive Operations & Reasoning Assistant
  ================================================================
  Version: 1.0.0
  Unity AI Lab | https://www.unityailab.com
  ================================================================

Quick Install (5 Minutes)

Step 1: Python

Make sure you have Python 3.10 or newer:

python --version

If not installed, download it from https://www.python.org/downloads/

Step 2: Install Dependencies

pip install -r requirements.txt

Step 3: Install Ollama (AI Brain)

winget install Ollama.Ollama

Or download from: ollama.com

Step 4: Download AI Models

ollama pull dolphin-mistral:7b    # Main chat model (CORA's brain)
ollama pull llava                  # Vision/image analysis
ollama pull qwen2.5-coder:7b       # Coding assistance

Step 5: Install mpv (Optional - for YouTube)

Download from: https://sourceforge.net/projects/mpv-player-windows/files/64bit/

Extract to ./tools/ folder - CORA auto-detects mpv.exe (any subfolder works).
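The auto-detection described above can be sketched as a recursive search for `mpv.exe` under `./tools/`. This is an illustrative sketch (the function name `find_mpv` is not CORA's actual code):

```python
import os

def find_mpv(root="tools"):
    """Walk `root` recursively and return the path of the first mpv.exe found, else None."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower() == "mpv.exe":
                return os.path.join(dirpath, name)
    return None
```

Because the search recurses into every subfolder, extracting the mpv archive anywhere under `./tools/` works.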

Step 6: Run CORA

python src/boot_sequence.py


API Keys Setup

CORA works great with NO API keys - Ollama runs locally for free! But these optional keys enable extra features:

Pollinations.AI (FREE - Image Generation)

  1. Go to: https://enter.pollinations.ai/
  2. Click "Get API Key" or sign up
  3. Copy your API key (starts with pk_)
  4. Add to your .env file:
    POLLINATIONS_API_KEY=pk_your_key_here
    

GitHub Token (FREE - Git Operations)

  1. Go to: https://github.com/settings/tokens
  2. Click "Generate new token (classic)"
  3. Name it "CORA" and select scopes: repo, user
  4. Copy the token (starts with ghp_)
  5. Add to your .env file:
    GITHUB_TOKEN=ghp_your_token_here
    

Weather API (FREE - Weather Data)

  1. Go to: https://openweathermap.org/api
  2. Sign up for free account
  3. Go to "API Keys" tab
  4. Copy your API key
  5. Add to your .env file:
    WEATHER_API_KEY=your_key_here
    

News API (FREE - Headlines)

  1. Go to: https://newsapi.org/
  2. Click "Get API Key"
  3. Sign up for free
  4. Copy your API key
  5. Add to your .env file:
    NEWS_API_KEY=your_key_here
    

Setting Up Your .env File

  1. Copy env.example to .env:
    copy env.example .env
  2. Open .env in a text editor
  3. Replace the placeholder values with your actual keys
  4. Save the file

Note: The .env file contains secrets - never commit it to git!

See env.example for detailed instructions on each API key.
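The `.env` format is plain `KEY=value` lines. If you want to sanity-check your file without starting CORA, a minimal stdlib parser (a hypothetical helper, not part of CORA itself) looks like:

```python
def parse_env(path=".env"):
    """Return a dict of KEY=value pairs, skipping blank lines and # comments."""
    env = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env
```

Run it against your `.env` and confirm the keys you expect (e.g. `GITHUB_TOKEN`) are present with non-placeholder values.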


Detailed Installation

System Requirements

| Component | Minimum    | Recommended          |
| --------- | ---------- | -------------------- |
| OS        | Windows 10 | Windows 11           |
| Python    | 3.10+      | 3.11+                |
| RAM       | 8 GB       | 16 GB                |
| GPU       | None       | NVIDIA (CUDA)        |
| VRAM      | N/A        | 8 GB+                |
| Storage   | 5 GB       | 10 GB                |
| mpv       | Optional   | For YouTube playback |

Python Dependencies

Core packages installed via requirements.txt:

| Package           | Purpose                    |
| ----------------- | -------------------------- |
| customtkinter     | Modern GUI framework       |
| ollama            | AI model integration       |
| kokoro            | Neural TTS synthesis       |
| soundfile         | Audio processing           |
| sounddevice       | Audio playback             |
| psutil            | System monitoring          |
| opencv-python     | Webcam/vision              |
| pillow            | Image processing           |
| requests          | API calls                  |
| SpeechRecognition | Voice input                |
| vosk              | Offline speech recognition |

Install all:

pip install -r requirements.txt

Ollama Setup

Installing Ollama

Windows (winget):

winget install Ollama.Ollama

Windows (manual):

  1. Go to ollama.com
  2. Download the Windows installer
  3. Run the installer
  4. Restart your terminal

Starting Ollama

Ollama runs as a background service. Start it:

ollama serve

Or it starts automatically when you pull/run a model.

Downloading Models

Required models:

ollama pull dolphin-mistral:7b   # Main chat - CORA's brain (~4.1 GB)
ollama pull llava                # Vision/image analysis (~4.7 GB)
ollama pull qwen2.5-coder:7b     # Coding assistance (~4.4 GB)

Optional models:

ollama pull llama3.2      # Alternative chat (smaller)
ollama pull codellama     # Alternative code generation
ollama pull phi           # Small & fast for testing

Verifying Ollama

Check if Ollama is running:

ollama list

Should show your downloaded models.
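You can also verify programmatically: Ollama exposes a local HTTP API, and `GET /api/tags` returns the installed models. A defensive check (returns an empty list when Ollama isn't running):

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

def installed_models(host="http://localhost:11434"):
    """Return the names of locally installed Ollama models, or [] if unreachable."""
    try:
        with urlopen(f"{host}/api/tags", timeout=3) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (URLError, OSError):
        return []

# Example: report which required models are still missing
required = {"dolphin-mistral:7b", "llava", "qwen2.5-coder:7b"}
missing = required - set(installed_models())
```

An empty `missing` set means all three required models are pulled; an empty result from `installed_models()` usually means the Ollama service isn't running.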


Voice Setup

TTS (Text-to-Speech)

CORA uses Kokoro TTS by default (neural voice):

pip install kokoro soundfile sounddevice

The af_bella voice is used automatically.

STT (Speech-to-Text)

For voice input:

pip install SpeechRecognition vosk

For offline recognition, optionally download a Vosk model from https://alphacephei.com/vosk/models.


GPU Setup (Optional)

For faster AI responses with NVIDIA GPU:

CUDA Installation

  1. Install NVIDIA drivers (latest)
  2. Install CUDA Toolkit 11.8+
  3. Install cuDNN

Verify GPU

CORA auto-detects GPU via nvidia-smi:

nvidia-smi

You should see your GPU listed with VRAM stats.
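That detection can be mimicked with a small wrapper around `nvidia-smi` (the `--query-gpu` flags are standard nvidia-smi options; the function itself is an illustrative sketch, not CORA's code):

```python
import shutil
import subprocess

def gpu_info():
    """Return 'name, total VRAM' from nvidia-smi, or None if no NVIDIA GPU/driver."""
    if shutil.which("nvidia-smi") is None:
        return None
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total",
             "--format=csv,noheader"],
            capture_output=True, text=True, timeout=5, check=True,
        )
        return out.stdout.strip() or None
    except (subprocess.SubprocessError, OSError):
        return None
```

A `None` result means either the NVIDIA driver isn't installed or there is no NVIDIA GPU; CORA falls back to CPU-only inference in that case.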


Running CORA

Visual Boot (Recommended)

python src/boot_sequence.py
  • Full cyberpunk visual display
  • 10-phase diagnostic with TTS
  • Live system stats
  • Dynamic AI responses

GUI Mode

python src/gui_launcher.py

CLI Mode

python cora.py

Quick Boot (No TTS)

python src/boot_sequence.py --quick

Web Version

Visit unityailab.com/CORA in your browser:

  1. Make sure Ollama is running (ollama serve)
  2. Click [ BACKEND TERMINAL ] to check model status
  3. Download any missing models shown in the terminal
  4. (Optional) Run stats server for live GPU/CPU stats:
    python services/stats_server.py

Configuration

Settings Location

| File                       | Purpose                        |
| -------------------------- | ------------------------------ |
| config/settings.json       | GUI settings (TTS, Ollama, STT) |
| config/voice_commands.json | Wake words, voice config       |
| personality.json           | AI personality traits          |

Default Settings

config/settings.json:

{
  "tts": {
    "enabled": true,
    "rate": 150,
    "volume": 1.0
  },
  "ollama": {
    "enabled": true,
    "model": "llama3.2"
  },
  "stt": {
    "enabled": true,
    "sensitivity": 0.7
  }
}
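When editing settings.json by hand, a missing key shouldn't crash the app. A load-with-defaults pattern (a sketch of the idea, not CORA's actual loader) merges your file over the defaults shown above:

```python
import json

DEFAULTS = {
    "tts": {"enabled": True, "rate": 150, "volume": 1.0},
    "ollama": {"enabled": True, "model": "llama3.2"},
    "stt": {"enabled": True, "sensitivity": 0.7},
}

def load_settings(path="config/settings.json"):
    """Merge user settings over DEFAULTS, section by section."""
    try:
        with open(path, encoding="utf-8") as f:
            user = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        user = {}
    merged = {}
    for section, defaults in DEFAULTS.items():
        merged[section] = {**defaults, **user.get(section, {})}
    return merged
```

With this pattern, a settings file containing only `{"tts": {"rate": 200}}` still yields complete `ollama` and `stt` sections.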

Troubleshooting

CORA won't start

Check Python version:

python --version

Must be 3.10+

Check dependencies:

pip install -r requirements.txt

No AI responses

Check Ollama is running:

ollama list

Start Ollama:

ollama serve

Pull models:

ollama pull llama3.2

No voice output

Check Kokoro:

pip install kokoro soundfile sounddevice

Test TTS:

python -c "from voice.tts import speak; speak('Hello')"

No GPU detected

Check NVIDIA drivers:

nvidia-smi

Update drivers: nvidia.com/drivers

Webcam not working

Check OpenCV:

pip install opencv-python

Test webcam:

python -c "import cv2; print(cv2.VideoCapture(0).isOpened())"

Directory Structure

After installation:

C.O.R.A/
├── src/
│   ├── boot_sequence.py     # Main boot with visual display
│   ├── cora.py              # CLI application
│   └── gui_launcher.py      # GUI launcher
├── ui/
│   ├── boot_display.py      # Visual boot display
│   ├── app.py               # Main GUI
│   └── panels.py            # GUI panels
├── voice/
│   ├── tts.py               # Kokoro TTS
│   ├── stt.py               # Speech recognition
│   └── wake_word.py         # Wake word detection
├── ai/
│   ├── ollama.py            # Ollama client
│   └── context.py           # Context management
├── cora_tools/              # Python tool modules (20+)
├── tools/                   # Downloaded binaries (mpv, ffmpeg)
├── services/                # Weather, location, etc.
├── config/
│   └── settings.json        # Configuration
├── data/
│   ├── images/              # Generated images
│   └── camera/              # Camera captures
└── requirements.txt         # Python dependencies

Note: cora_tools/ = Python code, tools/ = binaries only


Updating CORA

Pull Latest Code

git pull origin main

Update Dependencies

pip install -r requirements.txt --upgrade

Update AI Models

ollama pull llama3.2
ollama pull llava

Need Help?


C.O.R.A v1.0.0 - Setup Guide Unity AI Lab - Hackall360, Sponge, GFourteen