================================================================
____ ___ ____ _
/ ___| / _ \ | _ \ / \
| | | | | || |_) | / _ \
| |___ | |_| || _ < / ___ \
\____| \___/ |_| \_\/_/ \_\
Cognitive Operations & Reasoning Assistant
================================================================
Version: 1.0.0
Unity AI Lab | https://www.unityailab.com
================================================================
Make sure you have Python 3.10 or newer:

```
python --version
```

If not installed: Download Python

Install the Python dependencies:

```
pip install -r requirements.txt
```

Install Ollama:

```
winget install Ollama.Ollama
```

Or download from: ollama.com

Pull the models:

```
ollama pull dolphin-mistral:7b   # Main chat model (CORA's brain)
ollama pull llava                # Vision/image analysis
ollama pull qwen2.5-coder:7b     # Coding assistance
```

Download mpv from: https://sourceforge.net/projects/mpv-player-windows/files/64bit/

Extract to the ./tools/ folder - CORA auto-detects mpv.exe (any subfolder works).
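The "any subfolder works" auto-detection described above amounts to a recursive scan. A minimal sketch (the function name `find_mpv` is illustrative, not CORA's actual code):

```python
from pathlib import Path

def find_mpv(tools_dir="tools"):
    """Return the first mpv.exe found anywhere under tools_dir, or None.

    Illustrative sketch of the auto-detection described above,
    not CORA's actual implementation.
    """
    root = Path(tools_dir)
    if not root.is_dir():
        return None  # tools/ missing - mpv features simply stay disabled
    for candidate in sorted(root.rglob("mpv.exe")):
        return candidate  # first match, any subfolder depth
    return None
```

Because `rglob` descends into subdirectories, extracting the mpv archive anywhere under `./tools/` is enough.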
Launch CORA:

```
python src/boot_sequence.py
```

CORA works great with NO API keys - Ollama runs locally for free! But these optional keys enable extra features:
- Go to: https://enter.pollinations.ai/
- Click "Get API Key" or sign up
- Copy your API key (starts with `pk_`)
- Add to your `.env` file: `POLLINATIONS_API_KEY=pk_your_key_here`
- Go to: https://github.com/settings/tokens
- Click "Generate new token (classic)"
- Name it "CORA" and select scopes: `repo`, `user`
- Copy the token (starts with `ghp_`)
- Add to your `.env` file: `GITHUB_TOKEN=ghp_your_token_here`
- Go to: https://openweathermap.org/api
- Sign up for a free account
- Go to the "API Keys" tab
- Copy your API key
- Add to your `.env` file: `WEATHER_API_KEY=your_key_here`
- Go to: https://newsapi.org/
- Click "Get API Key"
- Sign up for free
- Copy your API key
- Add to your `.env` file: `NEWS_API_KEY=your_key_here`
- Copy `env.example` to `.env`: `copy env.example .env`
- Open `.env` in a text editor
- Replace the placeholder values with your actual keys
- Save the file
Note: The .env file contains secrets - never commit it to git!
See env.example for detailed instructions on each API key.
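If you prefer not to pull in a third-party dotenv package, a `.env` file like the one above can be loaded with a few lines of standard-library Python. A minimal sketch, assuming simple `KEY=value` lines with no quoting or multi-line values (the function name `load_env` is illustrative):

```python
import os

def load_env(path=".env"):
    """Load simple KEY=value pairs from a .env file into os.environ.

    Minimal sketch: skips blank lines and # comments; does not handle
    quoting, 'export' prefixes, or multi-line values. Existing
    environment variables are never overwritten (setdefault).
    """
    loaded = {}
    try:
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                loaded[key.strip()] = value.strip()
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # no .env present - CORA still works without API keys
    return loaded
```

Because a missing `.env` is silently tolerated, this matches the guide's promise that CORA works with no API keys at all.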
| Component | Minimum | Recommended |
|---|---|---|
| OS | Windows 10 | Windows 11 |
| Python | 3.10+ | 3.11+ |
| RAM | 8 GB | 16 GB |
| GPU | None | NVIDIA (CUDA) |
| VRAM | N/A | 8 GB+ |
| Storage | 5 GB | 10 GB |
| mpv | Optional | For YouTube playback |
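A couple of the minimums in the table above can be checked before installing anything, using only the standard library. A sketch (the function name `preflight` is illustrative; RAM and GPU checks need psutil and nvidia-smi, both covered later in this guide):

```python
import shutil
import sys

MIN_PYTHON = (3, 10)   # minimum Python from the table above
MIN_FREE_GB = 5        # minimum storage from the table above

def preflight(path="."):
    """Check the stdlib-visible minimums: Python version and free disk.

    Sketch only - RAM requires psutil and GPU detection requires
    nvidia-smi, neither of which is attempted here.
    """
    free_gb = shutil.disk_usage(path).free / 1024**3
    return {
        "python_ok": sys.version_info >= MIN_PYTHON,
        "storage_ok": free_gb >= MIN_FREE_GB,
        "free_gb": round(free_gb, 1),
    }
```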
Core packages installed via requirements.txt:
| Package | Purpose |
|---|---|
| customtkinter | Modern GUI framework |
| ollama | AI model integration |
| kokoro | Neural TTS synthesis |
| soundfile | Audio processing |
| sounddevice | Audio playback |
| psutil | System monitoring |
| opencv-python | Webcam/vision |
| pillow | Image processing |
| requests | API calls |
| SpeechRecognition | Voice input |
| vosk | Offline speech recognition |
Install all:

```
pip install -r requirements.txt
```

Windows (winget):

```
winget install Ollama.Ollama
```

Windows (manual):

- Go to ollama.com
- Download the Windows installer
- Run the installer
- Restart your terminal

Ollama runs as a background service. Start it:

```
ollama serve
```

Or it starts automatically when you pull/run a model.
Required models:

```
ollama pull dolphin-mistral:7b   # Main chat - CORA's brain (~4.1 GB)
ollama pull llava                # Vision/image analysis (~4.7 GB)
ollama pull qwen2.5-coder:7b     # Coding assistance (~4.4 GB)
```

Optional models:

```
ollama pull llama3.2    # Alternative chat (smaller)
ollama pull codellama   # Alternative code generation
ollama pull phi         # Small & fast for testing
```

Check if Ollama is running:

```
ollama list
```

It should show your downloaded models.
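The same check can be done programmatically: Ollama exposes a local HTTP API on port 11434, and its `/api/tags` endpoint lists installed models. A stdlib-only sketch that returns an empty list when the server isn't running (the function name `installed_models` is illustrative):

```python
import json
import urllib.error
import urllib.request

def installed_models(host="http://localhost:11434"):
    """Return installed Ollama model names via the local HTTP API.

    Queries Ollama's /api/tags endpoint; returns [] if the server is
    unreachable, so a caller can prompt the user to run `ollama serve`.
    """
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=3) as resp:
            data = json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        return []  # server down, host invalid, or bad JSON
    return [m.get("name", "") for m in data.get("models", [])]
```

A startup check could then warn if, say, `"dolphin-mistral:7b"` is missing from the returned list.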
CORA uses Kokoro TTS by default (neural voice):

```
pip install kokoro soundfile sounddevice
```

The af_bella voice is used automatically.

For voice input:

```
pip install SpeechRecognition vosk
```

Download a Vosk model (optional, for offline use):

- vosk-model-small-en-us
- Extract to `models/vosk/`
For faster AI responses with an NVIDIA GPU:

- Install NVIDIA drivers (latest)
- Install CUDA Toolkit 11.8+
- Install cuDNN

CORA auto-detects the GPU via nvidia-smi:

```
nvidia-smi
```

You should see your GPU listed with VRAM stats.
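Auto-detection of this kind typically just shells out to nvidia-smi and treats any failure as "no GPU". A sketch (the function name `detect_gpu` is illustrative, not CORA's actual code):

```python
import subprocess

def detect_gpu():
    """Return 'name, VRAM' for the first GPU nvidia-smi reports, or None.

    Illustrative sketch of nvidia-smi-based auto-detection, not
    CORA's actual implementation. Any failure (no driver, no GPU,
    command timeout) is treated as 'run on CPU'.
    """
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total",
             "--format=csv,noheader"],
            capture_output=True, text=True, timeout=10, check=True,
        ).stdout.strip()
    except (FileNotFoundError, subprocess.SubprocessError):
        return None  # nvidia-smi absent or failed - fall back to CPU
    return out.splitlines()[0] if out else None
```

On a machine with an NVIDIA card this returns something like `NVIDIA GeForce RTX 3080, 10240 MiB`; on anything else it returns `None` and CORA can fall back to CPU inference.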
```
python src/boot_sequence.py
```

- Full cyberpunk visual display
- 10-phase diagnostic with TTS
- Live system stats
- Dynamic AI responses

Other ways to launch:

```
python src/gui_launcher.py            # GUI launcher
python cora.py                        # CLI application
python src/boot_sequence.py --quick   # Quick boot
```

Visit unityailab.com/CORA in your browser:
- Make sure Ollama is running (`ollama serve`)
- Click `[ BACKEND TERMINAL ]` to check model status
- Download any missing models shown in the terminal
- (Optional) Run the stats server for live GPU/CPU stats: `python services/stats_server.py`
| File | Purpose |
|---|---|
| `config/settings.json` | GUI settings (TTS, Ollama, STT) |
| `config/voice_commands.json` | Wake words, voice config |
| `personality.json` | AI personality traits |
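These JSON files can be read at startup with a small stdlib helper that falls back to defaults when a file is missing or malformed. A sketch, assuming the flat section/key shape of the example settings in this guide (the function name `load_settings` and the defaults are illustrative, not CORA's actual API):

```python
import json
from pathlib import Path

# Illustrative defaults mirroring this guide's example settings.json
DEFAULTS = {
    "tts": {"enabled": True, "rate": 150, "volume": 1.0},
    "ollama": {"enabled": True, "model": "llama3.2"},
    "stt": {"enabled": True, "sensitivity": 0.7},
}

def load_settings(path="config/settings.json"):
    """Merge the user's settings.json over the defaults.

    Sketch only - a missing or unparseable file just yields the
    defaults, so a fresh checkout still boots.
    """
    merged = {k: dict(v) for k, v in DEFAULTS.items()}
    try:
        user = json.loads(Path(path).read_text(encoding="utf-8"))
    except (FileNotFoundError, json.JSONDecodeError):
        return merged
    for section, values in user.items():
        merged.setdefault(section, {}).update(values)
    return merged
```

Merging per section means a user can override just `"tts": {"rate": 180}` without restating every other key.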
`config/settings.json`:

```
{
  "tts": {
    "enabled": true,
    "rate": 150,
    "volume": 1.0
  },
  "ollama": {
    "enabled": true,
    "model": "llama3.2"
  },
  "stt": {
    "enabled": true,
    "sensitivity": 0.7
  }
}
```

Check Python version:
```
python --version
```

Must be 3.10+.

Check dependencies:

```
pip install -r requirements.txt
```

Check Ollama is running:

```
ollama list
```

Start Ollama:

```
ollama serve
```

Pull models:

```
ollama pull llama3.2
```

Check Kokoro:

```
pip install kokoro soundfile sounddevice
```

Test TTS:

```
python -c "from voice.tts import speak; speak('Hello')"
```

Check NVIDIA drivers:

```
nvidia-smi
```

Update drivers: nvidia.com/drivers

Check OpenCV:

```
pip install opencv-python
```

Test webcam:

```
python -c "import cv2; print(cv2.VideoCapture(0).isOpened())"
```

After installation:
```
C.O.R.A/
├── src/
│   ├── boot_sequence.py   # Main boot with visual display
│   ├── cora.py            # CLI application
│   └── gui_launcher.py    # GUI launcher
├── ui/
│   ├── boot_display.py    # Visual boot display
│   ├── app.py             # Main GUI
│   └── panels.py          # GUI panels
├── voice/
│   ├── tts.py             # Kokoro TTS
│   ├── stt.py             # Speech recognition
│   └── wake_word.py       # Wake word detection
├── ai/
│   ├── ollama.py          # Ollama client
│   └── context.py         # Context management
├── cora_tools/            # Python tool modules (20+)
├── tools/                 # Downloaded binaries (mpv, ffmpeg)
├── services/              # Weather, location, etc.
├── config/
│   └── settings.json      # Configuration
├── data/
│   ├── images/            # Generated images
│   └── camera/            # Camera captures
└── requirements.txt       # Python dependencies
```
Note: cora_tools/ = Python code, tools/ = binaries only
To update CORA:

```
git pull origin main
pip install -r requirements.txt --upgrade
ollama pull llama3.2
ollama pull llava
```

- Website: https://www.unityailab.com
- GitHub: https://github.com/Unity-Lab-AI
- Email: unityailabcontact@gmail.com
C.O.R.A v1.0.0 - Setup Guide Unity AI Lab - Hackall360, Sponge, GFourteen