Aamati is a comprehensive system that combines machine learning with real-time audio processing to analyze musical mood and apply intelligent effects. It consists of a Python-based ML training pipeline and a JUCE-based audio plugin for real-time processing.
Project automation scripts live in scripts/. Long-form guides are in docs/ (see docs/COMMANDS.md for build, install, and run commands).
# Run the comprehensive setup script (from repository root)
python3 scripts/setup_aamati.py
# This will:
# - Install all Python dependencies
# - Setup ONNX Runtime
# - Organize the ML structure
# - Create build scripts
# - Setup Resources directory

# Interactive training (recommended for first time)
python3 scripts/run_aamati.py --interactive
# Non-interactive training (for automation)
python3 scripts/run_aamati.py --non-interactive

# Build the plugin
python3 scripts/run_aamati.py --build-only
# Or use platform-specific scripts
./build_macos.sh # macOS
./build_linux.sh # Linux

# Run comprehensive test suite
python3 scripts/test_aamati.py
# Run specific tests
python3 scripts/test_aamati.py --test ml
python3 scripts/test_aamati.py --test juce

# Extract features from MIDI files
python3 aamati_ml/main.py --mode extract --interactive
# Non-interactive mode
python3 aamati_ml/main.py --mode extract --non-interactive

# Train all models
python3 aamati_ml/main.py --mode train
# Train specific model groups
python3 aamati_ml/scripts/train_models.py --models basic advanced

# Generate mood predictions
python3 aamati_ml/main.py --mode predict

# Run complete training workflow
python3 aamati_ml/main.py --mode automate --workflow training
# Run data management
python3 aamati_ml/main.py --mode automate --workflow data-management
# Check system status
python3 aamati_ml/main.py --mode status

- Real-time feature extraction: Analyzes incoming audio in real time
- Mood prediction: Uses trained ML models to predict musical mood
- Dynamic processing: Applies different effects based on predicted mood
- Traditional EQ: High-pass and low-pass filters
- Mid/Side processing: Stereo image manipulation
- 10 mood categories: chill, energetic, suspenseful, uplifting, ominous, romantic, gritty, dreamy, frantic, focused
- Real-time analysis: Processes audio every buffer
- Configurable sensitivity: User can control ML processing intensity
- Live status display: Shows model status and predictions
- High Pass Frequency: 20Hz - 5kHz
- Low Pass Frequency: 5kHz - 22kHz
- ML Sensitivity: 0.1x - 2.0x
- ML Enabled: Toggle for ML processing
- Live Status: Model status, predicted mood, feature extraction status
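The Mid/Side stage mentioned above boils down to a sum/difference transform. The real processing happens inside the JUCE plugin; this pure-Python sketch only illustrates the underlying math, with `side_gain` standing in for a hypothetical stereo-width control.

```python
# Illustrative Mid/Side encode/decode for stereo image manipulation.
# This is a sketch of the math only, not the plugin's actual C++ code.

def ms_encode(left, right):
    """Split a stereo pair into mid (sum) and side (difference) signals."""
    mid = [(l + r) * 0.5 for l, r in zip(left, right)]
    side = [(l - r) * 0.5 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side, side_gain=1.0):
    """Rebuild left/right; side_gain widens (>1) or narrows (<1) the image."""
    left = [m + side_gain * s for m, s in zip(mid, side)]
    right = [m - side_gain * s for m, s in zip(mid, side)]
    return left, right

left = [0.5, 0.2, -0.1]
right = [0.3, 0.4, -0.3]
mid, side = ms_encode(left, right)
l2, r2 = ms_decode(mid, side)  # side_gain=1.0 is a perfect round trip
```

With `side_gain=0.0` the decoded output collapses to mono (both channels equal the mid signal), which is why M/S is a convenient domain for stereo-width adjustments.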
- ML Features: Update src/core/feature_extractor.py
- Mood Processing: Update Source/PluginProcessor.cpp
- UI Controls: Update Source/PluginEditor.cpp
- Models: Add new training scripts in src/models/
- Retrain models using Python scripts
- Export to ONNX format
- Update model loading code if needed
- Test with the plugin
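The retrain-and-export cycle above might look like the following sketch. It assumes a scikit-learn classifier and the skl2onnx converter; the toy data, feature layout, and model settings are illustrative, not the project's actual training code (see aamati_ml/scripts/train_models.py for that).

```python
# Sketch of the retrain -> ONNX export cycle. The features, labels,
# and hyperparameters here are made up for illustration.
from sklearn.ensemble import RandomForestClassifier

# Toy training data: rows of [tempo, density, energy] -> mood label.
X = [[70, 4, 3], [150, 30, 14], [90, 10, 7], [200, 35, 16]]
y = ["chill", "energetic", "uplifting", "frantic"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Export to ONNX so the plugin can run the model via ONNX Runtime.
# skl2onnx is optional here; the export step is skipped if missing.
try:
    from skl2onnx import convert_sklearn
    from skl2onnx.common.data_types import FloatTensorType

    onnx_model = convert_sklearn(
        model, initial_types=[("input", FloatTensorType([None, 3]))]
    )
    with open("groove_mood_model.onnx", "wb") as f:
        f.write(onnx_model.SerializeToString())
except ImportError:
    pass  # converter not installed; retraining still succeeded
```

After exporting, the plugin only needs a model-loading code change if the input shape or output names differ from the previous model.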
# Run all tests
python3 scripts/test_aamati.py
# Run specific test categories
python3 scripts/test_aamati.py --test python
python3 scripts/test_aamati.py --test ml
python3 scripts/test_aamati.py --test juce

The system uses multiple ML models:
- Main Mood Model (groove_mood_model.onnx): Predicts overall mood
- Feature Classification Models:
- Energy classification
- Dynamic intensity
- Swing detection
- Fill activity
- Rhythmic density
- FX character
- Timing feel
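One way the per-feature classifier outputs could be combined with the main mood model is sketched below. The fusion rule, the `boost` parameter, and the feature-flag names are hypothetical, shown only to illustrate how auxiliary classifiers can refine the main model's probabilities.

```python
# Hypothetical fusion of the main mood model's probabilities with the
# per-feature classifiers listed above. The real models are ONNX files
# run inside the plugin; this pure-Python sketch only shows the idea.

MOODS = ["chill", "energetic", "suspenseful", "uplifting", "ominous",
         "romantic", "gritty", "dreamy", "frantic", "focused"]

def fuse_predictions(mood_probs, feature_flags, boost=0.15):
    """Nudge the main model's mood probabilities using feature classifiers.

    mood_probs: {mood: probability} from the main mood model
    feature_flags: e.g. {"high_energy": True, "swing": False}
    """
    scores = dict(mood_probs)
    if feature_flags.get("high_energy"):
        for mood in ("energetic", "frantic", "gritty"):
            scores[mood] = scores.get(mood, 0.0) + boost
    if feature_flags.get("swing"):
        scores["chill"] = scores.get("chill", 0.0) + boost
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}

probs = {m: 0.1 for m in MOODS}  # uniform prior from the main model
fused = fuse_predictions(probs, {"high_energy": True})
best = max(fused, key=fused.get)  # one of the energy-boosted moods
```

Renormalizing keeps the fused output a valid probability distribution, so downstream effect mapping can treat it the same as the raw model output.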
| Mood | Characteristics | Tempo (BPM) | Density | Energy |
|---|---|---|---|---|
| Chill | Loose, minimal, mellow | 60-115 | 2-10 | 2-5 |
| Energetic | Tight, aggressive, driving | 120-175 | 20-40 | 13-15 |
| Suspenseful | Tense, minor scales, stabs | 75-125 | 6-18 | 6-9 |
| Uplifting | Bright, major harmonies | 100-160 | 10-26 | 7-13 |
| Ominous | Brooding, dark, sparse | 55-100 | 4-12 | 5-8 |
| Romantic | Flowing, expressive, warm | 60-125 | 10-20 | 5-9 |
| Gritty | Dirty, mechanical, raw | 135-180 | 15-33 | 10-14 |
| Dreamy | Reverb-heavy, washed | 70-110 | 5-15 | 5-8 |
| Frantic | Chaotic, rapid, wild | 160-250 | 22-40 | 14-17 |
| Focused | Steady, repetitive, precise | 83-135 | 8-22 | 8-11 |
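The table above can also be read as a lookup structure. The sketch below transcribes the ranges verbatim and finds which mood profiles a clip's tempo, density, and energy fall into; the function name and return shape are illustrative.

```python
# Mood profiles transcribed from the table above (tempo in BPM;
# density and energy use the table's unit-less scales).
MOOD_PROFILES = {
    "chill":       {"tempo": (60, 115),  "density": (2, 10),  "energy": (2, 5)},
    "energetic":   {"tempo": (120, 175), "density": (20, 40), "energy": (13, 15)},
    "suspenseful": {"tempo": (75, 125),  "density": (6, 18),  "energy": (6, 9)},
    "uplifting":   {"tempo": (100, 160), "density": (10, 26), "energy": (7, 13)},
    "ominous":     {"tempo": (55, 100),  "density": (4, 12),  "energy": (5, 8)},
    "romantic":    {"tempo": (60, 125),  "density": (10, 20), "energy": (5, 9)},
    "gritty":      {"tempo": (135, 180), "density": (15, 33), "energy": (10, 14)},
    "dreamy":      {"tempo": (70, 110),  "density": (5, 15),  "energy": (5, 8)},
    "frantic":     {"tempo": (160, 250), "density": (22, 40), "energy": (14, 17)},
    "focused":     {"tempo": (83, 135),  "density": (8, 22),  "energy": (8, 11)},
}

def matching_moods(tempo, density, energy):
    """Return every mood whose table ranges contain all three values."""
    hits = []
    for mood, p in MOOD_PROFILES.items():
        if (p["tempo"][0] <= tempo <= p["tempo"][1]
                and p["density"][0] <= density <= p["density"][1]
                and p["energy"][0] <= energy <= p["energy"][1]):
            hits.append(mood)
    return hits

matching_moods(90, 5, 4)  # -> ["chill"]
```

Note that the ranges overlap (a clip at 150 BPM, density 25, energy 14 matches both energetic and gritty), which is why the ML models, rather than the table alone, make the final call.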
- Models not loading: Check that the models exist in the Resources/ folder
- Build failures: Ensure JUCE and ONNX Runtime are properly installed
- Import errors: Run python3 scripts/setup_aamati.py to install dependencies
- Plugin not working: Check console output for error messages
# Enable verbose output (test runner)
python3 scripts/test_aamati.py --verbose

Check logs in aamati_ml/logs/ for detailed information about ML operations.
- Commands (build, install, start): docs/COMMANDS.md
- Setup Guide: docs/JUCE_PLUGIN_SETUP.md
- Usage / training / integration testing: docs/COMPLETE_USAGE_GUIDE.md, docs/TRAINING_GUIDE.md, docs/INTEGRATION_TESTING_GUIDE.md
- ML Documentation: aamati_ml/README.md
- API Reference: See docstrings in source files
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- JUCE framework for audio plugin development
- ONNX Runtime for model inference
- Pretty MIDI for MIDI file processing
- Scikit-learn for machine learning
- All contributors and testers