Releases: jaredrummler/consoul
v0.4.2 - CLI Tool Support Fixes
Bug Fix Release
Fixed
- Fixed CLI tool support for all Ollama models (removed hardcoded whitelist)
- Fixed missing tool documentation in CLI system prompts
- Fixed AttributeError on Ctrl+C (session_id path correction)
Changed
- Reverted to optimistic tool binding for Ollama models with graceful fallback
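Optimistic binding here means attempting to attach tool schemas to any model and handling failure at runtime, rather than consulting a hardcoded whitelist. A minimal sketch of that pattern (names are illustrative, not Consoul's actual API):

```python
class ToolBindingNotSupported(Exception):
    """Raised by a model wrapper that cannot accept tool schemas."""

def bind_tools_optimistically(model, tools):
    """Try to bind tools to any model; fall back gracefully if unsupported.

    Instead of checking a hardcoded list of tool-capable models, attempt
    the bind and catch the failure at runtime. Returns the (possibly
    unbound) model and whether tools are active.
    """
    try:
        return model.bind_tools(tools), True
    except (ToolBindingNotSupported, NotImplementedError):
        # Model can't do native tool calling; use it as-is, without tools.
        return model, False
```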
See CHANGELOG.md for full details.
Consoul v0.4.1 - Hotfix
Hotfix Release - Critical Bug Fix
Fixed
🐛 Critical: Fixed TypeError in chat, ask, and resume commands
This hotfix resolves a critical bug introduced in v0.4.0 during SDK refactoring that caused `consoul chat`, `consoul ask`, and `consoul resume` commands to fail with:

```
TypeError: ChatSession.__init__() got an unexpected keyword argument 'tool_registry'
```
Changes
- Removed invalid `tool_registry` parameter from `ChatSession.__init__()` calls
- Added `approval_provider` parameter to `ConversationService.from_config()`
- Fixed approval provider flow: CLI → ChatSession → ConversationService → ToolRegistry
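The corrected wiring can be sketched as follows. The class and parameter names come from the release notes, but the constructor bodies are simplified assumptions, not the SDK's exact implementation:

```python
# Sketch of the fixed approval-provider flow: the CLI creates a single
# approval provider and each layer passes it down, instead of ChatSession
# trying to construct a ToolRegistry directly (the v0.4.0 bug).

class ToolRegistry:
    def __init__(self, approval_provider):
        self.approval_provider = approval_provider

class ConversationService:
    @classmethod
    def from_config(cls, config, approval_provider=None):
        svc = cls.__new__(cls)
        svc.registry = ToolRegistry(approval_provider)
        return svc

class ChatSession:
    def __init__(self, config, approval_provider=None):
        # Note: no tool_registry kwarg here.
        self.service = ConversationService.from_config(
            config, approval_provider=approval_provider
        )
```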
Testing
Verified fix with:

```bash
echo "content" | consoul ask --stdin "summarize"  # ✅ Works
```

All affected commands now function correctly.
Installation
```bash
pip install consoul==0.4.1 --upgrade
```

Consoul v0.4.0 - SDK Decoupling & Thinking Mode
Major Feature Release - SDK Decoupling & Thinking Mode Support
This release represents a significant architectural milestone with the completion of SDK decoupling (EPIC-012, EPIC-013), making Consoul's AI capabilities available as a standalone SDK. It also introduces thinking mode support for reasoning models and modern async streaming architecture.
Stats: 104 commits since v0.3.0 (22 features, 37 fixes, 0 breaking changes)
Added
SDK Service Layer (EPIC-013)
- 🏗️ ConversationService - Centralized conversation management with message history tracking, async streaming support, and tool execution orchestration
- 🤖 ModelService - Unified interface for all AI providers with dynamic local model discovery
- 🛠️ ToolService - Tool catalog and execution with async approval workflow
Thinking Mode Support (SOUL-280, SOUL-283)
- 🧠 Native support for reasoning models (phi4-reasoning:14b, DeepSeek-R1, Qwen QWQ, o1-preview)
- Automatic detection of thinking tags with real-time streaming display
- Collapsible thinking sections in message bubbles
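The tag-detection step can be sketched as a small parser. The `<think>` tag name and function below are assumptions for illustration; reasoning models vary in the markers they emit, and the real streaming parser is more involved:

```python
def split_thinking(text, open_tag="<think>", close_tag="</think>"):
    """Split a model response into (thinking, answer) parts.

    Reasoning models such as DeepSeek-R1 emit their chain of thought
    inside tags; the UI can render that part as a collapsible section.
    An unclosed tag means the model is still mid-thought while streaming.
    """
    start = text.find(open_tag)
    if start == -1:
        return "", text
    end = text.find(close_tag, start)
    if end == -1:
        # Still streaming inside the thinking block.
        return text[start + len(open_tag):], ""
    thinking = text[start + len(open_tag):end]
    answer = text[:start] + text[end + len(close_tag):]
    return thinking.strip(), answer.strip()
```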
Async Streaming Architecture (SOUL-279, SOUL-234)
- 🔄 Modern async/await streaming with StreamingOrchestrator
- 🌐 WebSocket streaming support with FastAPI proof-of-concept
Headless SDK Support (SOUL-251)
- ✨ stream_chunks() function for UI-agnostic streaming
- 📦 StreamChunk model with pure data (no UI dependencies)
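A pure-data chunk model of this kind might look like the following sketch. The field names and the adapter are assumptions, not the SDK's exact definitions; the point is that nothing here imports a UI framework:

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class StreamChunk:
    """Pure-data streaming event -- no Textual/Rich dependencies."""
    text: str
    is_thinking: bool = False
    done: bool = False

def stream_chunks(token_iter) -> Iterator[StreamChunk]:
    """Adapt a raw token iterator into UI-agnostic chunks.

    Any frontend (TUI, WebSocket, plain stdout) can consume these.
    """
    for token in token_iter:
        yield StreamChunk(text=token)
    yield StreamChunk(text="", done=True)
```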
Enhanced Model Picker (SOUL-276)
- 🎨 Card-based model picker UI with provider filtering and search
Fixed
- 🐛 IndexError in docs generator (SOUL-272)
- 🎨 Thinking mode UI improvements
- 🔧 Model switching and tool execution bugs
- 🌐 WebSocket streaming fixes (SOUL-277)
Changed
- 🏗️ TUI refactored to use SDK services (EPIC-012, SOUL-270) - Extracted 1,600+ lines to reusable SDK
- 📝 System prompt building moved to SDK (SOUL-253)
- 🎯 CLI session orchestration extracted (SOUL-252, SOUL-254)
- 📊 Model registry integration replacing static catalog
Full changelog: https://github.com/goatbytes/consoul/blob/main/CHANGELOG.md
v0.3.0 - Inline Command Execution
Feature Release - This release introduces powerful inline shell command execution, allowing users to run shell commands directly from the chat interface and automatically include their output in AI conversations.
🚀 Major Features
Inline Shell Command Execution (SOUL-196)
Run shell commands directly from the chat interface with !-prefix syntax:
- Standalone mode: `!ls -la` executes the command and displays output
- Inline mode: `Here is the file: !cat README.md` embeds output in the message
- Support for complex commands with pipes, redirects, and arguments
- Automatic output truncation for large results
- Real-time execution with visual feedback
- Syntax-highlighted command output
- Structured context injection for LLM comprehension
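The `!`-prefix detection described above can be approximated with a single regex; this is an illustrative sketch, not Consoul's actual parser (which handles quoting and edge cases the tests below don't cover):

```python
import re

# Matches a standalone command ("!ls -la" as the whole message) or an
# inline command embedded in prose ("Here is the file: !cat README.md").
# The "!" must start the message or follow whitespace, so "A!B" is ignored.
INLINE_CMD = re.compile(r"(?:^|(?<=\s))!(\S[^\n]*)")

def extract_commands(message: str) -> list[str]:
    """Return the shell commands referenced with !-prefix syntax."""
    return [m.group(1).strip() for m in INLINE_CMD.finditer(message)]
```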
AI Provider Improvements
- Anthropic prompt caching cost tracking (SOUL-186) - Track cache creation and read tokens separately
- ChatLlamaCpp context size extraction (SOUL-231) - Runtime extraction of actual n_ctx from loaded models
- Pattern-based intelligent defaults - Automatic context size detection for new AI models
TUI Enhancements
- Search configuration improvements (SOUL-183)
- InitializationErrorScreen (SOUL-192) - Graceful handling of failures
CLI Enhancements
- PDF support for `--file` and `--glob` flags (SOUL-211)
- `--system-file` flag (SOUL-212) - Read system prompts from files
🔧 Fixes
Security & Dependencies (SOUL-228)
- torch updated to 2.9.1 (CVE-2025-32434, CVE-2025-3730, CVE-2025-2953)
- urllib3 updated to 2.6.0 (CVE-2025-66471, CVE-2025-66418)
AI Provider Fixes
- OpenAI stream_options fix (SOUL-233) - Removed unconditional parameter for non-streaming calls
TUI Fixes
- Model field made optional in ProfileConfig
- Temperature field added to profile editor
- Markdown syntax highlighting improvements
📝 Changed
- Command output always expanded by default for better UX
- Context injection format improved with structured XML-like tags
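Structured context injection of this sort can be sketched as below; the tag format and truncation limit are assumptions for illustration, not the exact format Consoul emits:

```python
def inject_command_context(command: str, output: str, max_chars: int = 4000) -> str:
    """Wrap captured shell output in structured XML-like tags for the LLM.

    Explicit tags tell the model where command output begins and ends,
    which is far easier to parse than raw text pasted into a message.
    Oversized output is truncated so a single command can't blow the
    context window.
    """
    if len(output) > max_chars:
        output = output[:max_chars] + "\n[... output truncated ...]"
    return (
        f"<command_output command={command!r}>\n"
        f"{output}\n"
        f"</command_output>"
    )
```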
📚 Documentation
- SDK integration guide for real-world usage
- Installation instructions updated
- CLI usage examples added to README
- TUI documentation completed
- Performance optimization documentation completed
✅ Testing
- 20 comprehensive unit tests for command detection
- Pattern matching validation
- Edge case handling
⚡ Performance
- Async command execution via thread pool
- Smart output truncation for large results
- Ollama context sizes cached
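The on-disk context-size cache might work roughly like this sketch; the JSON layout and the `probe` callable (whatever actually queries Ollama for `n_ctx`) are assumptions:

```python
import json
from pathlib import Path

def get_context_size(model: str, cache_path: Path, probe) -> int:
    """Return a model's context window, caching probe results on disk.

    The cache is a simple {model_name: n_ctx} JSON map, so repeat
    lookups never hit Ollama again.
    """
    cache = {}
    if cache_path.exists():
        cache = json.loads(cache_path.read_text())
    if model not in cache:
        cache[model] = probe(model)
        cache_path.parent.mkdir(parents=True, exist_ok=True)
        cache_path.write_text(json.dumps(cache))
    return cache[model]
```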
🔄 Upgrade Notes
- Backward compatible with 0.2.x
- New inline command execution requires no configuration
- Security updates strongly recommended
- Configuration: `~/.config/consoul/config.yaml`
- History: `~/.local/share/consoul/conversations.db`
- Ollama cache: `~/.consoul/ollama_context_cache.json`
📦 Installation
```bash
pip install --upgrade consoul[tui]
```

See CHANGELOG.md for complete details.
v0.2.0 - First Public Release
First Public Release 🎉
This is the first public release of Consoul, featuring a complete TUI, comprehensive CLI commands, multi-provider AI support, and powerful tool calling capabilities.
Installation
```bash
pip install consoul
```

Quick Start

```bash
# Set your API key
export ANTHROPIC_API_KEY=your-key-here  # Claude
export OPENAI_API_KEY=your-key-here     # GPT-4
export GOOGLE_API_KEY=your-key-here     # Gemini

# Launch the TUI
consoul

# Or use the CLI
consoul ask "What is 2+2?"
```

Highlights
🎨 Beautiful TUI
- Rich, interactive terminal interface powered by Textual
- Multi-turn conversations with streaming responses
- Conversation history and search
- File attachments and image analysis
- Customizable themes (light/dark)
🤖 Multi-Provider Support
- Anthropic Claude - Claude 3.5 Sonnet, Opus, Haiku
- OpenAI - GPT-4o, GPT-4, GPT-3.5
- Google Gemini - Gemini 2.0 Flash, Pro
- Ollama - Run models locally
🛠️ AI-Powered Tools
- File editing with safety controls
- Code search and analysis
- Image analysis (vision support)
- Bash command execution
- Web search integration
📝 Complete CLI
- `consoul ask` - One-off questions
- `consoul chat` - Interactive sessions
- `consoul history` - Conversation management
- `consoul config` - Configuration management
What's New
CLI Commands
- 🎯 `consoul ask` - One-off questions with `--stdin`, `--file`, `--glob`, and `--system` flags
- 💬 `consoul chat` - Interactive chat with full context
- 📚 `consoul history` - Complete conversation management (list, show, export, search, resume)
- ⚙️ `consoul config` - Configuration management
Image Analysis
- 📸 Multimodal vision capabilities for Claude 3.5 Sonnet, GPT-4o, Gemini 2.0 Flash, and Ollama LLaVA
- Analyze screenshots, diagrams, UI mockups, and other visual content
- Security features: file size limits, extension filtering, path blocking
Token Counting Improvements
- 🚀 HuggingFace tokenizer integration for Ollama models
- 100% accurate token counting (vs 66% with character approximation)
- <5ms performance (vs 3-10+ seconds with API calls)
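The exact-when-possible, approximate-otherwise strategy can be sketched like this. The ~4 characters/token heuristic is the common rule of thumb for the fallback; the mapping from each Ollama model to its matching HuggingFace tokenizer is omitted here:

```python
def count_tokens(text: str, tokenizer=None) -> int:
    """Count tokens exactly when a tokenizer is supplied, else approximate.

    In Consoul, `tokenizer` would be a HuggingFace tokenizer matched to
    the active Ollama model; a local tokenizer runs in milliseconds,
    versus seconds for a round-trip to a counting API.
    """
    if tokenizer is not None:
        return len(tokenizer.encode(text))
    # Rough fallback: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)
```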
Performance
- Token counting: 3-10+ seconds → <5ms for Ollama models
- Real-time streaming across all providers
Security
- Image analysis security with path blocking
- Permission policies (paranoid, balanced, trusting, unrestricted)
- Audit logging for tool executions
- Command validation
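The four policy names suggest an escalation ladder. Here is a toy model of how such a policy might gate tool approval; the risk levels and thresholds are invented for illustration and are not Consoul's actual rules:

```python
# Risk levels a tool might declare for itself (illustrative).
READ_ONLY, WRITES_FILES, RUNS_COMMANDS = 0, 1, 2

# Highest risk level each policy auto-approves; anything riskier
# prompts the user for confirmation first.
AUTO_APPROVE_UP_TO = {
    "paranoid": -1,            # approve nothing automatically
    "balanced": READ_ONLY,
    "trusting": WRITES_FILES,
    "unrestricted": RUNS_COMMANDS,
}

def requires_approval(policy: str, tool_risk: int) -> bool:
    """True if the user must confirm before this tool runs."""
    return tool_risk > AUTO_APPROVE_UP_TO[policy]
```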
Upgrade Notes
This is the first public release. Configuration is stored in:
- Config: `~/.config/consoul/config.yaml`
- History: `~/.local/share/consoul/conversations.db`
See the configuration guide for all available options.
Full Changelog: https://github.com/goatbytes/consoul/blob/v0.2.0/CHANGELOG.md