This guide is for developers who want to contribute to or modify Shello CLI.
If you just want to use Shello CLI, see README.md for installation instructions.
- `shello` or `shello chat` - Start interactive chat session
- `shello setup` - Interactive configuration wizard
- `shello config` - Display current configuration
- `shello config --edit` - Open settings in default editor
- `shello config get <key>` - Get specific setting value
- `shello config set <key> <value>` - Set specific setting value
- `shello config reset` - Reset settings to defaults
- `shello --version` - Show version information
- `shello --help` - Show help message
```shell
python main.py setup     # Interactive configuration wizard
python main.py chat      # Start chat session
python main.py config    # Show current configuration
```

In-chat commands:

- `/new` - Start a new conversation
- `/switch` - Switch between AI providers (OpenAI, Bedrock, etc.)
- `/help` - Show available commands
- `/about` - Show about information
- `/quit` or `/exit` - Exit the application
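For contributors: slash commands like these are typically routed through a small lookup table. A minimal sketch of that pattern (illustrative only, not Shello's actual implementation):

```python
# Hypothetical slash-command dispatch; the real routing lives in the
# chat session code and may differ.
COMMANDS = {
    "/new": lambda: "started new conversation",
    "/help": lambda: "available commands: /new /switch /help /about /quit",
    "/about": lambda: "Shello CLI",
}

def dispatch(line: str) -> str:
    """Route a slash command to its handler; /quit and /exit end the loop."""
    cmd = line.strip().split()[0]
    if cmd in ("/quit", "/exit"):
        return "bye"
    handler = COMMANDS.get(cmd)
    return handler() if handler else f"unknown command: {cmd}"
```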
Using UV (Recommended - 10-100x faster):

```shell
# Install uv if you haven't already
# Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

# Linux/macOS
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone and set up
git clone https://github.com/om-mapari/shello-cli.git
cd shello-cli

# Create virtual environment
uv venv

# Activate virtual environment
# Windows (PowerShell): .venv\Scripts\Activate.ps1
# Windows (CMD): .venv\Scripts\activate.bat
# Linux/macOS: source .venv/bin/activate

# Install dependencies
uv pip install -e ".[dev]"
```

Option A: Interactive Setup (Recommended)
```shell
python main.py setup
```

The setup wizard will guide you through:
- AI provider selection (OpenAI-compatible API or AWS Bedrock)
- Provider-specific configuration (API keys or AWS credentials)
- Default model selection
Option B: Manual Configuration
For OpenAI-compatible APIs:
Create ~/.shello_cli/user-settings.yml:
```yaml
# =============================================================================
# SHELLO CLI USER SETTINGS
# =============================================================================
provider: openai
openai_config:
  provider_type: openai
  api_key: your-api-key-here
  base_url: https://api.openai.com/v1
  default_model: gpt-4o
  models:
    - gpt-4o
    - gpt-4o-mini
    - gpt-4-turbo
```

For AWS Bedrock:
Create ~/.shello_cli/user-settings.yml:
```yaml
provider: bedrock
bedrock_config:
  provider_type: bedrock
  aws_region: us-east-1
  aws_profile: default
  default_model: anthropic.claude-3-5-sonnet-20241022-v2:0
  models:
    - anthropic.claude-3-5-sonnet-20241022-v2:0
    - anthropic.claude-3-opus-20240229-v1:0
    - amazon.nova-pro-v1:0
```

For multiple providers:
```yaml
provider: openai
openai_config:
  provider_type: openai
  api_key: your-openai-key
  base_url: https://api.openai.com/v1
  default_model: gpt-4o
  models:
    - gpt-4o
    - gpt-4o-mini
bedrock_config:
  provider_type: bedrock
  aws_region: us-east-1
  aws_profile: default
  default_model: anthropic.claude-3-5-sonnet-20241022-v2:0
  models:
    - anthropic.claude-3-5-sonnet-20241022-v2:0
```

Option C: Environment Variables
OpenAI-compatible:

```shell
export OPENAI_API_KEY="your-api-key"
```

AWS Bedrock:

```shell
export AWS_REGION="us-east-1"
export AWS_PROFILE="default"
# Or use explicit credentials:
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
```

Verify your configuration:

```shell
python main.py config
```

Location: `~/.shello_cli/user-settings.yml`
Contains AI provider configuration, credentials, and default preferences. The file uses YAML format with helpful comments and documentation.
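Conceptually, loading maps the parsed YAML onto typed settings objects with defaults. A rough sketch of that idea (hypothetical names; the real dataclasses live in `shello_cli/settings/models.py` and may differ):

```python
from dataclasses import dataclass, field

# Illustrative settings model, not Shello's actual classes.
@dataclass
class OpenAIConfig:
    provider_type: str = "openai"
    api_key: str = ""
    base_url: str = "https://api.openai.com/v1"
    default_model: str = "gpt-4o"
    models: list = field(default_factory=lambda: ["gpt-4o"])

@dataclass
class UserSettings:
    provider: str = "openai"
    openai_config: OpenAIConfig = field(default_factory=OpenAIConfig)

    @classmethod
    def from_dict(cls, data: dict) -> "UserSettings":
        """Build settings from a parsed user-settings.yml dict,
        falling back to defaults for anything unspecified."""
        oc = OpenAIConfig(**data.get("openai_config", {}))
        return cls(provider=data.get("provider", "openai"), openai_config=oc)
```

Only keys present in the file are set; everything else keeps its default, which matches the "only specify values you want to override" behavior described below.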
OpenAI-compatible API configuration:
```yaml
# =============================================================================
# SHELLO CLI USER SETTINGS
# =============================================================================
# Edit this file to customize your settings.
# Only specify values you want to override - defaults are used for the rest.
# =============================================================================
# PROVIDER CONFIGURATION
# =============================================================================
provider: openai
openai_config:
  provider_type: openai
  api_key: your-api-key-here  # Or use OPENAI_API_KEY env var
  base_url: https://api.openai.com/v1
  default_model: gpt-4o
  models:
    - gpt-4o
    - gpt-4o-mini
    - gpt-4-turbo

# =============================================================================
# OUTPUT MANAGEMENT (optional - uses defaults if not specified)
# =============================================================================
# Controls how command output is truncated and displayed.
# Uncomment and modify to customize:
#
# output_management:
#   enabled: true
#   show_summary: true
#   limits:
#     list: 5000
#     search: 10000
#     log: 15000
#     json: 20000
#     default: 8000
#   strategies:
#     list: first_only
#     search: first_only
#     log: last_only
#     default: first_last

# =============================================================================
# COMMAND TRUST (optional - uses defaults if not specified)
# =============================================================================
# Controls which commands require approval before execution.
# Uncomment and modify to customize:
#
# command_trust:
#   enabled: true
#   yolo_mode: false
#   approval_mode: user_driven
#   allowlist:
#     - ls
#     - pwd
#     - git status
```

AWS Bedrock configuration:
```yaml
provider: bedrock
bedrock_config:
  provider_type: bedrock
  aws_region: us-east-1
  aws_profile: default
  default_model: anthropic.claude-3-5-sonnet-20241022-v2:0
  models:
    - anthropic.claude-3-5-sonnet-20241022-v2:0
    - anthropic.claude-3-opus-20240229-v1:0
    - amazon.nova-pro-v1:0
```

Multiple providers configured:
```yaml
provider: openai
openai_config:
  provider_type: openai
  api_key: your-openai-key
  base_url: https://api.openai.com/v1
  default_model: gpt-4o
  models:
    - gpt-4o
    - gpt-4o-mini
bedrock_config:
  provider_type: bedrock
  aws_region: us-east-1
  aws_profile: default
  default_model: anthropic.claude-3-5-sonnet-20241022-v2:0
  models:
    - anthropic.claude-3-5-sonnet-20241022-v2:0
```

Key features:
- YAML format with inline documentation
- Only configured values are saved (everything else uses defaults)
- All optional settings shown as comments with examples
- Environment variables can override any credential
Note: Credentials can also be set via environment variables:

- OpenAI: `OPENAI_API_KEY`
- Bedrock: `AWS_REGION`, `AWS_PROFILE`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`
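For example, resolving an API key so the environment variable wins over the file value might conceptually look like this (a hypothetical helper, not Shello's actual code):

```python
import os
from typing import Optional

def resolve_api_key(config_value: Optional[str]) -> Optional[str]:
    """Illustrative credential resolution: the OPENAI_API_KEY environment
    variable, if set, overrides the value from user-settings.yml."""
    return os.environ.get("OPENAI_API_KEY") or config_value
```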
Location: .shello/settings.yml (in your project directory)
Contains project-specific overrides:

```yaml
model: gpt-4o-mini
```

Location: `.shello/SHELLO.md` (in your project directory)
Add project-specific context for the AI:
```markdown
# Custom Instructions for My Project

When working in this project:
- This is a Python project using pytest for testing
- Always run tests after making changes
- Use type hints in all code
- Follow PEP 8 style guidelines
```

Output management settings can be customized in your user settings file. Default values are defined in `shello_cli/defaults.py`:
```python
# Character limits per output type
DEFAULT_CHAR_LIMITS = {
    "list": 5_000,       # ~1.2K tokens
    "search": 10_000,    # ~2.5K tokens
    "log": 15_000,       # ~3.7K tokens
    "json": 20_000,      # ~5K tokens
    "install": 8_000,    # ~2K tokens
    "build": 8_000,      # ~2K tokens
    "test": 15_000,      # ~3.7K tokens
    "default": 8_000,    # ~2K tokens
    "safety": 50_000,    # ~12.5K tokens (hard max)
}

# Truncation strategies
DEFAULT_STRATEGIES = {
    "list": "first_only",
    "search": "first_only",
    "log": "last_only",
    "json": "first_only",
    "install": "first_last",
    "build": "first_last",
    "test": "first_last",
    "default": "first_last",
}

# Cache settings
DEFAULT_CACHE_MAX_SIZE_MB = 100  # 100 MB, no TTL
```

To customize these settings, add an `output_management` section to your `~/.shello_cli/user-settings.yml`:
```yaml
output_management:
  enabled: true
  show_summary: true
  limits:
    list: 5000
    search: 10000
    default: 8000
  strategies:
    list: first_only
    log: last_only
    default: first_last
```

Settings are loaded in this order (later overrides earlier):
- Default values (in `shello_cli/defaults.py`)
- User settings (`~/.shello_cli/user-settings.yml`)
- Environment variables (`OPENAI_API_KEY`, `AWS_REGION`, `AWS_PROFILE`, etc.)
- Project settings (`.shello/settings.yml`)
Note: The settings system uses a merge strategy where:

- Only values you explicitly set in `user-settings.yml` override defaults
- Unspecified values automatically use defaults from `defaults.py`
- The denylist is always additive (your patterns are added to defaults for safety)
```shell
# Run all tests
pytest tests/ -v

# Run property-based tests
pytest tests/ -v -k "property"

# Run a specific test file
pytest tests/test_openai_client.py -v

# Run with coverage
pytest tests/ --cov=shello_cli --cov-report=html
```

The client supports any OpenAI-compatible API endpoint and AWS Bedrock foundation models.
OpenAI:

```yaml
provider: openai
openai_config:
  provider_type: openai
  base_url: https://api.openai.com/v1
  api_key: sk-...
  default_model: gpt-4o
  models:
    - gpt-4o
    - gpt-4o-mini
    - gpt-4-turbo
```

OpenRouter:

```yaml
provider: openai
openai_config:
  provider_type: openai
  base_url: https://openrouter.ai/api/v1
  api_key: sk-or-v1-...
  default_model: mistralai/devstral-2512:free
  models:
    - mistralai/devstral-2512:free
    - anthropic/claude-3.5-sonnet
```

AWS Bedrock:

```yaml
provider: bedrock
bedrock_config:
  provider_type: bedrock
  aws_region: us-east-1
  aws_profile: default
  default_model: anthropic.claude-3-5-sonnet-20241022-v2:0
  models:
    - anthropic.claude-3-5-sonnet-20241022-v2:0
    - anthropic.claude-3-opus-20240229-v1:0
    - amazon.nova-pro-v1:0
```

See `doc/BEDROCK_SETUP_GUIDE.md` for detailed AWS Bedrock setup instructions.
Local OpenAI-compatible server:

```yaml
provider: openai
openai_config:
  provider_type: openai
  base_url: http://localhost:1234/v1
  api_key: not-needed
  default_model: local-model-name
  models:
    - local-model-name
```

The settings manager automatically sets secure file permissions (0600) on user settings files to protect your API keys.
On Unix-like systems:
```shell
chmod 600 ~/.shello_cli/user-settings.yml
```

On Windows: File permissions are handled automatically by the application.
- Run `python main.py setup` to configure interactively
- Or check that `~/.shello_cli/user-settings.yml` exists and contains provider configuration
- Or set environment variables:
  - OpenAI: `OPENAI_API_KEY`
  - Bedrock: `AWS_REGION`, `AWS_PROFILE`, or `AWS_ACCESS_KEY_ID` + `AWS_SECRET_ACCESS_KEY`
- Verify your credentials are valid
- For OpenAI: Check that the `api_key` and `base_url` are correct
- For Bedrock: Verify AWS credentials and region are configured
- Ensure you have internet connectivity
- Verify the model name is supported by your provider
If you're using AWS Bedrock and see this error:
```shell
uv pip install boto3
```

Or reinstall all dependencies:

```shell
uv pip install -e .
```

- Make sure you've installed all dependencies: `uv pip install -e .`
- Verify you're in the correct Python environment (check with `which python` or `where python`)
- Check file locations:
  - User: `~/.shello_cli/user-settings.yml`
  - Project: `.shello/settings.yml`
- Verify YAML syntax is valid (use a YAML validator)
- Check file permissions (should be readable, automatically set to 0600 for security)
Run `python main.py config` to inspect the resolved configuration.

```
shello_cli/
├── agent/
│   ├── message_processor.py        # Message processing logic
│   ├── shello_agent.py             # Main agent logic
│   ├── template.py                 # System prompt template
│   └── tool_executor.py            # Tool execution
├── api/
│   ├── bedrock_client.py           # AWS Bedrock API client
│   ├── client_factory.py           # Client factory (creates appropriate client)
│   └── openai_client.py            # OpenAI-compatible API client
├── chat/
│   └── chat_session.py             # Chat session management
├── commands/
│   ├── command_detector.py         # Direct command detection
│   ├── context_manager.py          # Command history tracking
│   ├── direct_executor.py          # Direct command execution
│   └── settings_commands.py        # Settings management commands
├── settings/
│   ├── __init__.py                 # Public API
│   ├── manager.py                  # SettingsManager class
│   ├── models.py                   # Settings dataclasses
│   └── serializers.py              # YAML generation with comments
├── tools/
│   ├── bash_tool.py                # Bash command execution
│   ├── get_cached_output_tool.py   # Cache retrieval tool
│   └── output/                     # Output management system
│       ├── cache.py                # Output caching
│       ├── compressor.py           # Progress bar compression
│       ├── manager.py              # Output manager
│       ├── semantic.py             # Semantic line analysis
│       ├── truncator.py            # Smart truncation
│       └── type_detector.py        # Output type detection
├── ui/
│   ├── ui_renderer.py              # Terminal UI rendering
│   └── user_input.py               # User input handling
├── utils/
│   ├── output_utils.py             # Output utility functions
│   └── settings_manager.py         # Configuration management (re-exports from settings/)
├── cli.py                          # CLI entry point
├── defaults.py                     # Default values for user-changeable settings
├── patterns.py                     # Internal patterns and templates (NOT user-changeable)
└── types.py                        # Type definitions

tests/
├── test_*.py                       # Unit tests
└── ...                             # 1,400+ tests total
```
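As context for `api/client_factory.py`: a provider-dispatching factory generally follows the shape below (class names here are illustrative stand-ins, not the real API, which wraps the OpenAI and boto3 SDKs):

```python
# Hypothetical sketch of a provider-dispatching factory.
class OpenAIClient:
    def __init__(self, config: dict):
        self.config = config

class BedrockClient:
    def __init__(self, config: dict):
        self.config = config

def create_client(settings: dict):
    """Pick a client implementation based on the `provider` setting."""
    provider = settings.get("provider", "openai")
    if provider == "openai":
        return OpenAIClient(settings.get("openai_config", {}))
    if provider == "bedrock":
        return BedrockClient(settings.get("bedrock_config", {}))
    raise ValueError(f"Unknown provider: {provider}")
```

Keeping the dispatch in one place means the chat session never needs to know which backend it is talking to.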
From source:

```shell
python main.py
```

With a specific command:

```shell
python main.py setup     # Run setup wizard
python main.py config    # Show configuration
python main.py chat      # Start chat session
```

Edit your user settings file to add new models to your configured provider:
For OpenAI-compatible APIs:
```yaml
provider: openai
openai_config:
  models:
    - gpt-4o
    - gpt-4o-mini
    - your-new-model
  default_model: your-new-model
```

For AWS Bedrock:

```yaml
provider: bedrock
bedrock_config:
  models:
    - anthropic.claude-3-5-sonnet-20241022-v2:0
    - your-new-bedrock-model-id
  default_model: your-new-bedrock-model-id
```

Or use `python main.py setup` to reconfigure interactively.
Property-based tests use Hypothesis to generate test cases:
```shell
# Run with verbose output
pytest tests/test_output_cache.py -v -s

# Show statistics about the generated examples
pytest tests/ --hypothesis-show-statistics
```

See `BUILD_INSTRUCTIONS.md` for creating standalone executables.
Quick build:
```shell
# Windows
build.bat

# Linux/macOS
chmod +x build.sh && ./build.sh
```

- README.md - Main documentation
- CONTRIBUTING.md - Contribution guidelines
- CHANGELOG.md - Version history
- docs/HOW_TO_RELEASE.md - Release process
- BUILD_INSTRUCTIONS.md - Build process details