A FIPS 140-2 compliant multi-provider AI chat and image analysis platform designed for containerized deployment in OpenShift environments.
- 💬 Chat Interface: Real-time conversations with Ollama models
- 🖼️ Image Analysis: Upload and analyze images with vision models
- 🎛️ Model Management: Dynamic model selection and information display
- 📊 Session Statistics: Track usage, response times, and performance
- 💾 Export Functionality: Export conversations and analyses
- ⚙️ Configurable Settings: Adjust temperature, tokens, and other parameters
- 🔒 FIPS Compliance: Built with FIPS 140-2 cryptographic standards
This application MUST be deployed as a container for FIPS compliance. Direct Python execution is only supported for development.
- ✅ Container deployment (Podman/Docker + OpenShift)
- ✅ FIPS-enabled environment
- ✅ Container registry access
- ⚠️ Local Python execution (development/testing only)
For Production (Container Deployment):
- Podman or Docker
- OpenShift 4.8+ cluster with FIPS mode enabled
- Access to container registry (Quay.io, Docker Hub, etc.)
- Ollama service deployed in OpenShift
For Development Only:
- Python 3.11+
- Ollama installed and running locally
- At least one Ollama model installed (see recommended models below)
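Before a production rollout, you can confirm the FIPS prerequisite by reading the kernel FIPS flag on the cluster nodes. This is a hedged, illustrative check (node names and debug access depend on your cluster), not a formal validation procedure:

```bash
# List nodes, then check the FIPS flag on one of them (should print 1)
oc get nodes
oc debug node/<node-name> -- chroot /host cat /proc/sys/crypto/fips_enabled

# On a local RHEL development host, the same flag can be read directly
cat /proc/sys/crypto/fips_enabled
```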
Production deployment:

- Build the FIPS-compliant container:

  ```bash
  ./scripts/build-podman.sh
  ```

- Tag and push to registry:

  ```bash
  podman tag ollama-streamlit:latest quay.io/your-username/fips-chat:latest
  podman push quay.io/your-username/fips-chat:latest
  ```

- Deploy to OpenShift:

  ```bash
  cd openshift/
  # Update image reference in deployment.yaml
  oc apply -k .
  ```
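After the manifests are applied, a quick status check confirms the rollout. This is an illustrative sketch; the actual resource names depend on what the manifests in openshift/ define:

```bash
# Watch the pods come up and find the externally exposed route
oc get pods
oc get route
```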
Development setup:

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Install Ollama models (recommended):

  ```bash
  # Vision models for image analysis
  ollama pull llava:7b
  ollama pull granite3.2-vision:latest

  # Chat models for conversations
  ollama pull granite3.3:8b
  ollama pull gemma3:latest
  ollama pull phi4-mini:3.8b

  # Code-focused model
  ollama pull qwen2.5-coder:7b
  ```

- Run the application (development only):

  ```bash
  # Start Ollama service
  ollama serve

  # Run Streamlit application
  streamlit run app.py

  # Open browser to http://localhost:8501
  ```
Test the container locally before deploying:

```bash
# Test with provided script
./scripts/test-podman.sh

# Or manual testing
podman run -p 8080:8080 --rm ollama-streamlit:latest

# Open browser to http://localhost:8080
```

After deployment, you'll need to install Ollama models. The application provides several ways to manage models:
Access the admin interface at: https://ollama-admin-{namespace}.apps.{cluster}/
```bash
# Deploy a test model
curl -X POST https://ollama-admin-ollama-platform.apps.your-cluster.com/api/pull \
  -d '{"name": "llama3.2:1b"}' -H "Content-Type: application/json"

# Check available models
curl -s https://ollama-admin-ollama-platform.apps.your-cluster.com/api/tags
```

Alternatively, port forward to the Ollama service:

```bash
# Port forward to Ollama
oc port-forward service/ollama-service 11434:11434 &

# Deploy models
curl -X POST http://localhost:11434/api/pull \
  -d '{"name": "granite3.3:8b"}' -H "Content-Type: application/json"
```

See DEPLOYMENT.md for the complete model management guide.
Recommended models:

- llava:7b - Primary vision model for image description (4.7 GB)
- granite3.2-vision:latest - Alternative vision model (2.4 GB)
- granite3.3:8b - Primary chat model (4.9 GB)
- gemma3:latest - Lightweight chat alternative (3.3 GB)
- phi4-mini:3.8b - Fast response chat model (2.5 GB)
- qwen2.5-coder:7b - Code-focused conversations (4.7 GB)
- mistral-small3.1:24b - High-quality responses (15 GB)
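To pull the whole recommended set in one go, a simple loop works; this is an illustrative convenience, so trim the list to what your hardware can hold and add mistral-small3.1:24b only if you have roughly 15 GB to spare:

```bash
# Pull each recommended model in sequence
for model in llava:7b granite3.2-vision:latest granite3.3:8b gemma3:latest phi4-mini:3.8b qwen2.5-coder:7b; do
  ollama pull "$model"
done
```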
```
fips-chat/
├── app.py                  # Main Streamlit application
├── config.py               # Configuration management
├── ollama_client.py        # Ollama API wrapper
├── ui_components/          # Reusable UI components
│   ├── chat_interface.py   # Chat functionality
│   ├── image_interface.py  # Image analysis
│   └── model_selector.py   # Model management
├── utils/                  # Utility functions
│   ├── image_processing.py # Image utilities
│   └── session_manager.py  # Session management
├── tests/                  # Test suite
├── requirements.txt        # Dependencies
└── README.md               # This file
```
The application can be configured through environment variables:
```bash
export OLLAMA_HOST="http://localhost:11434"  # Ollama server URL
export DEFAULT_CHAT_MODEL="granite3.3:8b"    # Default chat model
export DEFAULT_VISION_MODEL="llava:7b"       # Default vision model
export TEMPERATURE="0.7"                     # Model temperature
export MAX_TOKENS="2048"                     # Max response length
export MAX_FILE_SIZE_MB="10"                 # Max image file size
```
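In the container-only production setup, the same variables can be passed to the image at run time. A hedged sketch; the Ollama service hostname is just an example based on the in-cluster service used above:

```bash
# Run the image locally with production-style overrides
podman run --rm -p 8080:8080 \
  -e OLLAMA_HOST="http://ollama-service:11434" \
  -e DEFAULT_CHAT_MODEL="granite3.3:8b" \
  -e MAX_TOKENS="1024" \
  ollama-streamlit:latest
```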
Run the test suite:

```bash
# Run all tests
python -m pytest tests/ -v

# Run specific test modules
python -m pytest tests/test_config.py -v
python -m pytest tests/test_ollama_client.py -v
python -m pytest tests/test_image_processing.py -v
```

Performance targets:

- Model list refresh: < 2 seconds
- Chat response initiation: < 3 seconds
- Image analysis initiation: < 5 seconds
- UI responsiveness: No blocking operations
Common issues:

- "Cannot connect to Ollama"
  - Ensure Ollama is running: `ollama serve`
  - Check if Ollama is accessible: `ollama list`
- "No models available"
  - Install at least one model: `ollama pull llava:7b`
  - Verify models are installed: `ollama list`
- "Model does not support image analysis"
  - Select a vision model (llava:7b, granite3.2-vision:latest)
  - Check model capabilities in the Models tab
- High memory usage warning
  - Use the "Clean Up" button in the sidebar
  - Clear conversation history
  - Reduce image file sizes
Performance tips:

- Use smaller models (phi4-mini:3.8b, gemma3:latest) for faster responses
- Reduce image file sizes before upload
- Clear old conversations and images regularly
- Adjust temperature and max tokens for your use case
For FIPS-compliant deployment to OpenShift:
```bash
# Build with Podman
./scripts/build-podman.sh

# Test locally
./scripts/test-podman.sh

# Deploy to OpenShift
oc apply -k openshift/
```

See DEPLOYMENT.md for comprehensive OpenShift deployment instructions.
✅ This application is FIPS 140-2 compliant:
- No weak cryptographic functions (MD5, SHA1, etc.)
- Container runs with OPENSSL_FIPS=1
- Uses OCI-compliant Containerfile for Podman
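A quick way to spot-check the FIPS posture of a built image is to confirm the environment flag and that a weak digest such as MD5 is rejected. This is a hedged, illustrative check (it assumes standard shell utilities and a FIPS-enabled OpenSSL/Python build inside the image), not a formal validation:

```bash
# The FIPS environment flag should be set in the image
podman run --rm ollama-streamlit:latest printenv OPENSSL_FIPS

# With FIPS enforcement active, creating an MD5 digest should fail
podman run --rm ollama-streamlit:latest \
  python3 -c "import hashlib; hashlib.md5(b'test')" \
  || echo "MD5 rejected as expected"
```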
This project is provided as-is for educational and development purposes.