| Version | Supported |
|---|---|
| 0.1.x | ✅ |
We take security vulnerabilities seriously. If you discover a security issue, please follow the steps below. Security vulnerabilities should not be disclosed publicly until they have been addressed.
Please report security vulnerabilities by emailing security@contextflow.ai, or by creating a private security advisory on GitHub.
Include the following information:
- Description of the vulnerability
- Steps to reproduce
- Potential impact
- Suggested fix (if any)
After you report a vulnerability, you can expect:
- Initial Response: Within 48 hours
- Status Update: Within 7 days
- Fix Timeline: Depends on severity (critical: 24-72 hours; high: 1-2 weeks; medium: 1 month)
Once a fix is available:
- We will release a patched version
- We will publish a security advisory
- We will credit the reporter (unless they prefer anonymity)
```python
import os
from anthropic import Anthropic

# NEVER hardcode API keys
# Bad
client = Anthropic(api_key="sk-ant-...")

# Good - use environment variables
client = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))
```

- Use `.env` files for local development
- NEVER commit `.env` to version control
- Use secrets management in production (AWS Secrets Manager, HashiCorp Vault, etc.)
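The `.env` workflow mentioned above is usually handled by the python-dotenv package; the stdlib sketch below is only illustrative of what such a loader does (the file format and key name are hypothetical):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines, '#' comments ignored.
    (python-dotenv does this more robustly in practice.)"""
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                # Never overwrite variables already set in the real environment
                os.environ.setdefault(key.strip(), value.strip().strip('"'))
    except FileNotFoundError:
        pass  # no .env in production; secrets come from the environment
```

Because `setdefault` is used, real environment variables (e.g. those injected by a secrets manager) always win over `.env` values.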
- Always use HTTPS in production
- Configure CORS appropriately
- Use rate limiting for API endpoints
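The rate-limiting point above is commonly implemented as a token bucket; this is an illustrative in-process sketch, not ContextFlow's actual implementation (production deployments typically rate-limit in middleware or at a reverse proxy):

```python
import time

class TokenBucket:
    """Allows bursts of up to `capacity` requests, refilling at
    `rate` tokens per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```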
- All user inputs are validated via Pydantic models
- File paths are sanitized
- SQL injection prevented via parameterized queries (SQLite)
- Dependencies are pinned in `poetry.lock`
- Regular security audits via `safety` and `bandit`
- Dependabot alerts enabled
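The parameterized-query point above is worth illustrating: with `?` placeholders the SQLite driver binds values as data, so injection payloads never reach the SQL parser. The table and column names below are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, user TEXT)")
conn.execute("INSERT INTO sessions VALUES (?, ?)", ("abc", "alice"))

def get_session(conn, session_id: str):
    # The value is bound by the driver, never interpolated into the SQL
    # string, so input like "x' OR '1'='1" is treated as a literal id.
    return conn.execute(
        "SELECT id, user FROM sessions WHERE id = ?", (session_id,)
    ).fetchone()
```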
- Input Validation - Pydantic models validate all inputs
- Error Handling - Sensitive information not leaked in errors
- Logging - Structured logging without sensitive data
- Rate Limiting - Configurable rate limits on API
- CORS - Configurable cross-origin policies
- Prompt Injection - Users should implement prompt guards for production
- Output Filtering - Verification protocol can filter harmful outputs
- Token Limits - Configurable limits prevent runaway costs
- Provider Isolation - Each provider is isolated
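The prompt-guard recommendation above can start as simply as a deny-list scan. The patterns below are hypothetical examples; real guards typically combine heuristics, classifiers, and structural separation of instructions from user content:

```python
import re

# Hypothetical deny-list of common injection phrasings
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your|the) system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs matching known injection phrasings (case-insensitive)."""
    text = user_input.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)
```

A matcher like this is cheap to run before every model call; flagged inputs can be rejected or routed to stricter handling.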
- [ ] API keys stored in secure secrets manager
- [ ] HTTPS enabled with valid certificate
- [ ] CORS configured for specific origins only
- [ ] Rate limiting enabled
- [ ] Logging configured (without sensitive data)
- [ ] Regular dependency updates
- [ ] Input validation enabled
- [ ] Error messages sanitized
- [ ] Session management secured
- [ ] Network policies configured
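The "error messages sanitized" item on the checklist usually means logging full detail server-side while returning only a generic message plus a correlation id to the client. A sketch (the logger name and response shape are illustrative):

```python
import logging
import uuid

logger = logging.getLogger("contextflow")  # name is illustrative

def handle_error(exc: Exception) -> dict:
    """Log the full exception server-side; return a generic client
    response so stack traces, paths, and internals never leak."""
    error_id = uuid.uuid4().hex
    logger.error("unhandled error id=%s", error_id, exc_info=exc)
    return {"error": "Internal server error", "error_id": error_id}
```

The `error_id` lets operators correlate a user report with the detailed server-side log entry.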
The RLM strategy includes a REPL environment for code execution. While sandboxed:
- Only safe builtins are available
- External libraries are restricted
- Timeout limits prevent infinite loops
Recommendation: In high-security environments, consider disabling RLM or adding additional sandboxing.
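The safe-builtins restriction described above amounts to replacing `__builtins__` with a small whitelist before `exec`. This sketch omits the timeout handling (which typically needs signals or a subprocess) and, as the recommendation notes, is not a hard security boundary on its own:

```python
# Hypothetical whitelist; the actual set ContextFlow allows may differ
SAFE_BUILTINS = {"abs": abs, "len": len, "min": min,
                 "max": max, "range": range, "sum": sum}

def run_sandboxed(code: str) -> dict:
    """Execute code with only whitelisted builtins. Imports fail because
    __import__ is absent; file and network access are unreachable for the
    same reason. CPython-level sandboxing like this can still be escaped,
    hence the recommendation to add further isolation."""
    env = {"__builtins__": SAFE_BUILTINS}
    exec(code, env)
    env.pop("__builtins__", None)
    return env  # variables the snippet defined
```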
Documents added to RAG are stored in memory:
- No encryption at rest (memory-based)
- Documents are not persisted by default
- Consider sensitivity of indexed content
Session data is stored in SQLite:
- Not encrypted by default
- Consider database encryption for sensitive use cases
For security-related inquiries:
- Email: security@contextflow.ai
- GitHub Security Advisories: create a private advisory on the repository
Thank you for helping keep ContextFlow secure!