We release patches for security vulnerabilities. Currently supported versions:
| Version | Supported |
|---|---|
| 0.1.x | ✅ |
| < 0.1 | ❌ |
We take the security of LLMKit seriously. If you believe you have found a security vulnerability, please report it to us as described below.
Please do not report security vulnerabilities through public GitHub issues.
Instead, please use GitHub's private vulnerability reporting:
Please include the following information in your report:
- Type of issue (e.g. credential exposure, injection, etc.)
- Full paths of source file(s) related to the manifestation of the issue
- The location of the affected source code (tag/branch/commit or direct URL)
- Any special configuration required to reproduce the issue
- Step-by-step instructions to reproduce the issue
- Proof-of-concept or exploit code (if possible)
- Impact of the issue, including how an attacker might exploit it
- We will acknowledge your email within 48 hours
- We will send a more detailed response within 7 days indicating the next steps
- We will keep you informed about progress towards a fix
- We may ask for additional information or guidance
- Once fixed, we will publicly disclose the vulnerability (crediting you if desired)
LLMKit handles API keys and communicates with external LLM providers. This library:
- Secure credential handling: API keys are not logged or exposed in error messages
- HTTPS enforced: All provider communications use HTTPS
- No unsafe code: Core library avoids unsafe Rust code
- Input validation: Request parameters are validated before sending
- Dependency auditing: Regular security audits via
cargo audit
When using LLMKit:
- Environment variables: Store API keys in environment variables, not in code
- Key rotation: Rotate API keys regularly
- Least privilege: Use API keys with minimal required permissions
- Monitor usage: Track API usage for anomalies
- Update regularly: Keep LLMKit updated with latest security patches
- Secure logging: Ensure your application doesn't log sensitive request/response data
-
API key exposure: If API keys are hardcoded or logged, they could be exposed
- Mitigation: Use environment variables, never log keys
-
Prompt injection: User-provided content in prompts could manipulate LLM behavior
- Mitigation: Validate and sanitize user inputs, use system prompts carefully
-
Response handling: LLM responses should be treated as untrusted
- Mitigation: Validate and sanitize LLM outputs before use
When we receive a security bug report, we will:
- Confirm the problem and determine affected versions
- Audit code to find similar problems
- Prepare fixes for all supported versions
- Release patches as soon as possible
We ask security researchers to:
- Give us reasonable time to respond before public disclosure
- Make a good faith effort to avoid privacy violations and service disruption
- Not access or modify other users' data
If you have suggestions on how this process could be improved, please submit a pull request.