The security of AI Code Review is a top priority. If you discover a security vulnerability, we appreciate your help in disclosing it to us responsibly.
Please do not:

- Open a public GitHub issue
- Disclose the vulnerability publicly before it has been addressed
- Exploit the vulnerability beyond what is necessary to demonstrate it
1. Report Privately
Contact us directly at:
- GitHub Security Advisories: Report a vulnerability (preferred)
2. Include Details
Please provide:
- Description of the vulnerability
- Steps to reproduce
- Potential impact assessment
- Suggested fix (if you have one)
- Your contact information (optional)
Example Report:

```text
Subject: [SECURITY] API Key Exposure in Logs

Description:
OpenAI API keys are being logged in plain text when debug mode is enabled,
creating a risk of credential exposure in CI logs.

Steps to Reproduce:
1. Enable debug logging with DEBUG=true
2. Run release workflow
3. Check action logs - API key is visible

Impact:
High - Exposed API keys could be used by unauthorized parties

Suggested Fix:
Mask API keys in all log outputs using GitHub's secret masking feature

Contact: security-researcher@example.com
```
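The suggested fix in the example report could be sketched as a small redaction helper. This is an illustrative sketch only, not the project's actual code: `maskSecrets` and `OPENAI_KEY_PATTERN` are hypothetical names, and in a real action you would normally register the value with `core.setSecret()` from `@actions/core` so GitHub's built-in log masking handles it.

```typescript
// Sketch: redact OpenAI-style API keys from a log line before emitting it.
// In a real GitHub Action, prefer core.setSecret(value) from @actions/core,
// which registers the value with GitHub's native log masking.
const OPENAI_KEY_PATTERN = /sk-[A-Za-z0-9_-]{8,}/g;

function maskSecrets(line: string): string {
  // Replace anything that looks like an OpenAI API key with a placeholder.
  return line.replace(OPENAI_KEY_PATTERN, "***");
}

console.log(maskSecrets("Authorization: Bearer sk-abcdef1234567890"));
// → Authorization: Bearer ***
```

Redacting at the logging boundary is a defense-in-depth measure; it complements, rather than replaces, keeping the key out of debug output in the first place.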
3. Response Timeline
We will:
- Acknowledge your report within 48 hours
- Provide an initial assessment within 5 business days
- Keep you informed of our progress
- Credit you in the fix (unless you prefer anonymity)
1. Secret Handling
- All API keys are handled as GitHub secrets
- No secrets are logged or exposed in outputs
- GitHub automatically masks secrets in logs
2. Dependency Security
- Regular dependency updates
- Automated security scanning with Dependabot
- Minimal dependency footprint
3. Code Security
- TypeScript strict mode
- No use of `eval()` or similar dangerous functions
- Input validation on all external data
- Safe handling of git operations
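The input-validation and git-safety points above might look like the following in practice. This is a minimal sketch under assumed conventions; `isSafeRef` is a hypothetical helper, not the project's actual implementation.

```typescript
// Sketch: validate an externally supplied git ref name before passing it
// to any git operation. A conservative whitelist prevents injection of
// CLI flags (leading '-') or path traversal via '..'.
function isSafeRef(ref: string): boolean {
  // Allow alphanumerics, '.', '_', '-', and '/'; the first character must
  // not be '-' so the value can never be parsed as a git option.
  return /^[A-Za-z0-9][A-Za-z0-9._\/-]*$/.test(ref) && !ref.includes("..");
}

console.assert(isSafeRef("main"));
console.assert(isSafeRef("release/v1.2.3"));
console.assert(!isSafeRef("--upload-pack=evil"));
console.assert(!isSafeRef("a..b"));
```

Rejecting suspicious input outright, rather than trying to sanitize it, keeps the validation logic easy to audit.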
4. API Security
- HTTPS-only communication
- OpenAI API calls use official SDK
- Rate limiting respected
- No credential storage
OpenAI API Key
- Required for AI changelog generation
- Stored as GitHub secret (encrypted at rest)
- Never committed to repository
- Transmitted over HTTPS only
GitHub Token
- Auto-provided by GitHub Actions
- Scoped permissions (contents: write only)
- Automatically revoked after workflow completes
1. API Keys

```yaml
# ✅ CORRECT - Use secrets
with:
  OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

# ❌ WRONG - Never hardcode
with:
  OPENAI_API_KEY: 'sk-...'
```

2. Permissions

```yaml
# ✅ CORRECT - Minimal permissions
permissions:
  contents: write

# ❌ WRONG - Excessive permissions
permissions:
  contents: write
  packages: write
  id-token: write
```

3. Branch Protection

```yaml
# ✅ CORRECT - Protect release branch
on:
  push:
    branches: [main]
# Ensure branch protection rules are enabled
```

GitHub Repository Secrets:
- Navigate to Settings → Secrets and variables → Actions
- Add `OPENAI_API_KEY` as a repository secret
- Never commit secrets to the repository
- Rotate keys regularly (recommended: every 90 days)
- Use read-only tokens when possible
Environment Variables:

```bash
# ❌ WRONG - Don't expose in CI logs
echo $OPENAI_API_KEY

# ✅ CORRECT - GitHub automatically masks
# (but avoid echoing anyway)
```

We perform security reviews:
- Before each major release
- When adding new features
- After dependency updates
- In response to vulnerability reports
Currently, no formal third-party audits have been conducted. We welcome security researchers to review the codebase.
- Lock file: `pnpm-lock.yaml` ensures reproducible builds
- Updates: Dependencies updated monthly (or sooner for security patches)
- Scanning: Automated scanning via GitHub Dependabot
Core dependencies (production):
```json
{
  "@actions/core": "^1.x",
  "@actions/github": "^6.x",
  "openai": "^4.x"
}
```

All dependencies are from trusted sources with active maintenance.
If a security vulnerability is confirmed:
1. Assessment (Day 1)
- Evaluate severity and impact
- Determine affected versions
- Develop mitigation strategy
2. Fix Development (Days 2-5)
- Develop and test fix
- Create security advisory (if needed)
- Prepare release notes
3. Disclosure (Days 5-7)
- Release patched version
- Publish security advisory
- Notify affected users
- Credit reporter (if desired)
4. Post-Incident (Week 2)
- Conduct post-mortem
- Update security practices
- Implement preventive measures
We assess vulnerabilities using CVSS v3.1:
| Score | Severity | Response Time |
|---|---|---|
| 9.0-10.0 | Critical | 24 hours |
| 7.0-8.9 | High | 48 hours |
| 4.0-6.9 | Medium | 7 days |
| 0.1-3.9 | Low | 30 days |
Security issues in:
- AI Code Review source code (`src/`)
- GitHub Actions workflow execution
- API key handling and secret management
- Dependencies and supply chain
- Documentation that could lead to insecure usage
Out of scope:

- Issues in third-party dependencies (report to them)
- Vulnerabilities in GitHub Actions platform (report to GitHub)
- OpenAI API vulnerabilities (report to OpenAI)
- Social engineering attacks
- Physical security
We recognize security researchers who help improve AI Code Review's security:
(No vulnerabilities reported yet)
Want to be listed here? Report a valid security vulnerability following our process!
Security Team:
- GitHub: @zxcloli666
For general questions: Open a Discussion
For bugs (non-security): Open an Issue
We follow Coordinated Vulnerability Disclosure:
- Researcher reports vulnerability privately
- We acknowledge and investigate
- We develop and test a fix
- We release the fix
- We publish security advisory
- Researcher can publicly disclose (after fix is released)
Typical timeline: 7-30 days from report to public disclosure.
This security policy may be updated periodically. Check this page for the latest version.
Last Updated: November 2025
Thank you for helping keep AI Code Review and its users safe! Your responsible disclosure helps protect the entire community.
If you have suggestions for improving this security policy, please open a Discussion.