A local AI-powered code companion. Keep your code on your machine while exploring code translation, reviews, and debugging with LLMs. A learning project exploring local AI integration in developer workflows.
CodePapi AI is an experimental, open-source project that brings Large Language Models (LLMs) to your local development environment. Translate code between languages, get AI-powered code reviews, and explore debugging workflows—all without sending your code to external services.
Note: This is a hobby/learning project. While functional, it's not optimized for production use. Performance depends heavily on your hardware and model choice. Expect AI responses to take 10-60+ seconds depending on code size and hardware.
✅ Private — Your code stays on your machine (no cloud uploads)
✅ Open-Source — Inspect the full codebase
✅ Free — MIT licensed, no subscriptions
✅ Learning Tool — Explore local LLM integration in practice
Convert code between supported languages: JavaScript, TypeScript, Python, Go, Rust, Java, C++, PHP, Ruby, Swift, and C#. Quality depends on model accuracy and code complexity.
Get AI-generated feedback on:
- Performance optimization ideas
- Potential security issues
- Code quality observations
- Best practice suggestions
Note: AI suggestions should be reviewed carefully and aren't a substitute for human code review.
The Diff View shows AI-suggested fixes side-by-side with original code. Always test fixes before committing.
Code processing happens locally using Qwen2.5-Coder via Ollama—nothing leaves your machine.
Before you begin, ensure you have the following installed:
- Docker & Docker Compose (easiest way to get started)
- Alternatively: Node.js 18+ and Ollama running locally
# Clone the repository
git clone https://github.com/codepapi/codepapi-ai.git
cd codepapi-ai
# Start the entire stack with one command
docker-compose up -d
⚠️ First Run: The first startup downloads the AI model (~1.5GB). Ensure stable internet and available disk space.
After starting the containers, pull the required model:
docker exec ollama ollama pull qwen2.5-coder:1.5b
Initial Request Times: Expect 10-90 seconds for initial responses depending on:
- Your CPU/GPU specs
- Code size
- Available system memory
- Background processes
Once the models are downloaded and containers are running:
- 🖥️ Frontend: Open http://localhost in your browser
- 🔌 API: Backend runs at http://localhost:3000
- 🤖 AI Engine: Ollama API available at http://localhost:11434
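If the UI seems unresponsive, you can verify the AI engine directly against the standard Ollama HTTP API. Below is a minimal sketch (Node.js 18+ ships a global fetch); it assumes the qwen2.5-coder:1.5b model has already been pulled as shown above:

```ts
// check-ollama.ts: minimal sketch that asks the local Ollama server for a completion.
// Assumes the docker-compose stack is running and qwen2.5-coder:1.5b has been pulled.
const OLLAMA_URL = "http://localhost:11434";

async function checkOllama(): Promise<void> {
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen2.5-coder:1.5b",
      prompt: "Write a one-line hello world in Python.",
      stream: false, // return a single JSON object instead of a token stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama responded with HTTP ${res.status}`);
  const data = await res.json();
  console.log(data.response); // the model's completion text
}

checkOllama().catch(console.error);
```

If this prints a completion, the AI engine is healthy and any slowness you see in the UI is most likely model inference time rather than a connectivity problem.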
- Paste or type code into the left editor
- Select a source language
- Choose an action:
- Translate: Pick a target language
- Review: Get feedback on code
- Check Bugs: See suggested fixes
- Click "Run AI" and wait for results
- Copy or review the output
Tips:
- Smaller code snippets get faster responses
- Review AI suggestions before using them in production
- Results vary based on code complexity and quality
| Component | Technology | Purpose |
|---|---|---|
| AI Engine | Ollama + Qwen2.5-Coder | Local LLM inference |
| Orchestration | LangChain.js | AI workflow management |
| Backend | NestJS (Node.js) | REST API & business logic |
| Frontend | React + TailwindCSS + Lucide | Modern, responsive UI |
| Editor | Monaco Editor | VS Code-powered code editing |
| Quality | Biome | Fast linting & formatting |
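For a sense of how the orchestration and AI-engine rows of this table connect, here is a minimal sketch that calls the local model through LangChain.js. It assumes the @langchain/ollama package and the qwen2.5-coder:1.5b tag from the quick start; the backend's actual prompts and wiring may differ.

```ts
// Minimal sketch: LangChain.js talking to the local Ollama instance.
import { ChatOllama } from "@langchain/ollama";

async function main() {
  const llm = new ChatOllama({
    baseUrl: "http://localhost:11434", // Ollama API from docker-compose
    model: "qwen2.5-coder:1.5b",       // model pulled during the quick start
    temperature: 0,                    // keep code output as deterministic as possible
  });

  const reply = await llm.invoke(
    "Translate this JavaScript to Python:\nconst add = (a, b) => a + b;"
  );
  console.log(reply.content);
}

main().catch(console.error);
```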
Want to support more programming languages? It's easy!
See the Frontend Documentation for detailed instructions on adding languages to frontend/src/constants/languages.ts.
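As a rough illustration only, a new entry might look like the sketch below. The interface and field names here are assumptions made for the example, not the project's actual schema; always check frontend/src/constants/languages.ts and the Frontend Documentation for the real shape.

```ts
// Hypothetical entry shape: verify against frontend/src/constants/languages.ts
export interface LanguageOption {
  id: string;       // value sent to the backend
  label: string;    // name shown in the language dropdown
  monacoId: string; // language id used by the Monaco editor for highlighting
}

export const LANGUAGES: LanguageOption[] = [
  // ...existing languages...
  { id: "kotlin", label: "Kotlin", monacoId: "kotlin" },
];
```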
We use Biome for linting and formatting. Before submitting a PR, run:
npm run biome:lint # Check for issues
npx @biomejs/biome check --apply . # Auto-fix issues
codepapi-ai/
├── backend/ # NestJS API server
│ └── src/converter/ # Code conversion logic
├── frontend/ # React UI application
│ └── src/constants/ # Language definitions
├── docker-compose.yml # Full stack orchestration
└── README.md # This file
We are committed to providing a welcoming and inclusive environment for all contributors. Please read and follow our Code of Conduct:
- Respect: Treat all community members with respect and dignity
- Inclusion: Welcome contributors of all backgrounds and experience levels
- Professionalism: Keep discussions constructive and focused on the project
- Accountability: If you witness or experience misconduct, report it responsibly
Violations will not be tolerated and may result in removal from the project.
We welcome contributions! This is a learning/hobby project, so contributions can range from bug fixes and feature ideas to documentation and testing.
- This is experimental code. Don't expect production-grade stability
- Limitations are intentional: they help us learn and improve
- AI suggestions need review — this tool augments, not replaces, human developers
- Check existing issues and PRs to avoid duplicate work
- Fork the repository and clone it locally
- Create a feature branch with a descriptive name:
git checkout -b feature/add-kotlin-support # or git checkout -b fix/console-error-on-large-files
# Install dependencies
cd backend && npm install && cd ..
cd frontend && npm install && cd ..
# Start development environment
docker-compose up -d
# Or run services individually with npm
npm run dev # in both backend/ and frontend/
- Linter: We use Biome for all TypeScript/JavaScript code
- Before every commit, run:
npx @biomejs/biome check --apply .
- No manual formatting: let Biome handle it
- Line length: Maximum 100 characters (Biome enforces this)
- Commit messages should be clear and descriptive:
✨ feat: add support for Kotlin language
🐛 fix: resolve console error on large file uploads
📝 docs: update contributing guidelines
♻️ refactor: simplify code translation logic
- Prefix types: feat, fix, docs, refactor, test, chore, perf
- Keep commits atomic: one logical change per commit
- Reference issues: Closes #123 in the commit body when applicable
- Title: Use the same format as commits (e.g., feat: add Kotlin support)
- Description: Explain why the change is needed, not just what
- Linked issues: Reference any related issues (Fixes #123)
- Testing: Include steps to test your changes
- Screenshots: For UI changes, include before/after screenshots
- No WIP PRs: Only open PRs when ready for review
Before submitting a PR, ensure:
- ✅ Code passes npm run biome:lint without warnings
- ✅ All tests pass (if applicable)
- ✅ No console errors or warnings in development
- ✅ Comments explain why, not what (code should be self-documenting)
- ✅ No commented-out code left behind
- ✅ Variable/function names are descriptive and follow conventions
- ✅ No hardcoded values (use constants/config instead)
- ✅ Security: No credentials, secrets, or sensitive data exposed
- ✅ TypeScript: Avoid any types; use proper typing
- ✅ Documentation: Update README/docs if behavior changes
- Update frontend/src/constants/languages.ts with new entries
- Add corresponding backend logic in backend/src/converter/converter.service.ts if needed
- Test end-to-end with the UI
- Update frontend/README.md if adding complex metadata
- Add a test case that reproduces the bug (if possible)
- Fix the issue
- Verify the test now passes
- Check for related issues that might have the same root cause
- Keep docs synchronized with code changes
- Add examples for complex features
- Update the main README if adding major functionality
While formal unit tests are encouraged:
- Manual testing is acceptable for UI changes
- Test in Docker to ensure consistency across environments
- Test with the Qwen2.5-Coder model
- Document test steps in your PR
- Automated checks run on all PRs (Biome linting)
- Code review: At least one maintainer must approve
- Feedback: Be open to suggestions and iterate
- Approval: Once approved, you may merge (or request maintainer to merge)
- Stale PRs: PRs inactive for 30 days may be closed to keep the backlog clean
- 🌍 Translations: UI language support
- 🧪 Testing: Test coverage and edge cases
- 📚 Documentation: Guides, tutorials, examples
- 🐛 Bug fixes: Active issues on GitHub
- ✨ Features: Language support, new modes
- 🎨 UI/UX: Design improvements, accessibility
See the Issues page for tasks labeled good first issue and help wanted.
- All contributors are listed in CONTRIBUTORS.md
- Significant contributions may be highlighted in release notes
- Community members can earn roles (Maintainer, Reviewer, etc.)
As an experimental AI project, CodePapi AI follows responsible practices:
- No telemetry: We don't collect usage analytics
- Local processing: All code stays on your machine
- No training: Your code never trains models
- Open source: Full code inspection available
- Clear limitations: We're honest about what works and what doesn't
- No magic: It's an AI assistant, not a replacement for human judgment
- Review all AI suggestions before implementing
- Don't rely solely on AI output for security-critical code
- Test thoroughly in your environment
- Report security issues privately
This is an experimental project with real limitations:
- Speed: Not fast. Responses take 10-90+ seconds per request
- Quality: AI output varies. Some translations work well, others need manual fixes
- Hardware-dependent: Performance heavily depends on your CPU/GPU and available RAM
- Model limitations: Qwen2.5-Coder is a smaller model; results aren't comparable to larger proprietary models
- Error handling: Limited error checking and validation
- Production use: Not suitable for mission-critical workflows without thorough testing
- Use the bug report template provided in GitHub Issues
- Include reproduction steps and expected vs. actual behavior
- Environment info: OS, Docker version, any custom configs
- No duplicate reports: Search existing issues first
- Don't open public issues for security vulnerabilities
- Email us privately: [security@example.com]
- Include: Version, reproduction steps, and potential impact
- Responsible disclosure: Allow 48 hours before public disclosure
See frontend/README.md for detailed customization guides.
- Docker & Docker Compose (recommended) or
- Node.js 18+ and Ollama (for local development)
- Minimum 2GB RAM recommended (to accommodate the Qwen2.5-Coder model)
- Stable internet for initial model download
- macOS, Linux, or Windows (with WSL2)
- Frontend Guide — UI customization and adding languages
- Backend Guide — API development and extending converters
- Docker Compose Configuration — Service orchestration
Distributed under the MIT License. See LICENSE for details.
- Issues: Report bugs on GitHub Issues
- Discussions: Ask questions in GitHub Discussions
- Docs: Full documentation in README files
A learning project exploring local LLMs in development workflows