Understand any code, fast.
Live app: https://smarai.rishmi5h.com
smar-ai is an AI-powered GitHub repository analyzer that helps you understand, review, and work with any codebase. Paste a GitHub URL and get instant AI-generated analysis, architecture diagrams, README generation, PR/issue reviews, and copy-pasteable prompts you can give to any LLM to recreate or extend the project.
Powered by Groq (llama-3.3-70b-versatile) for blazing-fast inference.
- 📊 Smart Analysis — AI-powered code understanding with 3 analysis modes (Overview, Detailed, Learning Guide)
- 🏗️ Architecture Diagrams — Auto-generated interactive dependency graphs with Mermaid.js and AI explanations
- 📝 README Generator — One-click professional README.md generation from repository code
- 🧩 Prompt Generator — Generate copy-pasteable prompts to recreate, extend, review, or migrate any codebase
- 🔀 PR & Issue Analysis — Paste a PR or issue URL for targeted AI review with diff analysis
- 📈 Codebase Evolution — Compare how code changed between any two commits
- 💬 Interactive Q&A — Chat with the AI about the analyzed repository
- ⚡ Real-time Streaming — See results as they're generated via Server-Sent Events
- 🎨 Beautiful UI — Modern, responsive interface with syntax highlighting and dark theme
- 📥 Export Options — Copy or download analysis, README, and prompts
- Node.js 20+
- A Groq API key (free tier available)
- A GitHub token (optional, for higher rate limits)
```bash
# Clone the repository
git clone https://github.com/rishmi5h/smar-ai.git
cd smar-ai

# Backend
cd server
cp .env.example .env
# Edit .env with your GROQ_API_KEY
npm install
npm run dev

# In a new terminal — Frontend
cd client
npm install
npm run dev
```

Visit http://localhost:5173
```mermaid
graph LR
    A[GitHub URL] -->|Fetch Code| B[GitHub API v3]
    B -->|Code + Metadata| C[Groq LLM]
    C -->|Stream via SSE| D[React Frontend]
    D -->|Display| E[Analysis / Architecture / README / Prompts]
```
- Paste a URL — Repository, PR, or Issue URL
- Choose analysis type — Overview, Detailed, or Learning Guide
- Explore tabs — Analysis, Changes, Architecture, README, Prompts
- Export — Copy, download, or use generated prompts with any LLM
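The first step relies on detecting what kind of GitHub URL was pasted. That routing can be sketched like this (a hypothetical classifier for illustration, not the app's actual SearchBar logic):

```javascript
// Sketch of GitHub URL type detection (hypothetical; the real
// SearchBar/RepoAnalyzer components may implement this differently).
function detectUrlType(url) {
  const m = url.match(
    /^https:\/\/github\.com\/([^/]+)\/([^/]+)(?:\/(pull|issues)\/(\d+))?\/?$/
  );
  if (!m) return { type: 'invalid' };
  const [, owner, repo, kind, number] = m;
  if (kind === 'pull') return { type: 'pr', owner, repo, number: Number(number) };
  if (kind === 'issues') return { type: 'issue', owner, repo, number: Number(number) };
  return { type: 'repo', owner, repo };
}
```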
| Mode | What You Get |
|---|---|
| Code Overview | Purpose, tech stack, architecture, key components |
| Detailed Explanation | File structure, data flow, design patterns, configuration |
| Learning Guide | Prerequisites, step-by-step learning path, hands-on activities |
Automatically maps file dependencies across your codebase. Parses import statements in JavaScript, TypeScript, Python, Go, Java, and Rust. Renders interactive Mermaid diagrams with zoom, pan, and SVG download.
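Import extraction of this kind is typically done with per-language patterns. A minimal sketch covering two of the supported languages (hypothetical; the project's actual `importParser.js` may differ):

```javascript
// Minimal sketch of multi-language import extraction (hypothetical,
// not the project's actual importParser.js).
const IMPORT_PATTERNS = {
  javascript: [
    /import\s+(?:[\w*\s{},]+\s+from\s+)?['"]([^'"]+)['"]/g, // ES modules
    /require\(\s*['"]([^'"]+)['"]\s*\)/g,                   // CommonJS
  ],
  python: [
    /^import\s+([\w.]+)/gm,                                 // import os
    /^from\s+([\w.]+)\s+import/gm,                          // from pathlib import Path
  ],
};

// Returns the deduplicated list of module specifiers found in `source`.
function extractImports(source, language) {
  const found = new Set();
  for (const pattern of IMPORT_PATTERNS[language] ?? []) {
    for (const match of source.matchAll(pattern)) {
      found.add(match[1]);
    }
  }
  return [...found];
}
```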
One-click generation of a professional README.md including project overview, features, tech stack, setup instructions, project structure, and contributing guidelines. Streams in real-time with copy and download options.
Generate prompts you can paste into any LLM (Claude, ChatGPT, Cursor, Copilot) to:
| Prompt Type | Description |
|---|---|
| Recreate This Project | Full blueprint with tech stack, file structure, build order, and implementation patterns |
| Add a Feature | Convention-aware guide specifying files to create, patterns to follow, and wiring instructions |
| Code Review | Context-rich review prompt with focus areas, checklists, and output format |
| Convert / Migrate | Migration strategy with pattern mapping, file-by-file plan, and dependency replacements |
Migration presets: JS → TS, React → Next.js, Express → Fastify, REST → GraphQL, Flask → FastAPI, CJS → ESM, or custom targets.
Paste a PR URL to get AI-powered diff review with file-by-file analysis. Paste an issue URL for context-aware discussion and suggested approaches.
Compare any two commits to see how the codebase evolved — structural changes, new patterns introduced, and code quality shifts.
After analysis, chat with the AI about the repository. Ask follow-up questions about architecture, patterns, or specific files.
All streaming endpoints use Server-Sent Events (SSE) for real-time responses.
POST /api/analyze-stream

```bash
curl -X POST http://localhost:5050/api/analyze-stream \
  -H "Content-Type: application/json" \
  -d '{"repoUrl": "https://github.com/facebook/react", "analysisType": "overview"}'
```

POST /api/analyze-pr

```bash
curl -X POST http://localhost:5050/api/analyze-pr \
  -H "Content-Type: application/json" \
  -d '{"prUrl": "https://github.com/owner/repo/pull/123"}'
```

POST /api/analyze-issue

```bash
curl -X POST http://localhost:5050/api/analyze-issue \
  -H "Content-Type: application/json" \
  -d '{"issueUrl": "https://github.com/owner/repo/issues/456"}'
```

POST /api/architecture

```bash
curl -X POST http://localhost:5050/api/architecture \
  -H "Content-Type: application/json" \
  -d '{"repoUrl": "https://github.com/owner/repo"}'
```

POST /api/readme

```bash
curl -X POST http://localhost:5050/api/readme \
  -H "Content-Type: application/json" \
  -d '{"repoUrl": "https://github.com/owner/repo"}'
```

POST /api/generate-prompt

```bash
curl -X POST http://localhost:5050/api/generate-prompt \
  -H "Content-Type: application/json" \
  -d '{"repoUrl": "https://github.com/owner/repo", "promptType": "recreate"}'
```

Prompt types: `recreate`, `feature`, `review`, `migrate`
POST /api/changes
POST /api/chat
GET /api/repo-info?repoUrl=owner/repo
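Because the streaming endpoints respond with SSE, a client has to split the byte stream into complete `data:` events, carrying any trailing partial event into the next chunk. A minimal parser sketch (hypothetical; the app's real client code may differ):

```javascript
// Minimal SSE buffer parser sketch (hypothetical, for illustration).
// Splits an accumulated text buffer into complete `data:` events and
// returns the trailing partial event so it can be prepended to the
// next network chunk.
function parseSSEBuffer(buffer) {
  const events = [];
  const parts = buffer.split('\n\n'); // SSE events are blank-line separated
  const remainder = parts.pop();      // last part may still be incomplete
  for (const part of parts) {
    for (const line of part.split('\n')) {
      if (line.startsWith('data: ')) events.push(line.slice(6));
    }
  }
  return { events, remainder };
}

// Hypothetical usage against the local backend (Node 20 fetch):
// const res = await fetch('http://localhost:5050/api/analyze-stream', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify({ repoUrl: 'https://github.com/facebook/react',
//                          analysisType: 'overview' }),
// });
// let buf = '';
// for await (const chunk of res.body) {
//   buf += Buffer.from(chunk).toString('utf8');
//   const { events, remainder } = parseSSEBuffer(buf);
//   buf = remainder;
//   for (const e of events) process.stdout.write(e);
// }
```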
```
smar-ai/
├── server/
│   ├── src/
│   │   ├── index.js                  # Express server entry point
│   │   ├── services/
│   │   │   ├── groqService.js        # Groq AI — analysis, README, prompts, PR/issue review
│   │   │   ├── githubService.js      # GitHub API — repos, PRs, issues, trees, diffs
│   │   │   ├── importParser.js       # Multi-language import/dependency parser
│   │   │   ├── aiService.js          # AI service abstraction layer
│   │   │   └── ollamaService.js      # Ollama integration (legacy)
│   │   └── routes/
│   │       └── analyze.js            # All API endpoints
│   ├── .env.example
│   ├── Dockerfile
│   └── package.json
├── client/
│   ├── src/
│   │   ├── components/
│   │   │   ├── RepoAnalyzer.jsx      # Main orchestrator — URL detection, routing
│   │   │   ├── SearchBar.jsx         # URL input with type detection (repo/PR/issue)
│   │   │   ├── AnalysisResults.jsx   # Tab container — Analysis, Changes, Architecture, README, Prompts
│   │   │   ├── ArchitecturePanel.jsx # Mermaid diagram + AI analysis
│   │   │   ├── ChangesPanel.jsx      # Commit comparison / codebase evolution
│   │   │   ├── ReadmePanel.jsx       # README generator
│   │   │   ├── PromptPanel.jsx       # Prompt generator (4 types)
│   │   │   ├── ChatPanel.jsx         # Interactive Q&A
│   │   │   ├── PRAnalysisResults.jsx # PR diff review
│   │   │   ├── IssueAnalysisResults.jsx # Issue analysis
│   │   │   ├── MarkdownRenderer.jsx  # Markdown + syntax highlighting + Mermaid
│   │   │   └── LoadingSpinner.jsx    # Loading states
│   │   ├── App.jsx
│   │   └── index.css
│   ├── .env.example
│   ├── Dockerfile
│   ├── nginx.conf
│   ├── netlify.toml
│   └── vite.config.js
├── docker-compose.yml
├── DEPLOYMENT.md
└── README.md
```
Server (`server/.env`):

```env
PORT=5050
GROQ_API_KEY=your_groq_api_key_here
GROQ_MODEL=llama-3.3-70b-versatile
GITHUB_TOKEN=ghp_xxx   # Optional, for higher GitHub API limits
```

Client (`client/.env`):

```env
VITE_API_URL=http://localhost:5050/api
```
Frontend:
- React 19 + Vite
- Mermaid.js (architecture diagrams)
- react-syntax-highlighter
- Modern CSS3 with CSS variables
Backend:
- Node.js 20 + Express
- Groq SDK (llama-3.3-70b-versatile)
- GitHub REST API v3
- Server-Sent Events (SSE) for streaming
AI:
- Groq cloud inference (primary)
- Ollama local inference (legacy support)
Deployment:
- Docker & Docker Compose
- Railway (backend)
- Netlify (frontend)
- Nginx
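For local container deployment, a compose file along these lines would wire the two services together (a hypothetical sketch; see the repository's `docker-compose.yml` for the actual configuration, and note that the port mappings here are assumed from the defaults used elsewhere in this README):

```yaml
# Hypothetical sketch, not the repo's real docker-compose.yml.
services:
  server:
    build: ./server
    ports:
      - "5050:5050"          # Express API + SSE endpoints
    env_file:
      - ./server/.env        # GROQ_API_KEY, GROQ_MODEL, GITHUB_TOKEN
  client:
    build: ./client
    ports:
      - "3000:80"            # Nginx serves the built Vite frontend
    depends_on:
      - server
```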
```bash
./scripts/deploy-docker.sh start
```

Backend (Railway):
- Connect GitHub repository to Railway
- Set environment variables:

```env
GROQ_API_KEY=gsk_...
GROQ_MODEL=llama-3.3-70b-versatile
GITHUB_TOKEN=ghp_...   # optional
PORT=5050
```

Frontend (Netlify):
- Connect GitHub repository to Netlify
- Set build settings:
  - Base directory: `client`
  - Build command: `npm run build`
  - Publish directory: `dist`
- Set environment:

```env
VITE_API_URL=https://your-railway.railway.app/api
```
See [DEPLOYMENT.md](DEPLOYMENT.md) for the full Railway and Netlify guides.
Supports AWS, DigitalOcean, Linode, etc.
Contributions welcome! Please:
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Commit your changes: `git commit -m "Add amazing feature"`
- Push to the branch: `git push origin feature/amazing-feature`
- Open a Pull Request
Q: "Invalid GitHub URL"
A: Use the format `https://github.com/owner/repo`, `https://github.com/owner/repo/pull/123`, or `https://github.com/owner/repo/issues/456`
Q: Rate limit errors
A: Set `GITHUB_TOKEN` in `server/.env` for higher GitHub API limits
Q: Docker won't start
A: Make sure Docker is running and ports 3000 and 5050 are available

Q: Architecture diagram shows no connections
A: The repo may have very few source files, or use an unsupported import syntax
ISC License - See LICENSE file for details
- Powered by Groq for fast LLM inference
- Built with React and Vite
- Architecture diagrams by Mermaid.js
- GitHub integration via GitHub REST API
- Deployed on Railway, Netlify, and Docker
Made with ❤️ for code understanding