diff --git a/.agents/skills/code-reviewer/SKILL.md b/.agents/skills/code-reviewer/SKILL.md new file mode 100644 index 00000000..214da2ab --- /dev/null +++ b/.agents/skills/code-reviewer/SKILL.md @@ -0,0 +1,177 @@ +--- +name: code-reviewer +description: Code review automation for TypeScript, JavaScript, Python, Go, Swift, Kotlin. Analyzes PRs for complexity and risk, checks code quality for SOLID violations and code smells, generates review reports. Use when reviewing pull requests, analyzing code quality, identifying issues, generating review checklists. +--- + +# Code Reviewer + +Automated code review tools for analyzing pull requests, detecting code quality issues, and generating review reports. + +--- + +## Table of Contents + +- [Tools](#tools) + - [PR Analyzer](#pr-analyzer) + - [Code Quality Checker](#code-quality-checker) + - [Review Report Generator](#review-report-generator) +- [Reference Guides](#reference-guides) +- [Languages Supported](#languages-supported) + +--- + +## Tools + +### PR Analyzer + +Analyzes git diff between branches to assess review complexity and identify risks. + +```bash +# Analyze current branch against main +python scripts/pr_analyzer.py /path/to/repo + +# Compare specific branches +python scripts/pr_analyzer.py . --base main --head feature-branch + +# JSON output for integration +python scripts/pr_analyzer.py /path/to/repo --json +``` + +**What it detects:** +- Hardcoded secrets (passwords, API keys, tokens) +- SQL injection patterns (string concatenation in queries) +- Debug statements (debugger, console.log) +- ESLint rule disabling +- TypeScript `any` types +- TODO/FIXME comments + +**Output includes:** +- Complexity score (1-10) +- Risk categorization (critical, high, medium, low) +- File prioritization for review order +- Commit message validation + +--- + +### Code Quality Checker + +Analyzes source code for structural issues, code smells, and SOLID violations. 
+ +```bash +# Analyze a directory +python scripts/code_quality_checker.py /path/to/code + +# Analyze specific language +python scripts/code_quality_checker.py . --language python + +# JSON output +python scripts/code_quality_checker.py /path/to/code --json +``` + +**What it detects:** +- Long functions (>50 lines) +- Large files (>500 lines) +- God classes (>20 methods) +- Deep nesting (>4 levels) +- Too many parameters (>5) +- High cyclomatic complexity +- Missing error handling +- Unused imports +- Magic numbers + +**Thresholds:** + +| Issue | Threshold | +|-------|-----------| +| Long function | >50 lines | +| Large file | >500 lines | +| God class | >20 methods | +| Too many params | >5 | +| Deep nesting | >4 levels | +| High complexity | >10 branches | + +--- + +### Review Report Generator + +Combines PR analysis and code quality findings into structured review reports. + +```bash +# Generate report for current repo +python scripts/review_report_generator.py /path/to/repo + +# Markdown output +python scripts/review_report_generator.py . --format markdown --output review.md + +# Use pre-computed analyses +python scripts/review_report_generator.py . 
\ + --pr-analysis pr_results.json \ + --quality-analysis quality_results.json +``` + +**Report includes:** +- Review verdict (approve, request changes, block) +- Score (0-100) +- Prioritized action items +- Issue summary by severity +- Suggested review order + +**Verdicts:** + +| Score | Verdict | +|-------|---------| +| 90+ with no high issues | Approve | +| 75+ with ≤2 high issues | Approve with suggestions | +| 50-74 | Request changes | +| <50 or critical issues | Block | + +--- + +## Reference Guides + +### Code Review Checklist +`.agents/skills/code-reviewer/references/code_review_checklist.md` + +Systematic checklists covering: +- Pre-review checks (build, tests, PR hygiene) +- Correctness (logic, data handling, error handling) +- Security (input validation, injection prevention) +- Performance (efficiency, caching, scalability) +- Maintainability (code quality, naming, structure) +- Testing (coverage, quality, mocking) +- Language-specific checks + +### Coding Standards +`.agents/skills/code-reviewer/references/coding_standards.md` + +Language-specific standards for: +- TypeScript (type annotations, null safety, async/await) +- JavaScript (declarations, patterns, modules) +- Python (type hints, exceptions, class design) +- Go (error handling, structs, concurrency) +- Swift (optionals, protocols, errors) +- Kotlin (null safety, data classes, coroutines) + +### Common Antipatterns +`.agents/skills/code-reviewer/references/common_antipatterns.md` + +Antipattern catalog with examples and fixes: +- Structural (god class, long method, deep nesting) +- Logic (boolean blindness, stringly typed code) +- Security (SQL injection, hardcoded credentials) +- Performance (N+1 queries, unbounded collections) +- Testing (duplication, testing implementation) +- Async (floating promises, callback hell) + +--- + +## Languages Supported + +| Language | Extensions | +|----------|------------| +| Python | `.py` | +| TypeScript | `.ts`, `.tsx` | +| JavaScript | `.js`, `.jsx`, 
`.mjs` | +| Go | `.go` | +| Swift | `.swift` | +| Kotlin | `.kt`, `.kts` | \ No newline at end of file diff --git a/.agents/skills/code-reviewer/references/code_review_checklist.md b/.agents/skills/code-reviewer/references/code_review_checklist.md new file mode 100644 index 00000000..b7bd0867 --- /dev/null +++ b/.agents/skills/code-reviewer/references/code_review_checklist.md @@ -0,0 +1,270 @@ +# Code Review Checklist + +Structured checklists for systematic code review across different aspects. + +--- + +## Table of Contents + +- [Pre-Review Checks](#pre-review-checks) +- [Correctness](#correctness) +- [Security](#security) +- [Performance](#performance) +- [Maintainability](#maintainability) +- [Testing](#testing) +- [Documentation](#documentation) +- [Language-Specific Checks](#language-specific-checks) + +--- + +## Pre-Review Checks + +Before diving into code, verify these basics: + +### Build and Tests +- [ ] Code compiles without errors +- [ ] All existing tests pass +- [ ] New tests are included for new functionality +- [ ] No unintended files included (build artifacts, IDE configs) + +### PR Hygiene +- [ ] PR has clear title and description +- [ ] Changes are scoped appropriately (not too large) +- [ ] Commits follow conventional commit format +- [ ] Branch is up to date with base branch + +### Scope Verification +- [ ] Changes match the stated purpose +- [ ] No unrelated changes bundled in +- [ ] Breaking changes are documented +- [ ] Migration path provided if needed + +--- + +## Correctness + +### Logic +- [ ] Algorithm implements requirements correctly +- [ ] Edge cases handled (null, empty, boundary values) +- [ ] Off-by-one errors checked +- [ ] Correct operators used (== vs ===, & vs &&) +- [ ] Loop termination conditions correct +- [ ] Recursion has proper base cases + +### Data Handling +- [ ] Data types appropriate for the use case +- [ ] Numeric overflow/underflow considered +- [ ] Date/time handling accounts for timezones +- [ ] Unicode and 
internationalization handled +- [ ] Data validation at entry points + +### State Management +- [ ] State transitions are valid +- [ ] Race conditions addressed +- [ ] Concurrent access handled correctly +- [ ] State cleanup on errors/exit + +### Error Handling +- [ ] Errors caught at appropriate levels +- [ ] Error messages are actionable +- [ ] Errors don't expose sensitive information +- [ ] Recovery or graceful degradation implemented +- [ ] Resources cleaned up in error paths + +--- + +## Security + +### Input Validation +- [ ] All user input validated and sanitized +- [ ] Input length limits enforced +- [ ] File uploads validated (type, size, content) +- [ ] URL parameters validated + +### Injection Prevention +- [ ] SQL queries parameterized +- [ ] Command execution uses safe APIs +- [ ] HTML output escaped to prevent XSS +- [ ] LDAP queries properly escaped +- [ ] XML parsing disables external entities + +### Authentication & Authorization +- [ ] Authentication required for protected resources +- [ ] Authorization checked before operations +- [ ] Session management secure +- [ ] Password handling follows best practices +- [ ] Token expiration implemented + +### Data Protection +- [ ] Sensitive data encrypted at rest +- [ ] Sensitive data encrypted in transit +- [ ] PII handled according to policy +- [ ] Secrets not hardcoded +- [ ] Logs don't contain sensitive data + +### API Security +- [ ] Rate limiting implemented +- [ ] CORS configured correctly +- [ ] CSRF protection in place +- [ ] API keys/tokens secured +- [ ] Endpoints use HTTPS + +--- + +## Performance + +### Efficiency +- [ ] Appropriate data structures used +- [ ] Algorithms have acceptable complexity +- [ ] Database queries are optimized +- [ ] N+1 query problems avoided +- [ ] Indexes used where beneficial + +### Resource Usage +- [ ] Memory usage bounded +- [ ] No memory leaks +- [ ] File handles properly closed +- [ ] Database connections pooled +- [ ] Network calls minimized + +### Caching 
+- [ ] Appropriate caching strategy +- [ ] Cache invalidation handled +- [ ] Cache keys are unique and predictable +- [ ] TTL values appropriate + +### Scalability +- [ ] Horizontal scaling considered +- [ ] Bottlenecks identified +- [ ] Async processing for long operations +- [ ] Batch operations where appropriate + +--- + +## Maintainability + +### Code Quality +- [ ] Functions/methods have single responsibility +- [ ] Classes follow SOLID principles +- [ ] Code is DRY (Don't Repeat Yourself) +- [ ] No dead code or commented-out code +- [ ] Magic numbers replaced with constants + +### Naming +- [ ] Names are descriptive and consistent +- [ ] Naming follows project conventions +- [ ] No abbreviations that obscure meaning +- [ ] Boolean variables/functions have is/has/can prefix + +### Structure +- [ ] Functions are appropriately sized (<50 lines preferred) +- [ ] Nesting depth is reasonable (<4 levels) +- [ ] Related code is grouped together +- [ ] Dependencies are minimal and explicit + +### Readability +- [ ] Code is self-documenting where possible +- [ ] Complex logic has explanatory comments +- [ ] Formatting is consistent +- [ ] No overly clever or obscure code + +--- + +## Testing + +### Coverage +- [ ] New code has unit tests +- [ ] Critical paths have integration tests +- [ ] Edge cases are tested +- [ ] Error conditions are tested + +### Quality +- [ ] Tests are independent +- [ ] Tests have clear assertions +- [ ] Test names describe what is tested +- [ ] Tests don't depend on external state + +### Mocking +- [ ] External dependencies are mocked +- [ ] Mocks are realistic +- [ ] Mock setup is not excessive + +--- + +## Documentation + +### Code Documentation +- [ ] Public APIs are documented +- [ ] Complex algorithms explained +- [ ] Non-obvious decisions documented +- [ ] TODO/FIXME comments have context + +### External Documentation +- [ ] README updated if needed +- [ ] API documentation updated +- [ ] Changelog updated +- [ ] Migration guides 
provided + +--- + +## Language-Specific Checks + +### TypeScript/JavaScript +- [ ] Types are explicit (avoid `any`) +- [ ] Null checks present (`?.`, `??`) +- [ ] Async/await errors handled +- [ ] No floating promises +- [ ] Memory leaks from closures checked + +### Python +- [ ] Type hints used for public APIs +- [ ] Context managers for resources (`with` statements) +- [ ] Exception handling is specific (not bare `except`) +- [ ] No mutable default arguments +- [ ] List comprehensions used appropriately + +### Go +- [ ] Errors checked and handled +- [ ] Goroutine leaks prevented +- [ ] Context propagation correct +- [ ] Defer statements in right order +- [ ] Interfaces minimal + +### Swift +- [ ] Optionals handled safely +- [ ] Memory management correct (weak/unowned) +- [ ] Error handling uses Result or throws +- [ ] Access control appropriate +- [ ] Codable implementation correct + +### Kotlin +- [ ] Null safety leveraged +- [ ] Coroutine cancellation handled +- [ ] Data classes used appropriately +- [ ] Extension functions don't obscure behavior +- [ ] Sealed classes for state + +--- + +## Review Process Tips + +### Before Approving +1. Verify all critical checks passed +2. Confirm tests are adequate +3. Consider deployment impact +4. Check for any security concerns +5. 
Ensure documentation is updated + +### Providing Feedback +- Be specific about issues +- Explain why something is problematic +- Suggest alternatives when possible +- Distinguish blockers from suggestions +- Acknowledge good patterns + +### When to Block +- Security vulnerabilities present +- Critical logic errors +- No tests for risky changes +- Breaking changes without migration +- Significant performance regressions diff --git a/.agents/skills/code-reviewer/references/coding_standards.md b/.agents/skills/code-reviewer/references/coding_standards.md new file mode 100644 index 00000000..9fbc6a06 --- /dev/null +++ b/.agents/skills/code-reviewer/references/coding_standards.md @@ -0,0 +1,555 @@ +# Coding Standards + +Language-specific coding standards and conventions for code review. + +--- + +## Table of Contents + +- [Universal Principles](#universal-principles) +- [TypeScript Standards](#typescript-standards) +- [JavaScript Standards](#javascript-standards) +- [Python Standards](#python-standards) +- [Go Standards](#go-standards) +- [Swift Standards](#swift-standards) +- [Kotlin Standards](#kotlin-standards) + +--- + +## Universal Principles + +These apply across all languages. 
+ +### Naming Conventions + +| Element | Convention | Example | +|---------|------------|---------| +| Variables | camelCase (JS/TS), snake_case (Python/Go) | `userName`, `user_name` | +| Constants | SCREAMING_SNAKE_CASE | `MAX_RETRY_COUNT` | +| Functions | camelCase (JS/TS), snake_case (Python) | `getUserById`, `get_user_by_id` | +| Classes | PascalCase | `UserRepository` | +| Interfaces | PascalCase, optionally prefixed | `IUserService` or `UserService` | +| Private members | Prefix with underscore or use access modifiers | `_internalState` | + +### Function Design + +``` +Good functions: +- Do one thing well +- Have descriptive names (verb + noun) +- Take 3 or fewer parameters +- Return early for error cases +- Stay under 50 lines +``` + +### Error Handling + +``` +Good error handling: +- Catch specific errors, not generic exceptions +- Log with context (what, where, why) +- Clean up resources in error paths +- Don't swallow errors silently +- Provide actionable error messages +``` + +--- + +## TypeScript Standards + +### Type Annotations + +```typescript +// Avoid 'any' - use unknown for truly unknown types +function processData(data: unknown): ProcessedResult { + if (isValidData(data)) { + return transform(data); + } + throw new Error('Invalid data format'); +} + +// Use explicit return types for public APIs +export function calculateTotal(items: CartItem[]): number { + return items.reduce((sum, item) => sum + item.price, 0); +} + +// Use type guards for runtime checks +function isUser(obj: unknown): obj is User { + return ( + typeof obj === 'object' && + obj !== null && + 'id' in obj && + 'email' in obj + ); +} +``` + +### Null Safety + +```typescript +// Use optional chaining and nullish coalescing +const userName = user?.profile?.name ?? 
'Anonymous';
+
+// Be explicit about nullable types
+interface Config {
+  timeout: number;
+  retries?: number; // Optional
+  fallbackUrl: string | null; // Explicitly nullable
+}
+
+// Use assertion functions for validation
+function assertDefined<T>(value: T | null | undefined): asserts value is T {
+  if (value === null || value === undefined) {
+    throw new Error('Value is not defined');
+  }
+}
+```
+
+### Async/Await
+
+```typescript
+// Always handle errors in async functions
+async function fetchUser(id: string): Promise<User> {
+  try {
+    const response = await api.get(`/users/${id}`);
+    return response.data;
+  } catch (error) {
+    logger.error('Failed to fetch user', { id, error });
+    throw new UserFetchError(id, error);
+  }
+}
+
+// Use Promise.all for parallel operations
+async function loadDashboard(userId: string): Promise<Dashboard> {
+  const [profile, stats, notifications] = await Promise.all([
+    fetchProfile(userId),
+    fetchStats(userId),
+    fetchNotifications(userId)
+  ]);
+  return { profile, stats, notifications };
+}
+```
+
+### React/Component Standards
+
+```typescript
+// Use explicit prop types
+interface ButtonProps {
+  label: string;
+  onClick: () => void;
+  variant?: 'primary' | 'secondary';
+  disabled?: boolean;
+}
+
+// Prefer functional components with hooks
+function Button({ label, onClick, variant = 'primary', disabled = false }: ButtonProps) {
+  return (
+    <button className={variant} onClick={onClick} disabled={disabled}>
+      {label}
+    </button>
+  );
+}
+
+// Use custom hooks for reusable logic
+function useDebounce<T>(value: T, delay: number): T {
+  const [debouncedValue, setDebouncedValue] = useState<T>(value);
+
+  useEffect(() => {
+    const timer = setTimeout(() => setDebouncedValue(value), delay);
+    return () => clearTimeout(timer);
+  }, [value, delay]);
+
+  return debouncedValue;
+}
+```
+
+---
+
+## JavaScript Standards
+
+### Variable Declarations
+
+```javascript
+// Use const by default, let when reassignment needed
+const MAX_ITEMS = 100;
+let currentCount = 0;
+
+// Never use var
+// var is function-scoped and hoisted, leading to bugs
+```
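The hoisting comment above is easiest to see in the classic loop-capture bug; a minimal, runnable sketch (Node or any browser console):

```javascript
// var is function-scoped: every callback closes over the same `i` binding,
// so by the time the callbacks run, the loop has already finished at i === 3.
const withVar = [];
for (var i = 0; i < 3; i++) {
  withVar.push(() => i);
}
console.log(withVar.map(f => f())); // [3, 3, 3]

// let is block-scoped: each iteration gets a fresh binding of `j`.
const withLet = [];
for (let j = 0; j < 3; j++) {
  withLet.push(() => j);
}
console.log(withLet.map(f => f())); // [0, 1, 2]
```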
+
+### Object and Array Patterns
+
+```javascript
+// Use object destructuring
+const { name, email, role = 'user' } = user;
+
+// Use spread for immutable updates
+const updatedUser = { ...user, lastLogin: new Date() };
+const updatedList = [...items, newItem];
+
+// Use array methods over loops
+const activeUsers = users.filter(u => u.isActive);
+const emails = users.map(u => u.email);
+const total = orders.reduce((sum, o) => sum + o.amount, 0);
+```
+
+### Module Patterns
+
+```javascript
+// Use named exports for utilities
+export function formatDate(date) { ... }
+export function parseDate(str) { ... }
+
+// Use default export for main component/class
+export default class UserService { ... }
+
+// Group related exports
+export { formatDate, parseDate, isValidDate } from './dateUtils';
+```
+
+---
+
+## Python Standards
+
+### Type Hints (PEP 484)
+
+```python
+from typing import Optional, List, Dict
+
+def get_user(user_id: int) -> Optional[User]:
+    """Fetch user by ID, returns None if not found."""
+    return db.query(User).filter(User.id == user_id).first()
+
+def process_items(items: List[str]) -> Dict[str, int]:
+    """Count occurrences of each item."""
+    return {item: items.count(item) for item in set(items)}
+
+def send_notification(
+    user: User,
+    message: str,
+    *,
+    priority: str = "normal",
+    channels: Optional[List[str]] = None
+) -> bool:
+    """Send notification to user via specified channels."""
+    channels = channels or ["email"]
+    # Implementation
+```
+
+### Exception Handling
+
+```python
+# Catch specific exceptions
+try:
+    result = api_client.fetch_data(endpoint)
+except ConnectionError as e:
+    logger.warning(f"Connection failed: {e}")
+    return cached_data
+except TimeoutError as e:
+    logger.error(f"Request timed out: {e}")
+    raise ServiceUnavailableError() from e
+
+# Use context managers for resources
+with open(filepath, 'r') as f:
+    data = json.load(f)
+
+# Custom exceptions should be informative
+class ValidationError(Exception):
+    def 
__init__(self, field: str, message: str):
+        self.field = field
+        self.message = message
+        super().__init__(f"{field}: {message}")
+```
+
+### Class Design
+
+```python
+from abc import ABC, abstractmethod
+from dataclasses import dataclass
+from decimal import Decimal
+from typing import List, Optional
+
+# Use dataclasses for data containers
+@dataclass
+class UserDTO:
+    id: int
+    email: str
+    name: str
+    is_active: bool = True
+
+# Use ABC for interfaces
+class Repository(ABC):
+    @abstractmethod
+    def find_by_id(self, id: int) -> Optional[Entity]:
+        pass
+
+    @abstractmethod
+    def save(self, entity: Entity) -> Entity:
+        pass
+
+# Use properties for computed attributes
+class Order:
+    def __init__(self, items: List[OrderItem]):
+        self._items = items
+
+    @property
+    def total(self) -> Decimal:
+        return sum(item.price * item.quantity for item in self._items)
+```
+
+---
+
+## Go Standards
+
+### Error Handling
+
+```go
+// Always check errors
+file, err := os.Open(filename)
+if err != nil {
+    return fmt.Errorf("failed to open %s: %w", filename, err)
+}
+defer file.Close()
+
+// Use custom error types for specific cases
+type ValidationError struct {
+    Field   string
+    Message string
+}
+
+func (e *ValidationError) Error() string {
+    return fmt.Sprintf("%s: %s", e.Field, e.Message)
+}
+
+// Wrap errors with context
+rows, err := db.Query(query)
+if err != nil {
+    return fmt.Errorf("query failed for user %d: %w", userID, err)
+}
+defer rows.Close()
+```
+
+### Struct Design
+
+```go
+// Use unexported fields with exported methods
+type UserService struct {
+    repo   UserRepository
+    cache  Cache
+    logger Logger
+}
+
+// Constructor functions for initialization
+func NewUserService(repo UserRepository, cache Cache, logger Logger) *UserService {
+    return &UserService{
+        repo:   repo,
+        cache:  cache,
+        logger: logger,
+    }
+}
+
+// Keep interfaces small
+type Reader interface {
+    Read(p []byte) (n int, err error)
+}
+
+type Writer interface {
+    Write(p []byte) (n int, err error)
+}
+```
+
+### Concurrency
+
+```go
+// Use context for cancellation
+func fetchData(ctx 
context.Context, url string) ([]byte, error) { + req, err := http.NewRequestWithContext(ctx, "GET", url, nil) + if err != nil { + return nil, err + } + // ... +} + +// Use channels for communication +func worker(jobs <-chan Job, results chan<- Result) { + for job := range jobs { + result := process(job) + results <- result + } +} + +// Use sync.WaitGroup for coordination +var wg sync.WaitGroup +for _, item := range items { + wg.Add(1) + go func(i Item) { + defer wg.Done() + processItem(i) + }(item) +} +wg.Wait() +``` + +--- + +## Swift Standards + +### Optionals + +```swift +// Use optional binding +if let user = fetchUser(id: userId) { + displayProfile(user) +} + +// Use guard for early exit +guard let data = response.data else { + throw NetworkError.noData +} + +// Use nil coalescing for defaults +let displayName = user.nickname ?? user.email + +// Avoid force unwrapping except in tests +// BAD: let name = user.name! +// GOOD: guard let name = user.name else { return } +``` + +### Protocol-Oriented Design + +```swift +// Define protocols with minimal requirements +protocol Identifiable { + var id: String { get } +} + +protocol Persistable: Identifiable { + func save() throws + static func find(by id: String) -> Self? 
+}
+
+// Use protocol extensions for default implementations
+extension Persistable {
+    func save() throws {
+        try Storage.shared.save(self)
+    }
+}
+
+// Prefer composition over inheritance
+struct User: Identifiable, Codable {
+    let id: String
+    var name: String
+    var email: String
+}
+```
+
+### Error Handling
+
+```swift
+// Define domain-specific errors
+enum AuthError: Error {
+    case invalidCredentials
+    case tokenExpired
+    case networkFailure(underlying: Error)
+}
+
+// Use Result type for async operations
+func authenticate(
+    email: String,
+    password: String,
+    completion: @escaping (Result<User, AuthError>) -> Void
+)
+
+// Use throws for synchronous operations
+func validate(_ input: String) throws -> ValidatedInput {
+    guard !input.isEmpty else {
+        throw ValidationError.emptyInput
+    }
+    return ValidatedInput(value: input)
+}
+```
+
+---
+
+## Kotlin Standards
+
+### Null Safety
+
+```kotlin
+// Use nullable types explicitly
+fun findUser(id: Int): User? {
+    return userRepository.find(id)
+}
+
+// Use safe calls and elvis operator
+val name = user?.profile?.name ?: "Unknown"
+
+// Use let for null checks with side effects
+user?.let { activeUser ->
+    sendWelcomeEmail(activeUser.email)
+    logActivity(activeUser.id)
+}
+
+// Use require/check for validation
+fun processPayment(amount: Double) {
+    require(amount > 0) { "Amount must be positive: $amount" }
+    // Process
+}
+```
+
+### Data Classes and Sealed Classes
+
+```kotlin
+// Use data classes for DTOs
+data class UserDTO(
+    val id: Int,
+    val email: String,
+    val name: String,
+    val isActive: Boolean = true
+)
+
+// Use sealed classes for state
+sealed class Result<out T> {
+    data class Success<T>(val data: T) : Result<T>()
+    data class Error(val message: String, val cause: Throwable? = null) : Result<Nothing>()
+    object Loading : Result<Nothing>()
+}
+
+// Pattern matching with when
+fun handleResult(result: Result<User>) = when (result) {
+    is Result.Success -> showUser(result.data)
+    is Result.Error -> showError(result.message)
+    Result.Loading -> showLoading()
+}
+```
+
+### Coroutines
+
+```kotlin
+// Use structured concurrency
+suspend fun loadDashboard(): Dashboard = coroutineScope {
+    val profile = async { fetchProfile() }
+    val stats = async { fetchStats() }
+    val notifications = async { fetchNotifications() }
+
+    Dashboard(
+        profile = profile.await(),
+        stats = stats.await(),
+        notifications = notifications.await()
+    )
+}
+
+// Handle cancellation
+suspend fun fetchWithRetry(url: String): Response {
+    repeat(3) { attempt ->
+        try {
+            return httpClient.get(url)
+        } catch (e: IOException) {
+            if (attempt == 2) throw e
+            delay(1000L * (attempt + 1))
+        }
+    }
+    throw IllegalStateException("Unreachable")
+}
+```
diff --git a/.agents/skills/code-reviewer/references/common_antipatterns.md b/.agents/skills/code-reviewer/references/common_antipatterns.md
new file mode 100644
index 00000000..26045452
--- /dev/null
+++ b/.agents/skills/code-reviewer/references/common_antipatterns.md
@@ -0,0 +1,739 @@
+# Common Antipatterns
+
+Code antipatterns to identify during review, with examples and fixes.
+
+---
+
+## Table of Contents
+
+- [Structural Antipatterns](#structural-antipatterns)
+- [Logic Antipatterns](#logic-antipatterns)
+- [Security Antipatterns](#security-antipatterns)
+- [Performance Antipatterns](#performance-antipatterns)
+- [Testing Antipatterns](#testing-antipatterns)
+- [Async Antipatterns](#async-antipatterns)
+
+---
+
+## Structural Antipatterns
+
+### God Class
+
+A class that does too much and knows too much.
+
+```typescript
+// BAD: God class handling everything
+class UserManager {
+  createUser(data: UserData) { ... }
+  updateUser(id: string, data: UserData) { ... }
+  deleteUser(id: string) { ... }
+  sendEmail(userId: string, content: string) { ... 
}
+  generateReport(userId: string) { ... }
+  validatePassword(password: string) { ... }
+  hashPassword(password: string) { ... }
+  uploadAvatar(userId: string, file: File) { ... }
+  resizeImage(file: File) { ... }
+  logActivity(userId: string, action: string) { ... }
+  // 50 more methods...
+}
+
+// GOOD: Single responsibility classes
+class UserRepository {
+  create(data: UserData): User { ... }
+  update(id: string, data: Partial<UserData>): User { ... }
+  delete(id: string): void { ... }
+}
+
+class EmailService {
+  send(to: string, content: string): void { ... }
+}
+
+class PasswordService {
+  validate(password: string): ValidationResult { ... }
+  hash(password: string): string { ... }
+}
+```
+
+**Detection:** Class has >20 methods, >500 lines, or handles unrelated concerns.
+
+---
+
+### Long Method
+
+Functions that do too much and are hard to understand.
+
+```python
+# BAD: Long method doing everything
+def process_order(order_data):
+    # Validate order (20 lines)
+    if not order_data.get('items'):
+        raise ValueError('No items')
+    if not order_data.get('customer_id'):
+        raise ValueError('No customer')
+    # ... more validation
+
+    # Calculate totals (30 lines)
+    subtotal = 0
+    for item in order_data['items']:
+        price = get_product_price(item['product_id'])
+        subtotal += price * item['quantity']
+    # ... tax calculation, discounts
+
+    # Process payment (40 lines)
+    payment_result = payment_gateway.charge(...)
+    # ... handle payment errors
+
+    # Create order record (20 lines)
+    order = Order.create(...)
+
+    # Send notifications (20 lines)
+    send_order_confirmation(...)
+    notify_warehouse(...)
+ + return order + +# GOOD: Composed of focused functions +def process_order(order_data): + validate_order(order_data) + totals = calculate_order_totals(order_data) + payment = process_payment(order_data['customer_id'], totals) + order = create_order_record(order_data, totals, payment) + send_order_notifications(order) + return order +``` + +**Detection:** Function >50 lines or requires scrolling to read. + +--- + +### Deep Nesting + +Excessive indentation making code hard to follow. + +```javascript +// BAD: Deep nesting +function processData(data) { + if (data) { + if (data.items) { + if (data.items.length > 0) { + for (const item of data.items) { + if (item.isValid) { + if (item.type === 'premium') { + if (item.price > 100) { + // Finally do something + processItem(item); + } + } + } + } + } + } + } +} + +// GOOD: Early returns and guard clauses +function processData(data) { + if (!data?.items?.length) { + return; + } + + const premiumItems = data.items.filter( + item => item.isValid && item.type === 'premium' && item.price > 100 + ); + + premiumItems.forEach(processItem); +} +``` + +**Detection:** Indentation >4 levels deep. + +--- + +### Magic Numbers and Strings + +Hard-coded values without explanation. + +```go +// BAD: Magic numbers +func calculateDiscount(total float64, userType int) float64 { + if userType == 1 { + return total * 0.15 + } else if userType == 2 { + return total * 0.25 + } + return total * 0.05 +} + +// GOOD: Named constants +const ( + UserTypeRegular = 1 + UserTypePremium = 2 + + DiscountRegular = 0.05 + DiscountStandard = 0.15 + DiscountPremium = 0.25 +) + +func calculateDiscount(total float64, userType int) float64 { + switch userType { + case UserTypePremium: + return total * DiscountPremium + case UserTypeRegular: + return total * DiscountStandard + default: + return total * DiscountRegular + } +} +``` + +**Detection:** Literal numbers (except 0, 1) or repeated string literals. 
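The detection heuristics in this catalog lend themselves to quick tooling. A minimal sketch of a magic-number scan — illustrative only: `find_magic_numbers` is a hypothetical helper, and the skill's actual `code_quality_checker` may use different rules:

```python
import re

# Match standalone integer/float literals, not parts of identifiers or versions.
NUM = re.compile(r'(?<![\w.])\d+(?:\.\d+)?(?![\w.])')

def find_magic_numbers(source: str) -> list:
    """Return (line_number, literal) pairs for literals other than 0 and 1."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        code = line.split('#', 1)[0]  # ignore comments
        for match in NUM.finditer(code):
            if match.group() not in ('0', '1'):
                hits.append((lineno, match.group()))
    return hits

sample = "def discount(total):\n    return total * 0.15  # magic rate\n"
print(find_magic_numbers(sample))  # [(2, '0.15')]
```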
+
+---
+
+### Primitive Obsession
+
+Using primitives instead of small objects.
+
+```typescript
+// BAD: Primitives everywhere
+function createUser(
+  name: string,
+  email: string,
+  phone: string,
+  street: string,
+  city: string,
+  zipCode: string,
+  country: string
+): User { ... }
+
+// GOOD: Value objects
+interface Address {
+  street: string;
+  city: string;
+  zipCode: string;
+  country: string;
+}
+
+interface ContactInfo {
+  email: string;
+  phone: string;
+}
+
+function createUser(
+  name: string,
+  contact: ContactInfo,
+  address: Address
+): User { ... }
+```
+
+**Detection:** Functions with >4 parameters of same type, or related primitives always passed together.
+
+---
+
+## Logic Antipatterns
+
+### Boolean Blindness
+
+Passing booleans that make code unreadable at call sites.
+
+```swift
+// BAD: What do these booleans mean?
+user.configure(true, false, true, false)
+
+// GOOD: Named parameters or option objects
+user.configure(
+    sendWelcomeEmail: true,
+    requireVerification: false,
+    enableNotifications: true,
+    isAdmin: false
+)
+
+// Or use an options struct
+struct UserConfiguration {
+    var sendWelcomeEmail: Bool = true
+    var requireVerification: Bool = false
+    var enableNotifications: Bool = true
+    var isAdmin: Bool = false
+}
+
+user.configure(UserConfiguration())
+```
+
+**Detection:** Function calls with multiple boolean literals.
+
+---
+
+### Null Returns for Collections
+
+Returning null instead of empty collections.
+
+```kotlin
+// BAD: Returning null
+fun findUsersByRole(role: String): List<User>? {
+    val users = repository.findByRole(role)
+    return if (users.isEmpty()) null else users
+}
+
+// Caller must handle null
+val users = findUsersByRole("admin")
+if (users != null) {
+    users.forEach { ... }
+}
+
+// GOOD: Return empty collection
+fun findUsersByRole(role: String): List<User> {
+    return repository.findByRole(role)
+}
+
+// Caller can iterate directly
+findUsersByRole("admin").forEach { ... 
} +``` + +**Detection:** Functions returning nullable collections. + +--- + +### Stringly Typed Code + +Using strings where enums or types should be used. + +```python +# BAD: String-based logic +def handle_event(event_type: str, data: dict): + if event_type == "user_created": + handle_user_created(data) + elif event_type == "user_updated": + handle_user_updated(data) + elif event_type == "user_dleted": # Typo won't be caught + handle_user_deleted(data) + +# GOOD: Enum-based +from enum import Enum + +class EventType(Enum): + USER_CREATED = "user_created" + USER_UPDATED = "user_updated" + USER_DELETED = "user_deleted" + +def handle_event(event_type: EventType, data: dict): + handlers = { + EventType.USER_CREATED: handle_user_created, + EventType.USER_UPDATED: handle_user_updated, + EventType.USER_DELETED: handle_user_deleted, + } + handlers[event_type](data) +``` + +**Detection:** String comparisons for type/status/category values. + +--- + +## Security Antipatterns + +### SQL Injection + +String concatenation in SQL queries. + +```javascript +// BAD: String concatenation +const query = `SELECT * FROM users WHERE id = ${userId}`; +db.query(query); + +// BAD: String templates still vulnerable +const query = `SELECT * FROM users WHERE name = '${userName}'`; + +// GOOD: Parameterized queries +const query = 'SELECT * FROM users WHERE id = $1'; +db.query(query, [userId]); + +// GOOD: Using ORM safely +User.findOne({ where: { id: userId } }); +``` + +**Detection:** String concatenation or template literals with SQL keywords. + +--- + +### Hardcoded Credentials + +Secrets in source code. 
+
+```python
+# BAD: Hardcoded secrets
+API_KEY = "sk-abc123xyz789"
+DATABASE_URL = "postgresql://admin:password123@prod-db.internal:5432/app"
+
+# GOOD: Environment variables
+import os
+
+API_KEY = os.environ["API_KEY"]
+DATABASE_URL = os.environ["DATABASE_URL"]
+
+# GOOD: Secrets manager (e.g. AWS Secrets Manager via boto3)
+import boto3
+
+secrets = boto3.client("secretsmanager")
+API_KEY = secrets.get_secret_value(SecretId="api-key")["SecretString"]
+```
+
+**Detection:** Variables named `password`, `secret`, `key`, `token` with string literals.
+
+---
+
+### Unsafe Deserialization
+
+Deserializing untrusted data without validation.
+
+```python
+# BAD: Binary serialization from untrusted sources can execute arbitrary code
+# Examples: pickle.loads on untrusted bytes, yaml.load without SafeLoader
+
+# GOOD: Use safe alternatives
+import json
+
+def load_data(file_path):
+    with open(file_path, 'r') as f:
+        return json.load(f)
+
+# GOOD: Use SafeLoader for YAML
+import yaml
+
+with open('config.yaml') as f:
+    config = yaml.safe_load(f)
+```
+
+**Detection:** Binary deserialization functions, yaml.load without safe loader, dynamic code execution on external data.
+
+---
+
+### Missing Input Validation
+
+Trusting user input without validation.
+
+```typescript
+// BAD: No validation
+app.post('/user', (req, res) => {
+  const user = db.create({
+    name: req.body.name,
+    email: req.body.email,
+    role: req.body.role // User can set themselves as admin!
+  });
+  res.json(user);
+});
+
+// GOOD: Validate and sanitize
+import { z } from 'zod';
+
+const CreateUserSchema = z.object({
+  name: z.string().min(1).max(100),
+  email: z.string().email(),
+  // role is NOT accepted from input
+});
+
+app.post('/user', (req, res) => {
+  const validated = CreateUserSchema.parse(req.body);
+  const user = db.create({
+    ...validated,
+    role: 'user' // Default role, not from input
+  });
+  res.json(user);
+});
+```
+
+**Detection:** Request body/params used directly without validation schema.
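+Each **Detection** note above boils down to a pattern scan. As a minimal, hypothetical sketch of the idea (the function name and regexes here are illustrative assumptions, not the exact rules used by the bundled `pr_analyzer.py`), flagging raw request input that lacks a nearby validation call might look like:
+
+```python
+import re
+
+# Illustrative heuristics: raw Express-style request input, and a few common
+# validation-call shapes (zod .parse/.safeParse, generic .validate).
+RAW_INPUT = re.compile(r"req\.(body|params|query)\b")
+VALIDATION_HINTS = re.compile(r"\.(parse|safeParse|validate)\(")
+
+def flags_unvalidated_input(handler_source: str) -> bool:
+    """True when raw request input appears with no validation hint."""
+    return bool(RAW_INPUT.search(handler_source)) and not VALIDATION_HINTS.search(handler_source)
+
+print(flags_unvalidated_input("db.create({ name: req.body.name })"))        # True
+print(flags_unvalidated_input("const v = CreateUserSchema.parse(req.body)"))  # False
+```
+
+Real tooling layers many such patterns with per-rule severities, which is exactly how the scripts in this skill work.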
+
+---
+
+## Performance Antipatterns
+
+### N+1 Query Problem
+
+Loading related data one record at a time.
+
+```python
+# BAD: N+1 queries
+def get_orders_with_items():
+    orders = Order.query.all()  # 1 query
+    for order in orders:
+        items = OrderItem.query.filter_by(order_id=order.id).all()  # N queries
+        order.items = items
+    return orders
+
+# GOOD: Eager loading
+def get_orders_with_items():
+    return Order.query.options(
+        joinedload(Order.items)
+    ).all()  # 1 query with JOIN
+
+# GOOD: Batch loading
+def get_orders_with_items():
+    orders = Order.query.all()
+    order_ids = [o.id for o in orders]
+    items = OrderItem.query.filter(
+        OrderItem.order_id.in_(order_ids)
+    ).all()  # 2 queries total
+    # Group items by order_id...
+```
+
+**Detection:** Database queries inside loops.
+
+---
+
+### Unbounded Collections
+
+Loading unlimited data into memory.
+
+```go
+// BAD: Load all records
+func GetAllUsers() ([]User, error) {
+    var users []User
+    err := db.Find(&users).Error // Could be millions of rows
+    return users, err
+}
+
+// GOOD: Pagination
+func GetUsers(page, pageSize int) ([]User, error) {
+    var users []User
+    offset := (page - 1) * pageSize
+    err := db.Limit(pageSize).Offset(offset).Find(&users).Error
+    return users, err
+}
+
+// GOOD: Streaming for large datasets
+func ProcessAllUsers(handler func(User) error) error {
+    rows, err := db.Model(&User{}).Rows()
+    if err != nil {
+        return err
+    }
+    defer rows.Close()
+
+    for rows.Next() {
+        var user User
+        if err := db.ScanRows(rows, &user); err != nil {
+            return err
+        }
+        if err := handler(user); err != nil {
+            return err
+        }
+    }
+    return nil
+}
+```
+
+**Detection:** `findAll()`, `find({})`, or queries without `LIMIT`.
+
+---
+
+### Synchronous I/O in Hot Paths
+
+Blocking operations in request handlers.
+ +```javascript +// BAD: Sync file read on every request +app.get('/config', (req, res) => { + const config = fs.readFileSync('./config.json'); // Blocks event loop + res.json(JSON.parse(config)); +}); + +// GOOD: Load once at startup +const config = JSON.parse(fs.readFileSync('./config.json')); + +app.get('/config', (req, res) => { + res.json(config); +}); + +// GOOD: Async with caching +let configCache = null; + +app.get('/config', async (req, res) => { + if (!configCache) { + configCache = JSON.parse(await fs.promises.readFile('./config.json')); + } + res.json(configCache); +}); +``` + +**Detection:** `readFileSync`, `execSync`, or blocking calls in request handlers. + +--- + +## Testing Antipatterns + +### Test Code Duplication + +Repeating setup in every test. + +```typescript +// BAD: Duplicate setup +describe('UserService', () => { + it('should create user', async () => { + const db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + const service = new UserService(userRepo, emailService); + + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); + + it('should update user', async () => { + const db = await createTestDatabase(); // Duplicated + const userRepo = new UserRepository(db); // Duplicated + const emailService = new MockEmailService(); // Duplicated + const service = new UserService(userRepo, emailService); // Duplicated + + // ... 
+ }); +}); + +// GOOD: Shared setup +describe('UserService', () => { + let service: UserService; + let db: TestDatabase; + + beforeEach(async () => { + db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + service = new UserService(userRepo, emailService); + }); + + afterEach(async () => { + await db.cleanup(); + }); + + it('should create user', async () => { + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); +}); +``` + +--- + +### Testing Implementation Instead of Behavior + +Tests coupled to internal implementation. + +```python +# BAD: Testing implementation details +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing internal structure + assert cart._items[0].name == "Apple" + assert cart._total == 1.00 + +# GOOD: Testing behavior +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing public behavior + assert cart.item_count == 1 + assert cart.total == 1.00 + assert cart.contains("Apple") +``` + +--- + +## Async Antipatterns + +### Floating Promises + +Promises without await or catch. + +```typescript +// BAD: Floating promise +async function saveUser(user: User) { + db.save(user); // Not awaited, errors lost + logger.info('User saved'); // Logs before save completes +} + +// BAD: Fire and forget in loop +for (const item of items) { + processItem(item); // All run in parallel, no error handling +} + +// GOOD: Await the promise +async function saveUser(user: User) { + await db.save(user); + logger.info('User saved'); +} + +// GOOD: Process with proper handling +await Promise.all(items.map(item => processItem(item))); + +// Or sequentially +for (const item of items) { + await processItem(item); +} +``` + +**Detection:** Async function calls without `await` or `.then()`. + +--- + +### Callback Hell + +Deeply nested callbacks. 
+
+```javascript
+// BAD: Callback hell
+getUser(userId, (err, user) => {
+  if (err) return handleError(err);
+  getOrders(user.id, (err, orders) => {
+    if (err) return handleError(err);
+    getProducts(orders[0].productIds, (err, products) => {
+      if (err) return handleError(err);
+      renderPage(user, orders, products, (err) => {
+        if (err) return handleError(err);
+        console.log('Done');
+      });
+    });
+  });
+});
+
+// GOOD: Async/await
+async function loadPage(userId) {
+  try {
+    const user = await getUser(userId);
+    const orders = await getOrders(user.id);
+    const products = await getProducts(orders[0].productIds);
+    await renderPage(user, orders, products);
+    console.log('Done');
+  } catch (err) {
+    handleError(err);
+  }
+}
+```
+
+**Detection:** >2 levels of callback nesting.
+
+---
+
+### Async in Constructor
+
+Async operations in constructors.
+
+```typescript
+// BAD: Async in constructor
+class DatabaseConnection {
+  constructor(url: string) {
+    this.connect(url); // Fire-and-forget async
+  }
+
+  private async connect(url: string) {
+    this.client = await createClient(url);
+  }
+}
+
+// GOOD: Factory method
+class DatabaseConnection {
+  private constructor(private client: Client) {}
+
+  static async create(url: string): Promise<DatabaseConnection> {
+    const client = await createClient(url);
+    return new DatabaseConnection(client);
+  }
+}
+
+// Usage
+const db = await DatabaseConnection.create(url);
+```
+
+**Detection:** `async` calls or `.then()` in constructor.
diff --git a/.agents/skills/code-reviewer/scripts/code_quality_checker.py b/.agents/skills/code-reviewer/scripts/code_quality_checker.py
new file mode 100644
index 00000000..6f4e802d
--- /dev/null
+++ b/.agents/skills/code-reviewer/scripts/code_quality_checker.py
@@ -0,0 +1,560 @@
+#!/usr/bin/env python3
+"""
+Code Quality Checker
+
+Analyzes source code for quality issues, code smells, complexity metrics,
+and SOLID principle violations.
+ +Usage: + python .agents/skills/code-reviewer/scripts/code_quality_checker.py /path/to/file.py + python .agents/skills/code-reviewer/scripts/code_quality_checker.py /path/to/directory --recursive + python .agents/skills/code-reviewer/scripts/code_quality_checker.py . --language typescript --json +""" + +import argparse +import json +import re +import sys +from pathlib import Path +from typing import Dict, List, Optional + + +# Language-specific file extensions +LANGUAGE_EXTENSIONS = { + "python": [".py"], + "typescript": [".ts", ".tsx"], + "javascript": [".js", ".jsx", ".mjs"], + "go": [".go"], + "swift": [".swift"], + "kotlin": [".kt", ".kts"] +} + +# Code smell thresholds +THRESHOLDS = { + "long_function_lines": 50, + "too_many_parameters": 5, + "high_complexity": 10, + "god_class_methods": 20, + "max_imports": 15 +} + + +def get_file_extension(filepath: Path) -> str: + """Get file extension.""" + return filepath.suffix.lower() + + +def detect_language(filepath: Path) -> Optional[str]: + """Detect programming language from file extension.""" + ext = get_file_extension(filepath) + for lang, extensions in LANGUAGE_EXTENSIONS.items(): + if ext in extensions: + return lang + return None + + +def read_file_content(filepath: Path) -> str: + """Read file content safely.""" + try: + with open(filepath, "r", encoding="utf-8", errors="ignore") as f: + return f.read() + except Exception: + return "" + + +def calculate_cyclomatic_complexity(content: str) -> int: + """ + Estimate cyclomatic complexity based on control flow keywords. 
+ """ + complexity = 1 # Base complexity + + # Control flow patterns that increase complexity + patterns = [ + r"\bif\b", + r"\belif\b", + r"\belse\b", + r"\bfor\b", + r"\bwhile\b", + r"\bcase\b", + r"\bcatch\b", + r"\bexcept\b", + r"\band\b", + r"\bor\b", + r"\|\|", + r"&&" + ] + + for pattern in patterns: + matches = re.findall(pattern, content, re.IGNORECASE) + complexity += len(matches) + + return complexity + + +def count_lines(content: str) -> Dict[str, int]: + """Count different types of lines in code.""" + lines = content.split("\n") + total = len(lines) + blank = sum(1 for line in lines if not line.strip()) + comment = 0 + + for line in lines: + stripped = line.strip() + if stripped.startswith("#") or stripped.startswith("//"): + comment += 1 + elif stripped.startswith("/*") or stripped.startswith("'''") or stripped.startswith('"""'): + comment += 1 + + code = total - blank - comment + + return { + "total": total, + "code": code, + "blank": blank, + "comment": comment + } + + +def find_functions(content: str, language: str) -> List[Dict]: + """Find function definitions and their metrics.""" + functions = [] + + # Language-specific function patterns + patterns = { + "python": r"def\s+(\w+)\s*\(([^)]*)\)", + "typescript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "javascript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "go": r"func\s+(?:\([^)]+\)\s+)?(\w+)\s*\(([^)]*)\)", + "swift": r"func\s+(\w+)\s*\(([^)]*)\)", + "kotlin": r"fun\s+(\w+)\s*\(([^)]*)\)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content, re.MULTILINE) + + for match in matches: + name = next((g for g in match.groups() if g), "anonymous") + params_str = match.group(2) if len(match.groups()) > 1 and match.group(2) else "" + + # Count parameters + params = [p.strip() for p in params_str.split(",") if p.strip()] + param_count = len(params) + + # Estimate 
function length + start_pos = match.end() + remaining = content[start_pos:] + + next_func = re.search(pattern, remaining) + if next_func: + func_body = remaining[:next_func.start()] + else: + func_body = remaining[:min(2000, len(remaining))] + + line_count = len(func_body.split("\n")) + complexity = calculate_cyclomatic_complexity(func_body) + + functions.append({ + "name": name, + "parameters": param_count, + "lines": line_count, + "complexity": complexity + }) + + return functions + + +def find_classes(content: str, language: str) -> List[Dict]: + """Find class definitions and their metrics.""" + classes = [] + + patterns = { + "python": r"class\s+(\w+)", + "typescript": r"class\s+(\w+)", + "javascript": r"class\s+(\w+)", + "go": r"type\s+(\w+)\s+struct", + "swift": r"class\s+(\w+)", + "kotlin": r"class\s+(\w+)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content) + + for match in matches: + name = match.group(1) + + start_pos = match.end() + remaining = content[start_pos:] + + next_class = re.search(pattern, remaining) + if next_class: + class_body = remaining[:next_class.start()] + else: + class_body = remaining + + # Count methods + method_patterns = { + "python": r"def\s+\w+\s*\(", + "typescript": r"(?:public|private|protected)?\s*\w+\s*\([^)]*\)\s*[:{]", + "javascript": r"\w+\s*\([^)]*\)\s*\{", + "go": r"func\s+\(", + "swift": r"func\s+\w+", + "kotlin": r"fun\s+\w+" + } + method_pattern = method_patterns.get(language, method_patterns["python"]) + methods = len(re.findall(method_pattern, class_body)) + + classes.append({ + "name": name, + "methods": methods, + "lines": len(class_body.split("\n")) + }) + + return classes + + +def check_code_smells(content: str, functions: List[Dict], classes: List[Dict]) -> List[Dict]: + """Check for code smells in the content.""" + smells = [] + + # Long functions + for func in functions: + if func["lines"] > THRESHOLDS["long_function_lines"]: + smells.append({ + "type": 
"long_function",
+                "severity": "medium",
+                "message": f"Function '{func['name']}' has {func['lines']} lines (max: {THRESHOLDS['long_function_lines']})",
+                "location": func["name"]
+            })
+
+    # Too many parameters
+    for func in functions:
+        if func["parameters"] > THRESHOLDS["too_many_parameters"]:
+            smells.append({
+                "type": "too_many_parameters",
+                "severity": "low",
+                "message": f"Function '{func['name']}' has {func['parameters']} parameters (max: {THRESHOLDS['too_many_parameters']})",
+                "location": func["name"]
+            })
+
+    # High complexity
+    for func in functions:
+        if func["complexity"] > THRESHOLDS["high_complexity"]:
+            severity = "high" if func["complexity"] > 20 else "medium"
+            smells.append({
+                "type": "high_complexity",
+                "severity": severity,
+                "message": f"Function '{func['name']}' has complexity {func['complexity']} (max: {THRESHOLDS['high_complexity']})",
+                "location": func["name"]
+            })
+
+    # God classes
+    for cls in classes:
+        if cls["methods"] > THRESHOLDS["god_class_methods"]:
+            smells.append({
+                "type": "god_class",
+                "severity": "high",
+                "message": f"Class '{cls['name']}' has {cls['methods']} methods (max: {THRESHOLDS['god_class_methods']})",
+                "location": cls["name"]
+            })
+
+    # Magic numbers: bare multi-digit literals not preceded by a dot
+    magic_pattern = r"\b(?<!\.)\d{2,}\b"
+    magic_count = len(re.findall(magic_pattern, content))
+    if magic_count > 10:
+        smells.append({
+            "type": "magic_numbers",
+            "severity": "low",
+            "message": f"Found {magic_count} numeric literals - consider named constants",
+            "location": "file"
+        })
+
+    return smells
+
+
+def check_solid_violations(content: str) -> List[Dict]:
+    """Check for potential SOLID principle violations."""
+    violations = []
+
+    # OCP: Type checking instead of polymorphism
+    type_checks = len(re.findall(r"isinstance\(|type\(.*\)\s*==|typeof\s+\w+\s*===", content))
+    if type_checks > 2:
+        violations.append({
+            "principle": "OCP",
+            "name": "Open/Closed Principle",
+            "severity": "medium",
+            "message": f"Found {type_checks} type checks - consider using polymorphism"
+        })
+
+    # LSP/ISP: NotImplementedError
+    not_impl = len(re.findall(r"raise\s+NotImplementedError|not\s+implemented", content, re.IGNORECASE))
+    if not_impl:
+        violations.append({
+            "principle": "LSP/ISP",
+            "name": "Liskov/Interface Segregation",
+            "severity": "low",
+            "message": f"Found {not_impl} unimplemented methods - may indicate oversized interface"
+        })
+
+    # DIP: Too many direct imports
+    imports = len(re.findall(r"^(?:import|from)\s+", content, re.MULTILINE))
+    if imports > THRESHOLDS["max_imports"]:
+        violations.append({
+            "principle": "DIP",
+            "name": "Dependency Inversion Principle",
+            "severity": "low",
+            "message": f"File has {imports} imports - consider dependency injection"
+        })
+
+    return violations
+
+
+def calculate_quality_score(
+    line_metrics: Dict,
+    functions: List[Dict],
+    classes: List[Dict],
+    smells: List[Dict],
+    violations: List[Dict]
+) -> int:
+    """Calculate overall quality score (0-100)."""
+    score = 100
+
+    # Deduct for code smells
+    for smell in smells:
+        if smell["severity"] == "high":
+            score -= 10
+        elif smell["severity"] == "medium":
+            score -= 5
+        elif smell["severity"] == "low":
+            score -= 2
+
+    # Deduct for SOLID violations
+    for violation in violations:
+        if violation["severity"] == "high":
+            score -= 8
+        elif violation["severity"] == "medium":
+            score -= 4
+        elif violation["severity"] == "low":
+            score -= 2
+
+    # Bonus for good comment ratio (10-30%)
+    if line_metrics["total"] > 0:
+        comment_ratio = line_metrics["comment"] / line_metrics["total"]
+        if 0.1 <= comment_ratio <= 0.3:
+            score += 5
+
+    # 
Bonus for reasonable function sizes + if functions: + avg_lines = sum(f["lines"] for f in functions) / len(functions) + if avg_lines < 30: + score += 5 + + return max(0, min(100, score)) + + +def get_grade(score: int) -> str: + """Convert score to letter grade.""" + if score >= 90: + return "A" + elif score >= 80: + return "B" + elif score >= 70: + return "C" + elif score >= 60: + return "D" + else: + return "F" + + +def analyze_file(filepath: Path) -> Dict: + """Analyze a single file for code quality.""" + language = detect_language(filepath) + if not language: + return {"error": f"Unsupported file type: {filepath.suffix}"} + + content = read_file_content(filepath) + if not content: + return {"error": f"Could not read file: {filepath}"} + + line_metrics = count_lines(content) + functions = find_functions(content, language) + classes = find_classes(content, language) + smells = check_code_smells(content, functions, classes) + violations = check_solid_violations(content) + score = calculate_quality_score(line_metrics, functions, classes, smells, violations) + + return { + "file": str(filepath), + "language": language, + "metrics": { + "lines": line_metrics, + "functions": len(functions), + "classes": len(classes), + "avg_complexity": round(sum(f["complexity"] for f in functions) / max(1, len(functions)), 1) + }, + "quality_score": score, + "grade": get_grade(score), + "smells": smells, + "solid_violations": violations, + "function_details": functions[:10], + "class_details": classes[:10] + } + + +def analyze_directory( + dir_path: Path, + recursive: bool = True, + language: Optional[str] = None +) -> Dict: + """Analyze all files in a directory.""" + results = [] + extensions = [] + + if language: + extensions = LANGUAGE_EXTENSIONS.get(language, []) + else: + for exts in LANGUAGE_EXTENSIONS.values(): + extensions.extend(exts) + + pattern = "**/*" if recursive else "*" + + for ext in extensions: + for filepath in dir_path.glob(f"{pattern}{ext}"): + if "node_modules" 
in str(filepath) or ".git" in str(filepath): + continue + result = analyze_file(filepath) + if "error" not in result: + results.append(result) + + if not results: + return {"error": "No supported files found"} + + total_score = sum(r["quality_score"] for r in results) + avg_score = total_score / len(results) + total_smells = sum(len(r["smells"]) for r in results) + total_violations = sum(len(r["solid_violations"]) for r in results) + + return { + "directory": str(dir_path), + "files_analyzed": len(results), + "average_score": round(avg_score, 1), + "overall_grade": get_grade(int(avg_score)), + "total_code_smells": total_smells, + "total_solid_violations": total_violations, + "files": sorted(results, key=lambda x: x["quality_score"]) + } + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if "error" in analysis: + print(f"Error: {analysis['error']}") + return + + print("=" * 60) + print("CODE QUALITY REPORT") + print("=" * 60) + + if "file" in analysis: + print(f"\nFile: {analysis['file']}") + print(f"Language: {analysis['language']}") + print(f"Quality Score: {analysis['quality_score']}/100 ({analysis['grade']})") + + metrics = analysis["metrics"] + print(f"\nLines: {metrics['lines']['total']} ({metrics['lines']['code']} code, {metrics['lines']['comment']} comments)") + print(f"Functions: {metrics['functions']}") + print(f"Classes: {metrics['classes']}") + print(f"Avg Complexity: {metrics['avg_complexity']}") + + if analysis["smells"]: + print("\n--- CODE SMELLS ---") + for smell in analysis["smells"][:10]: + print(f" [{smell['severity'].upper()}] {smell['message']} ({smell['location']})") + + if analysis["solid_violations"]: + print("\n--- SOLID VIOLATIONS ---") + for v in analysis["solid_violations"]: + print(f" [{v['principle']}] {v['message']}") + else: + print(f"\nDirectory: {analysis['directory']}") + print(f"Files Analyzed: {analysis['files_analyzed']}") + print(f"Average Score: {analysis['average_score']}/100 
({analysis['overall_grade']})") + print(f"Total Code Smells: {analysis['total_code_smells']}") + print(f"Total SOLID Violations: {analysis['total_solid_violations']}") + + print("\n--- FILES BY QUALITY ---") + for f in analysis["files"][:10]: + print(f" {f['quality_score']:3d}/100 [{f['grade']}] {f['file']}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze code quality, smells, and SOLID violations" + ) + parser.add_argument( + "path", + help="File or directory to analyze" + ) + parser.add_argument( + "--recursive", "-r", + action="store_true", + default=True, + help="Recursively analyze directories (default: true)" + ) + parser.add_argument( + "--language", "-l", + choices=list(LANGUAGE_EXTENSIONS.keys()), + help="Filter by programming language" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + target = Path(args.path).resolve() + + if not target.exists(): + print(f"Error: Path does not exist: {target}", file=sys.stderr) + sys.exit(1) + + if target.is_file(): + analysis = analyze_file(target) + else: + analysis = analyze_directory(target, args.recursive, args.language) + + if args.json: + output = json.dumps(analysis, indent=2, default=str) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.agents/skills/code-reviewer/scripts/pr_analyzer.py b/.agents/skills/code-reviewer/scripts/pr_analyzer.py new file mode 100644 index 00000000..901e71cf --- /dev/null +++ b/.agents/skills/code-reviewer/scripts/pr_analyzer.py @@ -0,0 +1,495 @@ +#!/usr/bin/env python3 +""" +PR Analyzer + +Analyzes pull request changes for review complexity, risk assessment, +and generates review priorities. 
+ +Usage: + python .agents/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo + python .agents/skills/code-reviewer/scripts/pr_analyzer.py . --base main --head feature-branch + python .agents/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo --json +""" + +import argparse +import json +import os +import re +import subprocess +import sys +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# File categories for review prioritization +FILE_CATEGORIES = { + "critical": { + "patterns": [ + r"auth", r"security", r"password", r"token", r"secret", + r"payment", r"billing", r"crypto", r"encrypt" + ], + "weight": 5, + "description": "Security-sensitive files requiring careful review" + }, + "high": { + "patterns": [ + r"api", r"database", r"migration", r"schema", r"model", + r"config", r"env", r"middleware" + ], + "weight": 4, + "description": "Core infrastructure files" + }, + "medium": { + "patterns": [ + r"service", r"controller", r"handler", r"util", r"helper" + ], + "weight": 3, + "description": "Business logic files" + }, + "low": { + "patterns": [ + r"test", r"spec", r"mock", r"fixture", r"story", + r"readme", r"docs", r"\.md$" + ], + "weight": 1, + "description": "Tests and documentation" + } +} + +# Risky patterns to flag +RISK_PATTERNS = [ + { + "name": "hardcoded_secrets", + "pattern": r"(password|secret|api_key|token)\s*[=:]\s*['\"][^'\"]+['\"]", + "severity": "critical", + "message": "Potential hardcoded secret detected" + }, + { + "name": "todo_fixme", + "pattern": r"(TODO|FIXME|HACK|XXX):", + "severity": "low", + "message": "TODO/FIXME comment found" + }, + { + "name": "console_log", + "pattern": r"console\.(log|debug|info|warn|error)\(", + "severity": "medium", + "message": "Console statement found (remove for production)" + }, + { + "name": "debugger", + "pattern": r"\bdebugger\b", + "severity": "high", + "message": "Debugger statement found" + }, + { + "name": "disable_eslint", + "pattern": r"eslint-disable", + 
"severity": "medium", + "message": "ESLint rule disabled" + }, + { + "name": "any_type", + "pattern": r":\s*any\b", + "severity": "medium", + "message": "TypeScript 'any' type used" + }, + { + "name": "sql_concatenation", + "pattern": r"(SELECT|INSERT|UPDATE|DELETE).*\+.*['\"]", + "severity": "critical", + "message": "Potential SQL injection (string concatenation in query)" + } +] + + +def run_git_command(cmd: List[str], cwd: Path) -> Tuple[bool, str]: + """Run a git command and return success status and output.""" + try: + result = subprocess.run( + cmd, + cwd=cwd, + capture_output=True, + text=True, + timeout=30 + ) + return result.returncode == 0, result.stdout.strip() + except subprocess.TimeoutExpired: + return False, "Command timed out" + except Exception as e: + return False, str(e) + + +def get_changed_files(repo_path: Path, base: str, head: str) -> List[Dict]: + """Get list of changed files between two refs.""" + success, output = run_git_command( + ["git", "diff", "--name-status", f"{base}...{head}"], + repo_path + ) + + if not success: + # Try without the triple dot (for uncommitted changes) + success, output = run_git_command( + ["git", "diff", "--name-status", base, head], + repo_path + ) + + if not success or not output: + # Fall back to staged changes + success, output = run_git_command( + ["git", "diff", "--name-status", "--cached"], + repo_path + ) + + files = [] + for line in output.split("\n"): + if not line.strip(): + continue + parts = line.split("\t") + if len(parts) >= 2: + status = parts[0][0] # First character of status + filepath = parts[-1] # Handle renames (R100\told\tnew) + status_map = { + "A": "added", + "M": "modified", + "D": "deleted", + "R": "renamed", + "C": "copied" + } + files.append({ + "path": filepath, + "status": status_map.get(status, "modified") + }) + + return files + + +def get_file_diff(repo_path: Path, filepath: str, base: str, head: str) -> str: + """Get diff content for a specific file.""" + success, output = 
run_git_command( + ["git", "diff", f"{base}...{head}", "--", filepath], + repo_path + ) + if not success: + success, output = run_git_command( + ["git", "diff", "--cached", "--", filepath], + repo_path + ) + return output if success else "" + + +def categorize_file(filepath: str) -> Tuple[str, int]: + """Categorize a file based on its path and name.""" + filepath_lower = filepath.lower() + + for category, info in FILE_CATEGORIES.items(): + for pattern in info["patterns"]: + if re.search(pattern, filepath_lower): + return category, info["weight"] + + return "medium", 2 # Default category + + +def analyze_diff_for_risks(diff_content: str, filepath: str) -> List[Dict]: + """Analyze diff content for risky patterns.""" + risks = [] + + # Only analyze added lines (starting with +) + added_lines = [ + line[1:] for line in diff_content.split("\n") + if line.startswith("+") and not line.startswith("+++") + ] + + content = "\n".join(added_lines) + + for risk in RISK_PATTERNS: + matches = re.findall(risk["pattern"], content, re.IGNORECASE) + if matches: + risks.append({ + "name": risk["name"], + "severity": risk["severity"], + "message": risk["message"], + "file": filepath, + "count": len(matches) + }) + + return risks + + +def count_changes(diff_content: str) -> Dict[str, int]: + """Count additions and deletions in diff.""" + additions = 0 + deletions = 0 + + for line in diff_content.split("\n"): + if line.startswith("+") and not line.startswith("+++"): + additions += 1 + elif line.startswith("-") and not line.startswith("---"): + deletions += 1 + + return {"additions": additions, "deletions": deletions} + + +def calculate_complexity_score(files: List[Dict], all_risks: List[Dict]) -> int: + """Calculate overall PR complexity score (1-10).""" + score = 0 + + # File count contribution (max 3 points) + file_count = len(files) + if file_count > 20: + score += 3 + elif file_count > 10: + score += 2 + elif file_count > 5: + score += 1 + + # Total changes contribution (max 3 
points) + total_changes = sum(f.get("additions", 0) + f.get("deletions", 0) for f in files) + if total_changes > 500: + score += 3 + elif total_changes > 200: + score += 2 + elif total_changes > 50: + score += 1 + + # Risk severity contribution (max 4 points) + critical_risks = sum(1 for r in all_risks if r["severity"] == "critical") + high_risks = sum(1 for r in all_risks if r["severity"] == "high") + + score += min(2, critical_risks) + score += min(2, high_risks) + + return min(10, max(1, score)) + + +def analyze_commit_messages(repo_path: Path, base: str, head: str) -> Dict: + """Analyze commit messages in the PR.""" + success, output = run_git_command( + ["git", "log", "--oneline", f"{base}...{head}"], + repo_path + ) + + if not success or not output: + return {"commits": 0, "issues": []} + + commits = output.strip().split("\n") + issues = [] + + for commit in commits: + if len(commit) < 10: + continue + + # Check for conventional commit format + message = commit[8:] if len(commit) > 8 else commit # Skip hash + + if not re.match(r"^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\(.+\))?:", message): + issues.append({ + "commit": commit[:7], + "issue": "Does not follow conventional commit format" + }) + + if len(message) > 72: + issues.append({ + "commit": commit[:7], + "issue": "Commit message exceeds 72 characters" + }) + + return { + "commits": len(commits), + "issues": issues + } + + +def analyze_pr( + repo_path: Path, + base: str = "main", + head: str = "HEAD" +) -> Dict: + """Perform complete PR analysis.""" + # Get changed files + changed_files = get_changed_files(repo_path, base, head) + + if not changed_files: + return { + "status": "no_changes", + "message": "No changes detected between branches" + } + + # Analyze each file + all_risks = [] + file_analyses = [] + + for file_info in changed_files: + filepath = file_info["path"] + category, weight = categorize_file(filepath) + + # Get diff for the file + diff = get_file_diff(repo_path, 
filepath, base, head) + changes = count_changes(diff) + risks = analyze_diff_for_risks(diff, filepath) + + all_risks.extend(risks) + + file_analyses.append({ + "path": filepath, + "status": file_info["status"], + "category": category, + "priority_weight": weight, + "additions": changes["additions"], + "deletions": changes["deletions"], + "risks": risks + }) + + # Sort by priority (highest first) + file_analyses.sort(key=lambda x: (-x["priority_weight"], x["path"])) + + # Analyze commits + commit_analysis = analyze_commit_messages(repo_path, base, head) + + # Calculate metrics + complexity = calculate_complexity_score(file_analyses, all_risks) + + total_additions = sum(f["additions"] for f in file_analyses) + total_deletions = sum(f["deletions"] for f in file_analyses) + + return { + "status": "analyzed", + "summary": { + "files_changed": len(file_analyses), + "total_additions": total_additions, + "total_deletions": total_deletions, + "complexity_score": complexity, + "complexity_label": get_complexity_label(complexity), + "commits": commit_analysis["commits"] + }, + "risks": { + "critical": [r for r in all_risks if r["severity"] == "critical"], + "high": [r for r in all_risks if r["severity"] == "high"], + "medium": [r for r in all_risks if r["severity"] == "medium"], + "low": [r for r in all_risks if r["severity"] == "low"] + }, + "files": file_analyses, + "commit_issues": commit_analysis["issues"], + "review_order": [f["path"] for f in file_analyses[:10]] # Top 10 priority files + } + + +def get_complexity_label(score: int) -> str: + """Get human-readable complexity label.""" + if score <= 2: + return "Simple" + elif score <= 4: + return "Moderate" + elif score <= 6: + return "Complex" + elif score <= 8: + return "Very Complex" + else: + return "Critical" + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if analysis["status"] == "no_changes": + print("No changes detected.") + return + + summary = analysis["summary"] + 
risks = analysis["risks"] + + print("=" * 60) + print("PR ANALYSIS REPORT") + print("=" * 60) + + print(f"\nComplexity: {summary['complexity_score']}/10 ({summary['complexity_label']})") + print(f"Files Changed: {summary['files_changed']}") + print(f"Lines: +{summary['total_additions']} / -{summary['total_deletions']}") + print(f"Commits: {summary['commits']}") + + # Risk summary + print("\n--- RISK SUMMARY ---") + print(f"Critical: {len(risks['critical'])}") + print(f"High: {len(risks['high'])}") + print(f"Medium: {len(risks['medium'])}") + print(f"Low: {len(risks['low'])}") + + # Critical and high risks details + if risks["critical"]: + print("\n--- CRITICAL RISKS ---") + for risk in risks["critical"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + if risks["high"]: + print("\n--- HIGH RISKS ---") + for risk in risks["high"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + # Commit message issues + if analysis["commit_issues"]: + print("\n--- COMMIT MESSAGE ISSUES ---") + for issue in analysis["commit_issues"][:5]: + print(f" {issue['commit']}: {issue['issue']}") + + # Review order + print("\n--- SUGGESTED REVIEW ORDER ---") + for i, filepath in enumerate(analysis["review_order"], 1): + file_info = next(f for f in analysis["files"] if f["path"] == filepath) + print(f" {i}. 
[{file_info['category'].upper()}] {filepath}")
+
+    print("\n" + "=" * 60)
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Analyze pull request for review complexity and risks"
+    )
+    parser.add_argument(
+        "repo_path",
+        nargs="?",
+        default=".",
+        help="Path to git repository (default: current directory)"
+    )
+    parser.add_argument(
+        "--base", "-b",
+        default="main",
+        help="Base branch for comparison (default: main)"
+    )
+    parser.add_argument(
+        "--head",
+        default="HEAD",
+        help="Head branch/commit for comparison (default: HEAD)"
+    )
+    parser.add_argument(
+        "--json",
+        action="store_true",
+        help="Output in JSON format"
+    )
+    parser.add_argument(
+        "--output", "-o",
+        help="Write output to file"
+    )
+
+    args = parser.parse_args()
+
+    repo_path = Path(args.repo_path).resolve()
+
+    if not (repo_path / ".git").exists():
+        print(f"Error: {repo_path} is not a git repository", file=sys.stderr)
+        sys.exit(1)
+
+    analysis = analyze_pr(repo_path, args.base, args.head)
+
+    if args.json:
+        output = json.dumps(analysis, indent=2)
+    else:
+        # Capture the text report so --output works for both formats
+        import io
+        from contextlib import redirect_stdout
+        buffer = io.StringIO()
+        with redirect_stdout(buffer):
+            print_report(analysis)
+        output = buffer.getvalue()
+
+    if args.output:
+        with open(args.output, "w") as f:
+            f.write(output)
+        print(f"Results written to {args.output}")
+    else:
+        print(output)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/.agents/skills/code-reviewer/scripts/review_report_generator.py b/.agents/skills/code-reviewer/scripts/review_report_generator.py
new file mode 100644
index 00000000..d0f13221
--- /dev/null
+++ b/.agents/skills/code-reviewer/scripts/review_report_generator.py
@@ -0,0 +1,505 @@
+#!/usr/bin/env python3
+"""
+Review Report Generator
+
+Generates comprehensive code review reports by combining PR analysis
+and code quality findings into structured, actionable reports.
+
+Usage:
+    python .agents/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo
+    python .agents/skills/code-reviewer/scripts/review_report_generator.py . 
--pr-analysis pr_results.json --quality-analysis quality_results.json
+    python .agents/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo --format markdown --output review.md
+"""
+
+import argparse
+import json
+import subprocess
+import sys
+from datetime import datetime
+from pathlib import Path
+from typing import Dict, List, Optional, Tuple
+
+
+# Severity weights for prioritization
+SEVERITY_WEIGHTS = {
+    "critical": 100,
+    "high": 75,
+    "medium": 50,
+    "low": 25,
+    "info": 10
+}
+
+# Review verdict thresholds
+VERDICT_THRESHOLDS = {
+    "approve": {"max_critical": 0, "max_high": 0, "max_score": 100},
+    "approve_with_suggestions": {"max_critical": 0, "max_high": 2, "max_score": 85},
+    "request_changes": {"max_critical": 0, "max_high": 5, "max_score": 70},
+    "block": {"max_critical": float("inf"), "max_high": float("inf"), "max_score": 0}
+}
+
+
+def load_json_file(filepath: str) -> Optional[Dict]:
+    """Load JSON file if it exists."""
+    try:
+        with open(filepath, "r") as f:
+            return json.load(f)
+    except (FileNotFoundError, json.JSONDecodeError):
+        return None
+
+
+def run_pr_analyzer(repo_path: Path) -> Dict:
+    """Run the sibling pr_analyzer.py script and return its JSON results."""
+    script_path = Path(__file__).parent / "pr_analyzer.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "pr_analyzer.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=120
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def run_quality_checker(repo_path: Path) -> Dict:
+    """Run the sibling code_quality_checker.py script and return its JSON results."""
+    script_path = Path(__file__).parent / "code_quality_checker.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "code_quality_checker.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=300
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def calculate_review_score(pr_analysis: Dict, quality_analysis: Dict) -> int:
+    """Calculate overall review score (0-100)."""
+    score = 100
+
+    # Deduct for PR risks
+    if "risks" in pr_analysis:
+        risks = pr_analysis["risks"]
+        score -= len(risks.get("critical", [])) * 15
+        score -= len(risks.get("high", [])) * 10
+        score -= len(risks.get("medium", [])) * 5
+        score -= len(risks.get("low", [])) * 2
+
+    # Deduct for code quality issues
+    if "issues" in quality_analysis:
+        issues = quality_analysis["issues"]
+        score -= len([i for i in issues if i.get("severity") == "critical"]) * 12
+        score -= len([i for i in issues if i.get("severity") == "high"]) * 8
+        score -= len([i for i in issues if i.get("severity") == "medium"]) * 4
+        score -= len([i for i in issues if i.get("severity") == "low"]) * 1
+
+    # Deduct for complexity
+    if "summary" in pr_analysis:
+        complexity = pr_analysis["summary"].get("complexity_score", 0)
+        if complexity > 7:
+            score -= 10
+        elif complexity > 5:
+            score -= 5
+
+    return max(0, min(100, score))
+
+
+def determine_verdict(score: int, critical_count: int, high_count: int) -> Tuple[str, str]:
+    """Determine review verdict based on score and issue counts."""
+    if critical_count > 0:
+        return "block", "Critical issues must be resolved before merge"
+
+    if score >= 90 and high_count == 0:
+        return "approve", "Code meets quality standards"
+
+    if score >= 75 and high_count <= 2: 
+ return "approve_with_suggestions", "Minor improvements recommended" + + if score >= 50: + return "request_changes", "Several issues need to be addressed" + + return "block", "Significant issues prevent approval" + + +def generate_findings_list(pr_analysis: Dict, quality_analysis: Dict) -> List[Dict]: + """Combine and prioritize all findings.""" + findings = [] + + # Add PR risk findings + if "risks" in pr_analysis: + for severity, items in pr_analysis["risks"].items(): + for item in items: + findings.append({ + "source": "pr_analysis", + "severity": severity, + "category": item.get("name", "unknown"), + "message": item.get("message", ""), + "file": item.get("file", ""), + "count": item.get("count", 1) + }) + + # Add code quality findings + if "issues" in quality_analysis: + for issue in quality_analysis["issues"]: + findings.append({ + "source": "quality_analysis", + "severity": issue.get("severity", "medium"), + "category": issue.get("type", "unknown"), + "message": issue.get("message", ""), + "file": issue.get("file", ""), + "line": issue.get("line", 0) + }) + + # Sort by severity weight + findings.sort( + key=lambda x: -SEVERITY_WEIGHTS.get(x["severity"], 0) + ) + + return findings + + +def generate_action_items(findings: List[Dict]) -> List[Dict]: + """Generate prioritized action items from findings.""" + action_items = [] + seen_categories = set() + + for finding in findings: + category = finding["category"] + severity = finding["severity"] + + # Group similar issues + if category in seen_categories and severity not in ["critical", "high"]: + continue + + action = { + "priority": "P0" if severity == "critical" else "P1" if severity == "high" else "P2", + "action": get_action_for_category(category, finding), + "severity": severity, + "files_affected": [finding["file"]] if finding.get("file") else [] + } + action_items.append(action) + seen_categories.add(category) + + return action_items[:15] # Top 15 actions + + +def get_action_for_category(category: str, 
finding: Dict) -> str: + """Get actionable recommendation for issue category.""" + actions = { + "hardcoded_secrets": "Remove hardcoded credentials and use environment variables or a secrets manager", + "sql_concatenation": "Use parameterized queries to prevent SQL injection", + "debugger": "Remove debugger statements before merging", + "console_log": "Remove or replace console statements with proper logging", + "todo_fixme": "Address TODO/FIXME comments or create tracking issues", + "disable_eslint": "Address the underlying issue instead of disabling lint rules", + "any_type": "Replace 'any' types with proper type definitions", + "long_function": "Break down function into smaller, focused units", + "god_class": "Split class into smaller, single-responsibility classes", + "too_many_params": "Use parameter objects or builder pattern", + "deep_nesting": "Refactor using early returns, guard clauses, or extraction", + "high_complexity": "Reduce cyclomatic complexity through refactoring", + "missing_error_handling": "Add proper error handling and recovery logic", + "duplicate_code": "Extract duplicate code into shared functions", + "magic_numbers": "Replace magic numbers with named constants", + "large_file": "Consider splitting into multiple smaller modules" + } + return actions.get(category, f"Review and address: {finding.get('message', category)}") + + +def format_markdown_report(report: Dict) -> str: + """Generate markdown-formatted report.""" + lines = [] + + # Header + lines.append("# Code Review Report") + lines.append("") + lines.append(f"**Generated:** {report['metadata']['generated_at']}") + lines.append(f"**Repository:** {report['metadata']['repository']}") + lines.append("") + + # Executive Summary + lines.append("## Executive Summary") + lines.append("") + summary = report["summary"] + verdict = summary["verdict"] + verdict_emoji = { + "approve": "✅", + "approve_with_suggestions": "✅", + "request_changes": "⚠️", + "block": "❌" + }.get(verdict, "❓") + + 
lines.append(f"**Verdict:** {verdict_emoji} {verdict.upper().replace('_', ' ')}") + lines.append(f"**Score:** {summary['score']}/100") + lines.append(f"**Rationale:** {summary['rationale']}") + lines.append("") + + # Issue Counts + lines.append("### Issue Summary") + lines.append("") + lines.append("| Severity | Count |") + lines.append("|----------|-------|") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f"| {severity.capitalize()} | {count} |") + lines.append("") + + # PR Statistics (if available) + if "pr_summary" in report: + pr = report["pr_summary"] + lines.append("### Change Statistics") + lines.append("") + lines.append(f"- **Files Changed:** {pr.get('files_changed', 'N/A')}") + lines.append(f"- **Lines Added:** +{pr.get('total_additions', 0)}") + lines.append(f"- **Lines Removed:** -{pr.get('total_deletions', 0)}") + lines.append(f"- **Complexity:** {pr.get('complexity_label', 'N/A')}") + lines.append("") + + # Action Items + if report.get("action_items"): + lines.append("## Action Items") + lines.append("") + for i, item in enumerate(report["action_items"], 1): + priority = item["priority"] + emoji = "🔴" if priority == "P0" else "🟠" if priority == "P1" else "🟡" + lines.append(f"{i}. 
{emoji} **[{priority}]** {item['action']}") + if item.get("files_affected"): + lines.append(f" - Files: {', '.join(item['files_affected'][:3])}") + lines.append("") + + # Critical Findings + critical_findings = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical_findings: + lines.append("## Critical Issues (Must Fix)") + lines.append("") + for finding in critical_findings: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # High Priority Findings + high_findings = [f for f in report.get("findings", []) if f["severity"] == "high"] + if high_findings: + lines.append("## High Priority Issues") + lines.append("") + for finding in high_findings[:10]: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # Review Order (if available) + if "review_order" in report: + lines.append("## Suggested Review Order") + lines.append("") + for i, filepath in enumerate(report["review_order"][:10], 1): + lines.append(f"{i}. 
`{filepath}`") + lines.append("") + + # Footer + lines.append("---") + lines.append("*Generated by Code Reviewer*") + + return "\n".join(lines) + + +def format_text_report(report: Dict) -> str: + """Generate plain text report.""" + lines = [] + + lines.append("=" * 60) + lines.append("CODE REVIEW REPORT") + lines.append("=" * 60) + lines.append("") + lines.append(f"Generated: {report['metadata']['generated_at']}") + lines.append(f"Repository: {report['metadata']['repository']}") + lines.append("") + + summary = report["summary"] + verdict = summary["verdict"].upper().replace("_", " ") + lines.append(f"VERDICT: {verdict}") + lines.append(f"SCORE: {summary['score']}/100") + lines.append(f"RATIONALE: {summary['rationale']}") + lines.append("") + + lines.append("--- ISSUE SUMMARY ---") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f" {severity.capitalize()}: {count}") + lines.append("") + + if report.get("action_items"): + lines.append("--- ACTION ITEMS ---") + for i, item in enumerate(report["action_items"][:10], 1): + lines.append(f" {i}. 
[{item['priority']}] {item['action']}") + lines.append("") + + critical = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical: + lines.append("--- CRITICAL ISSUES ---") + for f in critical: + lines.append(f" [{f.get('file', 'unknown')}] {f['message']}") + lines.append("") + + lines.append("=" * 60) + + return "\n".join(lines) + + +def generate_report( + repo_path: Path, + pr_analysis: Optional[Dict] = None, + quality_analysis: Optional[Dict] = None +) -> Dict: + """Generate comprehensive review report.""" + # Run analyses if not provided + if pr_analysis is None: + pr_analysis = run_pr_analyzer(repo_path) + + if quality_analysis is None: + quality_analysis = run_quality_checker(repo_path) + + # Generate findings + findings = generate_findings_list(pr_analysis, quality_analysis) + + # Count issues by severity + issue_counts = { + "critical": len([f for f in findings if f["severity"] == "critical"]), + "high": len([f for f in findings if f["severity"] == "high"]), + "medium": len([f for f in findings if f["severity"] == "medium"]), + "low": len([f for f in findings if f["severity"] == "low"]) + } + + # Calculate score and verdict + score = calculate_review_score(pr_analysis, quality_analysis) + verdict, rationale = determine_verdict( + score, + issue_counts["critical"], + issue_counts["high"] + ) + + # Generate action items + action_items = generate_action_items(findings) + + # Build report + report = { + "metadata": { + "generated_at": datetime.now().isoformat(), + "repository": str(repo_path), + "version": "1.0.0" + }, + "summary": { + "score": score, + "verdict": verdict, + "rationale": rationale, + "issue_counts": issue_counts + }, + "findings": findings, + "action_items": action_items + } + + # Add PR summary if available + if pr_analysis.get("status") == "analyzed": + report["pr_summary"] = pr_analysis.get("summary", {}) + report["review_order"] = pr_analysis.get("review_order", []) + + # Add quality summary if available + if 
quality_analysis.get("status") == "analyzed": + report["quality_summary"] = quality_analysis.get("summary", {}) + + return report + + +def main(): + parser = argparse.ArgumentParser( + description="Generate comprehensive code review reports" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to repository (default: current directory)" + ) + parser.add_argument( + "--pr-analysis", + help="Path to pre-computed PR analysis JSON" + ) + parser.add_argument( + "--quality-analysis", + help="Path to pre-computed quality analysis JSON" + ) + parser.add_argument( + "--format", "-f", + choices=["text", "markdown", "json"], + default="text", + help="Output format (default: text)" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output as JSON (shortcut for --format json)" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + if not repo_path.exists(): + print(f"Error: Path does not exist: {repo_path}", file=sys.stderr) + sys.exit(1) + + # Load pre-computed analyses if provided + pr_analysis = None + quality_analysis = None + + if args.pr_analysis: + pr_analysis = load_json_file(args.pr_analysis) + if not pr_analysis: + print(f"Warning: Could not load PR analysis from {args.pr_analysis}") + + if args.quality_analysis: + quality_analysis = load_json_file(args.quality_analysis) + if not quality_analysis: + print(f"Warning: Could not load quality analysis from {args.quality_analysis}") + + # Generate report + report = generate_report(repo_path, pr_analysis, quality_analysis) + + # Format output + output_format = "json" if args.json else args.format + + if output_format == "json": + output = json.dumps(report, indent=2) + elif output_format == "markdown": + output = format_markdown_report(report) + else: + output = format_text_report(report) + + # Write or print output + if args.output: + with open(args.output, "w") as f: + 
f.write(output) + print(f"Report written to {args.output}") + else: + print(output) + + +if __name__ == "__main__": + main() diff --git a/.agentsmesh/.lock b/.agentsmesh/.lock index 3a9843e5..be7e03ec 100644 --- a/.agentsmesh/.lock +++ b/.agentsmesh/.lock @@ -1,7 +1,7 @@ # Auto-generated. DO NOT EDIT MANUALLY. # Tracks the state of all config files for team conflict resolution. -generated_at: 2026-03-29T14:46:47.578Z +generated_at: 2026-03-30T12:15:20.515Z generated_by: serhii lib_version: 0.2.9 checksums: @@ -115,6 +115,7 @@ checksums: extends: shared-samplexbro-rules: 0b393ecfeb7419d648d4c36203998d11ad3a0fdc packs: + alirezarezvani-claude-skills-skills: sha256:343398a678251a447d73e7b45d577e620e727e10e45c34f01208f3b42ed2c8db composiohq-awesome-claude-skills-skills: sha256:6b63e9d8e01c904ed85fcff5dd9e368ab37e68b904b2170454c3104feb82cd3d devsforge-orchestrator-skills: sha256:9228afa4fd2aedaca2403e7c2afbce0c18e4204793ede6e649a777eed9eb3231 joeking-ly-claude-skills-arsenal-skills: sha256:63b24277c3d09607506971957d40f95139385193eaf8b02a92d68ca6f8af15ec diff --git a/.agentsmesh/installs.yaml b/.agentsmesh/installs.yaml index 22f42b67..48bd57a9 100644 --- a/.agentsmesh/installs.yaml +++ b/.agentsmesh/installs.yaml @@ -1,5 +1,17 @@ version: 1 installs: + - name: alirezarezvani-claude-skills-skills + source: github:alirezarezvani/claude-skills@110348f4b2a66d856bc474a1f3f304ab93a50853 + version: 110348f4b2a66d856bc474a1f3f304ab93a50853 + source_kind: github + features: + - skills + pick: + skills: + - code-reviewer + target: claude-code + path: engineering-team + as: skills - name: composiohq-awesome-claude-skills-skills source: github:ComposioHQ/awesome-claude-skills@27904475d1270d8395acf07691966267d5abda2d version: 27904475d1270d8395acf07691966267d5abda2d diff --git a/.agentsmesh/packs/alirezarezvani-claude-skills-skills/pack.yaml b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/pack.yaml new file mode 100644 index 00000000..1088fd61 --- /dev/null +++ 
b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/pack.yaml @@ -0,0 +1,15 @@ +name: alirezarezvani-claude-skills-skills +source: github:alirezarezvani/claude-skills@110348f4b2a66d856bc474a1f3f304ab93a50853 +version: 110348f4b2a66d856bc474a1f3f304ab93a50853 +source_kind: github +installed_at: 2026-03-30T12:15:18.309Z +updated_at: 2026-03-30T12:15:18.309Z +features: + - skills +pick: + skills: + - code-reviewer +target: claude-code +path: engineering-team +as: skills +content_hash: sha256:343398a678251a447d73e7b45d577e620e727e10e45c34f01208f3b42ed2c8db diff --git a/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/SKILL.md b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/SKILL.md new file mode 100644 index 00000000..d5f8824f --- /dev/null +++ b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/SKILL.md @@ -0,0 +1,177 @@ +--- +name: "code-reviewer" +description: Code review automation for TypeScript, JavaScript, Python, Go, Swift, Kotlin. Analyzes PRs for complexity and risk, checks code quality for SOLID violations and code smells, generates review reports. Use when reviewing pull requests, analyzing code quality, identifying issues, generating review checklists. +--- + +# Code Reviewer + +Automated code review tools for analyzing pull requests, detecting code quality issues, and generating review reports. + +--- + +## Table of Contents + +- [Tools](#tools) + - [PR Analyzer](#pr-analyzer) + - [Code Quality Checker](#code-quality-checker) + - [Review Report Generator](#review-report-generator) +- [Reference Guides](#reference-guides) +- [Languages Supported](#languages-supported) + +--- + +## Tools + +### PR Analyzer + +Analyzes git diff between branches to assess review complexity and identify risks. + +```bash +# Analyze current branch against main +python scripts/pr_analyzer.py /path/to/repo + +# Compare specific branches +python scripts/pr_analyzer.py . 
--base main --head feature-branch + +# JSON output for integration +python scripts/pr_analyzer.py /path/to/repo --json +``` + +**What it detects:** +- Hardcoded secrets (passwords, API keys, tokens) +- SQL injection patterns (string concatenation in queries) +- Debug statements (debugger, console.log) +- ESLint rule disabling +- TypeScript `any` types +- TODO/FIXME comments + +**Output includes:** +- Complexity score (1-10) +- Risk categorization (critical, high, medium, low) +- File prioritization for review order +- Commit message validation + +--- + +### Code Quality Checker + +Analyzes source code for structural issues, code smells, and SOLID violations. + +```bash +# Analyze a directory +python scripts/code_quality_checker.py /path/to/code + +# Analyze specific language +python scripts/code_quality_checker.py . --language python + +# JSON output +python scripts/code_quality_checker.py /path/to/code --json +``` + +**What it detects:** +- Long functions (>50 lines) +- Large files (>500 lines) +- God classes (>20 methods) +- Deep nesting (>4 levels) +- Too many parameters (>5) +- High cyclomatic complexity +- Missing error handling +- Unused imports +- Magic numbers + +**Thresholds:** + +| Issue | Threshold | +|-------|-----------| +| Long function | >50 lines | +| Large file | >500 lines | +| God class | >20 methods | +| Too many params | >5 | +| Deep nesting | >4 levels | +| High complexity | >10 branches | + +--- + +### Review Report Generator + +Combines PR analysis and code quality findings into structured review reports. + +```bash +# Generate report for current repo +python scripts/review_report_generator.py /path/to/repo + +# Markdown output +python scripts/review_report_generator.py . --format markdown --output review.md + +# Use pre-computed analyses +python scripts/review_report_generator.py . 
\ + --pr-analysis pr_results.json \ + --quality-analysis quality_results.json +``` + +**Report includes:** +- Review verdict (approve, request changes, block) +- Score (0-100) +- Prioritized action items +- Issue summary by severity +- Suggested review order + +**Verdicts:** + +| Score | Verdict | +|-------|---------| +| 90+ with no high issues | Approve | +| 75+ with ≤2 high issues | Approve with suggestions | +| 50-74 | Request changes | +| <50 or critical issues | Block | + +--- + +## Reference Guides + +### Code Review Checklist +`references/code_review_checklist.md` + +Systematic checklists covering: +- Pre-review checks (build, tests, PR hygiene) +- Correctness (logic, data handling, error handling) +- Security (input validation, injection prevention) +- Performance (efficiency, caching, scalability) +- Maintainability (code quality, naming, structure) +- Testing (coverage, quality, mocking) +- Language-specific checks + +### Coding Standards +`references/coding_standards.md` + +Language-specific standards for: +- TypeScript (type annotations, null safety, async/await) +- JavaScript (declarations, patterns, modules) +- Python (type hints, exceptions, class design) +- Go (error handling, structs, concurrency) +- Swift (optionals, protocols, errors) +- Kotlin (null safety, data classes, coroutines) + +### Common Antipatterns +`references/common_antipatterns.md` + +Antipattern catalog with examples and fixes: +- Structural (god class, long method, deep nesting) +- Logic (boolean blindness, stringly typed code) +- Security (SQL injection, hardcoded credentials) +- Performance (N+1 queries, unbounded collections) +- Testing (duplication, testing implementation) +- Async (floating promises, callback hell) + +--- + +## Languages Supported + +| Language | Extensions | +|----------|------------| +| Python | `.py` | +| TypeScript | `.ts`, `.tsx` | +| JavaScript | `.js`, `.jsx`, `.mjs` | +| Go | `.go` | +| Swift | `.swift` | +| Kotlin | `.kt`, `.kts` | diff --git 
a/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/references/code_review_checklist.md b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/references/code_review_checklist.md new file mode 100644 index 00000000..b7bd0867 --- /dev/null +++ b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/references/code_review_checklist.md @@ -0,0 +1,270 @@ +# Code Review Checklist + +Structured checklists for systematic code review across different aspects. + +--- + +## Table of Contents + +- [Pre-Review Checks](#pre-review-checks) +- [Correctness](#correctness) +- [Security](#security) +- [Performance](#performance) +- [Maintainability](#maintainability) +- [Testing](#testing) +- [Documentation](#documentation) +- [Language-Specific Checks](#language-specific-checks) + +--- + +## Pre-Review Checks + +Before diving into code, verify these basics: + +### Build and Tests +- [ ] Code compiles without errors +- [ ] All existing tests pass +- [ ] New tests are included for new functionality +- [ ] No unintended files included (build artifacts, IDE configs) + +### PR Hygiene +- [ ] PR has clear title and description +- [ ] Changes are scoped appropriately (not too large) +- [ ] Commits follow conventional commit format +- [ ] Branch is up to date with base branch + +### Scope Verification +- [ ] Changes match the stated purpose +- [ ] No unrelated changes bundled in +- [ ] Breaking changes are documented +- [ ] Migration path provided if needed + +--- + +## Correctness + +### Logic +- [ ] Algorithm implements requirements correctly +- [ ] Edge cases handled (null, empty, boundary values) +- [ ] Off-by-one errors checked +- [ ] Correct operators used (== vs ===, & vs &&) +- [ ] Loop termination conditions correct +- [ ] Recursion has proper base cases + +### Data Handling +- [ ] Data types appropriate for the use case +- [ ] Numeric overflow/underflow considered +- [ ] Date/time handling accounts for timezones 
+- [ ] Unicode and internationalization handled +- [ ] Data validation at entry points + +### State Management +- [ ] State transitions are valid +- [ ] Race conditions addressed +- [ ] Concurrent access handled correctly +- [ ] State cleanup on errors/exit + +### Error Handling +- [ ] Errors caught at appropriate levels +- [ ] Error messages are actionable +- [ ] Errors don't expose sensitive information +- [ ] Recovery or graceful degradation implemented +- [ ] Resources cleaned up in error paths + +--- + +## Security + +### Input Validation +- [ ] All user input validated and sanitized +- [ ] Input length limits enforced +- [ ] File uploads validated (type, size, content) +- [ ] URL parameters validated + +### Injection Prevention +- [ ] SQL queries parameterized +- [ ] Command execution uses safe APIs +- [ ] HTML output escaped to prevent XSS +- [ ] LDAP queries properly escaped +- [ ] XML parsing disables external entities + +### Authentication & Authorization +- [ ] Authentication required for protected resources +- [ ] Authorization checked before operations +- [ ] Session management secure +- [ ] Password handling follows best practices +- [ ] Token expiration implemented + +### Data Protection +- [ ] Sensitive data encrypted at rest +- [ ] Sensitive data encrypted in transit +- [ ] PII handled according to policy +- [ ] Secrets not hardcoded +- [ ] Logs don't contain sensitive data + +### API Security +- [ ] Rate limiting implemented +- [ ] CORS configured correctly +- [ ] CSRF protection in place +- [ ] API keys/tokens secured +- [ ] Endpoints use HTTPS + +--- + +## Performance + +### Efficiency +- [ ] Appropriate data structures used +- [ ] Algorithms have acceptable complexity +- [ ] Database queries are optimized +- [ ] N+1 query problems avoided +- [ ] Indexes used where beneficial + +### Resource Usage +- [ ] Memory usage bounded +- [ ] No memory leaks +- [ ] File handles properly closed +- [ ] Database connections pooled +- [ ] Network calls 
minimized + +### Caching +- [ ] Appropriate caching strategy +- [ ] Cache invalidation handled +- [ ] Cache keys are unique and predictable +- [ ] TTL values appropriate + +### Scalability +- [ ] Horizontal scaling considered +- [ ] Bottlenecks identified +- [ ] Async processing for long operations +- [ ] Batch operations where appropriate + +--- + +## Maintainability + +### Code Quality +- [ ] Functions/methods have single responsibility +- [ ] Classes follow SOLID principles +- [ ] Code is DRY (Don't Repeat Yourself) +- [ ] No dead code or commented-out code +- [ ] Magic numbers replaced with constants + +### Naming +- [ ] Names are descriptive and consistent +- [ ] Naming follows project conventions +- [ ] No abbreviations that obscure meaning +- [ ] Boolean variables/functions have is/has/can prefix + +### Structure +- [ ] Functions are appropriately sized (<50 lines preferred) +- [ ] Nesting depth is reasonable (<4 levels) +- [ ] Related code is grouped together +- [ ] Dependencies are minimal and explicit + +### Readability +- [ ] Code is self-documenting where possible +- [ ] Complex logic has explanatory comments +- [ ] Formatting is consistent +- [ ] No overly clever or obscure code + +--- + +## Testing + +### Coverage +- [ ] New code has unit tests +- [ ] Critical paths have integration tests +- [ ] Edge cases are tested +- [ ] Error conditions are tested + +### Quality +- [ ] Tests are independent +- [ ] Tests have clear assertions +- [ ] Test names describe what is tested +- [ ] Tests don't depend on external state + +### Mocking +- [ ] External dependencies are mocked +- [ ] Mocks are realistic +- [ ] Mock setup is not excessive + +--- + +## Documentation + +### Code Documentation +- [ ] Public APIs are documented +- [ ] Complex algorithms explained +- [ ] Non-obvious decisions documented +- [ ] TODO/FIXME comments have context + +### External Documentation +- [ ] README updated if needed +- [ ] API documentation updated +- [ ] Changelog updated +- [ ] 
Migration guides provided + +--- + +## Language-Specific Checks + +### TypeScript/JavaScript +- [ ] Types are explicit (avoid `any`) +- [ ] Null checks present (`?.`, `??`) +- [ ] Async/await errors handled +- [ ] No floating promises +- [ ] Memory leaks from closures checked + +### Python +- [ ] Type hints used for public APIs +- [ ] Context managers for resources (`with` statements) +- [ ] Exception handling is specific (not bare `except`) +- [ ] No mutable default arguments +- [ ] List comprehensions used appropriately + +### Go +- [ ] Errors checked and handled +- [ ] Goroutine leaks prevented +- [ ] Context propagation correct +- [ ] Defer statements in right order +- [ ] Interfaces minimal + +### Swift +- [ ] Optionals handled safely +- [ ] Memory management correct (weak/unowned) +- [ ] Error handling uses Result or throws +- [ ] Access control appropriate +- [ ] Codable implementation correct + +### Kotlin +- [ ] Null safety leveraged +- [ ] Coroutine cancellation handled +- [ ] Data classes used appropriately +- [ ] Extension functions don't obscure behavior +- [ ] Sealed classes for state + +--- + +## Review Process Tips + +### Before Approving +1. Verify all critical checks passed +2. Confirm tests are adequate +3. Consider deployment impact +4. Check for any security concerns +5. 
Ensure documentation is updated + +### Providing Feedback +- Be specific about issues +- Explain why something is problematic +- Suggest alternatives when possible +- Distinguish blockers from suggestions +- Acknowledge good patterns + +### When to Block +- Security vulnerabilities present +- Critical logic errors +- No tests for risky changes +- Breaking changes without migration +- Significant performance regressions diff --git a/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/references/coding_standards.md b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/references/coding_standards.md new file mode 100644 index 00000000..9fbc6a06 --- /dev/null +++ b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/references/coding_standards.md @@ -0,0 +1,555 @@ +# Coding Standards + +Language-specific coding standards and conventions for code review. + +--- + +## Table of Contents + +- [Universal Principles](#universal-principles) +- [TypeScript Standards](#typescript-standards) +- [JavaScript Standards](#javascript-standards) +- [Python Standards](#python-standards) +- [Go Standards](#go-standards) +- [Swift Standards](#swift-standards) +- [Kotlin Standards](#kotlin-standards) + +--- + +## Universal Principles + +These apply across all languages. 
+ +### Naming Conventions + +| Element | Convention | Example | +|---------|------------|---------| +| Variables | camelCase (JS/TS), snake_case (Python/Go) | `userName`, `user_name` | +| Constants | SCREAMING_SNAKE_CASE | `MAX_RETRY_COUNT` | +| Functions | camelCase (JS/TS), snake_case (Python) | `getUserById`, `get_user_by_id` | +| Classes | PascalCase | `UserRepository` | +| Interfaces | PascalCase, optionally prefixed | `IUserService` or `UserService` | +| Private members | Prefix with underscore or use access modifiers | `_internalState` | + +### Function Design + +``` +Good functions: +- Do one thing well +- Have descriptive names (verb + noun) +- Take 3 or fewer parameters +- Return early for error cases +- Stay under 50 lines +``` + +### Error Handling + +``` +Good error handling: +- Catch specific errors, not generic exceptions +- Log with context (what, where, why) +- Clean up resources in error paths +- Don't swallow errors silently +- Provide actionable error messages +``` + +--- + +## TypeScript Standards + +### Type Annotations + +```typescript +// Avoid 'any' - use unknown for truly unknown types +function processData(data: unknown): ProcessedResult { + if (isValidData(data)) { + return transform(data); + } + throw new Error('Invalid data format'); +} + +// Use explicit return types for public APIs +export function calculateTotal(items: CartItem[]): number { + return items.reduce((sum, item) => sum + item.price, 0); +} + +// Use type guards for runtime checks +function isUser(obj: unknown): obj is User { + return ( + typeof obj === 'object' && + obj !== null && + 'id' in obj && + 'email' in obj + ); +} +``` + +### Null Safety + +```typescript +// Use optional chaining and nullish coalescing +const userName = user?.profile?.name ?? 
'Anonymous';

// Be explicit about nullable types
interface Config {
  timeout: number;
  retries?: number; // Optional
  fallbackUrl: string | null; // Explicitly nullable
}

// Use assertion functions for validation
function assertDefined<T>(value: T | null | undefined): asserts value is T {
  if (value === null || value === undefined) {
    throw new Error('Value is not defined');
  }
}
```

### Async/Await

```typescript
// Always handle errors in async functions
async function fetchUser(id: string): Promise<User> {
  try {
    const response = await api.get(`/users/${id}`);
    return response.data;
  } catch (error) {
    logger.error('Failed to fetch user', { id, error });
    throw new UserFetchError(id, error);
  }
}

// Use Promise.all for parallel operations
async function loadDashboard(userId: string): Promise<DashboardData> {
  const [profile, stats, notifications] = await Promise.all([
    fetchProfile(userId),
    fetchStats(userId),
    fetchNotifications(userId)
  ]);
  return { profile, stats, notifications };
}
```

### React/Component Standards

```typescript
// Use explicit prop types
interface ButtonProps {
  label: string;
  onClick: () => void;
  variant?: 'primary' | 'secondary';
  disabled?: boolean;
}

// Prefer functional components with hooks
function Button({ label, onClick, variant = 'primary', disabled = false }: ButtonProps) {
  return (
    <button className={`btn btn-${variant}`} onClick={onClick} disabled={disabled}>
      {label}
    </button>
  );
}

// Use custom hooks for reusable logic
function useDebounce<T>(value: T, delay: number): T {
  const [debouncedValue, setDebouncedValue] = useState<T>(value);

  useEffect(() => {
    const timer = setTimeout(() => setDebouncedValue(value), delay);
    return () => clearTimeout(timer);
  }, [value, delay]);

  return debouncedValue;
}
```

---

## JavaScript Standards

### Variable Declarations

```javascript
// Use const by default, let when reassignment needed
const MAX_ITEMS = 100;
let currentCount = 0;

// Never use var
// var is function-scoped and hoisted, leading to bugs
```
### Object and Array Patterns

```javascript
// Use object destructuring
const { name, email, role = 'user' } = user;

// Use spread for immutable updates
const updatedUser = { ...user, lastLogin: new Date() };
const updatedList = [...items, newItem];

// Use array methods over loops
const activeUsers = users.filter(u => u.isActive);
const emails = users.map(u => u.email);
const total = orders.reduce((sum, o) => sum + o.amount, 0);
```

### Module Patterns

```javascript
// Use named exports for utilities
export function formatDate(date) { ... }
export function parseDate(str) { ... }

// Use default export for main component/class
export default class UserService { ... }

// Group related exports
export { formatDate, parseDate, isValidDate } from './dateUtils';
```

---

## Python Standards

### Type Hints (PEP 484)

```python
from typing import Dict, List, Optional

def get_user(user_id: int) -> Optional[User]:
    """Fetch user by ID, returns None if not found."""
    return db.query(User).filter(User.id == user_id).first()

def process_items(items: List[str]) -> Dict[str, int]:
    """Count occurrences of each item."""
    return {item: items.count(item) for item in set(items)}

def send_notification(
    user: User,
    message: str,
    *,
    priority: str = "normal",
    channels: Optional[List[str]] = None
) -> bool:
    """Send notification to user via specified channels."""
    channels = channels or ["email"]
    # Implementation
```

### Exception Handling

```python
# Catch specific exceptions
try:
    result = api_client.fetch_data(endpoint)
except ConnectionError as e:
    logger.warning(f"Connection failed: {e}")
    return cached_data
except TimeoutError as e:
    logger.error(f"Request timed out: {e}")
    raise ServiceUnavailableError() from e

# Use context managers for resources
with open(filepath, 'r') as f:
    data = json.load(f)

# Custom exceptions should be informative
class ValidationError(Exception):
    def
__init__(self, field: str, message: str): + self.field = field + self.message = message + super().__init__(f"{field}: {message}") +``` + +### Class Design + +```python +from dataclasses import dataclass +from abc import ABC, abstractmethod + +# Use dataclasses for data containers +@dataclass +class UserDTO: + id: int + email: str + name: str + is_active: bool = True + +# Use ABC for interfaces +class Repository(ABC): + @abstractmethod + def find_by_id(self, id: int) -> Optional[Entity]: + pass + + @abstractmethod + def save(self, entity: Entity) -> Entity: + pass + +# Use properties for computed attributes +class Order: + def __init__(self, items: List[OrderItem]): + self._items = items + + @property + def total(self) -> Decimal: + return sum(item.price * item.quantity for item in self._items) +``` + +--- + +## Go Standards + +### Error Handling + +```go +// Always check errors +file, err := os.Open(filename) +if err != nil { + return fmt.Errorf("failed to open %s: %w", filename, err) +} +defer file.Close() + +// Use custom error types for specific cases +type ValidationError struct { + Field string + Message string +} + +func (e *ValidationError) Error() string { + return fmt.Sprintf("%s: %s", e.Field, e.Message) +} + +// Wrap errors with context +if err := db.Query(query); err != nil { + return fmt.Errorf("query failed for user %d: %w", userID, err) +} +``` + +### Struct Design + +```go +// Use unexported fields with exported methods +type UserService struct { + repo UserRepository + cache Cache + logger Logger +} + +// Constructor functions for initialization +func NewUserService(repo UserRepository, cache Cache, logger Logger) *UserService { + return &UserService{ + repo: repo, + cache: cache, + logger: logger, + } +} + +// Keep interfaces small +type Reader interface { + Read(p []byte) (n int, err error) +} + +type Writer interface { + Write(p []byte) (n int, err error) +} +``` + +### Concurrency + +```go +// Use context for cancellation +func fetchData(ctx 
context.Context, url string) ([]byte, error) { + req, err := http.NewRequestWithContext(ctx, "GET", url, nil) + if err != nil { + return nil, err + } + // ... +} + +// Use channels for communication +func worker(jobs <-chan Job, results chan<- Result) { + for job := range jobs { + result := process(job) + results <- result + } +} + +// Use sync.WaitGroup for coordination +var wg sync.WaitGroup +for _, item := range items { + wg.Add(1) + go func(i Item) { + defer wg.Done() + processItem(i) + }(item) +} +wg.Wait() +``` + +--- + +## Swift Standards + +### Optionals + +```swift +// Use optional binding +if let user = fetchUser(id: userId) { + displayProfile(user) +} + +// Use guard for early exit +guard let data = response.data else { + throw NetworkError.noData +} + +// Use nil coalescing for defaults +let displayName = user.nickname ?? user.email + +// Avoid force unwrapping except in tests +// BAD: let name = user.name! +// GOOD: guard let name = user.name else { return } +``` + +### Protocol-Oriented Design + +```swift +// Define protocols with minimal requirements +protocol Identifiable { + var id: String { get } +} + +protocol Persistable: Identifiable { + func save() throws + static func find(by id: String) -> Self? 
}

// Use protocol extensions for default implementations
extension Persistable {
    func save() throws {
        try Storage.shared.save(self)
    }
}

// Prefer composition over inheritance
struct User: Identifiable, Codable {
    let id: String
    var name: String
    var email: String
}
```

### Error Handling

```swift
// Define domain-specific errors
enum AuthError: Error {
    case invalidCredentials
    case tokenExpired
    case networkFailure(underlying: Error)
}

// Use Result type for async operations
func authenticate(
    email: String,
    password: String,
    completion: @escaping (Result<User, AuthError>) -> Void
)

// Use throws for synchronous operations
func validate(_ input: String) throws -> ValidatedInput {
    guard !input.isEmpty else {
        throw ValidationError.emptyInput
    }
    return ValidatedInput(value: input)
}
```

---

## Kotlin Standards

### Null Safety

```kotlin
// Use nullable types explicitly
fun findUser(id: Int): User? {
    return userRepository.find(id)
}

// Use safe calls and elvis operator
val name = user?.profile?.name ?: "Unknown"

// Use let for null checks with side effects
user?.let { activeUser ->
    sendWelcomeEmail(activeUser.email)
    logActivity(activeUser.id)
}

// Use require/check for validation
fun processPayment(amount: Double) {
    require(amount > 0) { "Amount must be positive: $amount" }
    // Process
}
```

### Data Classes and Sealed Classes

```kotlin
// Use data classes for DTOs
data class UserDTO(
    val id: Int,
    val email: String,
    val name: String,
    val isActive: Boolean = true
)

// Use sealed classes for state
sealed class Result<out T> {
    data class Success<out T>(val data: T) : Result<T>()
    data class Error(val message: String, val cause: Throwable? = null) : Result<Nothing>()
    object Loading : Result<Nothing>()
}

// Pattern matching with when
fun handleResult(result: Result<User>) = when (result) {
    is Result.Success -> showUser(result.data)
    is Result.Error -> showError(result.message)
    Result.Loading -> showLoading()
}
```

### Coroutines

```kotlin
// Use structured concurrency
suspend fun loadDashboard(): Dashboard = coroutineScope {
    val profile = async { fetchProfile() }
    val stats = async { fetchStats() }
    val notifications = async { fetchNotifications() }

    Dashboard(
        profile = profile.await(),
        stats = stats.await(),
        notifications = notifications.await()
    )
}

// Handle cancellation
suspend fun fetchWithRetry(url: String): Response {
    repeat(3) { attempt ->
        try {
            return httpClient.get(url)
        } catch (e: IOException) {
            if (attempt == 2) throw e
            delay(1000L * (attempt + 1))
        }
    }
    throw IllegalStateException("Unreachable")
}
```
diff --git a/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/references/common_antipatterns.md b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/references/common_antipatterns.md
new file mode 100644
index 00000000..26045452
--- /dev/null
+++ b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/references/common_antipatterns.md
@@ -0,0 +1,739 @@
# Common Antipatterns

Code antipatterns to identify during review, with examples and fixes.

---

## Table of Contents

- [Structural Antipatterns](#structural-antipatterns)
- [Logic Antipatterns](#logic-antipatterns)
- [Security Antipatterns](#security-antipatterns)
- [Performance Antipatterns](#performance-antipatterns)
- [Testing Antipatterns](#testing-antipatterns)
- [Async Antipatterns](#async-antipatterns)

---

## Structural Antipatterns

### God Class

A class that does too much and knows too much.

```typescript
// BAD: God class handling everything
class UserManager {
  createUser(data: UserData) { ...
}
  updateUser(id: string, data: UserData) { ... }
  deleteUser(id: string) { ... }
  sendEmail(userId: string, content: string) { ... }
  generateReport(userId: string) { ... }
  validatePassword(password: string) { ... }
  hashPassword(password: string) { ... }
  uploadAvatar(userId: string, file: File) { ... }
  resizeImage(file: File) { ... }
  logActivity(userId: string, action: string) { ... }
  // 50 more methods...
}

// GOOD: Single responsibility classes
class UserRepository {
  create(data: UserData): User { ... }
  update(id: string, data: Partial<UserData>): User { ... }
  delete(id: string): void { ... }
}

class EmailService {
  send(to: string, content: string): void { ... }
}

class PasswordService {
  validate(password: string): ValidationResult { ... }
  hash(password: string): string { ... }
}
```

**Detection:** Class has >20 methods, >500 lines, or handles unrelated concerns.

---

### Long Method

Functions that do too much and are hard to understand.

```python
# BAD: Long method doing everything
def process_order(order_data):
    # Validate order (20 lines)
    if not order_data.get('items'):
        raise ValueError('No items')
    if not order_data.get('customer_id'):
        raise ValueError('No customer')
    # ... more validation

    # Calculate totals (30 lines)
    subtotal = 0
    for item in order_data['items']:
        price = get_product_price(item['product_id'])
        subtotal += price * item['quantity']
    # ... tax calculation, discounts

    # Process payment (40 lines)
    payment_result = payment_gateway.charge(...)
    # ... handle payment errors

    # Create order record (20 lines)
    order = Order.create(...)

    # Send notifications (20 lines)
    send_order_confirmation(...)
    notify_warehouse(...)
+ + return order + +# GOOD: Composed of focused functions +def process_order(order_data): + validate_order(order_data) + totals = calculate_order_totals(order_data) + payment = process_payment(order_data['customer_id'], totals) + order = create_order_record(order_data, totals, payment) + send_order_notifications(order) + return order +``` + +**Detection:** Function >50 lines or requires scrolling to read. + +--- + +### Deep Nesting + +Excessive indentation making code hard to follow. + +```javascript +// BAD: Deep nesting +function processData(data) { + if (data) { + if (data.items) { + if (data.items.length > 0) { + for (const item of data.items) { + if (item.isValid) { + if (item.type === 'premium') { + if (item.price > 100) { + // Finally do something + processItem(item); + } + } + } + } + } + } + } +} + +// GOOD: Early returns and guard clauses +function processData(data) { + if (!data?.items?.length) { + return; + } + + const premiumItems = data.items.filter( + item => item.isValid && item.type === 'premium' && item.price > 100 + ); + + premiumItems.forEach(processItem); +} +``` + +**Detection:** Indentation >4 levels deep. + +--- + +### Magic Numbers and Strings + +Hard-coded values without explanation. + +```go +// BAD: Magic numbers +func calculateDiscount(total float64, userType int) float64 { + if userType == 1 { + return total * 0.15 + } else if userType == 2 { + return total * 0.25 + } + return total * 0.05 +} + +// GOOD: Named constants +const ( + UserTypeRegular = 1 + UserTypePremium = 2 + + DiscountRegular = 0.05 + DiscountStandard = 0.15 + DiscountPremium = 0.25 +) + +func calculateDiscount(total float64, userType int) float64 { + switch userType { + case UserTypePremium: + return total * DiscountPremium + case UserTypeRegular: + return total * DiscountStandard + default: + return total * DiscountRegular + } +} +``` + +**Detection:** Literal numbers (except 0, 1) or repeated string literals. 
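Detection rules like the one above are mechanical enough to automate, and this skill's quality checker reports magic numbers. A minimal heuristic in that spirit can be sketched as follows; the function name and the 0/1 exemption list are illustrative assumptions, not the script's actual implementation:

```python
import re


def find_magic_numbers(source: str) -> list[tuple[int, str]]:
    """Flag numeric literals other than 0 and 1 (simple review heuristic)."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        code = line.split("#")[0]  # ignore trailing comments
        # Match standalone integer/float literals, not parts of identifiers
        for match in re.finditer(r"(?<![\w.])\d+(?:\.\d+)?(?![\w.])", code):
            literal = match.group(0)
            if literal not in ("0", "1"):
                findings.append((lineno, literal))
    return findings


snippet = (
    "def discount(total):\n"
    "    if total > 100:\n"
    "        return total * 0.15\n"
    "    return total * 0.05\n"
)
print(find_magic_numbers(snippet))
```

A regex pass like this over-reports (string contents, array indices, exit codes), which is acceptable when the goal is only to surface candidates for human review.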
---

### Primitive Obsession

Using primitives instead of small objects.

```typescript
// BAD: Primitives everywhere
function createUser(
  name: string,
  email: string,
  phone: string,
  street: string,
  city: string,
  zipCode: string,
  country: string
): User { ... }

// GOOD: Value objects
interface Address {
  street: string;
  city: string;
  zipCode: string;
  country: string;
}

interface ContactInfo {
  email: string;
  phone: string;
}

function createUser(
  name: string,
  contact: ContactInfo,
  address: Address
): User { ... }
```

**Detection:** Functions with >4 parameters of same type, or related primitives always passed together.

---

## Logic Antipatterns

### Boolean Blindness

Passing booleans that make code unreadable at call sites.

```swift
// BAD: What do these booleans mean?
user.configure(true, false, true, false)

// GOOD: Named parameters or option objects
user.configure(
    sendWelcomeEmail: true,
    requireVerification: false,
    enableNotifications: true,
    isAdmin: false
)

// Or use an options struct
struct UserConfiguration {
    var sendWelcomeEmail: Bool = true
    var requireVerification: Bool = false
    var enableNotifications: Bool = true
    var isAdmin: Bool = false
}

user.configure(UserConfiguration())
```

**Detection:** Function calls with multiple boolean literals.

---

### Null Returns for Collections

Returning null instead of empty collections.

```kotlin
// BAD: Returning null
fun findUsersByRole(role: String): List<User>? {
    val users = repository.findByRole(role)
    return if (users.isEmpty()) null else users
}

// Caller must handle null
val users = findUsersByRole("admin")
if (users != null) {
    users.forEach { ... }
}

// GOOD: Return empty collection
fun findUsersByRole(role: String): List<User> {
    return repository.findByRole(role)
}

// Caller can iterate directly
findUsersByRole("admin").forEach { ...
} +``` + +**Detection:** Functions returning nullable collections. + +--- + +### Stringly Typed Code + +Using strings where enums or types should be used. + +```python +# BAD: String-based logic +def handle_event(event_type: str, data: dict): + if event_type == "user_created": + handle_user_created(data) + elif event_type == "user_updated": + handle_user_updated(data) + elif event_type == "user_dleted": # Typo won't be caught + handle_user_deleted(data) + +# GOOD: Enum-based +from enum import Enum + +class EventType(Enum): + USER_CREATED = "user_created" + USER_UPDATED = "user_updated" + USER_DELETED = "user_deleted" + +def handle_event(event_type: EventType, data: dict): + handlers = { + EventType.USER_CREATED: handle_user_created, + EventType.USER_UPDATED: handle_user_updated, + EventType.USER_DELETED: handle_user_deleted, + } + handlers[event_type](data) +``` + +**Detection:** String comparisons for type/status/category values. + +--- + +## Security Antipatterns + +### SQL Injection + +String concatenation in SQL queries. + +```javascript +// BAD: String concatenation +const query = `SELECT * FROM users WHERE id = ${userId}`; +db.query(query); + +// BAD: String templates still vulnerable +const query = `SELECT * FROM users WHERE name = '${userName}'`; + +// GOOD: Parameterized queries +const query = 'SELECT * FROM users WHERE id = $1'; +db.query(query, [userId]); + +// GOOD: Using ORM safely +User.findOne({ where: { id: userId } }); +``` + +**Detection:** String concatenation or template literals with SQL keywords. + +--- + +### Hardcoded Credentials + +Secrets in source code. 
```python
# BAD: Hardcoded secrets
API_KEY = "sk-abc123xyz789"
DATABASE_URL = "postgresql://admin:password123@prod-db.internal:5432/app"

# GOOD: Environment variables
import os

API_KEY = os.environ["API_KEY"]
DATABASE_URL = os.environ["DATABASE_URL"]

# GOOD: Secrets manager (here AWS Secrets Manager via boto3)
import boto3

secrets = boto3.client("secretsmanager")
API_KEY = secrets.get_secret_value(SecretId="api-key")["SecretString"]
```

**Detection:** Variables named `password`, `secret`, `key`, `token` with string literals.

---

### Unsafe Deserialization

Deserializing untrusted data without validation.

```python
# BAD: Binary deserialization of untrusted data can execute arbitrary code
# Examples: Python's pickle, yaml.load without SafeLoader

# GOOD: Use safe alternatives
import json

def load_data(file_path):
    with open(file_path, 'r') as f:
        return json.load(f)

# GOOD: Use SafeLoader for YAML
import yaml

with open('config.yaml') as f:
    config = yaml.safe_load(f)
```

**Detection:** Binary deserialization functions, `yaml.load` without a safe loader, dynamic code execution on external data.

---

### Missing Input Validation

Trusting user input without validation.

```typescript
// BAD: No validation
app.post('/user', (req, res) => {
  const user = db.create({
    name: req.body.name,
    email: req.body.email,
    role: req.body.role // User can set themselves as admin!
  });
  res.json(user);
});

// GOOD: Validate and sanitize
import { z } from 'zod';

const CreateUserSchema = z.object({
  name: z.string().min(1).max(100),
  email: z.string().email(),
  // role is NOT accepted from input
});

app.post('/user', (req, res) => {
  const validated = CreateUserSchema.parse(req.body);
  const user = db.create({
    ...validated,
    role: 'user' // Default role, not from input
  });
  res.json(user);
});
```

**Detection:** Request body/params used directly without validation schema.
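The security antipatterns above are the easiest to pre-screen mechanically, and the PR analyzer bundled with this skill reports hardcoded secrets. A minimal scan in that spirit might look like the sketch below; the pattern list and function name are illustrative assumptions, not the analyzer's real rules:

```python
import re

# Assignment of a quoted string literal to a suspicious variable name
SECRET_PATTERN = re.compile(
    r"(?i)(password|passwd|secret|api[_-]?key|token)\s*[:=]\s*['\"][^'\"]{4,}['\"]"
)


def find_hardcoded_secrets(source: str) -> list[int]:
    """Return 1-based line numbers that look like hardcoded credentials."""
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if SECRET_PATTERN.search(line)
    ]


bad = 'API_KEY = "sk-abc123xyz789"\nDB_PASSWORD = "hunter2postgres"'
good = 'API_KEY = os.environ["API_KEY"]'
print(find_hardcoded_secrets(bad))
print(find_hardcoded_secrets(good))
```

Reading the value from `os.environ` does not trigger the pattern because no quoted literal follows the assignment, which is exactly the behavior the "Detection" note above describes.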
---

## Performance Antipatterns

### N+1 Query Problem

Loading related data one record at a time.

```python
# BAD: N+1 queries
def get_orders_with_items():
    orders = Order.query.all()  # 1 query
    for order in orders:
        items = OrderItem.query.filter_by(order_id=order.id).all()  # N queries
        order.items = items
    return orders

# GOOD: Eager loading
def get_orders_with_items():
    return Order.query.options(
        joinedload(Order.items)
    ).all()  # 1 query with JOIN

# GOOD: Batch loading
def get_orders_with_items():
    orders = Order.query.all()
    order_ids = [o.id for o in orders]
    items = OrderItem.query.filter(
        OrderItem.order_id.in_(order_ids)
    ).all()  # 2 queries total
    # Group items by order_id...
```

**Detection:** Database queries inside loops.

---

### Unbounded Collections

Loading unlimited data into memory.

```go
// BAD: Load all records
func GetAllUsers() ([]User, error) {
    var users []User
    err := db.Find(&users).Error // Could be millions of rows
    return users, err
}

// GOOD: Pagination
func GetUsers(page, pageSize int) ([]User, error) {
    var users []User
    offset := (page - 1) * pageSize
    err := db.Limit(pageSize).Offset(offset).Find(&users).Error
    return users, err
}

// GOOD: Streaming for large datasets
func ProcessAllUsers(handler func(User) error) error {
    rows, err := db.Model(&User{}).Rows()
    if err != nil {
        return err
    }
    defer rows.Close()

    for rows.Next() {
        var user User
        db.ScanRows(rows, &user)
        if err := handler(user); err != nil {
            return err
        }
    }
    return nil
}
```

**Detection:** `findAll()`, `find({})`, or queries without `LIMIT`.

---

### Synchronous I/O in Hot Paths

Blocking operations in request handlers.
+ +```javascript +// BAD: Sync file read on every request +app.get('/config', (req, res) => { + const config = fs.readFileSync('./config.json'); // Blocks event loop + res.json(JSON.parse(config)); +}); + +// GOOD: Load once at startup +const config = JSON.parse(fs.readFileSync('./config.json')); + +app.get('/config', (req, res) => { + res.json(config); +}); + +// GOOD: Async with caching +let configCache = null; + +app.get('/config', async (req, res) => { + if (!configCache) { + configCache = JSON.parse(await fs.promises.readFile('./config.json')); + } + res.json(configCache); +}); +``` + +**Detection:** `readFileSync`, `execSync`, or blocking calls in request handlers. + +--- + +## Testing Antipatterns + +### Test Code Duplication + +Repeating setup in every test. + +```typescript +// BAD: Duplicate setup +describe('UserService', () => { + it('should create user', async () => { + const db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + const service = new UserService(userRepo, emailService); + + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); + + it('should update user', async () => { + const db = await createTestDatabase(); // Duplicated + const userRepo = new UserRepository(db); // Duplicated + const emailService = new MockEmailService(); // Duplicated + const service = new UserService(userRepo, emailService); // Duplicated + + // ... 
+ }); +}); + +// GOOD: Shared setup +describe('UserService', () => { + let service: UserService; + let db: TestDatabase; + + beforeEach(async () => { + db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + service = new UserService(userRepo, emailService); + }); + + afterEach(async () => { + await db.cleanup(); + }); + + it('should create user', async () => { + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); +}); +``` + +--- + +### Testing Implementation Instead of Behavior + +Tests coupled to internal implementation. + +```python +# BAD: Testing implementation details +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing internal structure + assert cart._items[0].name == "Apple" + assert cart._total == 1.00 + +# GOOD: Testing behavior +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing public behavior + assert cart.item_count == 1 + assert cart.total == 1.00 + assert cart.contains("Apple") +``` + +--- + +## Async Antipatterns + +### Floating Promises + +Promises without await or catch. + +```typescript +// BAD: Floating promise +async function saveUser(user: User) { + db.save(user); // Not awaited, errors lost + logger.info('User saved'); // Logs before save completes +} + +// BAD: Fire and forget in loop +for (const item of items) { + processItem(item); // All run in parallel, no error handling +} + +// GOOD: Await the promise +async function saveUser(user: User) { + await db.save(user); + logger.info('User saved'); +} + +// GOOD: Process with proper handling +await Promise.all(items.map(item => processItem(item))); + +// Or sequentially +for (const item of items) { + await processItem(item); +} +``` + +**Detection:** Async function calls without `await` or `.then()`. + +--- + +### Callback Hell + +Deeply nested callbacks. 
```javascript
// BAD: Callback hell
getUser(userId, (err, user) => {
  if (err) return handleError(err);
  getOrders(user.id, (err, orders) => {
    if (err) return handleError(err);
    getProducts(orders[0].productIds, (err, products) => {
      if (err) return handleError(err);
      renderPage(user, orders, products, (err) => {
        if (err) return handleError(err);
        console.log('Done');
      });
    });
  });
});

// GOOD: Async/await
async function loadPage(userId) {
  try {
    const user = await getUser(userId);
    const orders = await getOrders(user.id);
    const products = await getProducts(orders[0].productIds);
    await renderPage(user, orders, products);
    console.log('Done');
  } catch (err) {
    handleError(err);
  }
}
```

**Detection:** >2 levels of callback nesting.

---

### Async in Constructor

Async operations in constructors.

```typescript
// BAD: Async in constructor
class DatabaseConnection {
  constructor(url: string) {
    this.connect(url); // Fire-and-forget async
  }

  private async connect(url: string) {
    this.client = await createClient(url);
  }
}

// GOOD: Factory method
class DatabaseConnection {
  private constructor(private client: Client) {}

  static async create(url: string): Promise<DatabaseConnection> {
    const client = await createClient(url);
    return new DatabaseConnection(client);
  }
}

// Usage
const db = await DatabaseConnection.create(url);
```

**Detection:** `async` calls or `.then()` in constructor.
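Many of the antipatterns in this reference reduce to measurable signals; deep nesting, for example, can be approximated from indentation alone, in the spirit of the quality checker's >4-level threshold. This sketch assumes 4-space indentation and is an approximation, not the checker's actual algorithm:

```python
def max_nesting_depth(source: str, indent_width: int = 4) -> int:
    """Estimate nesting depth from leading whitespace (4-space indents assumed)."""
    depth = 0
    for line in source.splitlines():
        if not line.strip():
            continue  # blank lines carry no indentation signal
        leading = len(line) - len(line.lstrip(" "))
        depth = max(depth, leading // indent_width)
    return depth


nested = (
    "def f(data):\n"
    "    if data:\n"
    "        for item in data:\n"
    "            if item.ok:\n"
    "                if item.price > 100:\n"
    "                    process(item)\n"
)
print(max_nesting_depth(nested))
```

Indentation is only a proxy (continuation lines and tab-indented code skew it), but it is cheap enough to run on every changed file in a PR.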
diff --git a/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/scripts/code_quality_checker.py b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/scripts/code_quality_checker.py new file mode 100755 index 00000000..128dc9d8 --- /dev/null +++ b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/scripts/code_quality_checker.py @@ -0,0 +1,560 @@ +#!/usr/bin/env python3 +""" +Code Quality Checker + +Analyzes source code for quality issues, code smells, complexity metrics, +and SOLID principle violations. + +Usage: + python code_quality_checker.py /path/to/file.py + python code_quality_checker.py /path/to/directory --recursive + python code_quality_checker.py . --language typescript --json +""" + +import argparse +import json +import re +import sys +from pathlib import Path +from typing import Dict, List, Optional + + +# Language-specific file extensions +LANGUAGE_EXTENSIONS = { + "python": [".py"], + "typescript": [".ts", ".tsx"], + "javascript": [".js", ".jsx", ".mjs"], + "go": [".go"], + "swift": [".swift"], + "kotlin": [".kt", ".kts"] +} + +# Code smell thresholds +THRESHOLDS = { + "long_function_lines": 50, + "too_many_parameters": 5, + "high_complexity": 10, + "god_class_methods": 20, + "max_imports": 15 +} + + +def get_file_extension(filepath: Path) -> str: + """Get file extension.""" + return filepath.suffix.lower() + + +def detect_language(filepath: Path) -> Optional[str]: + """Detect programming language from file extension.""" + ext = get_file_extension(filepath) + for lang, extensions in LANGUAGE_EXTENSIONS.items(): + if ext in extensions: + return lang + return None + + +def read_file_content(filepath: Path) -> str: + """Read file content safely.""" + try: + with open(filepath, "r", encoding="utf-8", errors="ignore") as f: + return f.read() + except Exception: + return "" + + +def calculate_cyclomatic_complexity(content: str) -> int: + """ + Estimate cyclomatic complexity based on 
control flow keywords. + """ + complexity = 1 # Base complexity + + # Control flow patterns that increase complexity + patterns = [ + r"\bif\b", + r"\belif\b", + r"\belse\b", + r"\bfor\b", + r"\bwhile\b", + r"\bcase\b", + r"\bcatch\b", + r"\bexcept\b", + r"\band\b", + r"\bor\b", + r"\|\|", + r"&&" + ] + + for pattern in patterns: + matches = re.findall(pattern, content, re.IGNORECASE) + complexity += len(matches) + + return complexity + + +def count_lines(content: str) -> Dict[str, int]: + """Count different types of lines in code.""" + lines = content.split("\n") + total = len(lines) + blank = sum(1 for line in lines if not line.strip()) + comment = 0 + + for line in lines: + stripped = line.strip() + if stripped.startswith("#") or stripped.startswith("//"): + comment += 1 + elif stripped.startswith("/*") or stripped.startswith("'''") or stripped.startswith('"""'): + comment += 1 + + code = total - blank - comment + + return { + "total": total, + "code": code, + "blank": blank, + "comment": comment + } + + +def find_functions(content: str, language: str) -> List[Dict]: + """Find function definitions and their metrics.""" + functions = [] + + # Language-specific function patterns + patterns = { + "python": r"def\s+(\w+)\s*\(([^)]*)\)", + "typescript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "javascript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "go": r"func\s+(?:\([^)]+\)\s+)?(\w+)\s*\(([^)]*)\)", + "swift": r"func\s+(\w+)\s*\(([^)]*)\)", + "kotlin": r"fun\s+(\w+)\s*\(([^)]*)\)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content, re.MULTILINE) + + for match in matches: + name = next((g for g in match.groups() if g), "anonymous") + params_str = match.group(2) if len(match.groups()) > 1 and match.group(2) else "" + + # Count parameters + params = [p.strip() for p in params_str.split(",") if p.strip()] + param_count = 
len(params) + + # Estimate function length + start_pos = match.end() + remaining = content[start_pos:] + + next_func = re.search(pattern, remaining) + if next_func: + func_body = remaining[:next_func.start()] + else: + func_body = remaining[:min(2000, len(remaining))] + + line_count = len(func_body.split("\n")) + complexity = calculate_cyclomatic_complexity(func_body) + + functions.append({ + "name": name, + "parameters": param_count, + "lines": line_count, + "complexity": complexity + }) + + return functions + + +def find_classes(content: str, language: str) -> List[Dict]: + """Find class definitions and their metrics.""" + classes = [] + + patterns = { + "python": r"class\s+(\w+)", + "typescript": r"class\s+(\w+)", + "javascript": r"class\s+(\w+)", + "go": r"type\s+(\w+)\s+struct", + "swift": r"class\s+(\w+)", + "kotlin": r"class\s+(\w+)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content) + + for match in matches: + name = match.group(1) + + start_pos = match.end() + remaining = content[start_pos:] + + next_class = re.search(pattern, remaining) + if next_class: + class_body = remaining[:next_class.start()] + else: + class_body = remaining + + # Count methods + method_patterns = { + "python": r"def\s+\w+\s*\(", + "typescript": r"(?:public|private|protected)?\s*\w+\s*\([^)]*\)\s*[:{]", + "javascript": r"\w+\s*\([^)]*\)\s*\{", + "go": r"func\s+\(", + "swift": r"func\s+\w+", + "kotlin": r"fun\s+\w+" + } + method_pattern = method_patterns.get(language, method_patterns["python"]) + methods = len(re.findall(method_pattern, class_body)) + + classes.append({ + "name": name, + "methods": methods, + "lines": len(class_body.split("\n")) + }) + + return classes + + +def check_code_smells(content: str, functions: List[Dict], classes: List[Dict]) -> List[Dict]: + """Check for code smells in the content.""" + smells = [] + + # Long functions + for func in functions: + if func["lines"] > THRESHOLDS["long_function_lines"]: + 
smells.append({
+                "type": "long_function",
+                "severity": "medium",
+                "message": f"Function '{func['name']}' has {func['lines']} lines (max: {THRESHOLDS['long_function_lines']})",
+                "location": func["name"]
+            })
+
+    # Too many parameters
+    for func in functions:
+        if func["parameters"] > THRESHOLDS["too_many_parameters"]:
+            smells.append({
+                "type": "too_many_parameters",
+                "severity": "low",
+                "message": f"Function '{func['name']}' has {func['parameters']} parameters (max: {THRESHOLDS['too_many_parameters']})",
+                "location": func["name"]
+            })
+
+    # High complexity
+    for func in functions:
+        if func["complexity"] > THRESHOLDS["high_complexity"]:
+            severity = "high" if func["complexity"] > 20 else "medium"
+            smells.append({
+                "type": "high_complexity",
+                "severity": severity,
+                "message": f"Function '{func['name']}' has complexity {func['complexity']} (max: {THRESHOLDS['high_complexity']})",
+                "location": func["name"]
+            })
+
+    # God classes
+    for cls in classes:
+        if cls["methods"] > THRESHOLDS["god_class_methods"]:
+            smells.append({
+                "type": "god_class",
+                "severity": "high",
+                "message": f"Class '{cls['name']}' has {cls['methods']} methods (max: {THRESHOLDS['god_class_methods']})",
+                "location": cls["name"]
+            })
+
+    # Magic numbers (skip 0/1 and attribute/decimal contexts)
+    magic_pattern = r"\b(?<![\w.])\d{2,}\b"
+    magic_numbers = re.findall(magic_pattern, content)
+    if len(magic_numbers) > 5:
+        smells.append({
+            "type": "magic_numbers",
+            "severity": "low",
+            "message": f"Found {len(magic_numbers)} magic numbers - consider named constants",
+            "location": "module"
+        })
+
+    return smells
+
+
+def check_solid_violations(content: str) -> 
List[Dict]: + """Check for potential SOLID principle violations.""" + violations = [] + + # OCP: Type checking instead of polymorphism + type_checks = len(re.findall(r"isinstance\(|type\(.*\)\s*==|typeof\s+\w+\s*===", content)) + if type_checks > 2: + violations.append({ + "principle": "OCP", + "name": "Open/Closed Principle", + "severity": "medium", + "message": f"Found {type_checks} type checks - consider using polymorphism" + }) + + # LSP/ISP: NotImplementedError + not_impl = len(re.findall(r"raise\s+NotImplementedError|not\s+implemented", content, re.IGNORECASE)) + if not_impl: + violations.append({ + "principle": "LSP/ISP", + "name": "Liskov/Interface Segregation", + "severity": "low", + "message": f"Found {not_impl} unimplemented methods - may indicate oversized interface" + }) + + # DIP: Too many direct imports + imports = len(re.findall(r"^(?:import|from)\s+", content, re.MULTILINE)) + if imports > THRESHOLDS["max_imports"]: + violations.append({ + "principle": "DIP", + "name": "Dependency Inversion Principle", + "severity": "low", + "message": f"File has {imports} imports - consider dependency injection" + }) + + return violations + + +def calculate_quality_score( + line_metrics: Dict, + functions: List[Dict], + classes: List[Dict], + smells: List[Dict], + violations: List[Dict] +) -> int: + """Calculate overall quality score (0-100).""" + score = 100 + + # Deduct for code smells + for smell in smells: + if smell["severity"] == "high": + score -= 10 + elif smell["severity"] == "medium": + score -= 5 + elif smell["severity"] == "low": + score -= 2 + + # Deduct for SOLID violations + for violation in violations: + if violation["severity"] == "high": + score -= 8 + elif violation["severity"] == "medium": + score -= 4 + elif violation["severity"] == "low": + score -= 2 + + # Bonus for good comment ratio (10-30%) + if line_metrics["total"] > 0: + comment_ratio = line_metrics["comment"] / line_metrics["total"] + if 0.1 <= comment_ratio <= 0.3: + score += 5 + + # 
Bonus for reasonable function sizes + if functions: + avg_lines = sum(f["lines"] for f in functions) / len(functions) + if avg_lines < 30: + score += 5 + + return max(0, min(100, score)) + + +def get_grade(score: int) -> str: + """Convert score to letter grade.""" + if score >= 90: + return "A" + elif score >= 80: + return "B" + elif score >= 70: + return "C" + elif score >= 60: + return "D" + else: + return "F" + + +def analyze_file(filepath: Path) -> Dict: + """Analyze a single file for code quality.""" + language = detect_language(filepath) + if not language: + return {"error": f"Unsupported file type: {filepath.suffix}"} + + content = read_file_content(filepath) + if not content: + return {"error": f"Could not read file: {filepath}"} + + line_metrics = count_lines(content) + functions = find_functions(content, language) + classes = find_classes(content, language) + smells = check_code_smells(content, functions, classes) + violations = check_solid_violations(content) + score = calculate_quality_score(line_metrics, functions, classes, smells, violations) + + return { + "file": str(filepath), + "language": language, + "metrics": { + "lines": line_metrics, + "functions": len(functions), + "classes": len(classes), + "avg_complexity": round(sum(f["complexity"] for f in functions) / max(1, len(functions)), 1) + }, + "quality_score": score, + "grade": get_grade(score), + "smells": smells, + "solid_violations": violations, + "function_details": functions[:10], + "class_details": classes[:10] + } + + +def analyze_directory( + dir_path: Path, + recursive: bool = True, + language: Optional[str] = None +) -> Dict: + """Analyze all files in a directory.""" + results = [] + extensions = [] + + if language: + extensions = LANGUAGE_EXTENSIONS.get(language, []) + else: + for exts in LANGUAGE_EXTENSIONS.values(): + extensions.extend(exts) + + pattern = "**/*" if recursive else "*" + + for ext in extensions: + for filepath in dir_path.glob(f"{pattern}{ext}"): + if "node_modules" 
in str(filepath) or ".git" in str(filepath): + continue + result = analyze_file(filepath) + if "error" not in result: + results.append(result) + + if not results: + return {"error": "No supported files found"} + + total_score = sum(r["quality_score"] for r in results) + avg_score = total_score / len(results) + total_smells = sum(len(r["smells"]) for r in results) + total_violations = sum(len(r["solid_violations"]) for r in results) + + return { + "directory": str(dir_path), + "files_analyzed": len(results), + "average_score": round(avg_score, 1), + "overall_grade": get_grade(int(avg_score)), + "total_code_smells": total_smells, + "total_solid_violations": total_violations, + "files": sorted(results, key=lambda x: x["quality_score"]) + } + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if "error" in analysis: + print(f"Error: {analysis['error']}") + return + + print("=" * 60) + print("CODE QUALITY REPORT") + print("=" * 60) + + if "file" in analysis: + print(f"\nFile: {analysis['file']}") + print(f"Language: {analysis['language']}") + print(f"Quality Score: {analysis['quality_score']}/100 ({analysis['grade']})") + + metrics = analysis["metrics"] + print(f"\nLines: {metrics['lines']['total']} ({metrics['lines']['code']} code, {metrics['lines']['comment']} comments)") + print(f"Functions: {metrics['functions']}") + print(f"Classes: {metrics['classes']}") + print(f"Avg Complexity: {metrics['avg_complexity']}") + + if analysis["smells"]: + print("\n--- CODE SMELLS ---") + for smell in analysis["smells"][:10]: + print(f" [{smell['severity'].upper()}] {smell['message']} ({smell['location']})") + + if analysis["solid_violations"]: + print("\n--- SOLID VIOLATIONS ---") + for v in analysis["solid_violations"]: + print(f" [{v['principle']}] {v['message']}") + else: + print(f"\nDirectory: {analysis['directory']}") + print(f"Files Analyzed: {analysis['files_analyzed']}") + print(f"Average Score: {analysis['average_score']}/100 
({analysis['overall_grade']})")
+        print(f"Total Code Smells: {analysis['total_code_smells']}")
+        print(f"Total SOLID Violations: {analysis['total_solid_violations']}")
+
+        print("\n--- FILES BY QUALITY ---")
+        for f in analysis["files"][:10]:
+            print(f"  {f['quality_score']:3d}/100 [{f['grade']}] {f['file']}")
+
+    print("\n" + "=" * 60)
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Analyze code quality, smells, and SOLID violations"
+    )
+    parser.add_argument(
+        "path",
+        help="File or directory to analyze"
+    )
+    parser.add_argument(
+        "--recursive", "-r",
+        action=argparse.BooleanOptionalAction,
+        default=True,
+        help="Recursively analyze directories (default: true; pass --no-recursive to disable)"
+    )
+    parser.add_argument(
+        "--language", "-l",
+        choices=list(LANGUAGE_EXTENSIONS.keys()),
+        help="Filter by programming language"
+    )
+    parser.add_argument(
+        "--json",
+        action="store_true",
+        help="Output in JSON format"
+    )
+    parser.add_argument(
+        "--output", "-o",
+        help="Write output to file"
+    )
+
+    args = parser.parse_args()
+
+    target = Path(args.path).resolve()
+
+    if not target.exists():
+        print(f"Error: Path does not exist: {target}", file=sys.stderr)
+        sys.exit(1)
+
+    if target.is_file():
+        analysis = analyze_file(target)
+    else:
+        analysis = analyze_directory(target, args.recursive, args.language)
+
+    if args.json:
+        output = json.dumps(analysis, indent=2, default=str)
+        if args.output:
+            with open(args.output, "w") as f:
+                f.write(output)
+            print(f"Results written to {args.output}")
+        else:
+            print(output)
+    else:
+        print_report(analysis)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/scripts/pr_analyzer.py b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/scripts/pr_analyzer.py
new file mode 100755
index 00000000..26780436
--- /dev/null
+++ b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/scripts/pr_analyzer.py
@@ -0,0 +1,495 @@
+#!/usr/bin/env 
python3 +""" +PR Analyzer + +Analyzes pull request changes for review complexity, risk assessment, +and generates review priorities. + +Usage: + python pr_analyzer.py /path/to/repo + python pr_analyzer.py . --base main --head feature-branch + python pr_analyzer.py /path/to/repo --json +""" + +import argparse +import json +import os +import re +import subprocess +import sys +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# File categories for review prioritization +FILE_CATEGORIES = { + "critical": { + "patterns": [ + r"auth", r"security", r"password", r"token", r"secret", + r"payment", r"billing", r"crypto", r"encrypt" + ], + "weight": 5, + "description": "Security-sensitive files requiring careful review" + }, + "high": { + "patterns": [ + r"api", r"database", r"migration", r"schema", r"model", + r"config", r"env", r"middleware" + ], + "weight": 4, + "description": "Core infrastructure files" + }, + "medium": { + "patterns": [ + r"service", r"controller", r"handler", r"util", r"helper" + ], + "weight": 3, + "description": "Business logic files" + }, + "low": { + "patterns": [ + r"test", r"spec", r"mock", r"fixture", r"story", + r"readme", r"docs", r"\.md$" + ], + "weight": 1, + "description": "Tests and documentation" + } +} + +# Risky patterns to flag +RISK_PATTERNS = [ + { + "name": "hardcoded_secrets", + "pattern": r"(password|secret|api_key|token)\s*[=:]\s*['\"][^'\"]+['\"]", + "severity": "critical", + "message": "Potential hardcoded secret detected" + }, + { + "name": "todo_fixme", + "pattern": r"(TODO|FIXME|HACK|XXX):", + "severity": "low", + "message": "TODO/FIXME comment found" + }, + { + "name": "console_log", + "pattern": r"console\.(log|debug|info|warn|error)\(", + "severity": "medium", + "message": "Console statement found (remove for production)" + }, + { + "name": "debugger", + "pattern": r"\bdebugger\b", + "severity": "high", + "message": "Debugger statement found" + }, + { + "name": "disable_eslint", + "pattern": 
r"eslint-disable", + "severity": "medium", + "message": "ESLint rule disabled" + }, + { + "name": "any_type", + "pattern": r":\s*any\b", + "severity": "medium", + "message": "TypeScript 'any' type used" + }, + { + "name": "sql_concatenation", + "pattern": r"(SELECT|INSERT|UPDATE|DELETE).*\+.*['\"]", + "severity": "critical", + "message": "Potential SQL injection (string concatenation in query)" + } +] + + +def run_git_command(cmd: List[str], cwd: Path) -> Tuple[bool, str]: + """Run a git command and return success status and output.""" + try: + result = subprocess.run( + cmd, + cwd=cwd, + capture_output=True, + text=True, + timeout=30 + ) + return result.returncode == 0, result.stdout.strip() + except subprocess.TimeoutExpired: + return False, "Command timed out" + except Exception as e: + return False, str(e) + + +def get_changed_files(repo_path: Path, base: str, head: str) -> List[Dict]: + """Get list of changed files between two refs.""" + success, output = run_git_command( + ["git", "diff", "--name-status", f"{base}...{head}"], + repo_path + ) + + if not success: + # Try without the triple dot (for uncommitted changes) + success, output = run_git_command( + ["git", "diff", "--name-status", base, head], + repo_path + ) + + if not success or not output: + # Fall back to staged changes + success, output = run_git_command( + ["git", "diff", "--name-status", "--cached"], + repo_path + ) + + files = [] + for line in output.split("\n"): + if not line.strip(): + continue + parts = line.split("\t") + if len(parts) >= 2: + status = parts[0][0] # First character of status + filepath = parts[-1] # Handle renames (R100\told\tnew) + status_map = { + "A": "added", + "M": "modified", + "D": "deleted", + "R": "renamed", + "C": "copied" + } + files.append({ + "path": filepath, + "status": status_map.get(status, "modified") + }) + + return files + + +def get_file_diff(repo_path: Path, filepath: str, base: str, head: str) -> str: + """Get diff content for a specific file.""" + 
success, output = run_git_command( + ["git", "diff", f"{base}...{head}", "--", filepath], + repo_path + ) + if not success: + success, output = run_git_command( + ["git", "diff", "--cached", "--", filepath], + repo_path + ) + return output if success else "" + + +def categorize_file(filepath: str) -> Tuple[str, int]: + """Categorize a file based on its path and name.""" + filepath_lower = filepath.lower() + + for category, info in FILE_CATEGORIES.items(): + for pattern in info["patterns"]: + if re.search(pattern, filepath_lower): + return category, info["weight"] + + return "medium", 2 # Default category + + +def analyze_diff_for_risks(diff_content: str, filepath: str) -> List[Dict]: + """Analyze diff content for risky patterns.""" + risks = [] + + # Only analyze added lines (starting with +) + added_lines = [ + line[1:] for line in diff_content.split("\n") + if line.startswith("+") and not line.startswith("+++") + ] + + content = "\n".join(added_lines) + + for risk in RISK_PATTERNS: + matches = re.findall(risk["pattern"], content, re.IGNORECASE) + if matches: + risks.append({ + "name": risk["name"], + "severity": risk["severity"], + "message": risk["message"], + "file": filepath, + "count": len(matches) + }) + + return risks + + +def count_changes(diff_content: str) -> Dict[str, int]: + """Count additions and deletions in diff.""" + additions = 0 + deletions = 0 + + for line in diff_content.split("\n"): + if line.startswith("+") and not line.startswith("+++"): + additions += 1 + elif line.startswith("-") and not line.startswith("---"): + deletions += 1 + + return {"additions": additions, "deletions": deletions} + + +def calculate_complexity_score(files: List[Dict], all_risks: List[Dict]) -> int: + """Calculate overall PR complexity score (1-10).""" + score = 0 + + # File count contribution (max 3 points) + file_count = len(files) + if file_count > 20: + score += 3 + elif file_count > 10: + score += 2 + elif file_count > 5: + score += 1 + + # Total changes 
contribution (max 3 points)
+    total_changes = sum(f.get("additions", 0) + f.get("deletions", 0) for f in files)
+    if total_changes > 500:
+        score += 3
+    elif total_changes > 200:
+        score += 2
+    elif total_changes > 50:
+        score += 1
+
+    # Risk severity contribution (max 4 points)
+    critical_risks = sum(1 for r in all_risks if r["severity"] == "critical")
+    high_risks = sum(1 for r in all_risks if r["severity"] == "high")
+
+    score += min(2, critical_risks)
+    score += min(2, high_risks)
+
+    return min(10, max(1, score))
+
+
+def analyze_commit_messages(repo_path: Path, base: str, head: str) -> Dict:
+    """Analyze commit messages in the PR."""
+    success, output = run_git_command(
+        ["git", "log", "--oneline", f"{base}...{head}"],
+        repo_path
+    )
+
+    if not success or not output:
+        return {"commits": 0, "issues": []}
+
+    commits = output.strip().split("\n")
+    issues = []
+
+    for commit in commits:
+        if len(commit) < 10:
+            continue
+
+        # Check for conventional commit format.
+        # Split on the first space so any abbreviated-hash length works.
+        message = commit.split(" ", 1)[1] if " " in commit else commit
+
+        if not re.match(r"^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\(.+\))?:", message):
+            issues.append({
+                "commit": commit[:7],
+                "issue": "Does not follow conventional commit format"
+            })
+
+        if len(message) > 72:
+            issues.append({
+                "commit": commit[:7],
+                "issue": "Commit message exceeds 72 characters"
+            })
+
+    return {
+        "commits": len(commits),
+        "issues": issues
+    }
+
+
+def analyze_pr(
+    repo_path: Path,
+    base: str = "main",
+    head: str = "HEAD"
+) -> Dict:
+    """Perform complete PR analysis."""
+    # Get changed files
+    changed_files = get_changed_files(repo_path, base, head)
+
+    if not changed_files:
+        return {
+            "status": "no_changes",
+            "message": "No changes detected between branches"
+        }
+
+    # Analyze each file
+    all_risks = []
+    file_analyses = []
+
+    for file_info in changed_files:
+        filepath = file_info["path"]
+        category, weight = categorize_file(filepath)
+
+        # Get diff for the file
+        diff = 
get_file_diff(repo_path, filepath, base, head) + changes = count_changes(diff) + risks = analyze_diff_for_risks(diff, filepath) + + all_risks.extend(risks) + + file_analyses.append({ + "path": filepath, + "status": file_info["status"], + "category": category, + "priority_weight": weight, + "additions": changes["additions"], + "deletions": changes["deletions"], + "risks": risks + }) + + # Sort by priority (highest first) + file_analyses.sort(key=lambda x: (-x["priority_weight"], x["path"])) + + # Analyze commits + commit_analysis = analyze_commit_messages(repo_path, base, head) + + # Calculate metrics + complexity = calculate_complexity_score(file_analyses, all_risks) + + total_additions = sum(f["additions"] for f in file_analyses) + total_deletions = sum(f["deletions"] for f in file_analyses) + + return { + "status": "analyzed", + "summary": { + "files_changed": len(file_analyses), + "total_additions": total_additions, + "total_deletions": total_deletions, + "complexity_score": complexity, + "complexity_label": get_complexity_label(complexity), + "commits": commit_analysis["commits"] + }, + "risks": { + "critical": [r for r in all_risks if r["severity"] == "critical"], + "high": [r for r in all_risks if r["severity"] == "high"], + "medium": [r for r in all_risks if r["severity"] == "medium"], + "low": [r for r in all_risks if r["severity"] == "low"] + }, + "files": file_analyses, + "commit_issues": commit_analysis["issues"], + "review_order": [f["path"] for f in file_analyses[:10]] # Top 10 priority files + } + + +def get_complexity_label(score: int) -> str: + """Get human-readable complexity label.""" + if score <= 2: + return "Simple" + elif score <= 4: + return "Moderate" + elif score <= 6: + return "Complex" + elif score <= 8: + return "Very Complex" + else: + return "Critical" + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if analysis["status"] == "no_changes": + print("No changes detected.") + return + + summary 
= analysis["summary"] + risks = analysis["risks"] + + print("=" * 60) + print("PR ANALYSIS REPORT") + print("=" * 60) + + print(f"\nComplexity: {summary['complexity_score']}/10 ({summary['complexity_label']})") + print(f"Files Changed: {summary['files_changed']}") + print(f"Lines: +{summary['total_additions']} / -{summary['total_deletions']}") + print(f"Commits: {summary['commits']}") + + # Risk summary + print("\n--- RISK SUMMARY ---") + print(f"Critical: {len(risks['critical'])}") + print(f"High: {len(risks['high'])}") + print(f"Medium: {len(risks['medium'])}") + print(f"Low: {len(risks['low'])}") + + # Critical and high risks details + if risks["critical"]: + print("\n--- CRITICAL RISKS ---") + for risk in risks["critical"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + if risks["high"]: + print("\n--- HIGH RISKS ---") + for risk in risks["high"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + # Commit message issues + if analysis["commit_issues"]: + print("\n--- COMMIT MESSAGE ISSUES ---") + for issue in analysis["commit_issues"][:5]: + print(f" {issue['commit']}: {issue['issue']}") + + # Review order + print("\n--- SUGGESTED REVIEW ORDER ---") + for i, filepath in enumerate(analysis["review_order"], 1): + file_info = next(f for f in analysis["files"] if f["path"] == filepath) + print(f" {i}. 
[{file_info['category'].upper()}] {filepath}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze pull request for review complexity and risks" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to git repository (default: current directory)" + ) + parser.add_argument( + "--base", "-b", + default="main", + help="Base branch for comparison (default: main)" + ) + parser.add_argument( + "--head", + default="HEAD", + help="Head branch/commit for comparison (default: HEAD)" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + + if not (repo_path / ".git").exists(): + print(f"Error: {repo_path} is not a git repository", file=sys.stderr) + sys.exit(1) + + analysis = analyze_pr(repo_path, args.base, args.head) + + if args.json: + output = json.dumps(analysis, indent=2) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/scripts/review_report_generator.py b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/scripts/review_report_generator.py new file mode 100755 index 00000000..7c2246a9 --- /dev/null +++ b/.agentsmesh/packs/alirezarezvani-claude-skills-skills/skills/code-reviewer/scripts/review_report_generator.py @@ -0,0 +1,505 @@ +#!/usr/bin/env python3 +""" +Review Report Generator + +Generates comprehensive code review reports by combining PR analysis +and code quality findings into structured, actionable reports. + +Usage: + python review_report_generator.py /path/to/repo + python review_report_generator.py . 
--pr-analysis pr_results.json --quality-analysis quality_results.json + python review_report_generator.py /path/to/repo --format markdown --output review.md +""" + +import argparse +import json +import os +import subprocess +import sys +from datetime import datetime +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# Severity weights for prioritization +SEVERITY_WEIGHTS = { + "critical": 100, + "high": 75, + "medium": 50, + "low": 25, + "info": 10 +} + +# Review verdict thresholds +VERDICT_THRESHOLDS = { + "approve": {"max_critical": 0, "max_high": 0, "max_score": 100}, + "approve_with_suggestions": {"max_critical": 0, "max_high": 2, "max_score": 85}, + "request_changes": {"max_critical": 0, "max_high": 5, "max_score": 70}, + "block": {"max_critical": float("inf"), "max_high": float("inf"), "max_score": 0} +} + + +def load_json_file(filepath: str) -> Optional[Dict]: + """Load JSON file if it exists.""" + try: + with open(filepath, "r") as f: + return json.load(f) + except (FileNotFoundError, json.JSONDecodeError): + return None + + +def run_pr_analyzer(repo_path: Path) -> Dict: + """Run pr_analyzer.py and return results.""" + script_path = Path(__file__).parent / "pr_analyzer.py" + if not script_path.exists(): + return {"status": "error", "message": "pr_analyzer.py not found"} + + try: + result = subprocess.run( + [sys.executable, str(script_path), str(repo_path), "--json"], + capture_output=True, + text=True, + timeout=120 + ) + if result.returncode == 0: + return json.loads(result.stdout) + return {"status": "error", "message": result.stderr} + except Exception as e: + return {"status": "error", "message": str(e)} + + +def run_quality_checker(repo_path: Path) -> Dict: + """Run code_quality_checker.py and return results.""" + script_path = Path(__file__).parent / "code_quality_checker.py" + if not script_path.exists(): + return {"status": "error", "message": "code_quality_checker.py not found"} + + try: + result = subprocess.run( + 
[sys.executable, str(script_path), str(repo_path), "--json"], + capture_output=True, + text=True, + timeout=300 + ) + if result.returncode == 0: + return json.loads(result.stdout) + return {"status": "error", "message": result.stderr} + except Exception as e: + return {"status": "error", "message": str(e)} + + +def calculate_review_score(pr_analysis: Dict, quality_analysis: Dict) -> int: + """Calculate overall review score (0-100).""" + score = 100 + + # Deduct for PR risks + if "risks" in pr_analysis: + risks = pr_analysis["risks"] + score -= len(risks.get("critical", [])) * 15 + score -= len(risks.get("high", [])) * 10 + score -= len(risks.get("medium", [])) * 5 + score -= len(risks.get("low", [])) * 2 + + # Deduct for code quality issues + if "issues" in quality_analysis: + issues = quality_analysis["issues"] + score -= len([i for i in issues if i.get("severity") == "critical"]) * 12 + score -= len([i for i in issues if i.get("severity") == "high"]) * 8 + score -= len([i for i in issues if i.get("severity") == "medium"]) * 4 + score -= len([i for i in issues if i.get("severity") == "low"]) * 1 + + # Deduct for complexity + if "summary" in pr_analysis: + complexity = pr_analysis["summary"].get("complexity_score", 0) + if complexity > 7: + score -= 10 + elif complexity > 5: + score -= 5 + + return max(0, min(100, score)) + + +def determine_verdict(score: int, critical_count: int, high_count: int) -> Tuple[str, str]: + """Determine review verdict based on score and issue counts.""" + if critical_count > 0: + return "block", "Critical issues must be resolved before merge" + + if score >= 90 and high_count == 0: + return "approve", "Code meets quality standards" + + if score >= 75 and high_count <= 2: + return "approve_with_suggestions", "Minor improvements recommended" + + if score >= 50: + return "request_changes", "Several issues need to be addressed" + + return "block", "Significant issues prevent approval" + + +def generate_findings_list(pr_analysis: Dict, 
quality_analysis: Dict) -> List[Dict]: + """Combine and prioritize all findings.""" + findings = [] + + # Add PR risk findings + if "risks" in pr_analysis: + for severity, items in pr_analysis["risks"].items(): + for item in items: + findings.append({ + "source": "pr_analysis", + "severity": severity, + "category": item.get("name", "unknown"), + "message": item.get("message", ""), + "file": item.get("file", ""), + "count": item.get("count", 1) + }) + + # Add code quality findings + if "issues" in quality_analysis: + for issue in quality_analysis["issues"]: + findings.append({ + "source": "quality_analysis", + "severity": issue.get("severity", "medium"), + "category": issue.get("type", "unknown"), + "message": issue.get("message", ""), + "file": issue.get("file", ""), + "line": issue.get("line", 0) + }) + + # Sort by severity weight + findings.sort( + key=lambda x: -SEVERITY_WEIGHTS.get(x["severity"], 0) + ) + + return findings + + +def generate_action_items(findings: List[Dict]) -> List[Dict]: + """Generate prioritized action items from findings.""" + action_items = [] + seen_categories = set() + + for finding in findings: + category = finding["category"] + severity = finding["severity"] + + # Group similar issues + if category in seen_categories and severity not in ["critical", "high"]: + continue + + action = { + "priority": "P0" if severity == "critical" else "P1" if severity == "high" else "P2", + "action": get_action_for_category(category, finding), + "severity": severity, + "files_affected": [finding["file"]] if finding.get("file") else [] + } + action_items.append(action) + seen_categories.add(category) + + return action_items[:15] # Top 15 actions + + +def get_action_for_category(category: str, finding: Dict) -> str: + """Get actionable recommendation for issue category.""" + actions = { + "hardcoded_secrets": "Remove hardcoded credentials and use environment variables or a secrets manager", + "sql_concatenation": "Use parameterized queries to prevent SQL 
injection", + "debugger": "Remove debugger statements before merging", + "console_log": "Remove or replace console statements with proper logging", + "todo_fixme": "Address TODO/FIXME comments or create tracking issues", + "disable_eslint": "Address the underlying issue instead of disabling lint rules", + "any_type": "Replace 'any' types with proper type definitions", + "long_function": "Break down function into smaller, focused units", + "god_class": "Split class into smaller, single-responsibility classes", + "too_many_params": "Use parameter objects or builder pattern", + "deep_nesting": "Refactor using early returns, guard clauses, or extraction", + "high_complexity": "Reduce cyclomatic complexity through refactoring", + "missing_error_handling": "Add proper error handling and recovery logic", + "duplicate_code": "Extract duplicate code into shared functions", + "magic_numbers": "Replace magic numbers with named constants", + "large_file": "Consider splitting into multiple smaller modules" + } + return actions.get(category, f"Review and address: {finding.get('message', category)}") + + +def format_markdown_report(report: Dict) -> str: + """Generate markdown-formatted report.""" + lines = [] + + # Header + lines.append("# Code Review Report") + lines.append("") + lines.append(f"**Generated:** {report['metadata']['generated_at']}") + lines.append(f"**Repository:** {report['metadata']['repository']}") + lines.append("") + + # Executive Summary + lines.append("## Executive Summary") + lines.append("") + summary = report["summary"] + verdict = summary["verdict"] + verdict_emoji = { + "approve": "✅", + "approve_with_suggestions": "✅", + "request_changes": "⚠️", + "block": "❌" + }.get(verdict, "❓") + + lines.append(f"**Verdict:** {verdict_emoji} {verdict.upper().replace('_', ' ')}") + lines.append(f"**Score:** {summary['score']}/100") + lines.append(f"**Rationale:** {summary['rationale']}") + lines.append("") + + # Issue Counts + lines.append("### Issue Summary") + 
lines.append("") + lines.append("| Severity | Count |") + lines.append("|----------|-------|") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f"| {severity.capitalize()} | {count} |") + lines.append("") + + # PR Statistics (if available) + if "pr_summary" in report: + pr = report["pr_summary"] + lines.append("### Change Statistics") + lines.append("") + lines.append(f"- **Files Changed:** {pr.get('files_changed', 'N/A')}") + lines.append(f"- **Lines Added:** +{pr.get('total_additions', 0)}") + lines.append(f"- **Lines Removed:** -{pr.get('total_deletions', 0)}") + lines.append(f"- **Complexity:** {pr.get('complexity_label', 'N/A')}") + lines.append("") + + # Action Items + if report.get("action_items"): + lines.append("## Action Items") + lines.append("") + for i, item in enumerate(report["action_items"], 1): + priority = item["priority"] + emoji = "🔴" if priority == "P0" else "🟠" if priority == "P1" else "🟡" + lines.append(f"{i}. 
{emoji} **[{priority}]** {item['action']}") + if item.get("files_affected"): + lines.append(f" - Files: {', '.join(item['files_affected'][:3])}") + lines.append("") + + # Critical Findings + critical_findings = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical_findings: + lines.append("## Critical Issues (Must Fix)") + lines.append("") + for finding in critical_findings: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # High Priority Findings + high_findings = [f for f in report.get("findings", []) if f["severity"] == "high"] + if high_findings: + lines.append("## High Priority Issues") + lines.append("") + for finding in high_findings[:10]: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # Review Order (if available) + if "review_order" in report: + lines.append("## Suggested Review Order") + lines.append("") + for i, filepath in enumerate(report["review_order"][:10], 1): + lines.append(f"{i}. 
`{filepath}`") + lines.append("") + + # Footer + lines.append("---") + lines.append("*Generated by Code Reviewer*") + + return "\n".join(lines) + + +def format_text_report(report: Dict) -> str: + """Generate plain text report.""" + lines = [] + + lines.append("=" * 60) + lines.append("CODE REVIEW REPORT") + lines.append("=" * 60) + lines.append("") + lines.append(f"Generated: {report['metadata']['generated_at']}") + lines.append(f"Repository: {report['metadata']['repository']}") + lines.append("") + + summary = report["summary"] + verdict = summary["verdict"].upper().replace("_", " ") + lines.append(f"VERDICT: {verdict}") + lines.append(f"SCORE: {summary['score']}/100") + lines.append(f"RATIONALE: {summary['rationale']}") + lines.append("") + + lines.append("--- ISSUE SUMMARY ---") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f" {severity.capitalize()}: {count}") + lines.append("") + + if report.get("action_items"): + lines.append("--- ACTION ITEMS ---") + for i, item in enumerate(report["action_items"][:10], 1): + lines.append(f" {i}. 
[{item['priority']}] {item['action']}") + lines.append("") + + critical = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical: + lines.append("--- CRITICAL ISSUES ---") + for f in critical: + lines.append(f" [{f.get('file', 'unknown')}] {f['message']}") + lines.append("") + + lines.append("=" * 60) + + return "\n".join(lines) + + +def generate_report( + repo_path: Path, + pr_analysis: Optional[Dict] = None, + quality_analysis: Optional[Dict] = None +) -> Dict: + """Generate comprehensive review report.""" + # Run analyses if not provided + if pr_analysis is None: + pr_analysis = run_pr_analyzer(repo_path) + + if quality_analysis is None: + quality_analysis = run_quality_checker(repo_path) + + # Generate findings + findings = generate_findings_list(pr_analysis, quality_analysis) + + # Count issues by severity + issue_counts = { + "critical": len([f for f in findings if f["severity"] == "critical"]), + "high": len([f for f in findings if f["severity"] == "high"]), + "medium": len([f for f in findings if f["severity"] == "medium"]), + "low": len([f for f in findings if f["severity"] == "low"]) + } + + # Calculate score and verdict + score = calculate_review_score(pr_analysis, quality_analysis) + verdict, rationale = determine_verdict( + score, + issue_counts["critical"], + issue_counts["high"] + ) + + # Generate action items + action_items = generate_action_items(findings) + + # Build report + report = { + "metadata": { + "generated_at": datetime.now().isoformat(), + "repository": str(repo_path), + "version": "1.0.0" + }, + "summary": { + "score": score, + "verdict": verdict, + "rationale": rationale, + "issue_counts": issue_counts + }, + "findings": findings, + "action_items": action_items + } + + # Add PR summary if available + if pr_analysis.get("status") == "analyzed": + report["pr_summary"] = pr_analysis.get("summary", {}) + report["review_order"] = pr_analysis.get("review_order", []) + + # Add quality summary if available + if 
quality_analysis.get("status") == "analyzed": + report["quality_summary"] = quality_analysis.get("summary", {}) + + return report + + +def main(): + parser = argparse.ArgumentParser( + description="Generate comprehensive code review reports" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to repository (default: current directory)" + ) + parser.add_argument( + "--pr-analysis", + help="Path to pre-computed PR analysis JSON" + ) + parser.add_argument( + "--quality-analysis", + help="Path to pre-computed quality analysis JSON" + ) + parser.add_argument( + "--format", "-f", + choices=["text", "markdown", "json"], + default="text", + help="Output format (default: text)" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output as JSON (shortcut for --format json)" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + if not repo_path.exists(): + print(f"Error: Path does not exist: {repo_path}", file=sys.stderr) + sys.exit(1) + + # Load pre-computed analyses if provided + pr_analysis = None + quality_analysis = None + + if args.pr_analysis: + pr_analysis = load_json_file(args.pr_analysis) + if not pr_analysis: + print(f"Warning: Could not load PR analysis from {args.pr_analysis}") + + if args.quality_analysis: + quality_analysis = load_json_file(args.quality_analysis) + if not quality_analysis: + print(f"Warning: Could not load quality analysis from {args.quality_analysis}") + + # Generate report + report = generate_report(repo_path, pr_analysis, quality_analysis) + + # Format output + output_format = "json" if args.json else args.format + + if output_format == "json": + output = json.dumps(report, indent=2) + elif output_format == "markdown": + output = format_markdown_report(report) + else: + output = format_text_report(report) + + # Write or print output + if args.output: + with open(args.output, "w") as f: + 
f.write(output) + print(f"Report written to {args.output}") + else: + print(output) + + +if __name__ == "__main__": + main() diff --git a/.claude/skills/code-reviewer/SKILL.md b/.claude/skills/code-reviewer/SKILL.md new file mode 100644 index 00000000..450a39f8 --- /dev/null +++ b/.claude/skills/code-reviewer/SKILL.md @@ -0,0 +1,177 @@ +--- +name: code-reviewer +description: Code review automation for TypeScript, JavaScript, Python, Go, Swift, Kotlin. Analyzes PRs for complexity and risk, checks code quality for SOLID violations and code smells, generates review reports. Use when reviewing pull requests, analyzing code quality, identifying issues, generating review checklists. +--- + +# Code Reviewer + +Automated code review tools for analyzing pull requests, detecting code quality issues, and generating review reports. + +--- + +## Table of Contents + +- [Tools](#tools) + - [PR Analyzer](#pr-analyzer) + - [Code Quality Checker](#code-quality-checker) + - [Review Report Generator](#review-report-generator) +- [Reference Guides](#reference-guides) +- [Languages Supported](#languages-supported) + +--- + +## Tools + +### PR Analyzer + +Analyzes git diff between branches to assess review complexity and identify risks. + +```bash +# Analyze current branch against main +python scripts/pr_analyzer.py /path/to/repo + +# Compare specific branches +python scripts/pr_analyzer.py . 
--base main --head feature-branch + +# JSON output for integration +python scripts/pr_analyzer.py /path/to/repo --json +``` + +**What it detects:** +- Hardcoded secrets (passwords, API keys, tokens) +- SQL injection patterns (string concatenation in queries) +- Debug statements (debugger, console.log) +- ESLint rule disabling +- TypeScript `any` types +- TODO/FIXME comments + +**Output includes:** +- Complexity score (1-10) +- Risk categorization (critical, high, medium, low) +- File prioritization for review order +- Commit message validation + +--- + +### Code Quality Checker + +Analyzes source code for structural issues, code smells, and SOLID violations. + +```bash +# Analyze a directory +python scripts/code_quality_checker.py /path/to/code + +# Analyze specific language +python scripts/code_quality_checker.py . --language python + +# JSON output +python scripts/code_quality_checker.py /path/to/code --json +``` + +**What it detects:** +- Long functions (>50 lines) +- Large files (>500 lines) +- God classes (>20 methods) +- Deep nesting (>4 levels) +- Too many parameters (>5) +- High cyclomatic complexity +- Missing error handling +- Unused imports +- Magic numbers + +**Thresholds:** + +| Issue | Threshold | +|-------|-----------| +| Long function | >50 lines | +| Large file | >500 lines | +| God class | >20 methods | +| Too many params | >5 | +| Deep nesting | >4 levels | +| High complexity | >10 branches | + +--- + +### Review Report Generator + +Combines PR analysis and code quality findings into structured review reports. + +```bash +# Generate report for current repo +python scripts/review_report_generator.py /path/to/repo + +# Markdown output +python scripts/review_report_generator.py . --format markdown --output review.md + +# Use pre-computed analyses +python scripts/review_report_generator.py . 
\ + --pr-analysis pr_results.json \ + --quality-analysis quality_results.json +``` + +**Report includes:** +- Review verdict (approve, request changes, block) +- Score (0-100) +- Prioritized action items +- Issue summary by severity +- Suggested review order + +**Verdicts:** + +| Score | Verdict | +|-------|---------| +| 90+ with no high issues | Approve | +| 75+ with ≤2 high issues | Approve with suggestions | +| 50-74 | Request changes | +| <50 or critical issues | Block | + +--- + +## Reference Guides + +### Code Review Checklist +`.claude/skills/code-reviewer/references/code_review_checklist.md` + +Systematic checklists covering: +- Pre-review checks (build, tests, PR hygiene) +- Correctness (logic, data handling, error handling) +- Security (input validation, injection prevention) +- Performance (efficiency, caching, scalability) +- Maintainability (code quality, naming, structure) +- Testing (coverage, quality, mocking) +- Language-specific checks + +### Coding Standards +`.claude/skills/code-reviewer/references/coding_standards.md` + +Language-specific standards for: +- TypeScript (type annotations, null safety, async/await) +- JavaScript (declarations, patterns, modules) +- Python (type hints, exceptions, class design) +- Go (error handling, structs, concurrency) +- Swift (optionals, protocols, errors) +- Kotlin (null safety, data classes, coroutines) + +### Common Antipatterns +`.claude/skills/code-reviewer/references/common_antipatterns.md` + +Antipattern catalog with examples and fixes: +- Structural (god class, long method, deep nesting) +- Logic (boolean blindness, stringly typed code) +- Security (SQL injection, hardcoded credentials) +- Performance (N+1 queries, unbounded collections) +- Testing (duplication, testing implementation) +- Async (floating promises, callback hell) + +--- + +## Languages Supported + +| Language | Extensions | +|----------|------------| +| Python | `.py` | +| TypeScript | `.ts`, `.tsx` | +| JavaScript | `.js`, `.jsx`, 
`.mjs` | +| Go | `.go` | +| Swift | `.swift` | +| Kotlin | `.kt`, `.kts` | \ No newline at end of file diff --git a/.claude/skills/code-reviewer/references/code_review_checklist.md b/.claude/skills/code-reviewer/references/code_review_checklist.md new file mode 100644 index 00000000..b7bd0867 --- /dev/null +++ b/.claude/skills/code-reviewer/references/code_review_checklist.md @@ -0,0 +1,270 @@ +# Code Review Checklist + +Structured checklists for systematic code review across different aspects. + +--- + +## Table of Contents + +- [Pre-Review Checks](#pre-review-checks) +- [Correctness](#correctness) +- [Security](#security) +- [Performance](#performance) +- [Maintainability](#maintainability) +- [Testing](#testing) +- [Documentation](#documentation) +- [Language-Specific Checks](#language-specific-checks) + +--- + +## Pre-Review Checks + +Before diving into code, verify these basics: + +### Build and Tests +- [ ] Code compiles without errors +- [ ] All existing tests pass +- [ ] New tests are included for new functionality +- [ ] No unintended files included (build artifacts, IDE configs) + +### PR Hygiene +- [ ] PR has clear title and description +- [ ] Changes are scoped appropriately (not too large) +- [ ] Commits follow conventional commit format +- [ ] Branch is up to date with base branch + +### Scope Verification +- [ ] Changes match the stated purpose +- [ ] No unrelated changes bundled in +- [ ] Breaking changes are documented +- [ ] Migration path provided if needed + +--- + +## Correctness + +### Logic +- [ ] Algorithm implements requirements correctly +- [ ] Edge cases handled (null, empty, boundary values) +- [ ] Off-by-one errors checked +- [ ] Correct operators used (== vs ===, & vs &&) +- [ ] Loop termination conditions correct +- [ ] Recursion has proper base cases + +### Data Handling +- [ ] Data types appropriate for the use case +- [ ] Numeric overflow/underflow considered +- [ ] Date/time handling accounts for timezones +- [ ] Unicode and 
internationalization handled +- [ ] Data validation at entry points + +### State Management +- [ ] State transitions are valid +- [ ] Race conditions addressed +- [ ] Concurrent access handled correctly +- [ ] State cleanup on errors/exit + +### Error Handling +- [ ] Errors caught at appropriate levels +- [ ] Error messages are actionable +- [ ] Errors don't expose sensitive information +- [ ] Recovery or graceful degradation implemented +- [ ] Resources cleaned up in error paths + +--- + +## Security + +### Input Validation +- [ ] All user input validated and sanitized +- [ ] Input length limits enforced +- [ ] File uploads validated (type, size, content) +- [ ] URL parameters validated + +### Injection Prevention +- [ ] SQL queries parameterized +- [ ] Command execution uses safe APIs +- [ ] HTML output escaped to prevent XSS +- [ ] LDAP queries properly escaped +- [ ] XML parsing disables external entities + +### Authentication & Authorization +- [ ] Authentication required for protected resources +- [ ] Authorization checked before operations +- [ ] Session management secure +- [ ] Password handling follows best practices +- [ ] Token expiration implemented + +### Data Protection +- [ ] Sensitive data encrypted at rest +- [ ] Sensitive data encrypted in transit +- [ ] PII handled according to policy +- [ ] Secrets not hardcoded +- [ ] Logs don't contain sensitive data + +### API Security +- [ ] Rate limiting implemented +- [ ] CORS configured correctly +- [ ] CSRF protection in place +- [ ] API keys/tokens secured +- [ ] Endpoints use HTTPS + +--- + +## Performance + +### Efficiency +- [ ] Appropriate data structures used +- [ ] Algorithms have acceptable complexity +- [ ] Database queries are optimized +- [ ] N+1 query problems avoided +- [ ] Indexes used where beneficial + +### Resource Usage +- [ ] Memory usage bounded +- [ ] No memory leaks +- [ ] File handles properly closed +- [ ] Database connections pooled +- [ ] Network calls minimized + +### Caching 
+- [ ] Appropriate caching strategy +- [ ] Cache invalidation handled +- [ ] Cache keys are unique and predictable +- [ ] TTL values appropriate + +### Scalability +- [ ] Horizontal scaling considered +- [ ] Bottlenecks identified +- [ ] Async processing for long operations +- [ ] Batch operations where appropriate + +--- + +## Maintainability + +### Code Quality +- [ ] Functions/methods have single responsibility +- [ ] Classes follow SOLID principles +- [ ] Code is DRY (Don't Repeat Yourself) +- [ ] No dead code or commented-out code +- [ ] Magic numbers replaced with constants + +### Naming +- [ ] Names are descriptive and consistent +- [ ] Naming follows project conventions +- [ ] No abbreviations that obscure meaning +- [ ] Boolean variables/functions have is/has/can prefix + +### Structure +- [ ] Functions are appropriately sized (<50 lines preferred) +- [ ] Nesting depth is reasonable (<4 levels) +- [ ] Related code is grouped together +- [ ] Dependencies are minimal and explicit + +### Readability +- [ ] Code is self-documenting where possible +- [ ] Complex logic has explanatory comments +- [ ] Formatting is consistent +- [ ] No overly clever or obscure code + +--- + +## Testing + +### Coverage +- [ ] New code has unit tests +- [ ] Critical paths have integration tests +- [ ] Edge cases are tested +- [ ] Error conditions are tested + +### Quality +- [ ] Tests are independent +- [ ] Tests have clear assertions +- [ ] Test names describe what is tested +- [ ] Tests don't depend on external state + +### Mocking +- [ ] External dependencies are mocked +- [ ] Mocks are realistic +- [ ] Mock setup is not excessive + +--- + +## Documentation + +### Code Documentation +- [ ] Public APIs are documented +- [ ] Complex algorithms explained +- [ ] Non-obvious decisions documented +- [ ] TODO/FIXME comments have context + +### External Documentation +- [ ] README updated if needed +- [ ] API documentation updated +- [ ] Changelog updated +- [ ] Migration guides 
provided + +--- + +## Language-Specific Checks + +### TypeScript/JavaScript +- [ ] Types are explicit (avoid `any`) +- [ ] Null checks present (`?.`, `??`) +- [ ] Async/await errors handled +- [ ] No floating promises +- [ ] Memory leaks from closures checked + +### Python +- [ ] Type hints used for public APIs +- [ ] Context managers for resources (`with` statements) +- [ ] Exception handling is specific (not bare `except`) +- [ ] No mutable default arguments +- [ ] List comprehensions used appropriately + +### Go +- [ ] Errors checked and handled +- [ ] Goroutine leaks prevented +- [ ] Context propagation correct +- [ ] Defer statements in right order +- [ ] Interfaces minimal + +### Swift +- [ ] Optionals handled safely +- [ ] Memory management correct (weak/unowned) +- [ ] Error handling uses Result or throws +- [ ] Access control appropriate +- [ ] Codable implementation correct + +### Kotlin +- [ ] Null safety leveraged +- [ ] Coroutine cancellation handled +- [ ] Data classes used appropriately +- [ ] Extension functions don't obscure behavior +- [ ] Sealed classes for state + +--- + +## Review Process Tips + +### Before Approving +1. Verify all critical checks passed +2. Confirm tests are adequate +3. Consider deployment impact +4. Check for any security concerns +5. 
Ensure documentation is updated + +### Providing Feedback +- Be specific about issues +- Explain why something is problematic +- Suggest alternatives when possible +- Distinguish blockers from suggestions +- Acknowledge good patterns + +### When to Block +- Security vulnerabilities present +- Critical logic errors +- No tests for risky changes +- Breaking changes without migration +- Significant performance regressions diff --git a/.claude/skills/code-reviewer/references/coding_standards.md b/.claude/skills/code-reviewer/references/coding_standards.md new file mode 100644 index 00000000..9fbc6a06 --- /dev/null +++ b/.claude/skills/code-reviewer/references/coding_standards.md @@ -0,0 +1,555 @@ +# Coding Standards + +Language-specific coding standards and conventions for code review. + +--- + +## Table of Contents + +- [Universal Principles](#universal-principles) +- [TypeScript Standards](#typescript-standards) +- [JavaScript Standards](#javascript-standards) +- [Python Standards](#python-standards) +- [Go Standards](#go-standards) +- [Swift Standards](#swift-standards) +- [Kotlin Standards](#kotlin-standards) + +--- + +## Universal Principles + +These apply across all languages. 
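Several of the universal principles in this section (descriptive verb-noun name, few parameters, early returns for error cases, actionable error messages) can be seen together in one short sketch. The function and its error messages are illustrative only, not part of any project API:

```javascript
// Guard clauses handle each invalid case up front, so the happy path
// stays unindented and the function does exactly one thing.
function parseQuantity(rawValue) {
  if (typeof rawValue !== 'string') {
    throw new TypeError(`Expected string, got ${typeof rawValue}`);
  }
  const trimmed = rawValue.trim();
  if (trimmed === '') {
    throw new RangeError('Quantity must not be empty');
  }
  const quantity = Number(trimmed);
  if (!Number.isInteger(quantity) || quantity < 0) {
    throw new RangeError(`Quantity must be a non-negative integer: ${rawValue}`);
  }
  return quantity;
}
```

Each failure mode throws a specific error type with the offending value in the message, which is what "actionable error messages" looks like in practice.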
+ +### Naming Conventions + +| Element | Convention | Example | +|---------|------------|---------| +| Variables | camelCase (JS/TS), snake_case (Python/Go) | `userName`, `user_name` | +| Constants | SCREAMING_SNAKE_CASE | `MAX_RETRY_COUNT` | +| Functions | camelCase (JS/TS), snake_case (Python) | `getUserById`, `get_user_by_id` | +| Classes | PascalCase | `UserRepository` | +| Interfaces | PascalCase, optionally prefixed | `IUserService` or `UserService` | +| Private members | Prefix with underscore or use access modifiers | `_internalState` | + +### Function Design + +``` +Good functions: +- Do one thing well +- Have descriptive names (verb + noun) +- Take 3 or fewer parameters +- Return early for error cases +- Stay under 50 lines +``` + +### Error Handling + +``` +Good error handling: +- Catch specific errors, not generic exceptions +- Log with context (what, where, why) +- Clean up resources in error paths +- Don't swallow errors silently +- Provide actionable error messages +``` + +--- + +## TypeScript Standards + +### Type Annotations + +```typescript +// Avoid 'any' - use unknown for truly unknown types +function processData(data: unknown): ProcessedResult { + if (isValidData(data)) { + return transform(data); + } + throw new Error('Invalid data format'); +} + +// Use explicit return types for public APIs +export function calculateTotal(items: CartItem[]): number { + return items.reduce((sum, item) => sum + item.price, 0); +} + +// Use type guards for runtime checks +function isUser(obj: unknown): obj is User { + return ( + typeof obj === 'object' && + obj !== null && + 'id' in obj && + 'email' in obj + ); +} +``` + +### Null Safety + +```typescript +// Use optional chaining and nullish coalescing +const userName = user?.profile?.name ?? 
'Anonymous';
+
+// Be explicit about nullable types
+interface Config {
+  timeout: number;
+  retries?: number; // Optional
+  fallbackUrl: string | null; // Explicitly nullable
+}
+
+// Use assertion functions for validation
+function assertDefined<T>(value: T | null | undefined): asserts value is T {
+  if (value === null || value === undefined) {
+    throw new Error('Value is not defined');
+  }
+}
+```
+
+### Async/Await
+
+```typescript
+// Always handle errors in async functions
+async function fetchUser(id: string): Promise<User> {
+  try {
+    const response = await api.get(`/users/${id}`);
+    return response.data;
+  } catch (error) {
+    logger.error('Failed to fetch user', { id, error });
+    throw new UserFetchError(id, error);
+  }
+}
+
+// Use Promise.all for parallel operations
+async function loadDashboard(userId: string): Promise<Dashboard> {
+  const [profile, stats, notifications] = await Promise.all([
+    fetchProfile(userId),
+    fetchStats(userId),
+    fetchNotifications(userId)
+  ]);
+  return { profile, stats, notifications };
+}
+```
+
+### React/Component Standards
+
+```typescript
+// Use explicit prop types
+interface ButtonProps {
+  label: string;
+  onClick: () => void;
+  variant?: 'primary' | 'secondary';
+  disabled?: boolean;
+}
+
+// Prefer functional components with hooks
+function Button({ label, onClick, variant = 'primary', disabled = false }: ButtonProps) {
+  return (
+    <button type="button" className={variant} onClick={onClick} disabled={disabled}>
+      {label}
+    </button>
+  );
+}
+
+// Use custom hooks for reusable logic
+function useDebounce<T>(value: T, delay: number): T {
+  const [debouncedValue, setDebouncedValue] = useState<T>(value);
+
+  useEffect(() => {
+    const timer = setTimeout(() => setDebouncedValue(value), delay);
+    return () => clearTimeout(timer);
+  }, [value, delay]);
+
+  return debouncedValue;
+}
+```
+
+---
+
+## JavaScript Standards
+
+### Variable Declarations
+
+```javascript
+// Use const by default, let when reassignment needed
+const MAX_ITEMS = 100;
+let currentCount = 0;
+
+// Never use var
+// var is function-scoped and hoisted, leading to bugs
+```
+
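The hoisting pitfall behind the "never use `var`" rule is easiest to see with closures in a loop; a minimal sketch:

```javascript
// With var, i is function-scoped: all three closures share one binding,
// which has already reached 3 by the time any of them runs.
function countersWithVar() {
  const fns = [];
  for (var i = 0; i < 3; i++) fns.push(() => i);
  return fns.map(fn => fn()); // [3, 3, 3]
}

// With let, each loop iteration gets a fresh block-scoped binding.
function countersWithLet() {
  const fns = [];
  for (let i = 0; i < 3; i++) fns.push(() => i);
  return fns.map(fn => fn()); // [0, 1, 2]
}
```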
+### Object and Array Patterns
+
+```javascript
+// Use object destructuring
+const { name, email, role = 'user' } = user;
+
+// Use spread for immutable updates
+const updatedUser = { ...user, lastLogin: new Date() };
+const updatedList = [...items, newItem];
+
+// Use array methods over loops
+const activeUsers = users.filter(u => u.isActive);
+const emails = users.map(u => u.email);
+const total = orders.reduce((sum, o) => sum + o.amount, 0);
+```
+
+### Module Patterns
+
+```javascript
+// Use named exports for utilities
+export function formatDate(date) { ... }
+export function parseDate(str) { ... }
+
+// Use default export for main component/class
+export default class UserService { ... }
+
+// Group related exports
+export { formatDate, parseDate, isValidDate } from './dateUtils';
+```
+
+---
+
+## Python Standards
+
+### Type Hints (PEP 484)
+
+```python
+from typing import Optional, List, Dict
+
+def get_user(user_id: int) -> Optional[User]:
+    """Fetch user by ID, returns None if not found."""
+    return db.query(User).filter(User.id == user_id).first()
+
+def process_items(items: List[str]) -> Dict[str, int]:
+    """Count occurrences of each item."""
+    return {item: items.count(item) for item in set(items)}
+
+def send_notification(
+    user: User,
+    message: str,
+    *,
+    priority: str = "normal",
+    channels: Optional[List[str]] = None
+) -> bool:
+    """Send notification to user via specified channels."""
+    channels = channels or ["email"]
+    # Implementation
+```
+
+### Exception Handling
+
+```python
+# Catch specific exceptions
+try:
+    result = api_client.fetch_data(endpoint)
+except ConnectionError as e:
+    logger.warning(f"Connection failed: {e}")
+    return cached_data
+except TimeoutError as e:
+    logger.error(f"Request timed out: {e}")
+    raise ServiceUnavailableError() from e
+
+# Use context managers for resources
+with open(filepath, 'r') as f:
+    data = json.load(f)
+
+# Custom exceptions should be informative
+class ValidationError(Exception):
+    def __init__(self, field: str, message: str):
+        self.field = field
+        self.message = message
+        super().__init__(f"{field}: {message}")
+```
+
+### Class Design
+
+```python
+from dataclasses import dataclass
+from abc import ABC, abstractmethod
+
+# Use dataclasses for data containers
+@dataclass
+class UserDTO:
+    id: int
+    email: str
+    name: str
+    is_active: bool = True
+
+# Use ABC for interfaces
+class Repository(ABC):
+    @abstractmethod
+    def find_by_id(self, id: int) -> Optional[Entity]:
+        pass
+
+    @abstractmethod
+    def save(self, entity: Entity) -> Entity:
+        pass
+
+# Use properties for computed attributes
+class Order:
+    def __init__(self, items: List[OrderItem]):
+        self._items = items
+
+    @property
+    def total(self) -> Decimal:
+        return sum(item.price * item.quantity for item in self._items)
+```
+
+---
+
+## Go Standards
+
+### Error Handling
+
+```go
+// Always check errors
+file, err := os.Open(filename)
+if err != nil {
+    return fmt.Errorf("failed to open %s: %w", filename, err)
+}
+defer file.Close()
+
+// Use custom error types for specific cases
+type ValidationError struct {
+    Field   string
+    Message string
+}
+
+func (e *ValidationError) Error() string {
+    return fmt.Sprintf("%s: %s", e.Field, e.Message)
+}
+
+// Wrap errors with context
+rows, err := db.Query(query)
+if err != nil {
+    return fmt.Errorf("query failed for user %d: %w", userID, err)
+}
+defer rows.Close()
+```
+
+### Struct Design
+
+```go
+// Use unexported fields with exported methods
+type UserService struct {
+    repo   UserRepository
+    cache  Cache
+    logger Logger
+}
+
+// Constructor functions for initialization
+func NewUserService(repo UserRepository, cache Cache, logger Logger) *UserService {
+    return &UserService{
+        repo:   repo,
+        cache:  cache,
+        logger: logger,
+    }
+}
+
+// Keep interfaces small
+type Reader interface {
+    Read(p []byte) (n int, err error)
+}
+
+type Writer interface {
+    Write(p []byte) (n int, err error)
+}
+```
+
+### Concurrency
+
+```go
+// Use context for cancellation
+func fetchData(ctx
context.Context, url string) ([]byte, error) { + req, err := http.NewRequestWithContext(ctx, "GET", url, nil) + if err != nil { + return nil, err + } + // ... +} + +// Use channels for communication +func worker(jobs <-chan Job, results chan<- Result) { + for job := range jobs { + result := process(job) + results <- result + } +} + +// Use sync.WaitGroup for coordination +var wg sync.WaitGroup +for _, item := range items { + wg.Add(1) + go func(i Item) { + defer wg.Done() + processItem(i) + }(item) +} +wg.Wait() +``` + +--- + +## Swift Standards + +### Optionals + +```swift +// Use optional binding +if let user = fetchUser(id: userId) { + displayProfile(user) +} + +// Use guard for early exit +guard let data = response.data else { + throw NetworkError.noData +} + +// Use nil coalescing for defaults +let displayName = user.nickname ?? user.email + +// Avoid force unwrapping except in tests +// BAD: let name = user.name! +// GOOD: guard let name = user.name else { return } +``` + +### Protocol-Oriented Design + +```swift +// Define protocols with minimal requirements +protocol Identifiable { + var id: String { get } +} + +protocol Persistable: Identifiable { + func save() throws + static func find(by id: String) -> Self? 
+}
+
+// Use protocol extensions for default implementations
+extension Persistable {
+    func save() throws {
+        try Storage.shared.save(self)
+    }
+}
+
+// Prefer composition over inheritance
+struct User: Identifiable, Codable {
+    let id: String
+    var name: String
+    var email: String
+}
+```
+
+### Error Handling
+
+```swift
+// Define domain-specific errors
+enum AuthError: Error {
+    case invalidCredentials
+    case tokenExpired
+    case networkFailure(underlying: Error)
+}
+
+// Use Result type for async operations
+func authenticate(
+    email: String,
+    password: String,
+    completion: @escaping (Result<User, AuthError>) -> Void
+)
+
+// Use throws for synchronous operations
+func validate(_ input: String) throws -> ValidatedInput {
+    guard !input.isEmpty else {
+        throw ValidationError.emptyInput
+    }
+    return ValidatedInput(value: input)
+}
+```
+
+---
+
+## Kotlin Standards
+
+### Null Safety
+
+```kotlin
+// Use nullable types explicitly
+fun findUser(id: Int): User? {
+    return userRepository.find(id)
+}
+
+// Use safe calls and elvis operator
+val name = user?.profile?.name ?: "Unknown"
+
+// Use let for null checks with side effects
+user?.let { activeUser ->
+    sendWelcomeEmail(activeUser.email)
+    logActivity(activeUser.id)
+}
+
+// Use require/check for validation
+fun processPayment(amount: Double) {
+    require(amount > 0) { "Amount must be positive: $amount" }
+    // Process
+}
+```
+
+### Data Classes and Sealed Classes
+
+```kotlin
+// Use data classes for DTOs
+data class UserDTO(
+    val id: Int,
+    val email: String,
+    val name: String,
+    val isActive: Boolean = true
+)
+
+// Use sealed classes for state
+sealed class Result<out T> {
+    data class Success<T>(val data: T) : Result<T>()
+    data class Error(val message: String, val cause: Throwable? = null) : Result<Nothing>()
+    object Loading : Result<Nothing>()
+}
+
+// Pattern matching with when
+fun handleResult(result: Result<User>) = when (result) {
+    is Result.Success -> showUser(result.data)
+    is Result.Error -> showError(result.message)
+    Result.Loading -> showLoading()
+}
+```
+
+### Coroutines
+
+```kotlin
+// Use structured concurrency
+suspend fun loadDashboard(): Dashboard = coroutineScope {
+    val profile = async { fetchProfile() }
+    val stats = async { fetchStats() }
+    val notifications = async { fetchNotifications() }
+
+    Dashboard(
+        profile = profile.await(),
+        stats = stats.await(),
+        notifications = notifications.await()
+    )
+}
+
+// Handle cancellation
+suspend fun fetchWithRetry(url: String): Response {
+    repeat(3) { attempt ->
+        try {
+            return httpClient.get(url)
+        } catch (e: IOException) {
+            if (attempt == 2) throw e
+            delay(1000L * (attempt + 1))
+        }
+    }
+    throw IllegalStateException("Unreachable")
+}
+```
diff --git a/.claude/skills/code-reviewer/references/common_antipatterns.md b/.claude/skills/code-reviewer/references/common_antipatterns.md
new file mode 100644
index 00000000..26045452
--- /dev/null
+++ b/.claude/skills/code-reviewer/references/common_antipatterns.md
@@ -0,0 +1,739 @@
+# Common Antipatterns
+
+Code antipatterns to identify during review, with examples and fixes.
+
+---
+
+## Table of Contents
+
+- [Structural Antipatterns](#structural-antipatterns)
+- [Logic Antipatterns](#logic-antipatterns)
+- [Security Antipatterns](#security-antipatterns)
+- [Performance Antipatterns](#performance-antipatterns)
+- [Testing Antipatterns](#testing-antipatterns)
+- [Async Antipatterns](#async-antipatterns)
+
+---
+
+## Structural Antipatterns
+
+### God Class
+
+A class that does too much and knows too much.
+
+```typescript
+// BAD: God class handling everything
+class UserManager {
+  createUser(data: UserData) { ... }
+  updateUser(id: string, data: UserData) { ... }
+  deleteUser(id: string) { ... }
+  sendEmail(userId: string, content: string) { ...
}
+  generateReport(userId: string) { ... }
+  validatePassword(password: string) { ... }
+  hashPassword(password: string) { ... }
+  uploadAvatar(userId: string, file: File) { ... }
+  resizeImage(file: File) { ... }
+  logActivity(userId: string, action: string) { ... }
+  // 50 more methods...
+}
+
+// GOOD: Single responsibility classes
+class UserRepository {
+  create(data: UserData): User { ... }
+  update(id: string, data: Partial<UserData>): User { ... }
+  delete(id: string): void { ... }
+}
+
+class EmailService {
+  send(to: string, content: string): void { ... }
+}
+
+class PasswordService {
+  validate(password: string): ValidationResult { ... }
+  hash(password: string): string { ... }
+}
+```
+
+**Detection:** Class has >20 methods, >500 lines, or handles unrelated concerns.
+
+---
+
+### Long Method
+
+Functions that do too much and are hard to understand.
+
+```python
+# BAD: Long method doing everything
+def process_order(order_data):
+    # Validate order (20 lines)
+    if not order_data.get('items'):
+        raise ValueError('No items')
+    if not order_data.get('customer_id'):
+        raise ValueError('No customer')
+    # ... more validation
+
+    # Calculate totals (30 lines)
+    subtotal = 0
+    for item in order_data['items']:
+        price = get_product_price(item['product_id'])
+        subtotal += price * item['quantity']
+    # ... tax calculation, discounts
+
+    # Process payment (40 lines)
+    payment_result = payment_gateway.charge(...)
+    # ... handle payment errors
+
+    # Create order record (20 lines)
+    order = Order.create(...)
+
+    # Send notifications (20 lines)
+    send_order_confirmation(...)
+    notify_warehouse(...)
+ + return order + +# GOOD: Composed of focused functions +def process_order(order_data): + validate_order(order_data) + totals = calculate_order_totals(order_data) + payment = process_payment(order_data['customer_id'], totals) + order = create_order_record(order_data, totals, payment) + send_order_notifications(order) + return order +``` + +**Detection:** Function >50 lines or requires scrolling to read. + +--- + +### Deep Nesting + +Excessive indentation making code hard to follow. + +```javascript +// BAD: Deep nesting +function processData(data) { + if (data) { + if (data.items) { + if (data.items.length > 0) { + for (const item of data.items) { + if (item.isValid) { + if (item.type === 'premium') { + if (item.price > 100) { + // Finally do something + processItem(item); + } + } + } + } + } + } + } +} + +// GOOD: Early returns and guard clauses +function processData(data) { + if (!data?.items?.length) { + return; + } + + const premiumItems = data.items.filter( + item => item.isValid && item.type === 'premium' && item.price > 100 + ); + + premiumItems.forEach(processItem); +} +``` + +**Detection:** Indentation >4 levels deep. + +--- + +### Magic Numbers and Strings + +Hard-coded values without explanation. + +```go +// BAD: Magic numbers +func calculateDiscount(total float64, userType int) float64 { + if userType == 1 { + return total * 0.15 + } else if userType == 2 { + return total * 0.25 + } + return total * 0.05 +} + +// GOOD: Named constants +const ( + UserTypeRegular = 1 + UserTypePremium = 2 + + DiscountRegular = 0.05 + DiscountStandard = 0.15 + DiscountPremium = 0.25 +) + +func calculateDiscount(total float64, userType int) float64 { + switch userType { + case UserTypePremium: + return total * DiscountPremium + case UserTypeRegular: + return total * DiscountStandard + default: + return total * DiscountRegular + } +} +``` + +**Detection:** Literal numbers (except 0, 1) or repeated string literals. 
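The detection heuristic above can be sketched as a small regex check, in the style of this skill's Python checkers. The function name and pattern here are illustrative only, not part of the shipped scripts:

```python
import re

def find_magic_numbers(source: str) -> list[str]:
    """Return numeric literals worth flagging (2+ digits, or single digits other than 0/1)."""
    # The lookarounds skip digits inside identifiers (v2) and decimals (1.5).
    pattern = r"(?<![\w.])(?:[2-9]|\d{2,})(?![\w.])"
    return re.findall(pattern, source)

print(find_magic_numbers("timeout = 30 if retries > 1 else 0"))  # ['30']
```

A real checker would also skip comments and string literals; this sketch only shows the core pattern match.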
+
+---
+
+### Primitive Obsession
+
+Using primitives instead of small objects.
+
+```typescript
+// BAD: Primitives everywhere
+function createUser(
+  name: string,
+  email: string,
+  phone: string,
+  street: string,
+  city: string,
+  zipCode: string,
+  country: string
+): User { ... }
+
+// GOOD: Value objects
+interface Address {
+  street: string;
+  city: string;
+  zipCode: string;
+  country: string;
+}
+
+interface ContactInfo {
+  email: string;
+  phone: string;
+}
+
+function createUser(
+  name: string,
+  contact: ContactInfo,
+  address: Address
+): User { ... }
+```
+
+**Detection:** Functions with >4 parameters of same type, or related primitives always passed together.
+
+---
+
+## Logic Antipatterns
+
+### Boolean Blindness
+
+Passing booleans that make code unreadable at call sites.
+
+```swift
+// BAD: What do these booleans mean?
+user.configure(true, false, true, false)
+
+// GOOD: Named parameters or option objects
+user.configure(
+    sendWelcomeEmail: true,
+    requireVerification: false,
+    enableNotifications: true,
+    isAdmin: false
+)
+
+// Or use an options struct
+struct UserConfiguration {
+    var sendWelcomeEmail: Bool = true
+    var requireVerification: Bool = false
+    var enableNotifications: Bool = true
+    var isAdmin: Bool = false
+}
+
+user.configure(UserConfiguration())
+```
+
+**Detection:** Function calls with multiple boolean literals.
+
+---
+
+### Null Returns for Collections
+
+Returning null instead of empty collections.
+
+```kotlin
+// BAD: Returning null
+fun findUsersByRole(role: String): List<User>? {
+    val users = repository.findByRole(role)
+    return if (users.isEmpty()) null else users
+}
+
+// Caller must handle null
+val users = findUsersByRole("admin")
+if (users != null) {
+    users.forEach { ... }
+}
+
+// GOOD: Return empty collection
+fun findUsersByRole(role: String): List<User> {
+    return repository.findByRole(role)
+}
+
+// Caller can iterate directly
+findUsersByRole("admin").forEach { ...
} +``` + +**Detection:** Functions returning nullable collections. + +--- + +### Stringly Typed Code + +Using strings where enums or types should be used. + +```python +# BAD: String-based logic +def handle_event(event_type: str, data: dict): + if event_type == "user_created": + handle_user_created(data) + elif event_type == "user_updated": + handle_user_updated(data) + elif event_type == "user_dleted": # Typo won't be caught + handle_user_deleted(data) + +# GOOD: Enum-based +from enum import Enum + +class EventType(Enum): + USER_CREATED = "user_created" + USER_UPDATED = "user_updated" + USER_DELETED = "user_deleted" + +def handle_event(event_type: EventType, data: dict): + handlers = { + EventType.USER_CREATED: handle_user_created, + EventType.USER_UPDATED: handle_user_updated, + EventType.USER_DELETED: handle_user_deleted, + } + handlers[event_type](data) +``` + +**Detection:** String comparisons for type/status/category values. + +--- + +## Security Antipatterns + +### SQL Injection + +String concatenation in SQL queries. + +```javascript +// BAD: String concatenation +const query = `SELECT * FROM users WHERE id = ${userId}`; +db.query(query); + +// BAD: String templates still vulnerable +const query = `SELECT * FROM users WHERE name = '${userName}'`; + +// GOOD: Parameterized queries +const query = 'SELECT * FROM users WHERE id = $1'; +db.query(query, [userId]); + +// GOOD: Using ORM safely +User.findOne({ where: { id: userId } }); +``` + +**Detection:** String concatenation or template literals with SQL keywords. + +--- + +### Hardcoded Credentials + +Secrets in source code. 
+ +```python +# BAD: Hardcoded secrets +API_KEY = "sk-abc123xyz789" +DATABASE_URL = "postgresql://admin:password123@prod-db.internal:5432/app" + +# GOOD: Environment variables +import os + +API_KEY = os.environ["API_KEY"] +DATABASE_URL = os.environ["DATABASE_URL"] + +# GOOD: Secrets manager +from aws_secretsmanager import get_secret + +API_KEY = get_secret("api-key") +``` + +**Detection:** Variables named `password`, `secret`, `key`, `token` with string literals. + +--- + +### Unsafe Deserialization + +Deserializing untrusted data without validation. + +```python +# BAD: Binary serialization from untrusted source can execute arbitrary code +# Examples: Python's binary serialization, yaml.load without SafeLoader + +# GOOD: Use safe alternatives +import json + +def load_data(file_path): + with open(file_path, 'r') as f: + return json.load(f) + +# GOOD: Use SafeLoader for YAML +import yaml + +with open('config.yaml') as f: + config = yaml.safe_load(f) +``` + +**Detection:** Binary deserialization functions, yaml.load without safe loader, dynamic code execution on external data. + +--- + +### Missing Input Validation + +Trusting user input without validation. + +```typescript +// BAD: No validation +app.post('/user', (req, res) => { + const user = db.create({ + name: req.body.name, + email: req.body.email, + role: req.body.role // User can set themselves as admin! + }); + res.json(user); +}); + +// GOOD: Validate and sanitize +import { z } from 'zod'; + +const CreateUserSchema = z.object({ + name: z.string().min(1).max(100), + email: z.string().email(), + // role is NOT accepted from input +}); + +app.post('/user', (req, res) => { + const validated = CreateUserSchema.parse(req.body); + const user = db.create({ + ...validated, + role: 'user' // Default role, not from input + }); + res.json(user); +}); +``` + +**Detection:** Request body/params used directly without validation schema. 
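As with the other checks, this detection rule can be approximated by a line-level regex sketch. The helper below is hypothetical (not one of the shipped scripts), and its "looks validated" test is deliberately crude:

```python
import re

def find_unvalidated_request_access(source: str) -> list[str]:
    """Flag lines that touch req.body/params/query with no validation call on the same line."""
    findings = []
    for line in source.splitlines():
        touches_request = re.search(r"\breq\.(body|params|query)\b", line)
        looks_validated = re.search(r"\b(parse|validate|sanitize)\w*\s*\(", line)
        if touches_request and not looks_validated:
            findings.append(line.strip())
    return findings

code = "const user = db.create({ name: req.body.name, role: req.body.role });"
print(find_unvalidated_request_access(code))  # flags the line
```

A schema call such as `CreateUserSchema.parse(req.body)` passes the second regex and is not flagged, which is the behavior the detection rule asks for.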
+
+---
+
+## Performance Antipatterns
+
+### N+1 Query Problem
+
+Loading related data one record at a time.
+
+```python
+# BAD: N+1 queries
+def get_orders_with_items():
+    orders = Order.query.all()  # 1 query
+    for order in orders:
+        items = OrderItem.query.filter_by(order_id=order.id).all()  # N queries
+        order.items = items
+    return orders
+
+# GOOD: Eager loading
+def get_orders_with_items():
+    return Order.query.options(
+        joinedload(Order.items)
+    ).all()  # 1 query with JOIN
+
+# GOOD: Batch loading
+def get_orders_with_items():
+    orders = Order.query.all()
+    order_ids = [o.id for o in orders]
+    items = OrderItem.query.filter(
+        OrderItem.order_id.in_(order_ids)
+    ).all()  # 2 queries total
+    # Group items by order_id...
+```
+
+**Detection:** Database queries inside loops.
+
+---
+
+### Unbounded Collections
+
+Loading unlimited data into memory.
+
+```go
+// BAD: Load all records
+func GetAllUsers() ([]User, error) {
+    var users []User
+    err := db.Find(&users).Error // Could be millions
+    return users, err
+}
+
+// GOOD: Pagination
+func GetUsers(page, pageSize int) ([]User, error) {
+    var users []User
+    offset := (page - 1) * pageSize
+    err := db.Limit(pageSize).Offset(offset).Find(&users).Error
+    return users, err
+}
+
+// GOOD: Streaming for large datasets
+func ProcessAllUsers(handler func(User) error) error {
+    rows, err := db.Model(&User{}).Rows()
+    if err != nil {
+        return err
+    }
+    defer rows.Close()
+
+    for rows.Next() {
+        var user User
+        db.ScanRows(rows, &user)
+        if err := handler(user); err != nil {
+            return err
+        }
+    }
+    return nil
+}
+```
+
+**Detection:** `findAll()`, `find({})`, or queries without `LIMIT`.
+
+---
+
+### Synchronous I/O in Hot Paths
+
+Blocking operations in request handlers.
+ +```javascript +// BAD: Sync file read on every request +app.get('/config', (req, res) => { + const config = fs.readFileSync('./config.json'); // Blocks event loop + res.json(JSON.parse(config)); +}); + +// GOOD: Load once at startup +const config = JSON.parse(fs.readFileSync('./config.json')); + +app.get('/config', (req, res) => { + res.json(config); +}); + +// GOOD: Async with caching +let configCache = null; + +app.get('/config', async (req, res) => { + if (!configCache) { + configCache = JSON.parse(await fs.promises.readFile('./config.json')); + } + res.json(configCache); +}); +``` + +**Detection:** `readFileSync`, `execSync`, or blocking calls in request handlers. + +--- + +## Testing Antipatterns + +### Test Code Duplication + +Repeating setup in every test. + +```typescript +// BAD: Duplicate setup +describe('UserService', () => { + it('should create user', async () => { + const db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + const service = new UserService(userRepo, emailService); + + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); + + it('should update user', async () => { + const db = await createTestDatabase(); // Duplicated + const userRepo = new UserRepository(db); // Duplicated + const emailService = new MockEmailService(); // Duplicated + const service = new UserService(userRepo, emailService); // Duplicated + + // ... 
+ }); +}); + +// GOOD: Shared setup +describe('UserService', () => { + let service: UserService; + let db: TestDatabase; + + beforeEach(async () => { + db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + service = new UserService(userRepo, emailService); + }); + + afterEach(async () => { + await db.cleanup(); + }); + + it('should create user', async () => { + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); +}); +``` + +--- + +### Testing Implementation Instead of Behavior + +Tests coupled to internal implementation. + +```python +# BAD: Testing implementation details +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing internal structure + assert cart._items[0].name == "Apple" + assert cart._total == 1.00 + +# GOOD: Testing behavior +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing public behavior + assert cart.item_count == 1 + assert cart.total == 1.00 + assert cart.contains("Apple") +``` + +--- + +## Async Antipatterns + +### Floating Promises + +Promises without await or catch. + +```typescript +// BAD: Floating promise +async function saveUser(user: User) { + db.save(user); // Not awaited, errors lost + logger.info('User saved'); // Logs before save completes +} + +// BAD: Fire and forget in loop +for (const item of items) { + processItem(item); // All run in parallel, no error handling +} + +// GOOD: Await the promise +async function saveUser(user: User) { + await db.save(user); + logger.info('User saved'); +} + +// GOOD: Process with proper handling +await Promise.all(items.map(item => processItem(item))); + +// Or sequentially +for (const item of items) { + await processItem(item); +} +``` + +**Detection:** Async function calls without `await` or `.then()`. + +--- + +### Callback Hell + +Deeply nested callbacks. 
+
+```javascript
+// BAD: Callback hell
+getUser(userId, (err, user) => {
+  if (err) return handleError(err);
+  getOrders(user.id, (err, orders) => {
+    if (err) return handleError(err);
+    getProducts(orders[0].productIds, (err, products) => {
+      if (err) return handleError(err);
+      renderPage(user, orders, products, (err) => {
+        if (err) return handleError(err);
+        console.log('Done');
+      });
+    });
+  });
+});
+
+// GOOD: Async/await
+async function loadPage(userId) {
+  try {
+    const user = await getUser(userId);
+    const orders = await getOrders(user.id);
+    const products = await getProducts(orders[0].productIds);
+    await renderPage(user, orders, products);
+    console.log('Done');
+  } catch (err) {
+    handleError(err);
+  }
+}
+```
+
+**Detection:** >2 levels of callback nesting.
+
+---
+
+### Async in Constructor
+
+Async operations in constructors.
+
+```typescript
+// BAD: Async in constructor
+class DatabaseConnection {
+  constructor(url: string) {
+    this.connect(url); // Fire-and-forget async
+  }
+
+  private async connect(url: string) {
+    this.client = await createClient(url);
+  }
+}
+
+// GOOD: Factory method
+class DatabaseConnection {
+  private constructor(private client: Client) {}
+
+  static async create(url: string): Promise<DatabaseConnection> {
+    const client = await createClient(url);
+    return new DatabaseConnection(client);
+  }
+}
+
+// Usage
+const db = await DatabaseConnection.create(url);
+```
+
+**Detection:** `async` calls or `.then()` in constructor.
diff --git a/.claude/skills/code-reviewer/scripts/code_quality_checker.py b/.claude/skills/code-reviewer/scripts/code_quality_checker.py
new file mode 100644
index 00000000..3d80aece
--- /dev/null
+++ b/.claude/skills/code-reviewer/scripts/code_quality_checker.py
@@ -0,0 +1,560 @@
+#!/usr/bin/env python3
+"""
+Code Quality Checker
+
+Analyzes source code for quality issues, code smells, complexity metrics,
+and SOLID principle violations.
+ +Usage: + python .claude/skills/code-reviewer/scripts/code_quality_checker.py /path/to/file.py + python .claude/skills/code-reviewer/scripts/code_quality_checker.py /path/to/directory --recursive + python .claude/skills/code-reviewer/scripts/code_quality_checker.py . --language typescript --json +""" + +import argparse +import json +import re +import sys +from pathlib import Path +from typing import Dict, List, Optional + + +# Language-specific file extensions +LANGUAGE_EXTENSIONS = { + "python": [".py"], + "typescript": [".ts", ".tsx"], + "javascript": [".js", ".jsx", ".mjs"], + "go": [".go"], + "swift": [".swift"], + "kotlin": [".kt", ".kts"] +} + +# Code smell thresholds +THRESHOLDS = { + "long_function_lines": 50, + "too_many_parameters": 5, + "high_complexity": 10, + "god_class_methods": 20, + "max_imports": 15 +} + + +def get_file_extension(filepath: Path) -> str: + """Get file extension.""" + return filepath.suffix.lower() + + +def detect_language(filepath: Path) -> Optional[str]: + """Detect programming language from file extension.""" + ext = get_file_extension(filepath) + for lang, extensions in LANGUAGE_EXTENSIONS.items(): + if ext in extensions: + return lang + return None + + +def read_file_content(filepath: Path) -> str: + """Read file content safely.""" + try: + with open(filepath, "r", encoding="utf-8", errors="ignore") as f: + return f.read() + except Exception: + return "" + + +def calculate_cyclomatic_complexity(content: str) -> int: + """ + Estimate cyclomatic complexity based on control flow keywords. 
+ """ + complexity = 1 # Base complexity + + # Control flow patterns that increase complexity + patterns = [ + r"\bif\b", + r"\belif\b", + r"\belse\b", + r"\bfor\b", + r"\bwhile\b", + r"\bcase\b", + r"\bcatch\b", + r"\bexcept\b", + r"\band\b", + r"\bor\b", + r"\|\|", + r"&&" + ] + + for pattern in patterns: + matches = re.findall(pattern, content, re.IGNORECASE) + complexity += len(matches) + + return complexity + + +def count_lines(content: str) -> Dict[str, int]: + """Count different types of lines in code.""" + lines = content.split("\n") + total = len(lines) + blank = sum(1 for line in lines if not line.strip()) + comment = 0 + + for line in lines: + stripped = line.strip() + if stripped.startswith("#") or stripped.startswith("//"): + comment += 1 + elif stripped.startswith("/*") or stripped.startswith("'''") or stripped.startswith('"""'): + comment += 1 + + code = total - blank - comment + + return { + "total": total, + "code": code, + "blank": blank, + "comment": comment + } + + +def find_functions(content: str, language: str) -> List[Dict]: + """Find function definitions and their metrics.""" + functions = [] + + # Language-specific function patterns + patterns = { + "python": r"def\s+(\w+)\s*\(([^)]*)\)", + "typescript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "javascript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "go": r"func\s+(?:\([^)]+\)\s+)?(\w+)\s*\(([^)]*)\)", + "swift": r"func\s+(\w+)\s*\(([^)]*)\)", + "kotlin": r"fun\s+(\w+)\s*\(([^)]*)\)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content, re.MULTILINE) + + for match in matches: + name = next((g for g in match.groups() if g), "anonymous") + params_str = match.group(2) if len(match.groups()) > 1 and match.group(2) else "" + + # Count parameters + params = [p.strip() for p in params_str.split(",") if p.strip()] + param_count = len(params) + + # Estimate 
function length + start_pos = match.end() + remaining = content[start_pos:] + + next_func = re.search(pattern, remaining) + if next_func: + func_body = remaining[:next_func.start()] + else: + func_body = remaining[:min(2000, len(remaining))] + + line_count = len(func_body.split("\n")) + complexity = calculate_cyclomatic_complexity(func_body) + + functions.append({ + "name": name, + "parameters": param_count, + "lines": line_count, + "complexity": complexity + }) + + return functions + + +def find_classes(content: str, language: str) -> List[Dict]: + """Find class definitions and their metrics.""" + classes = [] + + patterns = { + "python": r"class\s+(\w+)", + "typescript": r"class\s+(\w+)", + "javascript": r"class\s+(\w+)", + "go": r"type\s+(\w+)\s+struct", + "swift": r"class\s+(\w+)", + "kotlin": r"class\s+(\w+)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content) + + for match in matches: + name = match.group(1) + + start_pos = match.end() + remaining = content[start_pos:] + + next_class = re.search(pattern, remaining) + if next_class: + class_body = remaining[:next_class.start()] + else: + class_body = remaining + + # Count methods + method_patterns = { + "python": r"def\s+\w+\s*\(", + "typescript": r"(?:public|private|protected)?\s*\w+\s*\([^)]*\)\s*[:{]", + "javascript": r"\w+\s*\([^)]*\)\s*\{", + "go": r"func\s+\(", + "swift": r"func\s+\w+", + "kotlin": r"fun\s+\w+" + } + method_pattern = method_patterns.get(language, method_patterns["python"]) + methods = len(re.findall(method_pattern, class_body)) + + classes.append({ + "name": name, + "methods": methods, + "lines": len(class_body.split("\n")) + }) + + return classes + + +def check_code_smells(content: str, functions: List[Dict], classes: List[Dict]) -> List[Dict]: + """Check for code smells in the content.""" + smells = [] + + # Long functions + for func in functions: + if func["lines"] > THRESHOLDS["long_function_lines"]: + smells.append({ + "type": 
"long_function", + "severity": "medium", + "message": f"Function '{func['name']}' has {func['lines']} lines (max: {THRESHOLDS['long_function_lines']})", + "location": func["name"] + }) + + # Too many parameters + for func in functions: + if func["parameters"] > THRESHOLDS["too_many_parameters"]: + smells.append({ + "type": "too_many_parameters", + "severity": "low", + "message": f"Function '{func['name']}' has {func['parameters']} parameters (max: {THRESHOLDS['too_many_parameters']})", + "location": func["name"] + }) + + # High complexity + for func in functions: + if func["complexity"] > THRESHOLDS["high_complexity"]: + severity = "high" if func["complexity"] > 20 else "medium" + smells.append({ + "type": "high_complexity", + "severity": severity, + "message": f"Function '{func['name']}' has complexity {func['complexity']} (max: {THRESHOLDS['high_complexity']})", + "location": func["name"] + }) + + # God classes + for cls in classes: + if cls["methods"] > THRESHOLDS["god_class_methods"]: + smells.append({ + "type": "god_class", + "severity": "high", + "message": f"Class '{cls['name']}' has {cls['methods']} methods (max: {THRESHOLDS['god_class_methods']})", + "location": cls["name"] + }) + + # Magic numbers + magic_pattern = r"\b(? 
List[Dict]: + """Check for potential SOLID principle violations.""" + violations = [] + + # OCP: Type checking instead of polymorphism + type_checks = len(re.findall(r"isinstance\(|type\(.*\)\s*==|typeof\s+\w+\s*===", content)) + if type_checks > 2: + violations.append({ + "principle": "OCP", + "name": "Open/Closed Principle", + "severity": "medium", + "message": f"Found {type_checks} type checks - consider using polymorphism" + }) + + # LSP/ISP: NotImplementedError + not_impl = len(re.findall(r"raise\s+NotImplementedError|not\s+implemented", content, re.IGNORECASE)) + if not_impl: + violations.append({ + "principle": "LSP/ISP", + "name": "Liskov/Interface Segregation", + "severity": "low", + "message": f"Found {not_impl} unimplemented methods - may indicate oversized interface" + }) + + # DIP: Too many direct imports + imports = len(re.findall(r"^(?:import|from)\s+", content, re.MULTILINE)) + if imports > THRESHOLDS["max_imports"]: + violations.append({ + "principle": "DIP", + "name": "Dependency Inversion Principle", + "severity": "low", + "message": f"File has {imports} imports - consider dependency injection" + }) + + return violations + + +def calculate_quality_score( + line_metrics: Dict, + functions: List[Dict], + classes: List[Dict], + smells: List[Dict], + violations: List[Dict] +) -> int: + """Calculate overall quality score (0-100).""" + score = 100 + + # Deduct for code smells + for smell in smells: + if smell["severity"] == "high": + score -= 10 + elif smell["severity"] == "medium": + score -= 5 + elif smell["severity"] == "low": + score -= 2 + + # Deduct for SOLID violations + for violation in violations: + if violation["severity"] == "high": + score -= 8 + elif violation["severity"] == "medium": + score -= 4 + elif violation["severity"] == "low": + score -= 2 + + # Bonus for good comment ratio (10-30%) + if line_metrics["total"] > 0: + comment_ratio = line_metrics["comment"] / line_metrics["total"] + if 0.1 <= comment_ratio <= 0.3: + score += 5 + + # 
Bonus for reasonable function sizes + if functions: + avg_lines = sum(f["lines"] for f in functions) / len(functions) + if avg_lines < 30: + score += 5 + + return max(0, min(100, score)) + + +def get_grade(score: int) -> str: + """Convert score to letter grade.""" + if score >= 90: + return "A" + elif score >= 80: + return "B" + elif score >= 70: + return "C" + elif score >= 60: + return "D" + else: + return "F" + + +def analyze_file(filepath: Path) -> Dict: + """Analyze a single file for code quality.""" + language = detect_language(filepath) + if not language: + return {"error": f"Unsupported file type: {filepath.suffix}"} + + content = read_file_content(filepath) + if not content: + return {"error": f"Could not read file: {filepath}"} + + line_metrics = count_lines(content) + functions = find_functions(content, language) + classes = find_classes(content, language) + smells = check_code_smells(content, functions, classes) + violations = check_solid_violations(content) + score = calculate_quality_score(line_metrics, functions, classes, smells, violations) + + return { + "file": str(filepath), + "language": language, + "metrics": { + "lines": line_metrics, + "functions": len(functions), + "classes": len(classes), + "avg_complexity": round(sum(f["complexity"] for f in functions) / max(1, len(functions)), 1) + }, + "quality_score": score, + "grade": get_grade(score), + "smells": smells, + "solid_violations": violations, + "function_details": functions[:10], + "class_details": classes[:10] + } + + +def analyze_directory( + dir_path: Path, + recursive: bool = True, + language: Optional[str] = None +) -> Dict: + """Analyze all files in a directory.""" + results = [] + extensions = [] + + if language: + extensions = LANGUAGE_EXTENSIONS.get(language, []) + else: + for exts in LANGUAGE_EXTENSIONS.values(): + extensions.extend(exts) + + pattern = "**/*" if recursive else "*" + + for ext in extensions: + for filepath in dir_path.glob(f"{pattern}{ext}"): + if "node_modules" 
in str(filepath) or ".git" in str(filepath): + continue + result = analyze_file(filepath) + if "error" not in result: + results.append(result) + + if not results: + return {"error": "No supported files found"} + + total_score = sum(r["quality_score"] for r in results) + avg_score = total_score / len(results) + total_smells = sum(len(r["smells"]) for r in results) + total_violations = sum(len(r["solid_violations"]) for r in results) + + return { + "directory": str(dir_path), + "files_analyzed": len(results), + "average_score": round(avg_score, 1), + "overall_grade": get_grade(int(avg_score)), + "total_code_smells": total_smells, + "total_solid_violations": total_violations, + "files": sorted(results, key=lambda x: x["quality_score"]) + } + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if "error" in analysis: + print(f"Error: {analysis['error']}") + return + + print("=" * 60) + print("CODE QUALITY REPORT") + print("=" * 60) + + if "file" in analysis: + print(f"\nFile: {analysis['file']}") + print(f"Language: {analysis['language']}") + print(f"Quality Score: {analysis['quality_score']}/100 ({analysis['grade']})") + + metrics = analysis["metrics"] + print(f"\nLines: {metrics['lines']['total']} ({metrics['lines']['code']} code, {metrics['lines']['comment']} comments)") + print(f"Functions: {metrics['functions']}") + print(f"Classes: {metrics['classes']}") + print(f"Avg Complexity: {metrics['avg_complexity']}") + + if analysis["smells"]: + print("\n--- CODE SMELLS ---") + for smell in analysis["smells"][:10]: + print(f" [{smell['severity'].upper()}] {smell['message']} ({smell['location']})") + + if analysis["solid_violations"]: + print("\n--- SOLID VIOLATIONS ---") + for v in analysis["solid_violations"]: + print(f" [{v['principle']}] {v['message']}") + else: + print(f"\nDirectory: {analysis['directory']}") + print(f"Files Analyzed: {analysis['files_analyzed']}") + print(f"Average Score: {analysis['average_score']}/100 
({analysis['overall_grade']})") + print(f"Total Code Smells: {analysis['total_code_smells']}") + print(f"Total SOLID Violations: {analysis['total_solid_violations']}") + + print("\n--- FILES BY QUALITY ---") + for f in analysis["files"][:10]: + print(f" {f['quality_score']:3d}/100 [{f['grade']}] {f['file']}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze code quality, smells, and SOLID violations" + ) + parser.add_argument( + "path", + help="File or directory to analyze" + ) + parser.add_argument( + "--no-recursive", + dest="recursive", + action="store_false", + default=True, + help="Do not recurse into subdirectories (default: recursive)" + ) + parser.add_argument( + "--language", "-l", + choices=list(LANGUAGE_EXTENSIONS.keys()), + help="Filter by programming language" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + target = Path(args.path).resolve() + + if not target.exists(): + print(f"Error: Path does not exist: {target}", file=sys.stderr) + sys.exit(1) + + if target.is_file(): + analysis = analyze_file(target) + else: + analysis = analyze_directory(target, args.recursive, args.language) + + if args.json: + output = json.dumps(analysis, indent=2, default=str) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.claude/skills/code-reviewer/scripts/pr_analyzer.py b/.claude/skills/code-reviewer/scripts/pr_analyzer.py new file mode 100644 index 00000000..257a64c0 --- /dev/null +++ b/.claude/skills/code-reviewer/scripts/pr_analyzer.py @@ -0,0 +1,495 @@ +#!/usr/bin/env python3 +""" +PR Analyzer + +Analyzes pull request changes for review complexity, risk assessment, +and generates review priorities. 
+ +Usage: + python .claude/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo + python .claude/skills/code-reviewer/scripts/pr_analyzer.py . --base main --head feature-branch + python .claude/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo --json +""" + +import argparse +import json +import os +import re +import subprocess +import sys +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# File categories for review prioritization +FILE_CATEGORIES = { + "critical": { + "patterns": [ + r"auth", r"security", r"password", r"token", r"secret", + r"payment", r"billing", r"crypto", r"encrypt" + ], + "weight": 5, + "description": "Security-sensitive files requiring careful review" + }, + "high": { + "patterns": [ + r"api", r"database", r"migration", r"schema", r"model", + r"config", r"env", r"middleware" + ], + "weight": 4, + "description": "Core infrastructure files" + }, + "medium": { + "patterns": [ + r"service", r"controller", r"handler", r"util", r"helper" + ], + "weight": 3, + "description": "Business logic files" + }, + "low": { + "patterns": [ + r"test", r"spec", r"mock", r"fixture", r"story", + r"readme", r"docs", r"\.md$" + ], + "weight": 1, + "description": "Tests and documentation" + } +} + +# Risky patterns to flag +RISK_PATTERNS = [ + { + "name": "hardcoded_secrets", + "pattern": r"(password|secret|api_key|token)\s*[=:]\s*['\"][^'\"]+['\"]", + "severity": "critical", + "message": "Potential hardcoded secret detected" + }, + { + "name": "todo_fixme", + "pattern": r"(TODO|FIXME|HACK|XXX):", + "severity": "low", + "message": "TODO/FIXME comment found" + }, + { + "name": "console_log", + "pattern": r"console\.(log|debug|info|warn|error)\(", + "severity": "medium", + "message": "Console statement found (remove for production)" + }, + { + "name": "debugger", + "pattern": r"\bdebugger\b", + "severity": "high", + "message": "Debugger statement found" + }, + { + "name": "disable_eslint", + "pattern": r"eslint-disable", + 
"severity": "medium", + "message": "ESLint rule disabled" + }, + { + "name": "any_type", + "pattern": r":\s*any\b", + "severity": "medium", + "message": "TypeScript 'any' type used" + }, + { + "name": "sql_concatenation", + "pattern": r"(SELECT|INSERT|UPDATE|DELETE).*\+.*['\"]", + "severity": "critical", + "message": "Potential SQL injection (string concatenation in query)" + } +] + + +def run_git_command(cmd: List[str], cwd: Path) -> Tuple[bool, str]: + """Run a git command and return success status and output.""" + try: + result = subprocess.run( + cmd, + cwd=cwd, + capture_output=True, + text=True, + timeout=30 + ) + return result.returncode == 0, result.stdout.strip() + except subprocess.TimeoutExpired: + return False, "Command timed out" + except Exception as e: + return False, str(e) + + +def get_changed_files(repo_path: Path, base: str, head: str) -> List[Dict]: + """Get list of changed files between two refs.""" + success, output = run_git_command( + ["git", "diff", "--name-status", f"{base}...{head}"], + repo_path + ) + + if not success: + # Try without the triple dot (for uncommitted changes) + success, output = run_git_command( + ["git", "diff", "--name-status", base, head], + repo_path + ) + + if not success or not output: + # Fall back to staged changes + success, output = run_git_command( + ["git", "diff", "--name-status", "--cached"], + repo_path + ) + + files = [] + for line in output.split("\n"): + if not line.strip(): + continue + parts = line.split("\t") + if len(parts) >= 2: + status = parts[0][0] # First character of status + filepath = parts[-1] # Handle renames (R100\told\tnew) + status_map = { + "A": "added", + "M": "modified", + "D": "deleted", + "R": "renamed", + "C": "copied" + } + files.append({ + "path": filepath, + "status": status_map.get(status, "modified") + }) + + return files + + +def get_file_diff(repo_path: Path, filepath: str, base: str, head: str) -> str: + """Get diff content for a specific file.""" + success, output = 
run_git_command( + ["git", "diff", f"{base}...{head}", "--", filepath], + repo_path + ) + if not success: + success, output = run_git_command( + ["git", "diff", "--cached", "--", filepath], + repo_path + ) + return output if success else "" + + +def categorize_file(filepath: str) -> Tuple[str, int]: + """Categorize a file based on its path and name.""" + filepath_lower = filepath.lower() + + for category, info in FILE_CATEGORIES.items(): + for pattern in info["patterns"]: + if re.search(pattern, filepath_lower): + return category, info["weight"] + + return "medium", 2 # Default category + + +def analyze_diff_for_risks(diff_content: str, filepath: str) -> List[Dict]: + """Analyze diff content for risky patterns.""" + risks = [] + + # Only analyze added lines (starting with +) + added_lines = [ + line[1:] for line in diff_content.split("\n") + if line.startswith("+") and not line.startswith("+++") + ] + + content = "\n".join(added_lines) + + for risk in RISK_PATTERNS: + matches = re.findall(risk["pattern"], content, re.IGNORECASE) + if matches: + risks.append({ + "name": risk["name"], + "severity": risk["severity"], + "message": risk["message"], + "file": filepath, + "count": len(matches) + }) + + return risks + + +def count_changes(diff_content: str) -> Dict[str, int]: + """Count additions and deletions in diff.""" + additions = 0 + deletions = 0 + + for line in diff_content.split("\n"): + if line.startswith("+") and not line.startswith("+++"): + additions += 1 + elif line.startswith("-") and not line.startswith("---"): + deletions += 1 + + return {"additions": additions, "deletions": deletions} + + +def calculate_complexity_score(files: List[Dict], all_risks: List[Dict]) -> int: + """Calculate overall PR complexity score (1-10).""" + score = 0 + + # File count contribution (max 3 points) + file_count = len(files) + if file_count > 20: + score += 3 + elif file_count > 10: + score += 2 + elif file_count > 5: + score += 1 + + # Total changes contribution (max 3 
points) + total_changes = sum(f.get("additions", 0) + f.get("deletions", 0) for f in files) + if total_changes > 500: + score += 3 + elif total_changes > 200: + score += 2 + elif total_changes > 50: + score += 1 + + # Risk severity contribution (max 4 points) + critical_risks = sum(1 for r in all_risks if r["severity"] == "critical") + high_risks = sum(1 for r in all_risks if r["severity"] == "high") + + score += min(2, critical_risks) + score += min(2, high_risks) + + return min(10, max(1, score)) + + +def analyze_commit_messages(repo_path: Path, base: str, head: str) -> Dict: + """Analyze commit messages in the PR.""" + success, output = run_git_command( + ["git", "log", "--oneline", f"{base}...{head}"], + repo_path + ) + + if not success or not output: + return {"commits": 0, "issues": []} + + commits = output.strip().split("\n") + issues = [] + + for commit in commits: + # Abbreviated hashes vary in length, so split on the first space + # instead of slicing at a fixed offset + parts = commit.split(" ", 1) + if len(parts) < 2: + continue + sha, message = parts + + # Check for conventional commit format + if not re.match(r"^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\(.+\))?:", message): + issues.append({ + "commit": sha[:7], + "issue": "Does not follow conventional commit format" + }) + + if len(message) > 72: + issues.append({ + "commit": sha[:7], + "issue": "Commit message exceeds 72 characters" + }) + + return { + "commits": len(commits), + "issues": issues + } + + +def analyze_pr( + repo_path: Path, + base: str = "main", + head: str = "HEAD" +) -> Dict: + """Perform complete PR analysis.""" + # Get changed files + changed_files = get_changed_files(repo_path, base, head) + + if not changed_files: + return { + "status": "no_changes", + "message": "No changes detected between branches" + } + + # Analyze each file + all_risks = [] + file_analyses = [] + + for file_info in changed_files: + filepath = file_info["path"] + category, weight = categorize_file(filepath) + + # Get diff for the file + diff = get_file_diff(repo_path, 
filepath, base, head) + changes = count_changes(diff) + risks = analyze_diff_for_risks(diff, filepath) + + all_risks.extend(risks) + + file_analyses.append({ + "path": filepath, + "status": file_info["status"], + "category": category, + "priority_weight": weight, + "additions": changes["additions"], + "deletions": changes["deletions"], + "risks": risks + }) + + # Sort by priority (highest first) + file_analyses.sort(key=lambda x: (-x["priority_weight"], x["path"])) + + # Analyze commits + commit_analysis = analyze_commit_messages(repo_path, base, head) + + # Calculate metrics + complexity = calculate_complexity_score(file_analyses, all_risks) + + total_additions = sum(f["additions"] for f in file_analyses) + total_deletions = sum(f["deletions"] for f in file_analyses) + + return { + "status": "analyzed", + "summary": { + "files_changed": len(file_analyses), + "total_additions": total_additions, + "total_deletions": total_deletions, + "complexity_score": complexity, + "complexity_label": get_complexity_label(complexity), + "commits": commit_analysis["commits"] + }, + "risks": { + "critical": [r for r in all_risks if r["severity"] == "critical"], + "high": [r for r in all_risks if r["severity"] == "high"], + "medium": [r for r in all_risks if r["severity"] == "medium"], + "low": [r for r in all_risks if r["severity"] == "low"] + }, + "files": file_analyses, + "commit_issues": commit_analysis["issues"], + "review_order": [f["path"] for f in file_analyses[:10]] # Top 10 priority files + } + + +def get_complexity_label(score: int) -> str: + """Get human-readable complexity label.""" + if score <= 2: + return "Simple" + elif score <= 4: + return "Moderate" + elif score <= 6: + return "Complex" + elif score <= 8: + return "Very Complex" + else: + return "Critical" + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if analysis["status"] == "no_changes": + print("No changes detected.") + return + + summary = analysis["summary"] + 
risks = analysis["risks"] + + print("=" * 60) + print("PR ANALYSIS REPORT") + print("=" * 60) + + print(f"\nComplexity: {summary['complexity_score']}/10 ({summary['complexity_label']})") + print(f"Files Changed: {summary['files_changed']}") + print(f"Lines: +{summary['total_additions']} / -{summary['total_deletions']}") + print(f"Commits: {summary['commits']}") + + # Risk summary + print("\n--- RISK SUMMARY ---") + print(f"Critical: {len(risks['critical'])}") + print(f"High: {len(risks['high'])}") + print(f"Medium: {len(risks['medium'])}") + print(f"Low: {len(risks['low'])}") + + # Critical and high risks details + if risks["critical"]: + print("\n--- CRITICAL RISKS ---") + for risk in risks["critical"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + if risks["high"]: + print("\n--- HIGH RISKS ---") + for risk in risks["high"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + # Commit message issues + if analysis["commit_issues"]: + print("\n--- COMMIT MESSAGE ISSUES ---") + for issue in analysis["commit_issues"][:5]: + print(f" {issue['commit']}: {issue['issue']}") + + # Review order + print("\n--- SUGGESTED REVIEW ORDER ---") + for i, filepath in enumerate(analysis["review_order"], 1): + file_info = next(f for f in analysis["files"] if f["path"] == filepath) + print(f" {i}. 
[{file_info['category'].upper()}] {filepath}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze pull request for review complexity and risks" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to git repository (default: current directory)" + ) + parser.add_argument( + "--base", "-b", + default="main", + help="Base branch for comparison (default: main)" + ) + parser.add_argument( + "--head", + default="HEAD", + help="Head branch/commit for comparison (default: HEAD)" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + + if not (repo_path / ".git").exists(): + print(f"Error: {repo_path} is not a git repository", file=sys.stderr) + sys.exit(1) + + analysis = analyze_pr(repo_path, args.base, args.head) + + if args.json: + output = json.dumps(analysis, indent=2) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.claude/skills/code-reviewer/scripts/review_report_generator.py b/.claude/skills/code-reviewer/scripts/review_report_generator.py new file mode 100644 index 00000000..1f1e3784 --- /dev/null +++ b/.claude/skills/code-reviewer/scripts/review_report_generator.py @@ -0,0 +1,505 @@ +#!/usr/bin/env python3 +""" +Review Report Generator + +Generates comprehensive code review reports by combining PR analysis +and code quality findings into structured, actionable reports. + +Usage: + python .claude/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo + python .claude/skills/code-reviewer/scripts/review_report_generator.py . 
--pr-analysis pr_results.json --quality-analysis quality_results.json + python .claude/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo --format markdown --output review.md +""" + +import argparse +import json +import subprocess +import sys +from datetime import datetime +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# Severity weights for prioritization +SEVERITY_WEIGHTS = { + "critical": 100, + "high": 75, + "medium": 50, + "low": 25, + "info": 10 +} + +# Review verdict thresholds +VERDICT_THRESHOLDS = { + "approve": {"max_critical": 0, "max_high": 0, "max_score": 100}, + "approve_with_suggestions": {"max_critical": 0, "max_high": 2, "max_score": 85}, + "request_changes": {"max_critical": 0, "max_high": 5, "max_score": 70}, + "block": {"max_critical": float("inf"), "max_high": float("inf"), "max_score": 0} +} + + +def load_json_file(filepath: str) -> Optional[Dict]: + """Load JSON file if it exists.""" + try: + with open(filepath, "r") as f: + return json.load(f) + except (FileNotFoundError, json.JSONDecodeError): + return None + + +def run_pr_analyzer(repo_path: Path) -> Dict: + """Run the sibling pr_analyzer.py script and return its JSON results.""" + # The analyzer scripts live in the same directory as this file + script_path = Path(__file__).parent / "pr_analyzer.py" + if not script_path.exists(): + return {"status": "error", "message": "pr_analyzer.py not found"} + + try: + result = subprocess.run( + [sys.executable, str(script_path), str(repo_path), "--json"], + capture_output=True, + text=True, + timeout=120 + ) + if result.returncode == 0: + return json.loads(result.stdout) + return {"status": "error", "message": result.stderr} + except Exception as e: + return {"status": "error", "message": str(e)} + + +def run_quality_checker(repo_path: Path) -> Dict: + """Run the sibling code_quality_checker.py script and return its JSON results.""" + script_path = 
Path(__file__).parent / "code_quality_checker.py" + if not script_path.exists(): + return {"status": "error", "message": "code_quality_checker.py not found"} + + try: + result = subprocess.run( + [sys.executable, str(script_path), str(repo_path), "--json"], + capture_output=True, + text=True, + timeout=300 + ) + if result.returncode == 0: + return json.loads(result.stdout) + return {"status": "error", "message": result.stderr} + except Exception as e: + return {"status": "error", "message": str(e)} + + +def calculate_review_score(pr_analysis: Dict, quality_analysis: Dict) -> int: + """Calculate overall review score (0-100).""" + score = 100 + + # Deduct for PR risks + if "risks" in pr_analysis: + risks = pr_analysis["risks"] + score -= len(risks.get("critical", [])) * 15 + score -= len(risks.get("high", [])) * 10 + score -= len(risks.get("medium", [])) * 5 + score -= len(risks.get("low", [])) * 2 + + # Deduct for code quality issues. The quality checker nests its findings + # per file under "files" (as "smells" and "solid_violations") rather than + # exposing a flat "issues" list. + penalties = {"critical": 12, "high": 8, "medium": 4, "low": 1} + for file_result in quality_analysis.get("files", []): + for smell in file_result.get("smells", []): + score -= penalties.get(smell.get("severity", "medium"), 4) + score -= len(file_result.get("solid_violations", [])) * 4 + + # Deduct for complexity + if "summary" in pr_analysis: + complexity = pr_analysis["summary"].get("complexity_score", 0) + if complexity > 7: + score -= 10 + elif complexity > 5: + score -= 5 + + return max(0, min(100, score)) + + +def determine_verdict(score: int, critical_count: int, high_count: int) -> Tuple[str, str]: + """Determine review verdict based on score and issue counts.""" + if critical_count > 0: + return "block", "Critical issues must be resolved before merge" + + if score >= 90 and high_count == 0: + return "approve", "Code meets quality standards" + + if score >= 75 and high_count <= 2: 
+ return "approve_with_suggestions", "Minor improvements recommended" + + if score >= 50: + return "request_changes", "Several issues need to be addressed" + + return "block", "Significant issues prevent approval" + + +def generate_findings_list(pr_analysis: Dict, quality_analysis: Dict) -> List[Dict]: + """Combine and prioritize all findings.""" + findings = [] + + # Add PR risk findings + if "risks" in pr_analysis: + for severity, items in pr_analysis["risks"].items(): + for item in items: + findings.append({ + "source": "pr_analysis", + "severity": severity, + "category": item.get("name", "unknown"), + "message": item.get("message", ""), + "file": item.get("file", ""), + "count": item.get("count", 1) + }) + + # Add code quality findings. The quality checker reports per-file + # "smells" and "solid_violations" lists, so flatten those here. + for file_result in quality_analysis.get("files", []): + for smell in file_result.get("smells", []): + findings.append({ + "source": "quality_analysis", + "severity": smell.get("severity", "medium"), + "category": smell.get("type", "code_smell"), + "message": smell.get("message", ""), + "file": file_result.get("file", ""), + "line": smell.get("line", 0) + }) + for violation in file_result.get("solid_violations", []): + findings.append({ + "source": "quality_analysis", + "severity": violation.get("severity", "medium"), + "category": violation.get("principle", "solid_violation"), + "message": violation.get("message", ""), + "file": file_result.get("file", ""), + "line": 0 + }) + + # Sort by severity weight + findings.sort( + key=lambda x: -SEVERITY_WEIGHTS.get(x["severity"], 0) + ) + + return findings + + +def generate_action_items(findings: List[Dict]) -> List[Dict]: + """Generate prioritized action items from findings.""" + action_items = [] + seen_categories = set() + + for finding in findings: + category = finding["category"] + severity = finding["severity"] + + # Group similar issues + if category in seen_categories and severity not in ["critical", "high"]: + continue + + action = { + "priority": "P0" if severity == "critical" else "P1" if severity == "high" else "P2", + "action": get_action_for_category(category, finding), + "severity": severity, + "files_affected": [finding["file"]] if finding.get("file") else [] + } + action_items.append(action) + seen_categories.add(category) + + return action_items[:15] # Top 15 actions + + +def get_action_for_category(category: str, 
finding: Dict) -> str: + """Get actionable recommendation for issue category.""" + actions = { + "hardcoded_secrets": "Remove hardcoded credentials and use environment variables or a secrets manager", + "sql_concatenation": "Use parameterized queries to prevent SQL injection", + "debugger": "Remove debugger statements before merging", + "console_log": "Remove or replace console statements with proper logging", + "todo_fixme": "Address TODO/FIXME comments or create tracking issues", + "disable_eslint": "Address the underlying issue instead of disabling lint rules", + "any_type": "Replace 'any' types with proper type definitions", + "long_function": "Break down function into smaller, focused units", + "god_class": "Split class into smaller, single-responsibility classes", + "too_many_params": "Use parameter objects or builder pattern", + "deep_nesting": "Refactor using early returns, guard clauses, or extraction", + "high_complexity": "Reduce cyclomatic complexity through refactoring", + "missing_error_handling": "Add proper error handling and recovery logic", + "duplicate_code": "Extract duplicate code into shared functions", + "magic_numbers": "Replace magic numbers with named constants", + "large_file": "Consider splitting into multiple smaller modules" + } + return actions.get(category, f"Review and address: {finding.get('message', category)}") + + +def format_markdown_report(report: Dict) -> str: + """Generate markdown-formatted report.""" + lines = [] + + # Header + lines.append("# Code Review Report") + lines.append("") + lines.append(f"**Generated:** {report['metadata']['generated_at']}") + lines.append(f"**Repository:** {report['metadata']['repository']}") + lines.append("") + + # Executive Summary + lines.append("## Executive Summary") + lines.append("") + summary = report["summary"] + verdict = summary["verdict"] + verdict_emoji = { + "approve": "✅", + "approve_with_suggestions": "✅", + "request_changes": "⚠️", + "block": "❌" + }.get(verdict, "❓") + + 
lines.append(f"**Verdict:** {verdict_emoji} {verdict.upper().replace('_', ' ')}") + lines.append(f"**Score:** {summary['score']}/100") + lines.append(f"**Rationale:** {summary['rationale']}") + lines.append("") + + # Issue Counts + lines.append("### Issue Summary") + lines.append("") + lines.append("| Severity | Count |") + lines.append("|----------|-------|") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f"| {severity.capitalize()} | {count} |") + lines.append("") + + # PR Statistics (if available) + if "pr_summary" in report: + pr = report["pr_summary"] + lines.append("### Change Statistics") + lines.append("") + lines.append(f"- **Files Changed:** {pr.get('files_changed', 'N/A')}") + lines.append(f"- **Lines Added:** +{pr.get('total_additions', 0)}") + lines.append(f"- **Lines Removed:** -{pr.get('total_deletions', 0)}") + lines.append(f"- **Complexity:** {pr.get('complexity_label', 'N/A')}") + lines.append("") + + # Action Items + if report.get("action_items"): + lines.append("## Action Items") + lines.append("") + for i, item in enumerate(report["action_items"], 1): + priority = item["priority"] + emoji = "🔴" if priority == "P0" else "🟠" if priority == "P1" else "🟡" + lines.append(f"{i}. 
{emoji} **[{priority}]** {item['action']}") + if item.get("files_affected"): + lines.append(f" - Files: {', '.join(item['files_affected'][:3])}") + lines.append("") + + # Critical Findings + critical_findings = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical_findings: + lines.append("## Critical Issues (Must Fix)") + lines.append("") + for finding in critical_findings: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # High Priority Findings + high_findings = [f for f in report.get("findings", []) if f["severity"] == "high"] + if high_findings: + lines.append("## High Priority Issues") + lines.append("") + for finding in high_findings[:10]: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # Review Order (if available) + if "review_order" in report: + lines.append("## Suggested Review Order") + lines.append("") + for i, filepath in enumerate(report["review_order"][:10], 1): + lines.append(f"{i}. 
`{filepath}`") + lines.append("") + + # Footer + lines.append("---") + lines.append("*Generated by Code Reviewer*") + + return "\n".join(lines) + + +def format_text_report(report: Dict) -> str: + """Generate plain text report.""" + lines = [] + + lines.append("=" * 60) + lines.append("CODE REVIEW REPORT") + lines.append("=" * 60) + lines.append("") + lines.append(f"Generated: {report['metadata']['generated_at']}") + lines.append(f"Repository: {report['metadata']['repository']}") + lines.append("") + + summary = report["summary"] + verdict = summary["verdict"].upper().replace("_", " ") + lines.append(f"VERDICT: {verdict}") + lines.append(f"SCORE: {summary['score']}/100") + lines.append(f"RATIONALE: {summary['rationale']}") + lines.append("") + + lines.append("--- ISSUE SUMMARY ---") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f" {severity.capitalize()}: {count}") + lines.append("") + + if report.get("action_items"): + lines.append("--- ACTION ITEMS ---") + for i, item in enumerate(report["action_items"][:10], 1): + lines.append(f" {i}. 
[{item['priority']}] {item['action']}") + lines.append("") + + critical = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical: + lines.append("--- CRITICAL ISSUES ---") + for f in critical: + lines.append(f" [{f.get('file', 'unknown')}] {f['message']}") + lines.append("") + + lines.append("=" * 60) + + return "\n".join(lines) + + +def generate_report( + repo_path: Path, + pr_analysis: Optional[Dict] = None, + quality_analysis: Optional[Dict] = None +) -> Dict: + """Generate comprehensive review report.""" + # Run analyses if not provided + if pr_analysis is None: + pr_analysis = run_pr_analyzer(repo_path) + + if quality_analysis is None: + quality_analysis = run_quality_checker(repo_path) + + # Generate findings + findings = generate_findings_list(pr_analysis, quality_analysis) + + # Count issues by severity + issue_counts = { + "critical": len([f for f in findings if f["severity"] == "critical"]), + "high": len([f for f in findings if f["severity"] == "high"]), + "medium": len([f for f in findings if f["severity"] == "medium"]), + "low": len([f for f in findings if f["severity"] == "low"]) + } + + # Calculate score and verdict + score = calculate_review_score(pr_analysis, quality_analysis) + verdict, rationale = determine_verdict( + score, + issue_counts["critical"], + issue_counts["high"] + ) + + # Generate action items + action_items = generate_action_items(findings) + + # Build report + report = { + "metadata": { + "generated_at": datetime.now().isoformat(), + "repository": str(repo_path), + "version": "1.0.0" + }, + "summary": { + "score": score, + "verdict": verdict, + "rationale": rationale, + "issue_counts": issue_counts + }, + "findings": findings, + "action_items": action_items + } + + # Add PR summary if available + if pr_analysis.get("status") == "analyzed": + report["pr_summary"] = pr_analysis.get("summary", {}) + report["review_order"] = pr_analysis.get("review_order", []) + + # Add quality summary if available + if 
"files_analyzed" in quality_analysis: + # The quality checker emits directory-level aggregates rather than a + # "status"/"summary" pair, so build the summary from those keys. + report["quality_summary"] = { + "files_analyzed": quality_analysis.get("files_analyzed", 0), + "average_score": quality_analysis.get("average_score"), + "overall_grade": quality_analysis.get("overall_grade"), + "total_code_smells": quality_analysis.get("total_code_smells", 0), + "total_solid_violations": quality_analysis.get("total_solid_violations", 0) + } + + return report + + +def main(): + parser = argparse.ArgumentParser( + description="Generate comprehensive code review reports" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to repository (default: current directory)" + ) + parser.add_argument( + "--pr-analysis", + help="Path to pre-computed PR analysis JSON" + ) + parser.add_argument( + "--quality-analysis", + help="Path to pre-computed quality analysis JSON" + ) + parser.add_argument( + "--format", "-f", + choices=["text", "markdown", "json"], + default="text", + help="Output format (default: text)" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output as JSON (shortcut for --format json)" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + if not repo_path.exists(): + print(f"Error: Path does not exist: {repo_path}", file=sys.stderr) + sys.exit(1) + + # Load pre-computed analyses if provided + pr_analysis = None + quality_analysis = None + + if args.pr_analysis: + pr_analysis = load_json_file(args.pr_analysis) + if not pr_analysis: + print(f"Warning: Could not load PR analysis from {args.pr_analysis}", file=sys.stderr) + + if args.quality_analysis: + quality_analysis = load_json_file(args.quality_analysis) + if not quality_analysis: + print(f"Warning: Could not load quality analysis from {args.quality_analysis}", file=sys.stderr) + + # Generate report + report = generate_report(repo_path, pr_analysis, quality_analysis) + + # Format output + output_format = "json" if args.json else args.format + + if output_format == "json": + output = json.dumps(report, indent=2) + elif output_format == "markdown": + output = format_markdown_report(report) + else: + output = format_text_report(report) + + # Write or print output + if args.output: + with open(args.output, "w") as f: + 
f.write(output) + print(f"Report written to {args.output}") + else: + print(output) + + +if __name__ == "__main__": + main() diff --git a/.cline/skills/code-reviewer/SKILL.md b/.cline/skills/code-reviewer/SKILL.md new file mode 100644 index 00000000..9c07324a --- /dev/null +++ b/.cline/skills/code-reviewer/SKILL.md @@ -0,0 +1,177 @@ +--- +name: code-reviewer +description: Code review automation for TypeScript, JavaScript, Python, Go, Swift, Kotlin. Analyzes PRs for complexity and risk, checks code quality for SOLID violations and code smells, generates review reports. Use when reviewing pull requests, analyzing code quality, identifying issues, generating review checklists. +--- + +# Code Reviewer + +Automated code review tools for analyzing pull requests, detecting code quality issues, and generating review reports. + +--- + +## Table of Contents + +- [Tools](#tools) + - [PR Analyzer](#pr-analyzer) + - [Code Quality Checker](#code-quality-checker) + - [Review Report Generator](#review-report-generator) +- [Reference Guides](#reference-guides) +- [Languages Supported](#languages-supported) + +--- + +## Tools + +### PR Analyzer + +Analyzes git diff between branches to assess review complexity and identify risks. + +```bash +# Analyze current branch against main +python scripts/pr_analyzer.py /path/to/repo + +# Compare specific branches +python scripts/pr_analyzer.py . 
--base main --head feature-branch + +# JSON output for integration +python scripts/pr_analyzer.py /path/to/repo --json +``` + +**What it detects:** +- Hardcoded secrets (passwords, API keys, tokens) +- SQL injection patterns (string concatenation in queries) +- Debug statements (debugger, console.log) +- ESLint rule disabling +- TypeScript `any` types +- TODO/FIXME comments + +**Output includes:** +- Complexity score (1-10) +- Risk categorization (critical, high, medium, low) +- File prioritization for review order +- Commit message validation + +--- + +### Code Quality Checker + +Analyzes source code for structural issues, code smells, and SOLID violations. + +```bash +# Analyze a directory +python scripts/code_quality_checker.py /path/to/code + +# Analyze specific language +python scripts/code_quality_checker.py . --language python + +# JSON output +python scripts/code_quality_checker.py /path/to/code --json +``` + +**What it detects:** +- Long functions (>50 lines) +- Large files (>500 lines) +- God classes (>20 methods) +- Deep nesting (>4 levels) +- Too many parameters (>5) +- High cyclomatic complexity +- Missing error handling +- Unused imports +- Magic numbers + +**Thresholds:** + +| Issue | Threshold | +|-------|-----------| +| Long function | >50 lines | +| Large file | >500 lines | +| God class | >20 methods | +| Too many params | >5 | +| Deep nesting | >4 levels | +| High complexity | >10 branches | + +--- + +### Review Report Generator + +Combines PR analysis and code quality findings into structured review reports. + +```bash +# Generate report for current repo +python scripts/review_report_generator.py /path/to/repo + +# Markdown output +python scripts/review_report_generator.py . --format markdown --output review.md + +# Use pre-computed analyses +python scripts/review_report_generator.py . 
\ + --pr-analysis pr_results.json \ + --quality-analysis quality_results.json +``` + +**Report includes:** +- Review verdict (approve, request changes, block) +- Score (0-100) +- Prioritized action items +- Issue summary by severity +- Suggested review order + +**Verdicts:** + +| Score | Verdict | +|-------|---------| +| 90+ with no high issues | Approve | +| 75+ with ≤2 high issues | Approve with suggestions | +| 50-74 | Request changes | +| <50 or critical issues | Block | + +--- + +## Reference Guides + +### Code Review Checklist +`.cline/skills/code-reviewer/references/code_review_checklist.md` + +Systematic checklists covering: +- Pre-review checks (build, tests, PR hygiene) +- Correctness (logic, data handling, error handling) +- Security (input validation, injection prevention) +- Performance (efficiency, caching, scalability) +- Maintainability (code quality, naming, structure) +- Testing (coverage, quality, mocking) +- Language-specific checks + +### Coding Standards +`.cline/skills/code-reviewer/references/coding_standards.md` + +Language-specific standards for: +- TypeScript (type annotations, null safety, async/await) +- JavaScript (declarations, patterns, modules) +- Python (type hints, exceptions, class design) +- Go (error handling, structs, concurrency) +- Swift (optionals, protocols, errors) +- Kotlin (null safety, data classes, coroutines) + +### Common Antipatterns +`.cline/skills/code-reviewer/references/common_antipatterns.md` + +Antipattern catalog with examples and fixes: +- Structural (god class, long method, deep nesting) +- Logic (boolean blindness, stringly typed code) +- Security (SQL injection, hardcoded credentials) +- Performance (N+1 queries, unbounded collections) +- Testing (duplication, testing implementation) +- Async (floating promises, callback hell) + +--- + +## Languages Supported + +| Language | Extensions | +|----------|------------| +| Python | `.py` | +| TypeScript | `.ts`, `.tsx` | +| JavaScript | `.js`, `.jsx`, `.mjs` | 
+| Go | `.go` | +| Swift | `.swift` | +| Kotlin | `.kt`, `.kts` | \ No newline at end of file diff --git a/.cline/skills/code-reviewer/references/code_review_checklist.md b/.cline/skills/code-reviewer/references/code_review_checklist.md new file mode 100644 index 00000000..b7bd0867 --- /dev/null +++ b/.cline/skills/code-reviewer/references/code_review_checklist.md @@ -0,0 +1,270 @@ +# Code Review Checklist + +Structured checklists for systematic code review across different aspects. + +--- + +## Table of Contents + +- [Pre-Review Checks](#pre-review-checks) +- [Correctness](#correctness) +- [Security](#security) +- [Performance](#performance) +- [Maintainability](#maintainability) +- [Testing](#testing) +- [Documentation](#documentation) +- [Language-Specific Checks](#language-specific-checks) + +--- + +## Pre-Review Checks + +Before diving into code, verify these basics: + +### Build and Tests +- [ ] Code compiles without errors +- [ ] All existing tests pass +- [ ] New tests are included for new functionality +- [ ] No unintended files included (build artifacts, IDE configs) + +### PR Hygiene +- [ ] PR has clear title and description +- [ ] Changes are scoped appropriately (not too large) +- [ ] Commits follow conventional commit format +- [ ] Branch is up to date with base branch + +### Scope Verification +- [ ] Changes match the stated purpose +- [ ] No unrelated changes bundled in +- [ ] Breaking changes are documented +- [ ] Migration path provided if needed + +--- + +## Correctness + +### Logic +- [ ] Algorithm implements requirements correctly +- [ ] Edge cases handled (null, empty, boundary values) +- [ ] Off-by-one errors checked +- [ ] Correct operators used (== vs ===, & vs &&) +- [ ] Loop termination conditions correct +- [ ] Recursion has proper base cases + +### Data Handling +- [ ] Data types appropriate for the use case +- [ ] Numeric overflow/underflow considered +- [ ] Date/time handling accounts for timezones +- [ ] Unicode and 
internationalization handled +- [ ] Data validation at entry points + +### State Management +- [ ] State transitions are valid +- [ ] Race conditions addressed +- [ ] Concurrent access handled correctly +- [ ] State cleanup on errors/exit + +### Error Handling +- [ ] Errors caught at appropriate levels +- [ ] Error messages are actionable +- [ ] Errors don't expose sensitive information +- [ ] Recovery or graceful degradation implemented +- [ ] Resources cleaned up in error paths + +--- + +## Security + +### Input Validation +- [ ] All user input validated and sanitized +- [ ] Input length limits enforced +- [ ] File uploads validated (type, size, content) +- [ ] URL parameters validated + +### Injection Prevention +- [ ] SQL queries parameterized +- [ ] Command execution uses safe APIs +- [ ] HTML output escaped to prevent XSS +- [ ] LDAP queries properly escaped +- [ ] XML parsing disables external entities + +### Authentication & Authorization +- [ ] Authentication required for protected resources +- [ ] Authorization checked before operations +- [ ] Session management secure +- [ ] Password handling follows best practices +- [ ] Token expiration implemented + +### Data Protection +- [ ] Sensitive data encrypted at rest +- [ ] Sensitive data encrypted in transit +- [ ] PII handled according to policy +- [ ] Secrets not hardcoded +- [ ] Logs don't contain sensitive data + +### API Security +- [ ] Rate limiting implemented +- [ ] CORS configured correctly +- [ ] CSRF protection in place +- [ ] API keys/tokens secured +- [ ] Endpoints use HTTPS + +--- + +## Performance + +### Efficiency +- [ ] Appropriate data structures used +- [ ] Algorithms have acceptable complexity +- [ ] Database queries are optimized +- [ ] N+1 query problems avoided +- [ ] Indexes used where beneficial + +### Resource Usage +- [ ] Memory usage bounded +- [ ] No memory leaks +- [ ] File handles properly closed +- [ ] Database connections pooled +- [ ] Network calls minimized + +### Caching 
+- [ ] Appropriate caching strategy +- [ ] Cache invalidation handled +- [ ] Cache keys are unique and predictable +- [ ] TTL values appropriate + +### Scalability +- [ ] Horizontal scaling considered +- [ ] Bottlenecks identified +- [ ] Async processing for long operations +- [ ] Batch operations where appropriate + +--- + +## Maintainability + +### Code Quality +- [ ] Functions/methods have single responsibility +- [ ] Classes follow SOLID principles +- [ ] Code is DRY (Don't Repeat Yourself) +- [ ] No dead code or commented-out code +- [ ] Magic numbers replaced with constants + +### Naming +- [ ] Names are descriptive and consistent +- [ ] Naming follows project conventions +- [ ] No abbreviations that obscure meaning +- [ ] Boolean variables/functions have is/has/can prefix + +### Structure +- [ ] Functions are appropriately sized (<50 lines preferred) +- [ ] Nesting depth is reasonable (<4 levels) +- [ ] Related code is grouped together +- [ ] Dependencies are minimal and explicit + +### Readability +- [ ] Code is self-documenting where possible +- [ ] Complex logic has explanatory comments +- [ ] Formatting is consistent +- [ ] No overly clever or obscure code + +--- + +## Testing + +### Coverage +- [ ] New code has unit tests +- [ ] Critical paths have integration tests +- [ ] Edge cases are tested +- [ ] Error conditions are tested + +### Quality +- [ ] Tests are independent +- [ ] Tests have clear assertions +- [ ] Test names describe what is tested +- [ ] Tests don't depend on external state + +### Mocking +- [ ] External dependencies are mocked +- [ ] Mocks are realistic +- [ ] Mock setup is not excessive + +--- + +## Documentation + +### Code Documentation +- [ ] Public APIs are documented +- [ ] Complex algorithms explained +- [ ] Non-obvious decisions documented +- [ ] TODO/FIXME comments have context + +### External Documentation +- [ ] README updated if needed +- [ ] API documentation updated +- [ ] Changelog updated +- [ ] Migration guides 
provided + +--- + +## Language-Specific Checks + +### TypeScript/JavaScript +- [ ] Types are explicit (avoid `any`) +- [ ] Null checks present (`?.`, `??`) +- [ ] Async/await errors handled +- [ ] No floating promises +- [ ] Memory leaks from closures checked + +### Python +- [ ] Type hints used for public APIs +- [ ] Context managers for resources (`with` statements) +- [ ] Exception handling is specific (not bare `except`) +- [ ] No mutable default arguments +- [ ] List comprehensions used appropriately + +### Go +- [ ] Errors checked and handled +- [ ] Goroutine leaks prevented +- [ ] Context propagation correct +- [ ] Defer statements in right order +- [ ] Interfaces minimal + +### Swift +- [ ] Optionals handled safely +- [ ] Memory management correct (weak/unowned) +- [ ] Error handling uses Result or throws +- [ ] Access control appropriate +- [ ] Codable implementation correct + +### Kotlin +- [ ] Null safety leveraged +- [ ] Coroutine cancellation handled +- [ ] Data classes used appropriately +- [ ] Extension functions don't obscure behavior +- [ ] Sealed classes for state + +--- + +## Review Process Tips + +### Before Approving +1. Verify all critical checks passed +2. Confirm tests are adequate +3. Consider deployment impact +4. Check for any security concerns +5. 
Ensure documentation is updated + +### Providing Feedback +- Be specific about issues +- Explain why something is problematic +- Suggest alternatives when possible +- Distinguish blockers from suggestions +- Acknowledge good patterns + +### When to Block +- Security vulnerabilities present +- Critical logic errors +- No tests for risky changes +- Breaking changes without migration +- Significant performance regressions diff --git a/.cline/skills/code-reviewer/references/coding_standards.md b/.cline/skills/code-reviewer/references/coding_standards.md new file mode 100644 index 00000000..9fbc6a06 --- /dev/null +++ b/.cline/skills/code-reviewer/references/coding_standards.md @@ -0,0 +1,555 @@ +# Coding Standards + +Language-specific coding standards and conventions for code review. + +--- + +## Table of Contents + +- [Universal Principles](#universal-principles) +- [TypeScript Standards](#typescript-standards) +- [JavaScript Standards](#javascript-standards) +- [Python Standards](#python-standards) +- [Go Standards](#go-standards) +- [Swift Standards](#swift-standards) +- [Kotlin Standards](#kotlin-standards) + +--- + +## Universal Principles + +These apply across all languages. 
+ +### Naming Conventions + +| Element | Convention | Example | +|---------|------------|---------| +| Variables | camelCase (JS/TS), snake_case (Python/Go) | `userName`, `user_name` | +| Constants | SCREAMING_SNAKE_CASE | `MAX_RETRY_COUNT` | +| Functions | camelCase (JS/TS), snake_case (Python) | `getUserById`, `get_user_by_id` | +| Classes | PascalCase | `UserRepository` | +| Interfaces | PascalCase, optionally prefixed | `IUserService` or `UserService` | +| Private members | Prefix with underscore or use access modifiers | `_internalState` | + +### Function Design + +``` +Good functions: +- Do one thing well +- Have descriptive names (verb + noun) +- Take 3 or fewer parameters +- Return early for error cases +- Stay under 50 lines +``` + +### Error Handling + +``` +Good error handling: +- Catch specific errors, not generic exceptions +- Log with context (what, where, why) +- Clean up resources in error paths +- Don't swallow errors silently +- Provide actionable error messages +``` + +--- + +## TypeScript Standards + +### Type Annotations + +```typescript +// Avoid 'any' - use unknown for truly unknown types +function processData(data: unknown): ProcessedResult { + if (isValidData(data)) { + return transform(data); + } + throw new Error('Invalid data format'); +} + +// Use explicit return types for public APIs +export function calculateTotal(items: CartItem[]): number { + return items.reduce((sum, item) => sum + item.price, 0); +} + +// Use type guards for runtime checks +function isUser(obj: unknown): obj is User { + return ( + typeof obj === 'object' && + obj !== null && + 'id' in obj && + 'email' in obj + ); +} +``` + +### Null Safety + +```typescript +// Use optional chaining and nullish coalescing +const userName = user?.profile?.name ?? 
'Anonymous';
+
+// Be explicit about nullable types
+interface Config {
+  timeout: number;
+  retries?: number; // Optional
+  fallbackUrl: string | null; // Explicitly nullable
+}
+
+// Use assertion functions for validation
+function assertDefined<T>(value: T | null | undefined): asserts value is T {
+  if (value === null || value === undefined) {
+    throw new Error('Value is not defined');
+  }
+}
+```
+
+### Async/Await
+
+```typescript
+// Always handle errors in async functions
+async function fetchUser(id: string): Promise<User> {
+  try {
+    const response = await api.get(`/users/${id}`);
+    return response.data;
+  } catch (error) {
+    logger.error('Failed to fetch user', { id, error });
+    throw new UserFetchError(id, error);
+  }
+}
+
+// Use Promise.all for parallel operations
+async function loadDashboard(userId: string): Promise<Dashboard> {
+  const [profile, stats, notifications] = await Promise.all([
+    fetchProfile(userId),
+    fetchStats(userId),
+    fetchNotifications(userId)
+  ]);
+  return { profile, stats, notifications };
+}
+```
+
+### React/Component Standards
+
+```typescript
+// Use explicit prop types
+interface ButtonProps {
+  label: string;
+  onClick: () => void;
+  variant?: 'primary' | 'secondary';
+  disabled?: boolean;
+}
+
+// Prefer functional components with hooks
+function Button({ label, onClick, variant = 'primary', disabled = false }: ButtonProps) {
+  return (
+    <button className={`btn btn-${variant}`} onClick={onClick} disabled={disabled}>
+      {label}
+    </button>
+  );
+}
+
+// Use custom hooks for reusable logic
+function useDebounce<T>(value: T, delay: number): T {
+  const [debouncedValue, setDebouncedValue] = useState<T>(value);
+
+  useEffect(() => {
+    const timer = setTimeout(() => setDebouncedValue(value), delay);
+    return () => clearTimeout(timer);
+  }, [value, delay]);
+
+  return debouncedValue;
+}
+```
+
+---
+
+## JavaScript Standards
+
+### Variable Declarations
+
+```javascript
+// Use const by default, let when reassignment needed
+const MAX_ITEMS = 100;
+let currentCount = 0;
+
+// Never use var
+// var is function-scoped and hoisted, leading to bugs
+```
+
+### Object and Array Patterns
+
+```javascript
+// Use object destructuring
+const { name, email, role = 'user' } = user;
+
+// Use spread for immutable updates
+const updatedUser = { ...user, lastLogin: new Date() };
+const updatedList = [...items, newItem];
+
+// Use array methods over loops
+const activeUsers = users.filter(u => u.isActive);
+const emails = users.map(u => u.email);
+const total = orders.reduce((sum, o) => sum + o.amount, 0);
+```
+
+### Module Patterns
+
+```javascript
+// Use named exports for utilities
+export function formatDate(date) { ... }
+export function parseDate(str) { ... }
+
+// Use default export for main component/class
+export default class UserService { ... }
+
+// Group related exports
+export { formatDate, parseDate, isValidDate } from './dateUtils';
+```
+
+---
+
+## Python Standards
+
+### Type Hints (PEP 484)
+
+```python
+from typing import Optional, List, Dict, Union
+
+def get_user(user_id: int) -> Optional[User]:
+    """Fetch user by ID, returns None if not found."""
+    return db.query(User).filter(User.id == user_id).first()
+
+def process_items(items: List[str]) -> Dict[str, int]:
+    """Count occurrences of each item."""
+    return {item: items.count(item) for item in set(items)}
+
+def send_notification(
+    user: User,
+    message: str,
+    *,
+    priority: str = "normal",
+    channels: Optional[List[str]] = None
+) -> bool:
+    """Send notification to user via specified channels."""
+    channels = channels or ["email"]
+    # Implementation
+```
+
+### Exception Handling
+
+```python
+# Catch specific exceptions
+try:
+    result = api_client.fetch_data(endpoint)
+except ConnectionError as e:
+    logger.warning(f"Connection failed: {e}")
+    return cached_data
+except TimeoutError as e:
+    logger.error(f"Request timed out: {e}")
+    raise ServiceUnavailableError() from e
+
+# Use context managers for resources
+with open(filepath, 'r') as f:
+    data = json.load(f)
+
+# Custom exceptions should be informative
+class ValidationError(Exception):
+    def
__init__(self, field: str, message: str): + self.field = field + self.message = message + super().__init__(f"{field}: {message}") +``` + +### Class Design + +```python +from dataclasses import dataclass +from abc import ABC, abstractmethod + +# Use dataclasses for data containers +@dataclass +class UserDTO: + id: int + email: str + name: str + is_active: bool = True + +# Use ABC for interfaces +class Repository(ABC): + @abstractmethod + def find_by_id(self, id: int) -> Optional[Entity]: + pass + + @abstractmethod + def save(self, entity: Entity) -> Entity: + pass + +# Use properties for computed attributes +class Order: + def __init__(self, items: List[OrderItem]): + self._items = items + + @property + def total(self) -> Decimal: + return sum(item.price * item.quantity for item in self._items) +``` + +--- + +## Go Standards + +### Error Handling + +```go +// Always check errors +file, err := os.Open(filename) +if err != nil { + return fmt.Errorf("failed to open %s: %w", filename, err) +} +defer file.Close() + +// Use custom error types for specific cases +type ValidationError struct { + Field string + Message string +} + +func (e *ValidationError) Error() string { + return fmt.Sprintf("%s: %s", e.Field, e.Message) +} + +// Wrap errors with context +if err := db.Query(query); err != nil { + return fmt.Errorf("query failed for user %d: %w", userID, err) +} +``` + +### Struct Design + +```go +// Use unexported fields with exported methods +type UserService struct { + repo UserRepository + cache Cache + logger Logger +} + +// Constructor functions for initialization +func NewUserService(repo UserRepository, cache Cache, logger Logger) *UserService { + return &UserService{ + repo: repo, + cache: cache, + logger: logger, + } +} + +// Keep interfaces small +type Reader interface { + Read(p []byte) (n int, err error) +} + +type Writer interface { + Write(p []byte) (n int, err error) +} +``` + +### Concurrency + +```go +// Use context for cancellation +func fetchData(ctx 
context.Context, url string) ([]byte, error) { + req, err := http.NewRequestWithContext(ctx, "GET", url, nil) + if err != nil { + return nil, err + } + // ... +} + +// Use channels for communication +func worker(jobs <-chan Job, results chan<- Result) { + for job := range jobs { + result := process(job) + results <- result + } +} + +// Use sync.WaitGroup for coordination +var wg sync.WaitGroup +for _, item := range items { + wg.Add(1) + go func(i Item) { + defer wg.Done() + processItem(i) + }(item) +} +wg.Wait() +``` + +--- + +## Swift Standards + +### Optionals + +```swift +// Use optional binding +if let user = fetchUser(id: userId) { + displayProfile(user) +} + +// Use guard for early exit +guard let data = response.data else { + throw NetworkError.noData +} + +// Use nil coalescing for defaults +let displayName = user.nickname ?? user.email + +// Avoid force unwrapping except in tests +// BAD: let name = user.name! +// GOOD: guard let name = user.name else { return } +``` + +### Protocol-Oriented Design + +```swift +// Define protocols with minimal requirements +protocol Identifiable { + var id: String { get } +} + +protocol Persistable: Identifiable { + func save() throws + static func find(by id: String) -> Self? 
+}
+
+// Use protocol extensions for default implementations
+extension Persistable {
+  func save() throws {
+    try Storage.shared.save(self)
+  }
+}
+
+// Prefer composition over inheritance
+struct User: Identifiable, Codable {
+  let id: String
+  var name: String
+  var email: String
+}
+```
+
+### Error Handling
+
+```swift
+// Define domain-specific errors
+enum AuthError: Error {
+  case invalidCredentials
+  case tokenExpired
+  case networkFailure(underlying: Error)
+}
+
+// Use Result type for async operations
+func authenticate(
+  email: String,
+  password: String,
+  completion: @escaping (Result<User, AuthError>) -> Void
+)
+
+// Use throws for synchronous operations
+func validate(_ input: String) throws -> ValidatedInput {
+  guard !input.isEmpty else {
+    throw ValidationError.emptyInput
+  }
+  return ValidatedInput(value: input)
+}
+```
+
+---
+
+## Kotlin Standards
+
+### Null Safety
+
+```kotlin
+// Use nullable types explicitly
+fun findUser(id: Int): User? {
+    return userRepository.find(id)
+}
+
+// Use safe calls and elvis operator
+val name = user?.profile?.name ?: "Unknown"
+
+// Use let for null checks with side effects
+user?.let { activeUser ->
+    sendWelcomeEmail(activeUser.email)
+    logActivity(activeUser.id)
+}
+
+// Use require/check for validation
+fun processPayment(amount: Double) {
+    require(amount > 0) { "Amount must be positive: $amount" }
+    // Process
+}
+```
+
+### Data Classes and Sealed Classes
+
+```kotlin
+// Use data classes for DTOs
+data class UserDTO(
+    val id: Int,
+    val email: String,
+    val name: String,
+    val isActive: Boolean = true
+)
+
+// Use sealed classes for state
+sealed class Result<out T> {
+    data class Success<out T>(val data: T) : Result<T>()
+    data class Error(val message: String, val cause: Throwable? = null) : Result<Nothing>()
+    object Loading : Result<Nothing>()
+}
+
+// Pattern matching with when
+fun handleResult(result: Result<User>) = when (result) {
+    is Result.Success -> showUser(result.data)
+    is Result.Error -> showError(result.message)
+    Result.Loading -> showLoading()
+}
+```
+
+### Coroutines
+
+```kotlin
+// Use structured concurrency
+suspend fun loadDashboard(): Dashboard = coroutineScope {
+    val profile = async { fetchProfile() }
+    val stats = async { fetchStats() }
+    val notifications = async { fetchNotifications() }
+
+    Dashboard(
+        profile = profile.await(),
+        stats = stats.await(),
+        notifications = notifications.await()
+    )
+}
+
+// Handle cancellation
+suspend fun fetchWithRetry(url: String): Response {
+    repeat(3) { attempt ->
+        try {
+            return httpClient.get(url)
+        } catch (e: IOException) {
+            if (attempt == 2) throw e
+            delay(1000L * (attempt + 1))
+        }
+    }
+    throw IllegalStateException("Unreachable")
+}
+```
diff --git a/.cline/skills/code-reviewer/references/common_antipatterns.md b/.cline/skills/code-reviewer/references/common_antipatterns.md
new file mode 100644
index 00000000..26045452
--- /dev/null
+++ b/.cline/skills/code-reviewer/references/common_antipatterns.md
@@ -0,0 +1,739 @@
+# Common Antipatterns
+
+Code antipatterns to identify during review, with examples and fixes.
+
+---
+
+## Table of Contents
+
+- [Structural Antipatterns](#structural-antipatterns)
+- [Logic Antipatterns](#logic-antipatterns)
+- [Security Antipatterns](#security-antipatterns)
+- [Performance Antipatterns](#performance-antipatterns)
+- [Testing Antipatterns](#testing-antipatterns)
+- [Async Antipatterns](#async-antipatterns)
+
+---
+
+## Structural Antipatterns
+
+### God Class
+
+A class that does too much and knows too much.
+
+```typescript
+// BAD: God class handling everything
+class UserManager {
+  createUser(data: UserData) { ... }
+  updateUser(id: string, data: UserData) { ... }
+  deleteUser(id: string) { ... }
+  sendEmail(userId: string, content: string) { ...
}
+  generateReport(userId: string) { ... }
+  validatePassword(password: string) { ... }
+  hashPassword(password: string) { ... }
+  uploadAvatar(userId: string, file: File) { ... }
+  resizeImage(file: File) { ... }
+  logActivity(userId: string, action: string) { ... }
+  // 50 more methods...
+}
+
+// GOOD: Single responsibility classes
+class UserRepository {
+  create(data: UserData): User { ... }
+  update(id: string, data: Partial<UserData>): User { ... }
+  delete(id: string): void { ... }
+}
+
+class EmailService {
+  send(to: string, content: string): void { ... }
+}
+
+class PasswordService {
+  validate(password: string): ValidationResult { ... }
+  hash(password: string): string { ... }
+}
+```
+
+**Detection:** Class has >20 methods, >500 lines, or handles unrelated concerns.
+
+---
+
+### Long Method
+
+Functions that do too much and are hard to understand.
+
+```python
+# BAD: Long method doing everything
+def process_order(order_data):
+    # Validate order (20 lines)
+    if not order_data.get('items'):
+        raise ValueError('No items')
+    if not order_data.get('customer_id'):
+        raise ValueError('No customer')
+    # ... more validation
+
+    # Calculate totals (30 lines)
+    subtotal = 0
+    for item in order_data['items']:
+        price = get_product_price(item['product_id'])
+        subtotal += price * item['quantity']
+    # ... tax calculation, discounts
+
+    # Process payment (40 lines)
+    payment_result = payment_gateway.charge(...)
+    # ... handle payment errors
+
+    # Create order record (20 lines)
+    order = Order.create(...)
+
+    # Send notifications (20 lines)
+    send_order_confirmation(...)
+    notify_warehouse(...)
+ + return order + +# GOOD: Composed of focused functions +def process_order(order_data): + validate_order(order_data) + totals = calculate_order_totals(order_data) + payment = process_payment(order_data['customer_id'], totals) + order = create_order_record(order_data, totals, payment) + send_order_notifications(order) + return order +``` + +**Detection:** Function >50 lines or requires scrolling to read. + +--- + +### Deep Nesting + +Excessive indentation making code hard to follow. + +```javascript +// BAD: Deep nesting +function processData(data) { + if (data) { + if (data.items) { + if (data.items.length > 0) { + for (const item of data.items) { + if (item.isValid) { + if (item.type === 'premium') { + if (item.price > 100) { + // Finally do something + processItem(item); + } + } + } + } + } + } + } +} + +// GOOD: Early returns and guard clauses +function processData(data) { + if (!data?.items?.length) { + return; + } + + const premiumItems = data.items.filter( + item => item.isValid && item.type === 'premium' && item.price > 100 + ); + + premiumItems.forEach(processItem); +} +``` + +**Detection:** Indentation >4 levels deep. + +--- + +### Magic Numbers and Strings + +Hard-coded values without explanation. + +```go +// BAD: Magic numbers +func calculateDiscount(total float64, userType int) float64 { + if userType == 1 { + return total * 0.15 + } else if userType == 2 { + return total * 0.25 + } + return total * 0.05 +} + +// GOOD: Named constants +const ( + UserTypeRegular = 1 + UserTypePremium = 2 + + DiscountRegular = 0.05 + DiscountStandard = 0.15 + DiscountPremium = 0.25 +) + +func calculateDiscount(total float64, userType int) float64 { + switch userType { + case UserTypePremium: + return total * DiscountPremium + case UserTypeRegular: + return total * DiscountStandard + default: + return total * DiscountRegular + } +} +``` + +**Detection:** Literal numbers (except 0, 1) or repeated string literals. 
+
+---
+
+### Primitive Obsession
+
+Using primitives instead of small objects.
+
+```typescript
+// BAD: Primitives everywhere
+function createUser(
+  name: string,
+  email: string,
+  phone: string,
+  street: string,
+  city: string,
+  zipCode: string,
+  country: string
+): User { ... }
+
+// GOOD: Value objects
+interface Address {
+  street: string;
+  city: string;
+  zipCode: string;
+  country: string;
+}
+
+interface ContactInfo {
+  email: string;
+  phone: string;
+}
+
+function createUser(
+  name: string,
+  contact: ContactInfo,
+  address: Address
+): User { ... }
+```
+
+**Detection:** Functions with >4 parameters of same type, or related primitives always passed together.
+
+---
+
+## Logic Antipatterns
+
+### Boolean Blindness
+
+Passing booleans that make code unreadable at call sites.
+
+```swift
+// BAD: What do these booleans mean?
+user.configure(true, false, true, false)
+
+// GOOD: Named parameters or option objects
+user.configure(
+  sendWelcomeEmail: true,
+  requireVerification: false,
+  enableNotifications: true,
+  isAdmin: false
+)
+
+// Or use an options struct
+struct UserConfiguration {
+  var sendWelcomeEmail: Bool = true
+  var requireVerification: Bool = false
+  var enableNotifications: Bool = true
+  var isAdmin: Bool = false
+}
+
+user.configure(UserConfiguration())
+```
+
+**Detection:** Function calls with multiple boolean literals.
+
+---
+
+### Null Returns for Collections
+
+Returning null instead of empty collections.
+
+```kotlin
+// BAD: Returning null
+fun findUsersByRole(role: String): List<User>? {
+    val users = repository.findByRole(role)
+    return if (users.isEmpty()) null else users
+}
+
+// Caller must handle null
+val users = findUsersByRole("admin")
+if (users != null) {
+    users.forEach { ... }
+}
+
+// GOOD: Return empty collection
+fun findUsersByRole(role: String): List<User> {
+    return repository.findByRole(role)
+}
+
+// Caller can iterate directly
+findUsersByRole("admin").forEach { ...
} +``` + +**Detection:** Functions returning nullable collections. + +--- + +### Stringly Typed Code + +Using strings where enums or types should be used. + +```python +# BAD: String-based logic +def handle_event(event_type: str, data: dict): + if event_type == "user_created": + handle_user_created(data) + elif event_type == "user_updated": + handle_user_updated(data) + elif event_type == "user_dleted": # Typo won't be caught + handle_user_deleted(data) + +# GOOD: Enum-based +from enum import Enum + +class EventType(Enum): + USER_CREATED = "user_created" + USER_UPDATED = "user_updated" + USER_DELETED = "user_deleted" + +def handle_event(event_type: EventType, data: dict): + handlers = { + EventType.USER_CREATED: handle_user_created, + EventType.USER_UPDATED: handle_user_updated, + EventType.USER_DELETED: handle_user_deleted, + } + handlers[event_type](data) +``` + +**Detection:** String comparisons for type/status/category values. + +--- + +## Security Antipatterns + +### SQL Injection + +String concatenation in SQL queries. + +```javascript +// BAD: String concatenation +const query = `SELECT * FROM users WHERE id = ${userId}`; +db.query(query); + +// BAD: String templates still vulnerable +const query = `SELECT * FROM users WHERE name = '${userName}'`; + +// GOOD: Parameterized queries +const query = 'SELECT * FROM users WHERE id = $1'; +db.query(query, [userId]); + +// GOOD: Using ORM safely +User.findOne({ where: { id: userId } }); +``` + +**Detection:** String concatenation or template literals with SQL keywords. + +--- + +### Hardcoded Credentials + +Secrets in source code. 
+
+```python
+# BAD: Hardcoded secrets
+API_KEY = "sk-abc123xyz789"
+DATABASE_URL = "postgresql://admin:password123@prod-db.internal:5432/app"
+
+# GOOD: Environment variables
+import os
+
+API_KEY = os.environ["API_KEY"]
+DATABASE_URL = os.environ["DATABASE_URL"]
+
+# GOOD: Secrets manager (e.g. AWS Secrets Manager via boto3)
+import boto3
+
+secrets = boto3.client("secretsmanager")
+API_KEY = secrets.get_secret_value(SecretId="api-key")["SecretString"]
+```
+
+**Detection:** Variables named `password`, `secret`, `key`, `token` with string literals.
+
+---
+
+### Unsafe Deserialization
+
+Deserializing untrusted data without validation.
+
+```python
+# BAD: Binary serialization from untrusted source can execute arbitrary code
+# Examples: Python's binary serialization, yaml.load without SafeLoader
+
+# GOOD: Use safe alternatives
+import json
+
+def load_data(file_path):
+    with open(file_path, 'r') as f:
+        return json.load(f)
+
+# GOOD: Use SafeLoader for YAML
+import yaml
+
+with open('config.yaml') as f:
+    config = yaml.safe_load(f)
+```
+
+**Detection:** Binary deserialization functions, yaml.load without safe loader, dynamic code execution on external data.
+
+---
+
+### Missing Input Validation
+
+Trusting user input without validation.
+
+```typescript
+// BAD: No validation
+app.post('/user', (req, res) => {
+  const user = db.create({
+    name: req.body.name,
+    email: req.body.email,
+    role: req.body.role // User can set themselves as admin!
+  });
+  res.json(user);
+});
+
+// GOOD: Validate and sanitize
+import { z } from 'zod';
+
+const CreateUserSchema = z.object({
+  name: z.string().min(1).max(100),
+  email: z.string().email(),
+  // role is NOT accepted from input
+});
+
+app.post('/user', (req, res) => {
+  const validated = CreateUserSchema.parse(req.body);
+  const user = db.create({
+    ...validated,
+    role: 'user' // Default role, not from input
+  });
+  res.json(user);
+});
+```
+
+**Detection:** Request body/params used directly without validation schema.
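This detection heuristic can be approximated the same way the bundled scripts work: a regex pass over source lines. A minimal sketch — the helper name and both patterns are illustrative assumptions, not part of the shipped scripts:

```python
import re
from typing import List

# Heuristic: flag lines that read request input directly while showing
# no sign of validation on the same line. Coarse, like the other checks.
DIRECT_INPUT = re.compile(r"req\.(body|params|query)\.\w+")
VALIDATION_HINTS = re.compile(r"\b(parse|validate|sanitize|schema)\b", re.IGNORECASE)

def flag_unvalidated_input(source: str) -> List[int]:
    """Return 1-based line numbers that use request input with no validation hint."""
    flagged = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if DIRECT_INPUT.search(line) and not VALIDATION_HINTS.search(line):
            flagged.append(lineno)
    return flagged

handler = """
app.post('/user', (req, res) => {
  const user = db.create({ name: req.body.name, role: req.body.role });
});
"""
print(flag_unvalidated_input(handler))  # → [3]
```

Like every heuristic in this skill, it trades precision for simplicity: it misses multi-line handlers and flags already-validated fields, so treat hits as review prompts rather than verdicts.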
+
+---
+
+## Performance Antipatterns
+
+### N+1 Query Problem
+
+Loading related data one record at a time.
+
+```python
+# BAD: N+1 queries
+def get_orders_with_items():
+    orders = Order.query.all()  # 1 query
+    for order in orders:
+        items = OrderItem.query.filter_by(order_id=order.id).all()  # N queries
+        order.items = items
+    return orders
+
+# GOOD: Eager loading
+def get_orders_with_items():
+    return Order.query.options(
+        joinedload(Order.items)
+    ).all()  # 1 query with JOIN
+
+# GOOD: Batch loading
+def get_orders_with_items():
+    orders = Order.query.all()
+    order_ids = [o.id for o in orders]
+    items = OrderItem.query.filter(
+        OrderItem.order_id.in_(order_ids)
+    ).all()  # 2 queries total
+    # Group items by order_id...
+```
+
+**Detection:** Database queries inside loops.
+
+---
+
+### Unbounded Collections
+
+Loading unlimited data into memory.
+
+```go
+// BAD: Load all records
+func GetAllUsers() ([]User, error) {
+    var users []User
+    err := db.Find(&users).Error // Could be millions
+    return users, err
+}
+
+// GOOD: Pagination
+func GetUsers(page, pageSize int) ([]User, error) {
+    var users []User
+    offset := (page - 1) * pageSize
+    err := db.Limit(pageSize).Offset(offset).Find(&users).Error
+    return users, err
+}
+
+// GOOD: Streaming for large datasets
+func ProcessAllUsers(handler func(User) error) error {
+    rows, err := db.Model(&User{}).Rows()
+    if err != nil {
+        return err
+    }
+    defer rows.Close()
+
+    for rows.Next() {
+        var user User
+        db.ScanRows(rows, &user)
+        if err := handler(user); err != nil {
+            return err
+        }
+    }
+    return nil
+}
+```
+
+**Detection:** `findAll()`, `find({})`, or queries without `LIMIT`.
+
+---
+
+### Synchronous I/O in Hot Paths
+
+Blocking operations in request handlers.
+ +```javascript +// BAD: Sync file read on every request +app.get('/config', (req, res) => { + const config = fs.readFileSync('./config.json'); // Blocks event loop + res.json(JSON.parse(config)); +}); + +// GOOD: Load once at startup +const config = JSON.parse(fs.readFileSync('./config.json')); + +app.get('/config', (req, res) => { + res.json(config); +}); + +// GOOD: Async with caching +let configCache = null; + +app.get('/config', async (req, res) => { + if (!configCache) { + configCache = JSON.parse(await fs.promises.readFile('./config.json')); + } + res.json(configCache); +}); +``` + +**Detection:** `readFileSync`, `execSync`, or blocking calls in request handlers. + +--- + +## Testing Antipatterns + +### Test Code Duplication + +Repeating setup in every test. + +```typescript +// BAD: Duplicate setup +describe('UserService', () => { + it('should create user', async () => { + const db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + const service = new UserService(userRepo, emailService); + + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); + + it('should update user', async () => { + const db = await createTestDatabase(); // Duplicated + const userRepo = new UserRepository(db); // Duplicated + const emailService = new MockEmailService(); // Duplicated + const service = new UserService(userRepo, emailService); // Duplicated + + // ... 
+ }); +}); + +// GOOD: Shared setup +describe('UserService', () => { + let service: UserService; + let db: TestDatabase; + + beforeEach(async () => { + db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + service = new UserService(userRepo, emailService); + }); + + afterEach(async () => { + await db.cleanup(); + }); + + it('should create user', async () => { + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); +}); +``` + +--- + +### Testing Implementation Instead of Behavior + +Tests coupled to internal implementation. + +```python +# BAD: Testing implementation details +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing internal structure + assert cart._items[0].name == "Apple" + assert cart._total == 1.00 + +# GOOD: Testing behavior +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing public behavior + assert cart.item_count == 1 + assert cart.total == 1.00 + assert cart.contains("Apple") +``` + +--- + +## Async Antipatterns + +### Floating Promises + +Promises without await or catch. + +```typescript +// BAD: Floating promise +async function saveUser(user: User) { + db.save(user); // Not awaited, errors lost + logger.info('User saved'); // Logs before save completes +} + +// BAD: Fire and forget in loop +for (const item of items) { + processItem(item); // All run in parallel, no error handling +} + +// GOOD: Await the promise +async function saveUser(user: User) { + await db.save(user); + logger.info('User saved'); +} + +// GOOD: Process with proper handling +await Promise.all(items.map(item => processItem(item))); + +// Or sequentially +for (const item of items) { + await processItem(item); +} +``` + +**Detection:** Async function calls without `await` or `.then()`. + +--- + +### Callback Hell + +Deeply nested callbacks. 
+
+```javascript
+// BAD: Callback hell
+getUser(userId, (err, user) => {
+  if (err) return handleError(err);
+  getOrders(user.id, (err, orders) => {
+    if (err) return handleError(err);
+    getProducts(orders[0].productIds, (err, products) => {
+      if (err) return handleError(err);
+      renderPage(user, orders, products, (err) => {
+        if (err) return handleError(err);
+        console.log('Done');
+      });
+    });
+  });
+});
+
+// GOOD: Async/await
+async function loadPage(userId) {
+  try {
+    const user = await getUser(userId);
+    const orders = await getOrders(user.id);
+    const products = await getProducts(orders[0].productIds);
+    await renderPage(user, orders, products);
+    console.log('Done');
+  } catch (err) {
+    handleError(err);
+  }
+}
+```
+
+**Detection:** >2 levels of callback nesting.
+
+---
+
+### Async in Constructor
+
+Async operations in constructors.
+
+```typescript
+// BAD: Async in constructor
+class DatabaseConnection {
+  constructor(url: string) {
+    this.connect(url); // Fire-and-forget async
+  }
+
+  private async connect(url: string) {
+    this.client = await createClient(url);
+  }
+}
+
+// GOOD: Factory method
+class DatabaseConnection {
+  private constructor(private client: Client) {}
+
+  static async create(url: string): Promise<DatabaseConnection> {
+    const client = await createClient(url);
+    return new DatabaseConnection(client);
+  }
+}
+
+// Usage
+const db = await DatabaseConnection.create(url);
+```
+
+**Detection:** `async` calls or `.then()` in constructor.
diff --git a/.cline/skills/code-reviewer/scripts/code_quality_checker.py b/.cline/skills/code-reviewer/scripts/code_quality_checker.py
new file mode 100644
index 00000000..1e44a3a5
--- /dev/null
+++ b/.cline/skills/code-reviewer/scripts/code_quality_checker.py
@@ -0,0 +1,560 @@
+#!/usr/bin/env python3
+"""
+Code Quality Checker
+
+Analyzes source code for quality issues, code smells, complexity metrics,
+and SOLID principle violations.
+ +Usage: + python .cline/skills/code-reviewer/scripts/code_quality_checker.py /path/to/file.py + python .cline/skills/code-reviewer/scripts/code_quality_checker.py /path/to/directory --recursive + python .cline/skills/code-reviewer/scripts/code_quality_checker.py . --language typescript --json +""" + +import argparse +import json +import re +import sys +from pathlib import Path +from typing import Dict, List, Optional + + +# Language-specific file extensions +LANGUAGE_EXTENSIONS = { + "python": [".py"], + "typescript": [".ts", ".tsx"], + "javascript": [".js", ".jsx", ".mjs"], + "go": [".go"], + "swift": [".swift"], + "kotlin": [".kt", ".kts"] +} + +# Code smell thresholds +THRESHOLDS = { + "long_function_lines": 50, + "too_many_parameters": 5, + "high_complexity": 10, + "god_class_methods": 20, + "max_imports": 15 +} + + +def get_file_extension(filepath: Path) -> str: + """Get file extension.""" + return filepath.suffix.lower() + + +def detect_language(filepath: Path) -> Optional[str]: + """Detect programming language from file extension.""" + ext = get_file_extension(filepath) + for lang, extensions in LANGUAGE_EXTENSIONS.items(): + if ext in extensions: + return lang + return None + + +def read_file_content(filepath: Path) -> str: + """Read file content safely.""" + try: + with open(filepath, "r", encoding="utf-8", errors="ignore") as f: + return f.read() + except Exception: + return "" + + +def calculate_cyclomatic_complexity(content: str) -> int: + """ + Estimate cyclomatic complexity based on control flow keywords. 
+ """ + complexity = 1 # Base complexity + + # Control flow patterns that increase complexity + patterns = [ + r"\bif\b", + r"\belif\b", + r"\belse\b", + r"\bfor\b", + r"\bwhile\b", + r"\bcase\b", + r"\bcatch\b", + r"\bexcept\b", + r"\band\b", + r"\bor\b", + r"\|\|", + r"&&" + ] + + for pattern in patterns: + matches = re.findall(pattern, content, re.IGNORECASE) + complexity += len(matches) + + return complexity + + +def count_lines(content: str) -> Dict[str, int]: + """Count different types of lines in code.""" + lines = content.split("\n") + total = len(lines) + blank = sum(1 for line in lines if not line.strip()) + comment = 0 + + for line in lines: + stripped = line.strip() + if stripped.startswith("#") or stripped.startswith("//"): + comment += 1 + elif stripped.startswith("/*") or stripped.startswith("'''") or stripped.startswith('"""'): + comment += 1 + + code = total - blank - comment + + return { + "total": total, + "code": code, + "blank": blank, + "comment": comment + } + + +def find_functions(content: str, language: str) -> List[Dict]: + """Find function definitions and their metrics.""" + functions = [] + + # Language-specific function patterns + patterns = { + "python": r"def\s+(\w+)\s*\(([^)]*)\)", + "typescript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "javascript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "go": r"func\s+(?:\([^)]+\)\s+)?(\w+)\s*\(([^)]*)\)", + "swift": r"func\s+(\w+)\s*\(([^)]*)\)", + "kotlin": r"fun\s+(\w+)\s*\(([^)]*)\)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content, re.MULTILINE) + + for match in matches: + name = next((g for g in match.groups() if g), "anonymous") + params_str = match.group(2) if len(match.groups()) > 1 and match.group(2) else "" + + # Count parameters + params = [p.strip() for p in params_str.split(",") if p.strip()] + param_count = len(params) + + # Estimate 
function length + start_pos = match.end() + remaining = content[start_pos:] + + next_func = re.search(pattern, remaining) + if next_func: + func_body = remaining[:next_func.start()] + else: + func_body = remaining[:min(2000, len(remaining))] + + line_count = len(func_body.split("\n")) + complexity = calculate_cyclomatic_complexity(func_body) + + functions.append({ + "name": name, + "parameters": param_count, + "lines": line_count, + "complexity": complexity + }) + + return functions + + +def find_classes(content: str, language: str) -> List[Dict]: + """Find class definitions and their metrics.""" + classes = [] + + patterns = { + "python": r"class\s+(\w+)", + "typescript": r"class\s+(\w+)", + "javascript": r"class\s+(\w+)", + "go": r"type\s+(\w+)\s+struct", + "swift": r"class\s+(\w+)", + "kotlin": r"class\s+(\w+)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content) + + for match in matches: + name = match.group(1) + + start_pos = match.end() + remaining = content[start_pos:] + + next_class = re.search(pattern, remaining) + if next_class: + class_body = remaining[:next_class.start()] + else: + class_body = remaining + + # Count methods + method_patterns = { + "python": r"def\s+\w+\s*\(", + "typescript": r"(?:public|private|protected)?\s*\w+\s*\([^)]*\)\s*[:{]", + "javascript": r"\w+\s*\([^)]*\)\s*\{", + "go": r"func\s+\(", + "swift": r"func\s+\w+", + "kotlin": r"fun\s+\w+" + } + method_pattern = method_patterns.get(language, method_patterns["python"]) + methods = len(re.findall(method_pattern, class_body)) + + classes.append({ + "name": name, + "methods": methods, + "lines": len(class_body.split("\n")) + }) + + return classes + + +def check_code_smells(content: str, functions: List[Dict], classes: List[Dict]) -> List[Dict]: + """Check for code smells in the content.""" + smells = [] + + # Long functions + for func in functions: + if func["lines"] > THRESHOLDS["long_function_lines"]: + smells.append({ + "type": 
"long_function",
+                "severity": "medium",
+                "message": f"Function '{func['name']}' has {func['lines']} lines (max: {THRESHOLDS['long_function_lines']})",
+                "location": func["name"]
+            })
+
+    # Too many parameters
+    for func in functions:
+        if func["parameters"] > THRESHOLDS["too_many_parameters"]:
+            smells.append({
+                "type": "too_many_parameters",
+                "severity": "low",
+                "message": f"Function '{func['name']}' has {func['parameters']} parameters (max: {THRESHOLDS['too_many_parameters']})",
+                "location": func["name"]
+            })
+
+    # High complexity
+    for func in functions:
+        if func["complexity"] > THRESHOLDS["high_complexity"]:
+            severity = "high" if func["complexity"] > 20 else "medium"
+            smells.append({
+                "type": "high_complexity",
+                "severity": severity,
+                "message": f"Function '{func['name']}' has complexity {func['complexity']} (max: {THRESHOLDS['high_complexity']})",
+                "location": func["name"]
+            })
+
+    # God classes
+    for cls in classes:
+        if cls["methods"] > THRESHOLDS["god_class_methods"]:
+            smells.append({
+                "type": "god_class",
+                "severity": "high",
+                "message": f"Class '{cls['name']}' has {cls['methods']} methods (max: {THRESHOLDS['god_class_methods']})",
+                "location": cls["name"]
+            })
+
+    # Magic numbers
+    magic_pattern = r"\b(?<!\.)\d{3,}\b"
+    magic_numbers = re.findall(magic_pattern, content)
+    if len(magic_numbers) > 5:
+        smells.append({
+            "type": "magic_numbers",
+            "severity": "low",
+            "message": f"Found {len(magic_numbers)} magic numbers - consider named constants",
+            "location": "file"
+        })
+
+    return smells
+
+
+def check_solid_violations(content: str) -> List[Dict]:
+    """Check for potential SOLID principle violations."""
+    violations = []
+
+    # OCP: Type checking instead of polymorphism
+    type_checks = len(re.findall(r"isinstance\(|type\(.*\)\s*==|typeof\s+\w+\s*===", content))
+    if type_checks > 2:
+        violations.append({
+            "principle": "OCP",
+            "name": "Open/Closed Principle",
+            "severity": "medium",
+            "message": f"Found {type_checks} type checks - consider using polymorphism"
+        })
+
+    # LSP/ISP: NotImplementedError
+    not_impl = len(re.findall(r"raise\s+NotImplementedError|not\s+implemented", content, re.IGNORECASE))
+    if not_impl:
+        violations.append({
+            "principle": "LSP/ISP",
+            "name": "Liskov/Interface Segregation",
+            "severity": "low",
+            "message": f"Found {not_impl} unimplemented methods - may indicate oversized interface"
+        })
+
+    # DIP: Too many direct imports
+    imports = len(re.findall(r"^(?:import|from)\s+", content, re.MULTILINE))
+    if imports > THRESHOLDS["max_imports"]:
+        violations.append({
+            "principle": "DIP",
+            "name": "Dependency Inversion Principle",
+            "severity": "low",
+            "message": f"File has {imports} imports - consider dependency injection"
+        })
+
+    return violations
+
+
+def calculate_quality_score(
+    line_metrics: Dict,
+    functions: List[Dict],
+    classes: List[Dict],
+    smells: List[Dict],
+    violations: List[Dict]
+) -> int:
+    """Calculate overall quality score (0-100)."""
+    score = 100
+
+    # Deduct for code smells
+    for smell in smells:
+        if smell["severity"] == "high":
+            score -= 10
+        elif smell["severity"] == "medium":
+            score -= 5
+        elif smell["severity"] == "low":
+            score -= 2
+
+    # Deduct for SOLID violations
+    for violation in violations:
+        if violation["severity"] == "high":
+            score -= 8
+        elif violation["severity"] == "medium":
+            score -= 4
+        elif violation["severity"] == "low":
+            score -= 2
+
+    # Bonus for good comment ratio (10-30%)
+    if line_metrics["total"] > 0:
+        comment_ratio = line_metrics["comment"] / line_metrics["total"]
+        if 0.1 <= comment_ratio <= 0.3:
+            score += 5
+
+    # 
Bonus for reasonable function sizes + if functions: + avg_lines = sum(f["lines"] for f in functions) / len(functions) + if avg_lines < 30: + score += 5 + + return max(0, min(100, score)) + + +def get_grade(score: int) -> str: + """Convert score to letter grade.""" + if score >= 90: + return "A" + elif score >= 80: + return "B" + elif score >= 70: + return "C" + elif score >= 60: + return "D" + else: + return "F" + + +def analyze_file(filepath: Path) -> Dict: + """Analyze a single file for code quality.""" + language = detect_language(filepath) + if not language: + return {"error": f"Unsupported file type: {filepath.suffix}"} + + content = read_file_content(filepath) + if not content: + return {"error": f"Could not read file: {filepath}"} + + line_metrics = count_lines(content) + functions = find_functions(content, language) + classes = find_classes(content, language) + smells = check_code_smells(content, functions, classes) + violations = check_solid_violations(content) + score = calculate_quality_score(line_metrics, functions, classes, smells, violations) + + return { + "file": str(filepath), + "language": language, + "metrics": { + "lines": line_metrics, + "functions": len(functions), + "classes": len(classes), + "avg_complexity": round(sum(f["complexity"] for f in functions) / max(1, len(functions)), 1) + }, + "quality_score": score, + "grade": get_grade(score), + "smells": smells, + "solid_violations": violations, + "function_details": functions[:10], + "class_details": classes[:10] + } + + +def analyze_directory( + dir_path: Path, + recursive: bool = True, + language: Optional[str] = None +) -> Dict: + """Analyze all files in a directory.""" + results = [] + extensions = [] + + if language: + extensions = LANGUAGE_EXTENSIONS.get(language, []) + else: + for exts in LANGUAGE_EXTENSIONS.values(): + extensions.extend(exts) + + pattern = "**/*" if recursive else "*" + + for ext in extensions: + for filepath in dir_path.glob(f"{pattern}{ext}"): + if "node_modules" 
in str(filepath) or ".git" in str(filepath): + continue + result = analyze_file(filepath) + if "error" not in result: + results.append(result) + + if not results: + return {"error": "No supported files found"} + + total_score = sum(r["quality_score"] for r in results) + avg_score = total_score / len(results) + total_smells = sum(len(r["smells"]) for r in results) + total_violations = sum(len(r["solid_violations"]) for r in results) + + return { + "directory": str(dir_path), + "files_analyzed": len(results), + "average_score": round(avg_score, 1), + "overall_grade": get_grade(int(avg_score)), + "total_code_smells": total_smells, + "total_solid_violations": total_violations, + "files": sorted(results, key=lambda x: x["quality_score"]) + } + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if "error" in analysis: + print(f"Error: {analysis['error']}") + return + + print("=" * 60) + print("CODE QUALITY REPORT") + print("=" * 60) + + if "file" in analysis: + print(f"\nFile: {analysis['file']}") + print(f"Language: {analysis['language']}") + print(f"Quality Score: {analysis['quality_score']}/100 ({analysis['grade']})") + + metrics = analysis["metrics"] + print(f"\nLines: {metrics['lines']['total']} ({metrics['lines']['code']} code, {metrics['lines']['comment']} comments)") + print(f"Functions: {metrics['functions']}") + print(f"Classes: {metrics['classes']}") + print(f"Avg Complexity: {metrics['avg_complexity']}") + + if analysis["smells"]: + print("\n--- CODE SMELLS ---") + for smell in analysis["smells"][:10]: + print(f" [{smell['severity'].upper()}] {smell['message']} ({smell['location']})") + + if analysis["solid_violations"]: + print("\n--- SOLID VIOLATIONS ---") + for v in analysis["solid_violations"]: + print(f" [{v['principle']}] {v['message']}") + else: + print(f"\nDirectory: {analysis['directory']}") + print(f"Files Analyzed: {analysis['files_analyzed']}") + print(f"Average Score: {analysis['average_score']}/100 
({analysis['overall_grade']})") + print(f"Total Code Smells: {analysis['total_code_smells']}") + print(f"Total SOLID Violations: {analysis['total_solid_violations']}") + + print("\n--- FILES BY QUALITY ---") + for f in analysis["files"][:10]: + print(f" {f['quality_score']:3d}/100 [{f['grade']}] {f['file']}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze code quality, smells, and SOLID violations" + ) + parser.add_argument( + "path", + help="File or directory to analyze" + ) + parser.add_argument( + "--recursive", "-r", + action="store_true", + default=True, + help="Recursively analyze directories (default: true)" + ) + parser.add_argument( + "--language", "-l", + choices=list(LANGUAGE_EXTENSIONS.keys()), + help="Filter by programming language" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + target = Path(args.path).resolve() + + if not target.exists(): + print(f"Error: Path does not exist: {target}", file=sys.stderr) + sys.exit(1) + + if target.is_file(): + analysis = analyze_file(target) + else: + analysis = analyze_directory(target, args.recursive, args.language) + + if args.json: + output = json.dumps(analysis, indent=2, default=str) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.cline/skills/code-reviewer/scripts/pr_analyzer.py b/.cline/skills/code-reviewer/scripts/pr_analyzer.py new file mode 100644 index 00000000..41bbadbd --- /dev/null +++ b/.cline/skills/code-reviewer/scripts/pr_analyzer.py @@ -0,0 +1,495 @@ +#!/usr/bin/env python3 +""" +PR Analyzer + +Analyzes pull request changes for review complexity, risk assessment, +and generates review priorities. 
+ +Usage: + python .cline/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo + python .cline/skills/code-reviewer/scripts/pr_analyzer.py . --base main --head feature-branch + python .cline/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo --json +""" + +import argparse +import json +import os +import re +import subprocess +import sys +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# File categories for review prioritization +FILE_CATEGORIES = { + "critical": { + "patterns": [ + r"auth", r"security", r"password", r"token", r"secret", + r"payment", r"billing", r"crypto", r"encrypt" + ], + "weight": 5, + "description": "Security-sensitive files requiring careful review" + }, + "high": { + "patterns": [ + r"api", r"database", r"migration", r"schema", r"model", + r"config", r"env", r"middleware" + ], + "weight": 4, + "description": "Core infrastructure files" + }, + "medium": { + "patterns": [ + r"service", r"controller", r"handler", r"util", r"helper" + ], + "weight": 3, + "description": "Business logic files" + }, + "low": { + "patterns": [ + r"test", r"spec", r"mock", r"fixture", r"story", + r"readme", r"docs", r"\.md$" + ], + "weight": 1, + "description": "Tests and documentation" + } +} + +# Risky patterns to flag +RISK_PATTERNS = [ + { + "name": "hardcoded_secrets", + "pattern": r"(password|secret|api_key|token)\s*[=:]\s*['\"][^'\"]+['\"]", + "severity": "critical", + "message": "Potential hardcoded secret detected" + }, + { + "name": "todo_fixme", + "pattern": r"(TODO|FIXME|HACK|XXX):", + "severity": "low", + "message": "TODO/FIXME comment found" + }, + { + "name": "console_log", + "pattern": r"console\.(log|debug|info|warn|error)\(", + "severity": "medium", + "message": "Console statement found (remove for production)" + }, + { + "name": "debugger", + "pattern": r"\bdebugger\b", + "severity": "high", + "message": "Debugger statement found" + }, + { + "name": "disable_eslint", + "pattern": r"eslint-disable", + 
"severity": "medium", + "message": "ESLint rule disabled" + }, + { + "name": "any_type", + "pattern": r":\s*any\b", + "severity": "medium", + "message": "TypeScript 'any' type used" + }, + { + "name": "sql_concatenation", + "pattern": r"(SELECT|INSERT|UPDATE|DELETE).*\+.*['\"]", + "severity": "critical", + "message": "Potential SQL injection (string concatenation in query)" + } +] + + +def run_git_command(cmd: List[str], cwd: Path) -> Tuple[bool, str]: + """Run a git command and return success status and output.""" + try: + result = subprocess.run( + cmd, + cwd=cwd, + capture_output=True, + text=True, + timeout=30 + ) + return result.returncode == 0, result.stdout.strip() + except subprocess.TimeoutExpired: + return False, "Command timed out" + except Exception as e: + return False, str(e) + + +def get_changed_files(repo_path: Path, base: str, head: str) -> List[Dict]: + """Get list of changed files between two refs.""" + success, output = run_git_command( + ["git", "diff", "--name-status", f"{base}...{head}"], + repo_path + ) + + if not success: + # Try without the triple dot (for uncommitted changes) + success, output = run_git_command( + ["git", "diff", "--name-status", base, head], + repo_path + ) + + if not success or not output: + # Fall back to staged changes + success, output = run_git_command( + ["git", "diff", "--name-status", "--cached"], + repo_path + ) + + files = [] + for line in output.split("\n"): + if not line.strip(): + continue + parts = line.split("\t") + if len(parts) >= 2: + status = parts[0][0] # First character of status + filepath = parts[-1] # Handle renames (R100\told\tnew) + status_map = { + "A": "added", + "M": "modified", + "D": "deleted", + "R": "renamed", + "C": "copied" + } + files.append({ + "path": filepath, + "status": status_map.get(status, "modified") + }) + + return files + + +def get_file_diff(repo_path: Path, filepath: str, base: str, head: str) -> str: + """Get diff content for a specific file.""" + success, output = 
run_git_command( + ["git", "diff", f"{base}...{head}", "--", filepath], + repo_path + ) + if not success: + success, output = run_git_command( + ["git", "diff", "--cached", "--", filepath], + repo_path + ) + return output if success else "" + + +def categorize_file(filepath: str) -> Tuple[str, int]: + """Categorize a file based on its path and name.""" + filepath_lower = filepath.lower() + + for category, info in FILE_CATEGORIES.items(): + for pattern in info["patterns"]: + if re.search(pattern, filepath_lower): + return category, info["weight"] + + return "medium", 2 # Default category + + +def analyze_diff_for_risks(diff_content: str, filepath: str) -> List[Dict]: + """Analyze diff content for risky patterns.""" + risks = [] + + # Only analyze added lines (starting with +) + added_lines = [ + line[1:] for line in diff_content.split("\n") + if line.startswith("+") and not line.startswith("+++") + ] + + content = "\n".join(added_lines) + + for risk in RISK_PATTERNS: + matches = re.findall(risk["pattern"], content, re.IGNORECASE) + if matches: + risks.append({ + "name": risk["name"], + "severity": risk["severity"], + "message": risk["message"], + "file": filepath, + "count": len(matches) + }) + + return risks + + +def count_changes(diff_content: str) -> Dict[str, int]: + """Count additions and deletions in diff.""" + additions = 0 + deletions = 0 + + for line in diff_content.split("\n"): + if line.startswith("+") and not line.startswith("+++"): + additions += 1 + elif line.startswith("-") and not line.startswith("---"): + deletions += 1 + + return {"additions": additions, "deletions": deletions} + + +def calculate_complexity_score(files: List[Dict], all_risks: List[Dict]) -> int: + """Calculate overall PR complexity score (1-10).""" + score = 0 + + # File count contribution (max 3 points) + file_count = len(files) + if file_count > 20: + score += 3 + elif file_count > 10: + score += 2 + elif file_count > 5: + score += 1 + + # Total changes contribution (max 3 
points) + total_changes = sum(f.get("additions", 0) + f.get("deletions", 0) for f in files) + if total_changes > 500: + score += 3 + elif total_changes > 200: + score += 2 + elif total_changes > 50: + score += 1 + + # Risk severity contribution (max 4 points) + critical_risks = sum(1 for r in all_risks if r["severity"] == "critical") + high_risks = sum(1 for r in all_risks if r["severity"] == "high") + + score += min(2, critical_risks) + score += min(2, high_risks) + + return min(10, max(1, score)) + + +def analyze_commit_messages(repo_path: Path, base: str, head: str) -> Dict: + """Analyze commit messages in the PR.""" + success, output = run_git_command( + ["git", "log", "--oneline", f"{base}...{head}"], + repo_path + ) + + if not success or not output: + return {"commits": 0, "issues": []} + + commits = output.strip().split("\n") + issues = [] + + for commit in commits: + if len(commit) < 10: + continue + + # Check for conventional commit format + message = commit[8:] if len(commit) > 8 else commit # Skip hash + + if not re.match(r"^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\(.+\))?:", message): + issues.append({ + "commit": commit[:7], + "issue": "Does not follow conventional commit format" + }) + + if len(message) > 72: + issues.append({ + "commit": commit[:7], + "issue": "Commit message exceeds 72 characters" + }) + + return { + "commits": len(commits), + "issues": issues + } + + +def analyze_pr( + repo_path: Path, + base: str = "main", + head: str = "HEAD" +) -> Dict: + """Perform complete PR analysis.""" + # Get changed files + changed_files = get_changed_files(repo_path, base, head) + + if not changed_files: + return { + "status": "no_changes", + "message": "No changes detected between branches" + } + + # Analyze each file + all_risks = [] + file_analyses = [] + + for file_info in changed_files: + filepath = file_info["path"] + category, weight = categorize_file(filepath) + + # Get diff for the file + diff = get_file_diff(repo_path, 
filepath, base, head) + changes = count_changes(diff) + risks = analyze_diff_for_risks(diff, filepath) + + all_risks.extend(risks) + + file_analyses.append({ + "path": filepath, + "status": file_info["status"], + "category": category, + "priority_weight": weight, + "additions": changes["additions"], + "deletions": changes["deletions"], + "risks": risks + }) + + # Sort by priority (highest first) + file_analyses.sort(key=lambda x: (-x["priority_weight"], x["path"])) + + # Analyze commits + commit_analysis = analyze_commit_messages(repo_path, base, head) + + # Calculate metrics + complexity = calculate_complexity_score(file_analyses, all_risks) + + total_additions = sum(f["additions"] for f in file_analyses) + total_deletions = sum(f["deletions"] for f in file_analyses) + + return { + "status": "analyzed", + "summary": { + "files_changed": len(file_analyses), + "total_additions": total_additions, + "total_deletions": total_deletions, + "complexity_score": complexity, + "complexity_label": get_complexity_label(complexity), + "commits": commit_analysis["commits"] + }, + "risks": { + "critical": [r for r in all_risks if r["severity"] == "critical"], + "high": [r for r in all_risks if r["severity"] == "high"], + "medium": [r for r in all_risks if r["severity"] == "medium"], + "low": [r for r in all_risks if r["severity"] == "low"] + }, + "files": file_analyses, + "commit_issues": commit_analysis["issues"], + "review_order": [f["path"] for f in file_analyses[:10]] # Top 10 priority files + } + + +def get_complexity_label(score: int) -> str: + """Get human-readable complexity label.""" + if score <= 2: + return "Simple" + elif score <= 4: + return "Moderate" + elif score <= 6: + return "Complex" + elif score <= 8: + return "Very Complex" + else: + return "Critical" + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if analysis["status"] == "no_changes": + print("No changes detected.") + return + + summary = analysis["summary"] + 
risks = analysis["risks"] + + print("=" * 60) + print("PR ANALYSIS REPORT") + print("=" * 60) + + print(f"\nComplexity: {summary['complexity_score']}/10 ({summary['complexity_label']})") + print(f"Files Changed: {summary['files_changed']}") + print(f"Lines: +{summary['total_additions']} / -{summary['total_deletions']}") + print(f"Commits: {summary['commits']}") + + # Risk summary + print("\n--- RISK SUMMARY ---") + print(f"Critical: {len(risks['critical'])}") + print(f"High: {len(risks['high'])}") + print(f"Medium: {len(risks['medium'])}") + print(f"Low: {len(risks['low'])}") + + # Critical and high risks details + if risks["critical"]: + print("\n--- CRITICAL RISKS ---") + for risk in risks["critical"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + if risks["high"]: + print("\n--- HIGH RISKS ---") + for risk in risks["high"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + # Commit message issues + if analysis["commit_issues"]: + print("\n--- COMMIT MESSAGE ISSUES ---") + for issue in analysis["commit_issues"][:5]: + print(f" {issue['commit']}: {issue['issue']}") + + # Review order + print("\n--- SUGGESTED REVIEW ORDER ---") + for i, filepath in enumerate(analysis["review_order"], 1): + file_info = next(f for f in analysis["files"] if f["path"] == filepath) + print(f" {i}. 
[{file_info['category'].upper()}] {filepath}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze pull request for review complexity and risks" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to git repository (default: current directory)" + ) + parser.add_argument( + "--base", "-b", + default="main", + help="Base branch for comparison (default: main)" + ) + parser.add_argument( + "--head", + default="HEAD", + help="Head branch/commit for comparison (default: HEAD)" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + + if not (repo_path / ".git").exists(): + print(f"Error: {repo_path} is not a git repository", file=sys.stderr) + sys.exit(1) + + analysis = analyze_pr(repo_path, args.base, args.head) + + if args.json: + output = json.dumps(analysis, indent=2) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.cline/skills/code-reviewer/scripts/review_report_generator.py b/.cline/skills/code-reviewer/scripts/review_report_generator.py new file mode 100644 index 00000000..94aea69c --- /dev/null +++ b/.cline/skills/code-reviewer/scripts/review_report_generator.py @@ -0,0 +1,505 @@ +#!/usr/bin/env python3 +""" +Review Report Generator + +Generates comprehensive code review reports by combining PR analysis +and code quality findings into structured, actionable reports. + +Usage: + python .cline/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo + python .cline/skills/code-reviewer/scripts/review_report_generator.py . 
--pr-analysis pr_results.json --quality-analysis quality_results.json + python .cline/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo --format markdown --output review.md +""" + +import argparse +import json +import subprocess +import sys +from datetime import datetime +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# Severity weights for prioritization +SEVERITY_WEIGHTS = { + "critical": 100, + "high": 75, + "medium": 50, + "low": 25, + "info": 10 +} + +# Review verdict thresholds +VERDICT_THRESHOLDS = { + "approve": {"max_critical": 0, "max_high": 0, "max_score": 100}, + "approve_with_suggestions": {"max_critical": 0, "max_high": 2, "max_score": 85}, + "request_changes": {"max_critical": 0, "max_high": 5, "max_score": 70}, + "block": {"max_critical": float("inf"), "max_high": float("inf"), "max_score": 0} +} + + +def load_json_file(filepath: str) -> Optional[Dict]: + """Load JSON file if it exists.""" + try: + with open(filepath, "r") as f: + return json.load(f) + except (FileNotFoundError, json.JSONDecodeError): + return None + + +def run_pr_analyzer(repo_path: Path) -> Dict: + """Run the sibling pr_analyzer.py script and return its JSON results.""" + # The analyzer lives in the same directory as this script. + script_path = Path(__file__).parent / "pr_analyzer.py" + if not script_path.exists(): + return {"status": "error", "message": "pr_analyzer.py not found"} + + try: + result = subprocess.run( + [sys.executable, str(script_path), str(repo_path), "--json"], + capture_output=True, + text=True, + timeout=120 + ) + if result.returncode == 0: + return json.loads(result.stdout) + return {"status": "error", "message": result.stderr} + except Exception as e: + return {"status": "error", "message": str(e)} + + +def run_quality_checker(repo_path: Path) -> Dict: + """Run the sibling code_quality_checker.py script and return its JSON results.""" + script_path = 
Path(__file__).parent / "code_quality_checker.py" + if not script_path.exists(): + return {"status": "error", "message": "code_quality_checker.py not found"} + + try: + result = subprocess.run( + [sys.executable, str(script_path), str(repo_path), "--json"], + capture_output=True, + text=True, + timeout=300 + ) + if result.returncode == 0: + return json.loads(result.stdout) + return {"status": "error", "message": result.stderr} + except Exception as e: + return {"status": "error", "message": str(e)} + + +def calculate_review_score(pr_analysis: Dict, quality_analysis: Dict) -> int: + """Calculate overall review score (0-100).""" + score = 100 + + # Deduct for PR risks + if "risks" in pr_analysis: + risks = pr_analysis["risks"] + score -= len(risks.get("critical", [])) * 15 + score -= len(risks.get("high", [])) * 10 + score -= len(risks.get("medium", [])) * 5 + score -= len(risks.get("low", [])) * 2 + + # Deduct for code quality issues + if "issues" in quality_analysis: + issues = quality_analysis["issues"] + score -= len([i for i in issues if i.get("severity") == "critical"]) * 12 + score -= len([i for i in issues if i.get("severity") == "high"]) * 8 + score -= len([i for i in issues if i.get("severity") == "medium"]) * 4 + score -= len([i for i in issues if i.get("severity") == "low"]) * 1 + + # Deduct for complexity + if "summary" in pr_analysis: + complexity = pr_analysis["summary"].get("complexity_score", 0) + if complexity > 7: + score -= 10 + elif complexity > 5: + score -= 5 + + return max(0, min(100, score)) + + +def determine_verdict(score: int, critical_count: int, high_count: int) -> Tuple[str, str]: + """Determine review verdict based on score and issue counts.""" + if critical_count > 0: + return "block", "Critical issues must be resolved before merge" + + if score >= 90 and high_count == 0: + return "approve", "Code meets quality standards" + + if score >= 75 and high_count <= 2: + 
return "approve_with_suggestions", "Minor improvements recommended" + + if score >= 50: + return "request_changes", "Several issues need to be addressed" + + return "block", "Significant issues prevent approval" + + +def generate_findings_list(pr_analysis: Dict, quality_analysis: Dict) -> List[Dict]: + """Combine and prioritize all findings.""" + findings = [] + + # Add PR risk findings + if "risks" in pr_analysis: + for severity, items in pr_analysis["risks"].items(): + for item in items: + findings.append({ + "source": "pr_analysis", + "severity": severity, + "category": item.get("name", "unknown"), + "message": item.get("message", ""), + "file": item.get("file", ""), + "count": item.get("count", 1) + }) + + # Add code quality findings + if "issues" in quality_analysis: + for issue in quality_analysis["issues"]: + findings.append({ + "source": "quality_analysis", + "severity": issue.get("severity", "medium"), + "category": issue.get("type", "unknown"), + "message": issue.get("message", ""), + "file": issue.get("file", ""), + "line": issue.get("line", 0) + }) + + # Sort by severity weight + findings.sort( + key=lambda x: -SEVERITY_WEIGHTS.get(x["severity"], 0) + ) + + return findings + + +def generate_action_items(findings: List[Dict]) -> List[Dict]: + """Generate prioritized action items from findings.""" + action_items = [] + seen_categories = set() + + for finding in findings: + category = finding["category"] + severity = finding["severity"] + + # Group similar issues + if category in seen_categories and severity not in ["critical", "high"]: + continue + + action = { + "priority": "P0" if severity == "critical" else "P1" if severity == "high" else "P2", + "action": get_action_for_category(category, finding), + "severity": severity, + "files_affected": [finding["file"]] if finding.get("file") else [] + } + action_items.append(action) + seen_categories.add(category) + + return action_items[:15] # Top 15 actions + + +def get_action_for_category(category: str, 
finding: Dict) -> str: + """Get actionable recommendation for issue category.""" + actions = { + "hardcoded_secrets": "Remove hardcoded credentials and use environment variables or a secrets manager", + "sql_concatenation": "Use parameterized queries to prevent SQL injection", + "debugger": "Remove debugger statements before merging", + "console_log": "Remove or replace console statements with proper logging", + "todo_fixme": "Address TODO/FIXME comments or create tracking issues", + "disable_eslint": "Address the underlying issue instead of disabling lint rules", + "any_type": "Replace 'any' types with proper type definitions", + "long_function": "Break down function into smaller, focused units", + "god_class": "Split class into smaller, single-responsibility classes", + "too_many_params": "Use parameter objects or builder pattern", + "deep_nesting": "Refactor using early returns, guard clauses, or extraction", + "high_complexity": "Reduce cyclomatic complexity through refactoring", + "missing_error_handling": "Add proper error handling and recovery logic", + "duplicate_code": "Extract duplicate code into shared functions", + "magic_numbers": "Replace magic numbers with named constants", + "large_file": "Consider splitting into multiple smaller modules" + } + return actions.get(category, f"Review and address: {finding.get('message', category)}") + + +def format_markdown_report(report: Dict) -> str: + """Generate markdown-formatted report.""" + lines = [] + + # Header + lines.append("# Code Review Report") + lines.append("") + lines.append(f"**Generated:** {report['metadata']['generated_at']}") + lines.append(f"**Repository:** {report['metadata']['repository']}") + lines.append("") + + # Executive Summary + lines.append("## Executive Summary") + lines.append("") + summary = report["summary"] + verdict = summary["verdict"] + verdict_emoji = { + "approve": "✅", + "approve_with_suggestions": "✅", + "request_changes": "⚠️", + "block": "❌" + }.get(verdict, "❓") + + 
lines.append(f"**Verdict:** {verdict_emoji} {verdict.upper().replace('_', ' ')}") + lines.append(f"**Score:** {summary['score']}/100") + lines.append(f"**Rationale:** {summary['rationale']}") + lines.append("") + + # Issue Counts + lines.append("### Issue Summary") + lines.append("") + lines.append("| Severity | Count |") + lines.append("|----------|-------|") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f"| {severity.capitalize()} | {count} |") + lines.append("") + + # PR Statistics (if available) + if "pr_summary" in report: + pr = report["pr_summary"] + lines.append("### Change Statistics") + lines.append("") + lines.append(f"- **Files Changed:** {pr.get('files_changed', 'N/A')}") + lines.append(f"- **Lines Added:** +{pr.get('total_additions', 0)}") + lines.append(f"- **Lines Removed:** -{pr.get('total_deletions', 0)}") + lines.append(f"- **Complexity:** {pr.get('complexity_label', 'N/A')}") + lines.append("") + + # Action Items + if report.get("action_items"): + lines.append("## Action Items") + lines.append("") + for i, item in enumerate(report["action_items"], 1): + priority = item["priority"] + emoji = "🔴" if priority == "P0" else "🟠" if priority == "P1" else "🟡" + lines.append(f"{i}. 
{emoji} **[{priority}]** {item['action']}") + if item.get("files_affected"): + lines.append(f" - Files: {', '.join(item['files_affected'][:3])}") + lines.append("") + + # Critical Findings + critical_findings = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical_findings: + lines.append("## Critical Issues (Must Fix)") + lines.append("") + for finding in critical_findings: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # High Priority Findings + high_findings = [f for f in report.get("findings", []) if f["severity"] == "high"] + if high_findings: + lines.append("## High Priority Issues") + lines.append("") + for finding in high_findings[:10]: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # Review Order (if available) + if "review_order" in report: + lines.append("## Suggested Review Order") + lines.append("") + for i, filepath in enumerate(report["review_order"][:10], 1): + lines.append(f"{i}. 
`{filepath}`") + lines.append("") + + # Footer + lines.append("---") + lines.append("*Generated by Code Reviewer*") + + return "\n".join(lines) + + +def format_text_report(report: Dict) -> str: + """Generate plain text report.""" + lines = [] + + lines.append("=" * 60) + lines.append("CODE REVIEW REPORT") + lines.append("=" * 60) + lines.append("") + lines.append(f"Generated: {report['metadata']['generated_at']}") + lines.append(f"Repository: {report['metadata']['repository']}") + lines.append("") + + summary = report["summary"] + verdict = summary["verdict"].upper().replace("_", " ") + lines.append(f"VERDICT: {verdict}") + lines.append(f"SCORE: {summary['score']}/100") + lines.append(f"RATIONALE: {summary['rationale']}") + lines.append("") + + lines.append("--- ISSUE SUMMARY ---") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f" {severity.capitalize()}: {count}") + lines.append("") + + if report.get("action_items"): + lines.append("--- ACTION ITEMS ---") + for i, item in enumerate(report["action_items"][:10], 1): + lines.append(f" {i}. 
[{item['priority']}] {item['action']}") + lines.append("") + + critical = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical: + lines.append("--- CRITICAL ISSUES ---") + for f in critical: + lines.append(f" [{f.get('file', 'unknown')}] {f['message']}") + lines.append("") + + lines.append("=" * 60) + + return "\n".join(lines) + + +def generate_report( + repo_path: Path, + pr_analysis: Optional[Dict] = None, + quality_analysis: Optional[Dict] = None +) -> Dict: + """Generate comprehensive review report.""" + # Run analyses if not provided + if pr_analysis is None: + pr_analysis = run_pr_analyzer(repo_path) + + if quality_analysis is None: + quality_analysis = run_quality_checker(repo_path) + + # Generate findings + findings = generate_findings_list(pr_analysis, quality_analysis) + + # Count issues by severity + issue_counts = { + "critical": len([f for f in findings if f["severity"] == "critical"]), + "high": len([f for f in findings if f["severity"] == "high"]), + "medium": len([f for f in findings if f["severity"] == "medium"]), + "low": len([f for f in findings if f["severity"] == "low"]) + } + + # Calculate score and verdict + score = calculate_review_score(pr_analysis, quality_analysis) + verdict, rationale = determine_verdict( + score, + issue_counts["critical"], + issue_counts["high"] + ) + + # Generate action items + action_items = generate_action_items(findings) + + # Build report + report = { + "metadata": { + "generated_at": datetime.now().isoformat(), + "repository": str(repo_path), + "version": "1.0.0" + }, + "summary": { + "score": score, + "verdict": verdict, + "rationale": rationale, + "issue_counts": issue_counts + }, + "findings": findings, + "action_items": action_items + } + + # Add PR summary if available + if pr_analysis.get("status") == "analyzed": + report["pr_summary"] = pr_analysis.get("summary", {}) + report["review_order"] = pr_analysis.get("review_order", []) + + # Add quality summary if available + if 
quality_analysis.get("status") == "analyzed": + report["quality_summary"] = quality_analysis.get("summary", {}) + + return report + + +def main(): + parser = argparse.ArgumentParser( + description="Generate comprehensive code review reports" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to repository (default: current directory)" + ) + parser.add_argument( + "--pr-analysis", + help="Path to pre-computed PR analysis JSON" + ) + parser.add_argument( + "--quality-analysis", + help="Path to pre-computed quality analysis JSON" + ) + parser.add_argument( + "--format", "-f", + choices=["text", "markdown", "json"], + default="text", + help="Output format (default: text)" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output as JSON (shortcut for --format json)" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + if not repo_path.exists(): + print(f"Error: Path does not exist: {repo_path}", file=sys.stderr) + sys.exit(1) + + # Load pre-computed analyses if provided + pr_analysis = None + quality_analysis = None + + if args.pr_analysis: + pr_analysis = load_json_file(args.pr_analysis) + if not pr_analysis: + print(f"Warning: Could not load PR analysis from {args.pr_analysis}") + + if args.quality_analysis: + quality_analysis = load_json_file(args.quality_analysis) + if not quality_analysis: + print(f"Warning: Could not load quality analysis from {args.quality_analysis}") + + # Generate report + report = generate_report(repo_path, pr_analysis, quality_analysis) + + # Format output + output_format = "json" if args.json else args.format + + if output_format == "json": + output = json.dumps(report, indent=2) + elif output_format == "markdown": + output = format_markdown_report(report) + else: + output = format_text_report(report) + + # Write or print output + if args.output: + with open(args.output, "w") as f: + 
f.write(output) + print(f"Report written to {args.output}") + else: + print(output) + + +if __name__ == "__main__": + main() diff --git a/.continue/skills/code-reviewer/SKILL.md b/.continue/skills/code-reviewer/SKILL.md new file mode 100644 index 00000000..a11aaf0b --- /dev/null +++ b/.continue/skills/code-reviewer/SKILL.md @@ -0,0 +1,177 @@ +--- +name: code-reviewer +description: Code review automation for TypeScript, JavaScript, Python, Go, Swift, Kotlin. Analyzes PRs for complexity and risk, checks code quality for SOLID violations and code smells, generates review reports. Use when reviewing pull requests, analyzing code quality, identifying issues, generating review checklists. +--- + +# Code Reviewer + +Automated code review tools for analyzing pull requests, detecting code quality issues, and generating review reports. + +--- + +## Table of Contents + +- [Tools](#tools) + - [PR Analyzer](#pr-analyzer) + - [Code Quality Checker](#code-quality-checker) + - [Review Report Generator](#review-report-generator) +- [Reference Guides](#reference-guides) +- [Languages Supported](#languages-supported) + +--- + +## Tools + +### PR Analyzer + +Analyzes git diff between branches to assess review complexity and identify risks. + +```bash +# Analyze current branch against main +python scripts/pr_analyzer.py /path/to/repo + +# Compare specific branches +python scripts/pr_analyzer.py . 
--base main --head feature-branch + +# JSON output for integration +python scripts/pr_analyzer.py /path/to/repo --json +``` + +**What it detects:** +- Hardcoded secrets (passwords, API keys, tokens) +- SQL injection patterns (string concatenation in queries) +- Debug statements (debugger, console.log) +- ESLint rule disabling +- TypeScript `any` types +- TODO/FIXME comments + +**Output includes:** +- Complexity score (1-10) +- Risk categorization (critical, high, medium, low) +- File prioritization for review order +- Commit message validation + +--- + +### Code Quality Checker + +Analyzes source code for structural issues, code smells, and SOLID violations. + +```bash +# Analyze a directory +python scripts/code_quality_checker.py /path/to/code + +# Analyze specific language +python scripts/code_quality_checker.py . --language python + +# JSON output +python scripts/code_quality_checker.py /path/to/code --json +``` + +**What it detects:** +- Long functions (>50 lines) +- Large files (>500 lines) +- God classes (>20 methods) +- Deep nesting (>4 levels) +- Too many parameters (>5) +- High cyclomatic complexity +- Missing error handling +- Unused imports +- Magic numbers + +**Thresholds:** + +| Issue | Threshold | +|-------|-----------| +| Long function | >50 lines | +| Large file | >500 lines | +| God class | >20 methods | +| Too many params | >5 | +| Deep nesting | >4 levels | +| High complexity | >10 branches | + +--- + +### Review Report Generator + +Combines PR analysis and code quality findings into structured review reports. + +```bash +# Generate report for current repo +python scripts/review_report_generator.py /path/to/repo + +# Markdown output +python scripts/review_report_generator.py . --format markdown --output review.md + +# Use pre-computed analyses +python scripts/review_report_generator.py . 
\ + --pr-analysis pr_results.json \ + --quality-analysis quality_results.json +``` + +**Report includes:** +- Review verdict (approve, request changes, block) +- Score (0-100) +- Prioritized action items +- Issue summary by severity +- Suggested review order + +**Verdicts:** + +| Score | Verdict | +|-------|---------| +| 90+ with no high issues | Approve | +| 75+ with ≤2 high issues | Approve with suggestions | +| 50-74 | Request changes | +| <50 or critical issues | Block | + +--- + +## Reference Guides + +### Code Review Checklist +`.continue/skills/code-reviewer/references/code_review_checklist.md` + +Systematic checklists covering: +- Pre-review checks (build, tests, PR hygiene) +- Correctness (logic, data handling, error handling) +- Security (input validation, injection prevention) +- Performance (efficiency, caching, scalability) +- Maintainability (code quality, naming, structure) +- Testing (coverage, quality, mocking) +- Language-specific checks + +### Coding Standards +`.continue/skills/code-reviewer/references/coding_standards.md` + +Language-specific standards for: +- TypeScript (type annotations, null safety, async/await) +- JavaScript (declarations, patterns, modules) +- Python (type hints, exceptions, class design) +- Go (error handling, structs, concurrency) +- Swift (optionals, protocols, errors) +- Kotlin (null safety, data classes, coroutines) + +### Common Antipatterns +`.continue/skills/code-reviewer/references/common_antipatterns.md` + +Antipattern catalog with examples and fixes: +- Structural (god class, long method, deep nesting) +- Logic (boolean blindness, stringly typed code) +- Security (SQL injection, hardcoded credentials) +- Performance (N+1 queries, unbounded collections) +- Testing (duplication, testing implementation) +- Async (floating promises, callback hell) + +--- + +## Languages Supported + +| Language | Extensions | +|----------|------------| +| Python | `.py` | +| TypeScript | `.ts`, `.tsx` | +| JavaScript | `.js`, `.jsx`, 
`.mjs` | +| Go | `.go` | +| Swift | `.swift` | +| Kotlin | `.kt`, `.kts` | \ No newline at end of file diff --git a/.continue/skills/code-reviewer/references/code_review_checklist.md b/.continue/skills/code-reviewer/references/code_review_checklist.md new file mode 100644 index 00000000..b7bd0867 --- /dev/null +++ b/.continue/skills/code-reviewer/references/code_review_checklist.md @@ -0,0 +1,270 @@ +# Code Review Checklist + +Structured checklists for systematic code review across different aspects. + +--- + +## Table of Contents + +- [Pre-Review Checks](#pre-review-checks) +- [Correctness](#correctness) +- [Security](#security) +- [Performance](#performance) +- [Maintainability](#maintainability) +- [Testing](#testing) +- [Documentation](#documentation) +- [Language-Specific Checks](#language-specific-checks) + +--- + +## Pre-Review Checks + +Before diving into code, verify these basics: + +### Build and Tests +- [ ] Code compiles without errors +- [ ] All existing tests pass +- [ ] New tests are included for new functionality +- [ ] No unintended files included (build artifacts, IDE configs) + +### PR Hygiene +- [ ] PR has clear title and description +- [ ] Changes are scoped appropriately (not too large) +- [ ] Commits follow conventional commit format +- [ ] Branch is up to date with base branch + +### Scope Verification +- [ ] Changes match the stated purpose +- [ ] No unrelated changes bundled in +- [ ] Breaking changes are documented +- [ ] Migration path provided if needed + +--- + +## Correctness + +### Logic +- [ ] Algorithm implements requirements correctly +- [ ] Edge cases handled (null, empty, boundary values) +- [ ] Off-by-one errors checked +- [ ] Correct operators used (== vs ===, & vs &&) +- [ ] Loop termination conditions correct +- [ ] Recursion has proper base cases + +### Data Handling +- [ ] Data types appropriate for the use case +- [ ] Numeric overflow/underflow considered +- [ ] Date/time handling accounts for timezones +- [ ] Unicode and 
internationalization handled +- [ ] Data validation at entry points + +### State Management +- [ ] State transitions are valid +- [ ] Race conditions addressed +- [ ] Concurrent access handled correctly +- [ ] State cleanup on errors/exit + +### Error Handling +- [ ] Errors caught at appropriate levels +- [ ] Error messages are actionable +- [ ] Errors don't expose sensitive information +- [ ] Recovery or graceful degradation implemented +- [ ] Resources cleaned up in error paths + +--- + +## Security + +### Input Validation +- [ ] All user input validated and sanitized +- [ ] Input length limits enforced +- [ ] File uploads validated (type, size, content) +- [ ] URL parameters validated + +### Injection Prevention +- [ ] SQL queries parameterized +- [ ] Command execution uses safe APIs +- [ ] HTML output escaped to prevent XSS +- [ ] LDAP queries properly escaped +- [ ] XML parsing disables external entities + +### Authentication & Authorization +- [ ] Authentication required for protected resources +- [ ] Authorization checked before operations +- [ ] Session management secure +- [ ] Password handling follows best practices +- [ ] Token expiration implemented + +### Data Protection +- [ ] Sensitive data encrypted at rest +- [ ] Sensitive data encrypted in transit +- [ ] PII handled according to policy +- [ ] Secrets not hardcoded +- [ ] Logs don't contain sensitive data + +### API Security +- [ ] Rate limiting implemented +- [ ] CORS configured correctly +- [ ] CSRF protection in place +- [ ] API keys/tokens secured +- [ ] Endpoints use HTTPS + +--- + +## Performance + +### Efficiency +- [ ] Appropriate data structures used +- [ ] Algorithms have acceptable complexity +- [ ] Database queries are optimized +- [ ] N+1 query problems avoided +- [ ] Indexes used where beneficial + +### Resource Usage +- [ ] Memory usage bounded +- [ ] No memory leaks +- [ ] File handles properly closed +- [ ] Database connections pooled +- [ ] Network calls minimized + +### Caching 
+- [ ] Appropriate caching strategy +- [ ] Cache invalidation handled +- [ ] Cache keys are unique and predictable +- [ ] TTL values appropriate + +### Scalability +- [ ] Horizontal scaling considered +- [ ] Bottlenecks identified +- [ ] Async processing for long operations +- [ ] Batch operations where appropriate + +--- + +## Maintainability + +### Code Quality +- [ ] Functions/methods have single responsibility +- [ ] Classes follow SOLID principles +- [ ] Code is DRY (Don't Repeat Yourself) +- [ ] No dead code or commented-out code +- [ ] Magic numbers replaced with constants + +### Naming +- [ ] Names are descriptive and consistent +- [ ] Naming follows project conventions +- [ ] No abbreviations that obscure meaning +- [ ] Boolean variables/functions have is/has/can prefix + +### Structure +- [ ] Functions are appropriately sized (<50 lines preferred) +- [ ] Nesting depth is reasonable (<4 levels) +- [ ] Related code is grouped together +- [ ] Dependencies are minimal and explicit + +### Readability +- [ ] Code is self-documenting where possible +- [ ] Complex logic has explanatory comments +- [ ] Formatting is consistent +- [ ] No overly clever or obscure code + +--- + +## Testing + +### Coverage +- [ ] New code has unit tests +- [ ] Critical paths have integration tests +- [ ] Edge cases are tested +- [ ] Error conditions are tested + +### Quality +- [ ] Tests are independent +- [ ] Tests have clear assertions +- [ ] Test names describe what is tested +- [ ] Tests don't depend on external state + +### Mocking +- [ ] External dependencies are mocked +- [ ] Mocks are realistic +- [ ] Mock setup is not excessive + +--- + +## Documentation + +### Code Documentation +- [ ] Public APIs are documented +- [ ] Complex algorithms explained +- [ ] Non-obvious decisions documented +- [ ] TODO/FIXME comments have context + +### External Documentation +- [ ] README updated if needed +- [ ] API documentation updated +- [ ] Changelog updated +- [ ] Migration guides 
provided + +--- + +## Language-Specific Checks + +### TypeScript/JavaScript +- [ ] Types are explicit (avoid `any`) +- [ ] Null checks present (`?.`, `??`) +- [ ] Async/await errors handled +- [ ] No floating promises +- [ ] Memory leaks from closures checked + +### Python +- [ ] Type hints used for public APIs +- [ ] Context managers for resources (`with` statements) +- [ ] Exception handling is specific (not bare `except`) +- [ ] No mutable default arguments +- [ ] List comprehensions used appropriately + +### Go +- [ ] Errors checked and handled +- [ ] Goroutine leaks prevented +- [ ] Context propagation correct +- [ ] Defer statements in right order +- [ ] Interfaces minimal + +### Swift +- [ ] Optionals handled safely +- [ ] Memory management correct (weak/unowned) +- [ ] Error handling uses Result or throws +- [ ] Access control appropriate +- [ ] Codable implementation correct + +### Kotlin +- [ ] Null safety leveraged +- [ ] Coroutine cancellation handled +- [ ] Data classes used appropriately +- [ ] Extension functions don't obscure behavior +- [ ] Sealed classes for state + +--- + +## Review Process Tips + +### Before Approving +1. Verify all critical checks passed +2. Confirm tests are adequate +3. Consider deployment impact +4. Check for any security concerns +5. 
Ensure documentation is updated + +### Providing Feedback +- Be specific about issues +- Explain why something is problematic +- Suggest alternatives when possible +- Distinguish blockers from suggestions +- Acknowledge good patterns + +### When to Block +- Security vulnerabilities present +- Critical logic errors +- No tests for risky changes +- Breaking changes without migration +- Significant performance regressions diff --git a/.continue/skills/code-reviewer/references/coding_standards.md b/.continue/skills/code-reviewer/references/coding_standards.md new file mode 100644 index 00000000..9fbc6a06 --- /dev/null +++ b/.continue/skills/code-reviewer/references/coding_standards.md @@ -0,0 +1,555 @@ +# Coding Standards + +Language-specific coding standards and conventions for code review. + +--- + +## Table of Contents + +- [Universal Principles](#universal-principles) +- [TypeScript Standards](#typescript-standards) +- [JavaScript Standards](#javascript-standards) +- [Python Standards](#python-standards) +- [Go Standards](#go-standards) +- [Swift Standards](#swift-standards) +- [Kotlin Standards](#kotlin-standards) + +--- + +## Universal Principles + +These apply across all languages. 
+ +### Naming Conventions + +| Element | Convention | Example | +|---------|------------|---------| +| Variables | camelCase (JS/TS), snake_case (Python/Go) | `userName`, `user_name` | +| Constants | SCREAMING_SNAKE_CASE | `MAX_RETRY_COUNT` | +| Functions | camelCase (JS/TS), snake_case (Python) | `getUserById`, `get_user_by_id` | +| Classes | PascalCase | `UserRepository` | +| Interfaces | PascalCase, optionally prefixed | `IUserService` or `UserService` | +| Private members | Prefix with underscore or use access modifiers | `_internalState` | + +### Function Design + +``` +Good functions: +- Do one thing well +- Have descriptive names (verb + noun) +- Take 3 or fewer parameters +- Return early for error cases +- Stay under 50 lines +``` + +### Error Handling + +``` +Good error handling: +- Catch specific errors, not generic exceptions +- Log with context (what, where, why) +- Clean up resources in error paths +- Don't swallow errors silently +- Provide actionable error messages +``` + +--- + +## TypeScript Standards + +### Type Annotations + +```typescript +// Avoid 'any' - use unknown for truly unknown types +function processData(data: unknown): ProcessedResult { + if (isValidData(data)) { + return transform(data); + } + throw new Error('Invalid data format'); +} + +// Use explicit return types for public APIs +export function calculateTotal(items: CartItem[]): number { + return items.reduce((sum, item) => sum + item.price, 0); +} + +// Use type guards for runtime checks +function isUser(obj: unknown): obj is User { + return ( + typeof obj === 'object' && + obj !== null && + 'id' in obj && + 'email' in obj + ); +} +``` + +### Null Safety + +```typescript +// Use optional chaining and nullish coalescing +const userName = user?.profile?.name ?? 
'Anonymous'; + +// Be explicit about nullable types +interface Config { + timeout: number; + retries?: number; // Optional + fallbackUrl: string | null; // Explicitly nullable +} + +// Use assertion functions for validation +function assertDefined<T>(value: T | null | undefined): asserts value is T { + if (value === null || value === undefined) { + throw new Error('Value is not defined'); + } +} +``` + +### Async/Await + +```typescript +// Always handle errors in async functions +async function fetchUser(id: string): Promise<User> { + try { + const response = await api.get(`/users/${id}`); + return response.data; + } catch (error) { + logger.error('Failed to fetch user', { id, error }); + throw new UserFetchError(id, error); + } +} + +// Use Promise.all for parallel operations +async function loadDashboard(userId: string): Promise<Dashboard> { + const [profile, stats, notifications] = await Promise.all([ + fetchProfile(userId), + fetchStats(userId), + fetchNotifications(userId) + ]); + return { profile, stats, notifications }; +} +``` + +### React/Component Standards + +```typescript +// Use explicit prop types +interface ButtonProps { + label: string; + onClick: () => void; + variant?: 'primary' | 'secondary'; + disabled?: boolean; +} + +// Prefer functional components with hooks +function Button({ label, onClick, variant = 'primary', disabled = false }: ButtonProps) { + return ( + <button className={variant} onClick={onClick} disabled={disabled}> + {label} + </button> + ); +} + +// Use custom hooks for reusable logic +function useDebounce<T>(value: T, delay: number): T { + const [debouncedValue, setDebouncedValue] = useState<T>(value); + + useEffect(() => { + const timer = setTimeout(() => setDebouncedValue(value), delay); + return () => clearTimeout(timer); + }, [value, delay]); + + return debouncedValue; +} +``` + +--- + +## JavaScript Standards + +### Variable Declarations + +```javascript +// Use const by default, let when reassignment needed +const MAX_ITEMS = 100; +let currentCount = 0; + +// Never use var +// var is function-scoped and hoisted, leading to bugs +``` +
+### Object and Array Patterns + +```javascript +// Use object destructuring +const { name, email, role = 'user' } = user; + +// Use spread for immutable updates +const updatedUser = { ...user, lastLogin: new Date() }; +const updatedList = [...items, newItem]; + +// Use array methods over loops +const activeUsers = users.filter(u => u.isActive); +const emails = users.map(u => u.email); +const total = orders.reduce((sum, o) => sum + o.amount, 0); +``` + +### Module Patterns + +```javascript +// Use named exports for utilities +export function formatDate(date) { ... } +export function parseDate(str) { ... } + +// Use default export for main component/class +export default class UserService { ... } + +// Group related exports +export { formatDate, parseDate, isValidDate } from './dateUtils'; +``` + +--- + +## Python Standards + +### Type Hints (PEP 484) + +```python +from typing import Optional, List, Dict + +def get_user(user_id: int) -> Optional[User]: + """Fetch user by ID, returns None if not found.""" + return db.query(User).filter(User.id == user_id).first() + +def process_items(items: List[str]) -> Dict[str, int]: + """Count occurrences of each item.""" + return {item: items.count(item) for item in set(items)} + +def send_notification( + user: User, + message: str, + *, + priority: str = "normal", + channels: Optional[List[str]] = None +) -> bool: + """Send notification to user via specified channels.""" + channels = channels or ["email"] + # Implementation +``` + +### Exception Handling + +```python +# Catch specific exceptions +try: + result = api_client.fetch_data(endpoint) +except ConnectionError as e: + logger.warning(f"Connection failed: {e}") + return cached_data +except TimeoutError as e: + logger.error(f"Request timed out: {e}") + raise ServiceUnavailableError() from e + +# Use context managers for resources +with open(filepath, 'r') as f: + data = json.load(f) + +# Custom exceptions should be informative +class ValidationError(Exception): + def 
__init__(self, field: str, message: str): + self.field = field + self.message = message + super().__init__(f"{field}: {message}") +``` + +### Class Design + +```python +from dataclasses import dataclass +from abc import ABC, abstractmethod + +# Use dataclasses for data containers +@dataclass +class UserDTO: + id: int + email: str + name: str + is_active: bool = True + +# Use ABC for interfaces +class Repository(ABC): + @abstractmethod + def find_by_id(self, id: int) -> Optional[Entity]: + pass + + @abstractmethod + def save(self, entity: Entity) -> Entity: + pass + +# Use properties for computed attributes +class Order: + def __init__(self, items: List[OrderItem]): + self._items = items + + @property + def total(self) -> Decimal: + return sum(item.price * item.quantity for item in self._items) +``` + +--- + +## Go Standards + +### Error Handling + +```go +// Always check errors +file, err := os.Open(filename) +if err != nil { + return fmt.Errorf("failed to open %s: %w", filename, err) +} +defer file.Close() + +// Use custom error types for specific cases +type ValidationError struct { + Field string + Message string +} + +func (e *ValidationError) Error() string { + return fmt.Sprintf("%s: %s", e.Field, e.Message) +} + +// Wrap errors with context +if err := db.Query(query); err != nil { + return fmt.Errorf("query failed for user %d: %w", userID, err) +} +``` + +### Struct Design + +```go +// Use unexported fields with exported methods +type UserService struct { + repo UserRepository + cache Cache + logger Logger +} + +// Constructor functions for initialization +func NewUserService(repo UserRepository, cache Cache, logger Logger) *UserService { + return &UserService{ + repo: repo, + cache: cache, + logger: logger, + } +} + +// Keep interfaces small +type Reader interface { + Read(p []byte) (n int, err error) +} + +type Writer interface { + Write(p []byte) (n int, err error) +} +``` + +### Concurrency + +```go +// Use context for cancellation +func fetchData(ctx 
context.Context, url string) ([]byte, error) { + req, err := http.NewRequestWithContext(ctx, "GET", url, nil) + if err != nil { + return nil, err + } + // ... +} + +// Use channels for communication +func worker(jobs <-chan Job, results chan<- Result) { + for job := range jobs { + result := process(job) + results <- result + } +} + +// Use sync.WaitGroup for coordination +var wg sync.WaitGroup +for _, item := range items { + wg.Add(1) + go func(i Item) { + defer wg.Done() + processItem(i) + }(item) +} +wg.Wait() +``` + +--- + +## Swift Standards + +### Optionals + +```swift +// Use optional binding +if let user = fetchUser(id: userId) { + displayProfile(user) +} + +// Use guard for early exit +guard let data = response.data else { + throw NetworkError.noData +} + +// Use nil coalescing for defaults +let displayName = user.nickname ?? user.email + +// Avoid force unwrapping except in tests +// BAD: let name = user.name! +// GOOD: guard let name = user.name else { return } +``` + +### Protocol-Oriented Design + +```swift +// Define protocols with minimal requirements +protocol Identifiable { + var id: String { get } +} + +protocol Persistable: Identifiable { + func save() throws + static func find(by id: String) -> Self? 
+} + +// Use protocol extensions for default implementations +extension Persistable { + func save() throws { + try Storage.shared.save(self) + } +} + +// Prefer composition over inheritance +struct User: Identifiable, Codable { + let id: String + var name: String + var email: String +} +``` + +### Error Handling + +```swift +// Define domain-specific errors +enum AuthError: Error { + case invalidCredentials + case tokenExpired + case networkFailure(underlying: Error) +} + +// Use Result type for async operations +func authenticate( + email: String, + password: String, + completion: @escaping (Result<User, AuthError>) -> Void +) + +// Use throws for synchronous operations +func validate(_ input: String) throws -> ValidatedInput { + guard !input.isEmpty else { + throw ValidationError.emptyInput + } + return ValidatedInput(value: input) +} +``` + +--- + +## Kotlin Standards + +### Null Safety + +```kotlin +// Use nullable types explicitly +fun findUser(id: Int): User? { + return userRepository.find(id) +} + +// Use safe calls and elvis operator +val name = user?.profile?.name ?: "Unknown" + +// Use let for null checks with side effects +user?.let { activeUser -> + sendWelcomeEmail(activeUser.email) + logActivity(activeUser.id) +} + +// Use require/check for validation +fun processPayment(amount: Double) { + require(amount > 0) { "Amount must be positive: $amount" } + // Process +} +``` + +### Data Classes and Sealed Classes + +```kotlin +// Use data classes for DTOs +data class UserDTO( + val id: Int, + val email: String, + val name: String, + val isActive: Boolean = true +) + +// Use sealed classes for state +sealed class Result<out T> { + data class Success<T>(val data: T) : Result<T>() + data class Error(val message: String, val cause: Throwable?
= null) : Result<Nothing>() + object Loading : Result<Nothing>() +} + +// Pattern matching with when +fun handleResult(result: Result<User>) = when (result) { + is Result.Success -> showUser(result.data) + is Result.Error -> showError(result.message) + Result.Loading -> showLoading() +} +``` + +### Coroutines + +```kotlin +// Use structured concurrency +suspend fun loadDashboard(): Dashboard = coroutineScope { + val profile = async { fetchProfile() } + val stats = async { fetchStats() } + val notifications = async { fetchNotifications() } + + Dashboard( + profile = profile.await(), + stats = stats.await(), + notifications = notifications.await() + ) +} + +// Handle cancellation +suspend fun fetchWithRetry(url: String): Response { + repeat(3) { attempt -> + try { + return httpClient.get(url) + } catch (e: IOException) { + if (attempt == 2) throw e + delay(1000L * (attempt + 1)) + } + } + throw IllegalStateException("Unreachable") +} +``` diff --git a/.continue/skills/code-reviewer/references/common_antipatterns.md b/.continue/skills/code-reviewer/references/common_antipatterns.md new file mode 100644 index 00000000..26045452 --- /dev/null +++ b/.continue/skills/code-reviewer/references/common_antipatterns.md @@ -0,0 +1,739 @@ +# Common Antipatterns + +Code antipatterns to identify during review, with examples and fixes. + +--- + +## Table of Contents + +- [Structural Antipatterns](#structural-antipatterns) +- [Logic Antipatterns](#logic-antipatterns) +- [Security Antipatterns](#security-antipatterns) +- [Performance Antipatterns](#performance-antipatterns) +- [Testing Antipatterns](#testing-antipatterns) +- [Async Antipatterns](#async-antipatterns) + +--- + +## Structural Antipatterns + +### God Class + +A class that does too much and knows too much. + +```typescript +// BAD: God class handling everything +class UserManager { + createUser(data: UserData) { ... } + updateUser(id: string, data: UserData) { ... } + deleteUser(id: string) { ...
} + sendEmail(userId: string, content: string) { ... } + generateReport(userId: string) { ... } + validatePassword(password: string) { ... } + hashPassword(password: string) { ... } + uploadAvatar(userId: string, file: File) { ... } + resizeImage(file: File) { ... } + logActivity(userId: string, action: string) { ... } + // 50 more methods... +} + +// GOOD: Single responsibility classes +class UserRepository { + create(data: UserData): User { ... } + update(id: string, data: Partial<UserData>): User { ... } + delete(id: string): void { ... } +} + +class EmailService { + send(to: string, content: string): void { ... } +} + +class PasswordService { + validate(password: string): ValidationResult { ... } + hash(password: string): string { ... } +} +``` + +**Detection:** Class has >20 methods, >500 lines, or handles unrelated concerns. + +--- + +### Long Method + +Functions that do too much and are hard to understand. + +```python +# BAD: Long method doing everything +def process_order(order_data): + # Validate order (20 lines) + if not order_data.get('items'): + raise ValueError('No items') + if not order_data.get('customer_id'): + raise ValueError('No customer') + # ... more validation + + # Calculate totals (30 lines) + subtotal = 0 + for item in order_data['items']: + price = get_product_price(item['product_id']) + subtotal += price * item['quantity'] + # ... tax calculation, discounts + + # Process payment (40 lines) + payment_result = payment_gateway.charge(...) + # ... handle payment errors + + # Create order record (20 lines) + order = Order.create(...) + + # Send notifications (20 lines) + send_order_confirmation(...) + notify_warehouse(...)
+ + return order + +# GOOD: Composed of focused functions +def process_order(order_data): + validate_order(order_data) + totals = calculate_order_totals(order_data) + payment = process_payment(order_data['customer_id'], totals) + order = create_order_record(order_data, totals, payment) + send_order_notifications(order) + return order +``` + +**Detection:** Function >50 lines or requires scrolling to read. + +--- + +### Deep Nesting + +Excessive indentation making code hard to follow. + +```javascript +// BAD: Deep nesting +function processData(data) { + if (data) { + if (data.items) { + if (data.items.length > 0) { + for (const item of data.items) { + if (item.isValid) { + if (item.type === 'premium') { + if (item.price > 100) { + // Finally do something + processItem(item); + } + } + } + } + } + } + } +} + +// GOOD: Early returns and guard clauses +function processData(data) { + if (!data?.items?.length) { + return; + } + + const premiumItems = data.items.filter( + item => item.isValid && item.type === 'premium' && item.price > 100 + ); + + premiumItems.forEach(processItem); +} +``` + +**Detection:** Indentation >4 levels deep. + +--- + +### Magic Numbers and Strings + +Hard-coded values without explanation. + +```go +// BAD: Magic numbers +func calculateDiscount(total float64, userType int) float64 { + if userType == 1 { + return total * 0.15 + } else if userType == 2 { + return total * 0.25 + } + return total * 0.05 +} + +// GOOD: Named constants +const ( + UserTypeRegular = 1 + UserTypePremium = 2 + + DiscountRegular = 0.05 + DiscountStandard = 0.15 + DiscountPremium = 0.25 +) + +func calculateDiscount(total float64, userType int) float64 { + switch userType { + case UserTypePremium: + return total * DiscountPremium + case UserTypeRegular: + return total * DiscountStandard + default: + return total * DiscountRegular + } +} +``` + +**Detection:** Literal numbers (except 0, 1) or repeated string literals. 
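As a rough illustration, the magic-number heuristic above can be sketched as a small line scanner in the spirit of the bundled `code_quality_checker.py`. This is a simplified sketch only: the regex, the comment handling, and the "skip single digits" rule are assumptions, not the shipped script's exact logic.

```python
import re

# Heuristic: standalone integers with 2+ digits, or any float literal.
# Assumption: skipping 0, 1, and other single digits avoids most
# index/loop-counter false positives; the real checker may differ.
MAGIC_NUMBER = re.compile(r"(?<![\w.])(\d{2,}|\d+\.\d+)(?![\w.])")

def find_magic_numbers(source: str) -> list:
    """Return (line_number, literal) pairs for suspected magic numbers."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        code = line.split("#", 1)[0]  # crude: drop Python-style comments
        for match in MAGIC_NUMBER.finditer(code):
            hits.append((lineno, match.group(1)))
    return hits

sample = "def discount(total):\n    return total * 0.15  # magic rate\n"
print(find_magic_numbers(sample))  # → [(2, '0.15')]
```

A real implementation would also exclude string literals and named-constant definitions, which is where most of the complexity lives.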
+ +--- + +### Primitive Obsession + +Using primitives instead of small objects. + +```typescript +// BAD: Primitives everywhere +function createUser( + name: string, + email: string, + phone: string, + street: string, + city: string, + zipCode: string, + country: string +): User { ... } + +// GOOD: Value objects +interface Address { + street: string; + city: string; + zipCode: string; + country: string; +} + +interface ContactInfo { + email: string; + phone: string; +} + +function createUser( + name: string, + contact: ContactInfo, + address: Address +): User { ... } +``` + +**Detection:** Functions with >4 parameters of same type, or related primitives always passed together. + +--- + +## Logic Antipatterns + +### Boolean Blindness + +Passing booleans that make code unreadable at call sites. + +```swift +// BAD: What do these booleans mean? +user.configure(true, false, true, false) + +// GOOD: Named parameters or option objects +user.configure( + sendWelcomeEmail: true, + requireVerification: false, + enableNotifications: true, + isAdmin: false +) + +// Or use an options struct +struct UserConfiguration { + var sendWelcomeEmail: Bool = true + var requireVerification: Bool = false + var enableNotifications: Bool = true + var isAdmin: Bool = false +} + +user.configure(UserConfiguration()) +``` + +**Detection:** Function calls with multiple boolean literals. + +--- + +### Null Returns for Collections + +Returning null instead of empty collections. + +```kotlin +// BAD: Returning null +fun findUsersByRole(role: String): List<User>? { + val users = repository.findByRole(role) + return if (users.isEmpty()) null else users +} + +// Caller must handle null +val users = findUsersByRole("admin") +if (users != null) { + users.forEach { ... } +} + +// GOOD: Return empty collection +fun findUsersByRole(role: String): List<User> { + return repository.findByRole(role) +} + +// Caller can iterate directly +findUsersByRole("admin").forEach { ...
} +``` + +**Detection:** Functions returning nullable collections. + +--- + +### Stringly Typed Code + +Using strings where enums or types should be used. + +```python +# BAD: String-based logic +def handle_event(event_type: str, data: dict): + if event_type == "user_created": + handle_user_created(data) + elif event_type == "user_updated": + handle_user_updated(data) + elif event_type == "user_dleted": # Typo won't be caught + handle_user_deleted(data) + +# GOOD: Enum-based +from enum import Enum + +class EventType(Enum): + USER_CREATED = "user_created" + USER_UPDATED = "user_updated" + USER_DELETED = "user_deleted" + +def handle_event(event_type: EventType, data: dict): + handlers = { + EventType.USER_CREATED: handle_user_created, + EventType.USER_UPDATED: handle_user_updated, + EventType.USER_DELETED: handle_user_deleted, + } + handlers[event_type](data) +``` + +**Detection:** String comparisons for type/status/category values. + +--- + +## Security Antipatterns + +### SQL Injection + +String concatenation in SQL queries. + +```javascript +// BAD: String concatenation +const query = `SELECT * FROM users WHERE id = ${userId}`; +db.query(query); + +// BAD: String templates still vulnerable +const query = `SELECT * FROM users WHERE name = '${userName}'`; + +// GOOD: Parameterized queries +const query = 'SELECT * FROM users WHERE id = $1'; +db.query(query, [userId]); + +// GOOD: Using ORM safely +User.findOne({ where: { id: userId } }); +``` + +**Detection:** String concatenation or template literals with SQL keywords. + +--- + +### Hardcoded Credentials + +Secrets in source code. 
+ +```python +# BAD: Hardcoded secrets +API_KEY = "sk-abc123xyz789" +DATABASE_URL = "postgresql://admin:password123@prod-db.internal:5432/app" + +# GOOD: Environment variables +import os + +API_KEY = os.environ["API_KEY"] +DATABASE_URL = os.environ["DATABASE_URL"] + +# GOOD: Secrets manager +from aws_secretsmanager import get_secret + +API_KEY = get_secret("api-key") +``` + +**Detection:** Variables named `password`, `secret`, `key`, `token` with string literals. + +--- + +### Unsafe Deserialization + +Deserializing untrusted data without validation. + +```python +# BAD: Binary serialization from untrusted source can execute arbitrary code +# Examples: Python's binary serialization, yaml.load without SafeLoader + +# GOOD: Use safe alternatives +import json + +def load_data(file_path): + with open(file_path, 'r') as f: + return json.load(f) + +# GOOD: Use SafeLoader for YAML +import yaml + +with open('config.yaml') as f: + config = yaml.safe_load(f) +``` + +**Detection:** Binary deserialization functions, yaml.load without safe loader, dynamic code execution on external data. + +--- + +### Missing Input Validation + +Trusting user input without validation. + +```typescript +// BAD: No validation +app.post('/user', (req, res) => { + const user = db.create({ + name: req.body.name, + email: req.body.email, + role: req.body.role // User can set themselves as admin! + }); + res.json(user); +}); + +// GOOD: Validate and sanitize +import { z } from 'zod'; + +const CreateUserSchema = z.object({ + name: z.string().min(1).max(100), + email: z.string().email(), + // role is NOT accepted from input +}); + +app.post('/user', (req, res) => { + const validated = CreateUserSchema.parse(req.body); + const user = db.create({ + ...validated, + role: 'user' // Default role, not from input + }); + res.json(user); +}); +``` + +**Detection:** Request body/params used directly without validation schema. 
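The **Detection** heuristics in this section (e.g. suspicious variable names bound to string literals) can be approximated with a single pattern. This is a minimal sketch assuming a line-by-line scan; the keyword list and the minimum literal length are illustrative and not the exact rules used by the bundled `pr_analyzer.py`.

```python
import re

# Assumed heuristic: an identifier containing password/secret/api key/token,
# assigned a non-trivial quoted string literal.
SECRET_ASSIGNMENT = re.compile(
    r"""(?ix)
    \b \w* (password | secret | api_?key | token) \w*
    \s* [:=] \s*
    ['"] [^'"]{4,} ['"]
    """
)

def find_hardcoded_secrets(source: str) -> list:
    """Return line numbers that look like hardcoded credentials."""
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if SECRET_ASSIGNMENT.search(line)
    ]

code = 'API_KEY = "sk-abc123xyz789"\nname = "Alice"\n'
print(find_hardcoded_secrets(code))  # → [1]
```

Note that reading a secret from the environment (`token = os.environ["TOKEN"]`) does not match, since the right-hand side is not a string literal.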
+ +--- + +## Performance Antipatterns + +### N+1 Query Problem + +Loading related data one record at a time. + +```python +# BAD: N+1 queries +def get_orders_with_items(): + orders = Order.query.all() # 1 query + for order in orders: + items = OrderItem.query.filter_by(order_id=order.id).all() # N queries + order.items = items + return orders + +# GOOD: Eager loading +def get_orders_with_items(): + return Order.query.options( + joinedload(Order.items) + ).all() # 1 query with JOIN + +# GOOD: Batch loading +def get_orders_with_items(): + orders = Order.query.all() + order_ids = [o.id for o in orders] + items = OrderItem.query.filter( + OrderItem.order_id.in_(order_ids) + ).all() # 2 queries total + # Group items by order_id... +``` + +**Detection:** Database queries inside loops. + +--- + +### Unbounded Collections + +Loading unlimited data into memory. + +```go +// BAD: Load all records +func GetAllUsers() ([]User, error) { + var users []User + err := db.Find(&users).Error // Could be millions + return users, err +} + +// GOOD: Pagination +func GetUsers(page, pageSize int) ([]User, error) { + var users []User + offset := (page - 1) * pageSize + err := db.Limit(pageSize).Offset(offset).Find(&users).Error + return users, err +} + +// GOOD: Streaming for large datasets +func ProcessAllUsers(handler func(User) error) error { + rows, err := db.Model(&User{}).Rows() + if err != nil { + return err + } + defer rows.Close() + + for rows.Next() { + var user User + db.ScanRows(rows, &user) + if err := handler(user); err != nil { + return err + } + } + return nil +} +``` + +**Detection:** `findAll()`, `find({})`, or queries without `LIMIT`. + +--- + +### Synchronous I/O in Hot Paths + +Blocking operations in request handlers.
+ +```javascript +// BAD: Sync file read on every request +app.get('/config', (req, res) => { + const config = fs.readFileSync('./config.json'); // Blocks event loop + res.json(JSON.parse(config)); +}); + +// GOOD: Load once at startup +const config = JSON.parse(fs.readFileSync('./config.json')); + +app.get('/config', (req, res) => { + res.json(config); +}); + +// GOOD: Async with caching +let configCache = null; + +app.get('/config', async (req, res) => { + if (!configCache) { + configCache = JSON.parse(await fs.promises.readFile('./config.json')); + } + res.json(configCache); +}); +``` + +**Detection:** `readFileSync`, `execSync`, or blocking calls in request handlers. + +--- + +## Testing Antipatterns + +### Test Code Duplication + +Repeating setup in every test. + +```typescript +// BAD: Duplicate setup +describe('UserService', () => { + it('should create user', async () => { + const db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + const service = new UserService(userRepo, emailService); + + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); + + it('should update user', async () => { + const db = await createTestDatabase(); // Duplicated + const userRepo = new UserRepository(db); // Duplicated + const emailService = new MockEmailService(); // Duplicated + const service = new UserService(userRepo, emailService); // Duplicated + + // ... 
+ }); +}); + +// GOOD: Shared setup +describe('UserService', () => { + let service: UserService; + let db: TestDatabase; + + beforeEach(async () => { + db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + service = new UserService(userRepo, emailService); + }); + + afterEach(async () => { + await db.cleanup(); + }); + + it('should create user', async () => { + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); +}); +``` + +--- + +### Testing Implementation Instead of Behavior + +Tests coupled to internal implementation. + +```python +# BAD: Testing implementation details +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing internal structure + assert cart._items[0].name == "Apple" + assert cart._total == 1.00 + +# GOOD: Testing behavior +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing public behavior + assert cart.item_count == 1 + assert cart.total == 1.00 + assert cart.contains("Apple") +``` + +--- + +## Async Antipatterns + +### Floating Promises + +Promises without await or catch. + +```typescript +// BAD: Floating promise +async function saveUser(user: User) { + db.save(user); // Not awaited, errors lost + logger.info('User saved'); // Logs before save completes +} + +// BAD: Fire and forget in loop +for (const item of items) { + processItem(item); // All run in parallel, no error handling +} + +// GOOD: Await the promise +async function saveUser(user: User) { + await db.save(user); + logger.info('User saved'); +} + +// GOOD: Process with proper handling +await Promise.all(items.map(item => processItem(item))); + +// Or sequentially +for (const item of items) { + await processItem(item); +} +``` + +**Detection:** Async function calls without `await` or `.then()`. + +--- + +### Callback Hell + +Deeply nested callbacks. 
+ +```javascript +// BAD: Callback hell +getUser(userId, (err, user) => { + if (err) return handleError(err); + getOrders(user.id, (err, orders) => { + if (err) return handleError(err); + getProducts(orders[0].productIds, (err, products) => { + if (err) return handleError(err); + renderPage(user, orders, products, (err) => { + if (err) return handleError(err); + console.log('Done'); + }); + }); + }); +}); + +// GOOD: Async/await +async function loadPage(userId) { + try { + const user = await getUser(userId); + const orders = await getOrders(user.id); + const products = await getProducts(orders[0].productIds); + await renderPage(user, orders, products); + console.log('Done'); + } catch (err) { + handleError(err); + } +} +``` + +**Detection:** >2 levels of callback nesting. + +--- + +### Async in Constructor + +Async operations in constructors. + +```typescript +// BAD: Async in constructor +class DatabaseConnection { + constructor(url: string) { + this.connect(url); // Fire-and-forget async + } + + private async connect(url: string) { + this.client = await createClient(url); + } +} + +// GOOD: Factory method +class DatabaseConnection { + private constructor(private client: Client) {} + + static async create(url: string): Promise { + const client = await createClient(url); + return new DatabaseConnection(client); + } +} + +// Usage +const db = await DatabaseConnection.create(url); +``` + +**Detection:** `async` calls or `.then()` in constructor. diff --git a/.continue/skills/code-reviewer/scripts/code_quality_checker.py b/.continue/skills/code-reviewer/scripts/code_quality_checker.py new file mode 100644 index 00000000..75ffe8d5 --- /dev/null +++ b/.continue/skills/code-reviewer/scripts/code_quality_checker.py @@ -0,0 +1,560 @@ +#!/usr/bin/env python3 +""" +Code Quality Checker + +Analyzes source code for quality issues, code smells, complexity metrics, +and SOLID principle violations. 
+ +Usage: + python .continue/skills/code-reviewer/scripts/code_quality_checker.py /path/to/file.py + python .continue/skills/code-reviewer/scripts/code_quality_checker.py /path/to/directory --recursive + python .continue/skills/code-reviewer/scripts/code_quality_checker.py . --language typescript --json +""" + +import argparse +import json +import re +import sys +from pathlib import Path +from typing import Dict, List, Optional + + +# Language-specific file extensions +LANGUAGE_EXTENSIONS = { + "python": [".py"], + "typescript": [".ts", ".tsx"], + "javascript": [".js", ".jsx", ".mjs"], + "go": [".go"], + "swift": [".swift"], + "kotlin": [".kt", ".kts"] +} + +# Code smell thresholds +THRESHOLDS = { + "long_function_lines": 50, + "too_many_parameters": 5, + "high_complexity": 10, + "god_class_methods": 20, + "max_imports": 15 +} + + +def get_file_extension(filepath: Path) -> str: + """Get file extension.""" + return filepath.suffix.lower() + + +def detect_language(filepath: Path) -> Optional[str]: + """Detect programming language from file extension.""" + ext = get_file_extension(filepath) + for lang, extensions in LANGUAGE_EXTENSIONS.items(): + if ext in extensions: + return lang + return None + + +def read_file_content(filepath: Path) -> str: + """Read file content safely.""" + try: + with open(filepath, "r", encoding="utf-8", errors="ignore") as f: + return f.read() + except Exception: + return "" + + +def calculate_cyclomatic_complexity(content: str) -> int: + """ + Estimate cyclomatic complexity based on control flow keywords. 
+ """ + complexity = 1 # Base complexity + + # Control flow patterns that increase complexity + patterns = [ + r"\bif\b", + r"\belif\b", + r"\belse\b", + r"\bfor\b", + r"\bwhile\b", + r"\bcase\b", + r"\bcatch\b", + r"\bexcept\b", + r"\band\b", + r"\bor\b", + r"\|\|", + r"&&" + ] + + for pattern in patterns: + matches = re.findall(pattern, content, re.IGNORECASE) + complexity += len(matches) + + return complexity + + +def count_lines(content: str) -> Dict[str, int]: + """Count different types of lines in code.""" + lines = content.split("\n") + total = len(lines) + blank = sum(1 for line in lines if not line.strip()) + comment = 0 + + for line in lines: + stripped = line.strip() + if stripped.startswith("#") or stripped.startswith("//"): + comment += 1 + elif stripped.startswith("/*") or stripped.startswith("'''") or stripped.startswith('"""'): + comment += 1 + + code = total - blank - comment + + return { + "total": total, + "code": code, + "blank": blank, + "comment": comment + } + + +def find_functions(content: str, language: str) -> List[Dict]: + """Find function definitions and their metrics.""" + functions = [] + + # Language-specific function patterns + patterns = { + "python": r"def\s+(\w+)\s*\(([^)]*)\)", + "typescript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "javascript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "go": r"func\s+(?:\([^)]+\)\s+)?(\w+)\s*\(([^)]*)\)", + "swift": r"func\s+(\w+)\s*\(([^)]*)\)", + "kotlin": r"fun\s+(\w+)\s*\(([^)]*)\)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content, re.MULTILINE) + + for match in matches: + name = next((g for g in match.groups() if g), "anonymous") + params_str = match.group(2) if len(match.groups()) > 1 and match.group(2) else "" + + # Count parameters + params = [p.strip() for p in params_str.split(",") if p.strip()] + param_count = len(params) + + # Estimate 
function length + start_pos = match.end() + remaining = content[start_pos:] + + next_func = re.search(pattern, remaining) + if next_func: + func_body = remaining[:next_func.start()] + else: + func_body = remaining[:min(2000, len(remaining))] + + line_count = len(func_body.split("\n")) + complexity = calculate_cyclomatic_complexity(func_body) + + functions.append({ + "name": name, + "parameters": param_count, + "lines": line_count, + "complexity": complexity + }) + + return functions + + +def find_classes(content: str, language: str) -> List[Dict]: + """Find class definitions and their metrics.""" + classes = [] + + patterns = { + "python": r"class\s+(\w+)", + "typescript": r"class\s+(\w+)", + "javascript": r"class\s+(\w+)", + "go": r"type\s+(\w+)\s+struct", + "swift": r"class\s+(\w+)", + "kotlin": r"class\s+(\w+)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content) + + for match in matches: + name = match.group(1) + + start_pos = match.end() + remaining = content[start_pos:] + + next_class = re.search(pattern, remaining) + if next_class: + class_body = remaining[:next_class.start()] + else: + class_body = remaining + + # Count methods + method_patterns = { + "python": r"def\s+\w+\s*\(", + "typescript": r"(?:public|private|protected)?\s*\w+\s*\([^)]*\)\s*[:{]", + "javascript": r"\w+\s*\([^)]*\)\s*\{", + "go": r"func\s+\(", + "swift": r"func\s+\w+", + "kotlin": r"fun\s+\w+" + } + method_pattern = method_patterns.get(language, method_patterns["python"]) + methods = len(re.findall(method_pattern, class_body)) + + classes.append({ + "name": name, + "methods": methods, + "lines": len(class_body.split("\n")) + }) + + return classes + + +def check_code_smells(content: str, functions: List[Dict], classes: List[Dict]) -> List[Dict]: + """Check for code smells in the content.""" + smells = [] + + # Long functions + for func in functions: + if func["lines"] > THRESHOLDS["long_function_lines"]: + smells.append({ + "type": 
"long_function", + "severity": "medium", + "message": f"Function '{func['name']}' has {func['lines']} lines (max: {THRESHOLDS['long_function_lines']})", + "location": func["name"] + }) + + # Too many parameters + for func in functions: + if func["parameters"] > THRESHOLDS["too_many_parameters"]: + smells.append({ + "type": "too_many_parameters", + "severity": "low", + "message": f"Function '{func['name']}' has {func['parameters']} parameters (max: {THRESHOLDS['too_many_parameters']})", + "location": func["name"] + }) + + # High complexity + for func in functions: + if func["complexity"] > THRESHOLDS["high_complexity"]: + severity = "high" if func["complexity"] > 20 else "medium" + smells.append({ + "type": "high_complexity", + "severity": severity, + "message": f"Function '{func['name']}' has complexity {func['complexity']} (max: {THRESHOLDS['high_complexity']})", + "location": func["name"] + }) + + # God classes + for cls in classes: + if cls["methods"] > THRESHOLDS["god_class_methods"]: + smells.append({ + "type": "god_class", + "severity": "high", + "message": f"Class '{cls['name']}' has {cls['methods']} methods (max: {THRESHOLDS['god_class_methods']})", + "location": cls["name"] + }) + + # Magic numbers + magic_pattern = r"\b(? 
List[Dict]: + """Check for potential SOLID principle violations.""" + violations = [] + + # OCP: Type checking instead of polymorphism + type_checks = len(re.findall(r"isinstance\(|type\(.*\)\s*==|typeof\s+\w+\s*===", content)) + if type_checks > 2: + violations.append({ + "principle": "OCP", + "name": "Open/Closed Principle", + "severity": "medium", + "message": f"Found {type_checks} type checks - consider using polymorphism" + }) + + # LSP/ISP: NotImplementedError + not_impl = len(re.findall(r"raise\s+NotImplementedError|not\s+implemented", content, re.IGNORECASE)) + if not_impl: + violations.append({ + "principle": "LSP/ISP", + "name": "Liskov/Interface Segregation", + "severity": "low", + "message": f"Found {not_impl} unimplemented methods - may indicate oversized interface" + }) + + # DIP: Too many direct imports + imports = len(re.findall(r"^(?:import|from)\s+", content, re.MULTILINE)) + if imports > THRESHOLDS["max_imports"]: + violations.append({ + "principle": "DIP", + "name": "Dependency Inversion Principle", + "severity": "low", + "message": f"File has {imports} imports - consider dependency injection" + }) + + return violations + + +def calculate_quality_score( + line_metrics: Dict, + functions: List[Dict], + classes: List[Dict], + smells: List[Dict], + violations: List[Dict] +) -> int: + """Calculate overall quality score (0-100).""" + score = 100 + + # Deduct for code smells + for smell in smells: + if smell["severity"] == "high": + score -= 10 + elif smell["severity"] == "medium": + score -= 5 + elif smell["severity"] == "low": + score -= 2 + + # Deduct for SOLID violations + for violation in violations: + if violation["severity"] == "high": + score -= 8 + elif violation["severity"] == "medium": + score -= 4 + elif violation["severity"] == "low": + score -= 2 + + # Bonus for good comment ratio (10-30%) + if line_metrics["total"] > 0: + comment_ratio = line_metrics["comment"] / line_metrics["total"] + if 0.1 <= comment_ratio <= 0.3: + score += 5 + + # 
Bonus for reasonable function sizes + if functions: + avg_lines = sum(f["lines"] for f in functions) / len(functions) + if avg_lines < 30: + score += 5 + + return max(0, min(100, score)) + + +def get_grade(score: int) -> str: + """Convert score to letter grade.""" + if score >= 90: + return "A" + elif score >= 80: + return "B" + elif score >= 70: + return "C" + elif score >= 60: + return "D" + else: + return "F" + + +def analyze_file(filepath: Path) -> Dict: + """Analyze a single file for code quality.""" + language = detect_language(filepath) + if not language: + return {"error": f"Unsupported file type: {filepath.suffix}"} + + content = read_file_content(filepath) + if not content: + return {"error": f"Could not read file: {filepath}"} + + line_metrics = count_lines(content) + functions = find_functions(content, language) + classes = find_classes(content, language) + smells = check_code_smells(content, functions, classes) + violations = check_solid_violations(content) + score = calculate_quality_score(line_metrics, functions, classes, smells, violations) + + return { + "file": str(filepath), + "language": language, + "metrics": { + "lines": line_metrics, + "functions": len(functions), + "classes": len(classes), + "avg_complexity": round(sum(f["complexity"] for f in functions) / max(1, len(functions)), 1) + }, + "quality_score": score, + "grade": get_grade(score), + "smells": smells, + "solid_violations": violations, + "function_details": functions[:10], + "class_details": classes[:10] + } + + +def analyze_directory( + dir_path: Path, + recursive: bool = True, + language: Optional[str] = None +) -> Dict: + """Analyze all files in a directory.""" + results = [] + extensions = [] + + if language: + extensions = LANGUAGE_EXTENSIONS.get(language, []) + else: + for exts in LANGUAGE_EXTENSIONS.values(): + extensions.extend(exts) + + pattern = "**/*" if recursive else "*" + + for ext in extensions: + for filepath in dir_path.glob(f"{pattern}{ext}"): + if "node_modules" 
in str(filepath) or ".git" in str(filepath): + continue + result = analyze_file(filepath) + if "error" not in result: + results.append(result) + + if not results: + return {"error": "No supported files found"} + + total_score = sum(r["quality_score"] for r in results) + avg_score = total_score / len(results) + total_smells = sum(len(r["smells"]) for r in results) + total_violations = sum(len(r["solid_violations"]) for r in results) + + return { + "directory": str(dir_path), + "files_analyzed": len(results), + "average_score": round(avg_score, 1), + "overall_grade": get_grade(int(avg_score)), + "total_code_smells": total_smells, + "total_solid_violations": total_violations, + "files": sorted(results, key=lambda x: x["quality_score"]) + } + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if "error" in analysis: + print(f"Error: {analysis['error']}") + return + + print("=" * 60) + print("CODE QUALITY REPORT") + print("=" * 60) + + if "file" in analysis: + print(f"\nFile: {analysis['file']}") + print(f"Language: {analysis['language']}") + print(f"Quality Score: {analysis['quality_score']}/100 ({analysis['grade']})") + + metrics = analysis["metrics"] + print(f"\nLines: {metrics['lines']['total']} ({metrics['lines']['code']} code, {metrics['lines']['comment']} comments)") + print(f"Functions: {metrics['functions']}") + print(f"Classes: {metrics['classes']}") + print(f"Avg Complexity: {metrics['avg_complexity']}") + + if analysis["smells"]: + print("\n--- CODE SMELLS ---") + for smell in analysis["smells"][:10]: + print(f" [{smell['severity'].upper()}] {smell['message']} ({smell['location']})") + + if analysis["solid_violations"]: + print("\n--- SOLID VIOLATIONS ---") + for v in analysis["solid_violations"]: + print(f" [{v['principle']}] {v['message']}") + else: + print(f"\nDirectory: {analysis['directory']}") + print(f"Files Analyzed: {analysis['files_analyzed']}") + print(f"Average Score: {analysis['average_score']}/100 
({analysis['overall_grade']})") + print(f"Total Code Smells: {analysis['total_code_smells']}") + print(f"Total SOLID Violations: {analysis['total_solid_violations']}") + + print("\n--- FILES BY QUALITY ---") + for f in analysis["files"][:10]: + print(f" {f['quality_score']:3d}/100 [{f['grade']}] {f['file']}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze code quality, smells, and SOLID violations" + ) + parser.add_argument( + "path", + help="File or directory to analyze" + ) + parser.add_argument( + "--recursive", "-r", + action="store_true", + default=True, + help="Recursively analyze directories (default: true)" + ) + parser.add_argument( + "--language", "-l", + choices=list(LANGUAGE_EXTENSIONS.keys()), + help="Filter by programming language" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + target = Path(args.path).resolve() + + if not target.exists(): + print(f"Error: Path does not exist: {target}", file=sys.stderr) + sys.exit(1) + + if target.is_file(): + analysis = analyze_file(target) + else: + analysis = analyze_directory(target, args.recursive, args.language) + + if args.json: + output = json.dumps(analysis, indent=2, default=str) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.continue/skills/code-reviewer/scripts/pr_analyzer.py b/.continue/skills/code-reviewer/scripts/pr_analyzer.py new file mode 100644 index 00000000..0d3028d1 --- /dev/null +++ b/.continue/skills/code-reviewer/scripts/pr_analyzer.py @@ -0,0 +1,495 @@ +#!/usr/bin/env python3 +""" +PR Analyzer + +Analyzes pull request changes for review complexity, risk assessment, +and generates review priorities. 
+ +Usage: + python .continue/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo + python .continue/skills/code-reviewer/scripts/pr_analyzer.py . --base main --head feature-branch + python .continue/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo --json +""" + +import argparse +import json +import os +import re +import subprocess +import sys +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# File categories for review prioritization +FILE_CATEGORIES = { + "critical": { + "patterns": [ + r"auth", r"security", r"password", r"token", r"secret", + r"payment", r"billing", r"crypto", r"encrypt" + ], + "weight": 5, + "description": "Security-sensitive files requiring careful review" + }, + "high": { + "patterns": [ + r"api", r"database", r"migration", r"schema", r"model", + r"config", r"env", r"middleware" + ], + "weight": 4, + "description": "Core infrastructure files" + }, + "medium": { + "patterns": [ + r"service", r"controller", r"handler", r"util", r"helper" + ], + "weight": 3, + "description": "Business logic files" + }, + "low": { + "patterns": [ + r"test", r"spec", r"mock", r"fixture", r"story", + r"readme", r"docs", r"\.md$" + ], + "weight": 1, + "description": "Tests and documentation" + } +} + +# Risky patterns to flag +RISK_PATTERNS = [ + { + "name": "hardcoded_secrets", + "pattern": r"(password|secret|api_key|token)\s*[=:]\s*['\"][^'\"]+['\"]", + "severity": "critical", + "message": "Potential hardcoded secret detected" + }, + { + "name": "todo_fixme", + "pattern": r"(TODO|FIXME|HACK|XXX):", + "severity": "low", + "message": "TODO/FIXME comment found" + }, + { + "name": "console_log", + "pattern": r"console\.(log|debug|info|warn|error)\(", + "severity": "medium", + "message": "Console statement found (remove for production)" + }, + { + "name": "debugger", + "pattern": r"\bdebugger\b", + "severity": "high", + "message": "Debugger statement found" + }, + { + "name": "disable_eslint", + "pattern": r"eslint-disable", 
+ "severity": "medium", + "message": "ESLint rule disabled" + }, + { + "name": "any_type", + "pattern": r":\s*any\b", + "severity": "medium", + "message": "TypeScript 'any' type used" + }, + { + "name": "sql_concatenation", + "pattern": r"(SELECT|INSERT|UPDATE|DELETE).*\+.*['\"]", + "severity": "critical", + "message": "Potential SQL injection (string concatenation in query)" + } +] + + +def run_git_command(cmd: List[str], cwd: Path) -> Tuple[bool, str]: + """Run a git command and return success status and output.""" + try: + result = subprocess.run( + cmd, + cwd=cwd, + capture_output=True, + text=True, + timeout=30 + ) + return result.returncode == 0, result.stdout.strip() + except subprocess.TimeoutExpired: + return False, "Command timed out" + except Exception as e: + return False, str(e) + + +def get_changed_files(repo_path: Path, base: str, head: str) -> List[Dict]: + """Get list of changed files between two refs.""" + success, output = run_git_command( + ["git", "diff", "--name-status", f"{base}...{head}"], + repo_path + ) + + if not success: + # Try without the triple dot (for uncommitted changes) + success, output = run_git_command( + ["git", "diff", "--name-status", base, head], + repo_path + ) + + if not success or not output: + # Fall back to staged changes + success, output = run_git_command( + ["git", "diff", "--name-status", "--cached"], + repo_path + ) + + files = [] + for line in output.split("\n"): + if not line.strip(): + continue + parts = line.split("\t") + if len(parts) >= 2: + status = parts[0][0] # First character of status + filepath = parts[-1] # Handle renames (R100\told\tnew) + status_map = { + "A": "added", + "M": "modified", + "D": "deleted", + "R": "renamed", + "C": "copied" + } + files.append({ + "path": filepath, + "status": status_map.get(status, "modified") + }) + + return files + + +def get_file_diff(repo_path: Path, filepath: str, base: str, head: str) -> str: + """Get diff content for a specific file.""" + success, output = 
run_git_command( + ["git", "diff", f"{base}...{head}", "--", filepath], + repo_path + ) + if not success: + success, output = run_git_command( + ["git", "diff", "--cached", "--", filepath], + repo_path + ) + return output if success else "" + + +def categorize_file(filepath: str) -> Tuple[str, int]: + """Categorize a file based on its path and name.""" + filepath_lower = filepath.lower() + + for category, info in FILE_CATEGORIES.items(): + for pattern in info["patterns"]: + if re.search(pattern, filepath_lower): + return category, info["weight"] + + return "medium", 2 # Default category + + +def analyze_diff_for_risks(diff_content: str, filepath: str) -> List[Dict]: + """Analyze diff content for risky patterns.""" + risks = [] + + # Only analyze added lines (starting with +) + added_lines = [ + line[1:] for line in diff_content.split("\n") + if line.startswith("+") and not line.startswith("+++") + ] + + content = "\n".join(added_lines) + + for risk in RISK_PATTERNS: + matches = re.findall(risk["pattern"], content, re.IGNORECASE) + if matches: + risks.append({ + "name": risk["name"], + "severity": risk["severity"], + "message": risk["message"], + "file": filepath, + "count": len(matches) + }) + + return risks + + +def count_changes(diff_content: str) -> Dict[str, int]: + """Count additions and deletions in diff.""" + additions = 0 + deletions = 0 + + for line in diff_content.split("\n"): + if line.startswith("+") and not line.startswith("+++"): + additions += 1 + elif line.startswith("-") and not line.startswith("---"): + deletions += 1 + + return {"additions": additions, "deletions": deletions} + + +def calculate_complexity_score(files: List[Dict], all_risks: List[Dict]) -> int: + """Calculate overall PR complexity score (1-10).""" + score = 0 + + # File count contribution (max 3 points) + file_count = len(files) + if file_count > 20: + score += 3 + elif file_count > 10: + score += 2 + elif file_count > 5: + score += 1 + + # Total changes contribution (max 3 
points) + total_changes = sum(f.get("additions", 0) + f.get("deletions", 0) for f in files) + if total_changes > 500: + score += 3 + elif total_changes > 200: + score += 2 + elif total_changes > 50: + score += 1 + + # Risk severity contribution (max 4 points) + critical_risks = sum(1 for r in all_risks if r["severity"] == "critical") + high_risks = sum(1 for r in all_risks if r["severity"] == "high") + + score += min(2, critical_risks) + score += min(2, high_risks) + + return min(10, max(1, score)) + + +def analyze_commit_messages(repo_path: Path, base: str, head: str) -> Dict: + """Analyze commit messages in the PR.""" + success, output = run_git_command( + ["git", "log", "--oneline", f"{base}...{head}"], + repo_path + ) + + if not success or not output: + return {"commits": 0, "issues": []} + + commits = output.strip().split("\n") + issues = [] + + for commit in commits: + if len(commit) < 10: + continue + + # Check for conventional commit format + message = commit[8:] if len(commit) > 8 else commit # Skip hash + + if not re.match(r"^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\(.+\))?:", message): + issues.append({ + "commit": commit[:7], + "issue": "Does not follow conventional commit format" + }) + + if len(message) > 72: + issues.append({ + "commit": commit[:7], + "issue": "Commit message exceeds 72 characters" + }) + + return { + "commits": len(commits), + "issues": issues + } + + +def analyze_pr( + repo_path: Path, + base: str = "main", + head: str = "HEAD" +) -> Dict: + """Perform complete PR analysis.""" + # Get changed files + changed_files = get_changed_files(repo_path, base, head) + + if not changed_files: + return { + "status": "no_changes", + "message": "No changes detected between branches" + } + + # Analyze each file + all_risks = [] + file_analyses = [] + + for file_info in changed_files: + filepath = file_info["path"] + category, weight = categorize_file(filepath) + + # Get diff for the file + diff = get_file_diff(repo_path, 
filepath, base, head) + changes = count_changes(diff) + risks = analyze_diff_for_risks(diff, filepath) + + all_risks.extend(risks) + + file_analyses.append({ + "path": filepath, + "status": file_info["status"], + "category": category, + "priority_weight": weight, + "additions": changes["additions"], + "deletions": changes["deletions"], + "risks": risks + }) + + # Sort by priority (highest first) + file_analyses.sort(key=lambda x: (-x["priority_weight"], x["path"])) + + # Analyze commits + commit_analysis = analyze_commit_messages(repo_path, base, head) + + # Calculate metrics + complexity = calculate_complexity_score(file_analyses, all_risks) + + total_additions = sum(f["additions"] for f in file_analyses) + total_deletions = sum(f["deletions"] for f in file_analyses) + + return { + "status": "analyzed", + "summary": { + "files_changed": len(file_analyses), + "total_additions": total_additions, + "total_deletions": total_deletions, + "complexity_score": complexity, + "complexity_label": get_complexity_label(complexity), + "commits": commit_analysis["commits"] + }, + "risks": { + "critical": [r for r in all_risks if r["severity"] == "critical"], + "high": [r for r in all_risks if r["severity"] == "high"], + "medium": [r for r in all_risks if r["severity"] == "medium"], + "low": [r for r in all_risks if r["severity"] == "low"] + }, + "files": file_analyses, + "commit_issues": commit_analysis["issues"], + "review_order": [f["path"] for f in file_analyses[:10]] # Top 10 priority files + } + + +def get_complexity_label(score: int) -> str: + """Get human-readable complexity label.""" + if score <= 2: + return "Simple" + elif score <= 4: + return "Moderate" + elif score <= 6: + return "Complex" + elif score <= 8: + return "Very Complex" + else: + return "Critical" + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if analysis["status"] == "no_changes": + print("No changes detected.") + return + + summary = analysis["summary"] + 
risks = analysis["risks"] + + print("=" * 60) + print("PR ANALYSIS REPORT") + print("=" * 60) + + print(f"\nComplexity: {summary['complexity_score']}/10 ({summary['complexity_label']})") + print(f"Files Changed: {summary['files_changed']}") + print(f"Lines: +{summary['total_additions']} / -{summary['total_deletions']}") + print(f"Commits: {summary['commits']}") + + # Risk summary + print("\n--- RISK SUMMARY ---") + print(f"Critical: {len(risks['critical'])}") + print(f"High: {len(risks['high'])}") + print(f"Medium: {len(risks['medium'])}") + print(f"Low: {len(risks['low'])}") + + # Critical and high risks details + if risks["critical"]: + print("\n--- CRITICAL RISKS ---") + for risk in risks["critical"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + if risks["high"]: + print("\n--- HIGH RISKS ---") + for risk in risks["high"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + # Commit message issues + if analysis["commit_issues"]: + print("\n--- COMMIT MESSAGE ISSUES ---") + for issue in analysis["commit_issues"][:5]: + print(f" {issue['commit']}: {issue['issue']}") + + # Review order + print("\n--- SUGGESTED REVIEW ORDER ---") + for i, filepath in enumerate(analysis["review_order"], 1): + file_info = next(f for f in analysis["files"] if f["path"] == filepath) + print(f" {i}. 
[{file_info['category'].upper()}] {filepath}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze pull request for review complexity and risks" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to git repository (default: current directory)" + ) + parser.add_argument( + "--base", "-b", + default="main", + help="Base branch for comparison (default: main)" + ) + parser.add_argument( + "--head", + default="HEAD", + help="Head branch/commit for comparison (default: HEAD)" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + + if not (repo_path / ".git").exists(): + print(f"Error: {repo_path} is not a git repository", file=sys.stderr) + sys.exit(1) + + analysis = analyze_pr(repo_path, args.base, args.head) + + if args.json: + output = json.dumps(analysis, indent=2) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.continue/skills/code-reviewer/scripts/review_report_generator.py b/.continue/skills/code-reviewer/scripts/review_report_generator.py new file mode 100644 index 00000000..4578c51f --- /dev/null +++ b/.continue/skills/code-reviewer/scripts/review_report_generator.py @@ -0,0 +1,505 @@ +#!/usr/bin/env python3 +""" +Review Report Generator + +Generates comprehensive code review reports by combining PR analysis +and code quality findings into structured, actionable reports. + +Usage: + python .continue/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo + python .continue/skills/code-reviewer/scripts/review_report_generator.py . 
--pr-analysis pr_results.json --quality-analysis quality_results.json
+    python .continue/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo --format markdown --output review.md
+"""
+
+import argparse
+import json
+import os
+import subprocess
+import sys
+from datetime import datetime
+from pathlib import Path
+from typing import Dict, List, Optional, Tuple
+
+
+# Severity weights for prioritization
+SEVERITY_WEIGHTS = {
+    "critical": 100,
+    "high": 75,
+    "medium": 50,
+    "low": 25,
+    "info": 10
+}
+
+# Review verdict thresholds
+VERDICT_THRESHOLDS = {
+    "approve": {"max_critical": 0, "max_high": 0, "max_score": 100},
+    "approve_with_suggestions": {"max_critical": 0, "max_high": 2, "max_score": 85},
+    "request_changes": {"max_critical": 0, "max_high": 5, "max_score": 70},
+    "block": {"max_critical": float("inf"), "max_high": float("inf"), "max_score": 0}
+}
+
+
+def load_json_file(filepath: str) -> Optional[Dict]:
+    """Load JSON file if it exists."""
+    try:
+        with open(filepath, "r") as f:
+            return json.load(f)
+    except (FileNotFoundError, json.JSONDecodeError):
+        return None
+
+
+def run_pr_analyzer(repo_path: Path) -> Dict:
+    """Run pr_analyzer.py and return results."""
+    script_path = Path(__file__).parent / "pr_analyzer.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "pr_analyzer.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=120
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def run_quality_checker(repo_path: Path) -> Dict:
+    """Run code_quality_checker.py and return results."""
+    script_path = 
Path(__file__).parent / "code_quality_checker.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "code_quality_checker.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=300
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def calculate_review_score(pr_analysis: Dict, quality_analysis: Dict) -> int:
+    """Calculate overall review score (0-100)."""
+    score = 100
+
+    # Deduct for PR risks
+    if "risks" in pr_analysis:
+        risks = pr_analysis["risks"]
+        score -= len(risks.get("critical", [])) * 15
+        score -= len(risks.get("high", [])) * 10
+        score -= len(risks.get("medium", [])) * 5
+        score -= len(risks.get("low", [])) * 2
+
+    # Deduct for code quality issues (the quality checker reports per-file "smells";
+    # fall back to the analysis itself for single-file results)
+    issues = []
+    for file_result in quality_analysis.get("files", [quality_analysis]):
+        issues.extend(file_result.get("smells", []))
+    score -= len([i for i in issues if i.get("severity") == "critical"]) * 12
+    score -= len([i for i in issues if i.get("severity") == "high"]) * 8
+    score -= len([i for i in issues if i.get("severity") == "medium"]) * 4
+    score -= len([i for i in issues if i.get("severity") == "low"]) * 1
+
+    # Deduct for complexity
+    if "summary" in pr_analysis:
+        complexity = pr_analysis["summary"].get("complexity_score", 0)
+        if complexity > 7:
+            score -= 10
+        elif complexity > 5:
+            score -= 5
+
+    return max(0, min(100, score))
+
+
+def determine_verdict(score: int, critical_count: int, high_count: int) -> Tuple[str, str]:
+    """Determine review verdict based on score and issue counts."""
+    if critical_count > 0:
+        return "block", "Critical issues must be resolved before merge"
+
+    if score >= 90 and high_count == 0:
+        return "approve", "Code meets quality standards"
+
+    if score >= 75 and high_count <= 
2:
+        return "approve_with_suggestions", "Minor improvements recommended"
+
+    if score >= 50:
+        return "request_changes", "Several issues need to be addressed"
+
+    return "block", "Significant issues prevent approval"
+
+
+def generate_findings_list(pr_analysis: Dict, quality_analysis: Dict) -> List[Dict]:
+    """Combine and prioritize all findings."""
+    findings = []
+
+    # Add PR risk findings
+    if "risks" in pr_analysis:
+        for severity, items in pr_analysis["risks"].items():
+            for item in items:
+                findings.append({
+                    "source": "pr_analysis",
+                    "severity": severity,
+                    "category": item.get("name", "unknown"),
+                    "message": item.get("message", ""),
+                    "file": item.get("file", ""),
+                    "count": item.get("count", 1)
+                })
+
+    # Add code quality findings (the quality checker reports per-file "smells";
+    # fall back to the analysis itself for single-file results)
+    for file_result in quality_analysis.get("files", [quality_analysis]):
+        for smell in file_result.get("smells", []):
+            findings.append({
+                "source": "quality_analysis",
+                "severity": smell.get("severity", "medium"),
+                "category": smell.get("type", "unknown"),
+                "message": smell.get("message", ""),
+                "file": file_result.get("file", ""),
+                "line": smell.get("line", 0)
+            })
+
+    # Sort by severity weight
+    findings.sort(
+        key=lambda x: -SEVERITY_WEIGHTS.get(x["severity"], 0)
+    )
+
+    return findings
+
+
+def generate_action_items(findings: List[Dict]) -> List[Dict]:
+    """Generate prioritized action items from findings."""
+    action_items = []
+    seen_categories = set()
+
+    for finding in findings:
+        category = finding["category"]
+        severity = finding["severity"]
+
+        # Group similar issues
+        if category in seen_categories and severity not in ["critical", "high"]:
+            continue
+
+        action = {
+            "priority": "P0" if severity == "critical" else "P1" if severity == "high" else "P2",
+            "action": get_action_for_category(category, finding),
+            "severity": severity,
+            "files_affected": [finding["file"]] if finding.get("file") else []
+        }
+        action_items.append(action)
+        seen_categories.add(category)
+
+    return action_items[:15]  # Top 15 actions
+
+
+def get_action_for_category(category: str, 
finding: Dict) -> str:
+    """Get actionable recommendation for issue category."""
+    actions = {
+        "hardcoded_secrets": "Remove hardcoded credentials and use environment variables or a secrets manager",
+        "sql_concatenation": "Use parameterized queries to prevent SQL injection",
+        "debugger": "Remove debugger statements before merging",
+        "console_log": "Remove or replace console statements with proper logging",
+        "todo_fixme": "Address TODO/FIXME comments or create tracking issues",
+        "disable_eslint": "Address the underlying issue instead of disabling lint rules",
+        "any_type": "Replace 'any' types with proper type definitions",
+        "long_function": "Break down function into smaller, focused units",
+        "god_class": "Split class into smaller, single-responsibility classes",
+        "too_many_parameters": "Use parameter objects or builder pattern",
+        "deep_nesting": "Refactor using early returns, guard clauses, or extraction",
+        "high_complexity": "Reduce cyclomatic complexity through refactoring",
+        "missing_error_handling": "Add proper error handling and recovery logic",
+        "duplicate_code": "Extract duplicate code into shared functions",
+        "magic_numbers": "Replace magic numbers with named constants",
+        "large_file": "Consider splitting into multiple smaller modules"
+    }
+    return actions.get(category, f"Review and address: {finding.get('message', category)}")
+
+
+def format_markdown_report(report: Dict) -> str:
+    """Generate markdown-formatted report."""
+    lines = []
+
+    # Header
+    lines.append("# Code Review Report")
+    lines.append("")
+    lines.append(f"**Generated:** {report['metadata']['generated_at']}")
+    lines.append(f"**Repository:** {report['metadata']['repository']}")
+    lines.append("")
+
+    # Executive Summary
+    lines.append("## Executive Summary")
+    lines.append("")
+    summary = report["summary"]
+    verdict = summary["verdict"]
+    verdict_emoji = {
+        "approve": "✅",
+        "approve_with_suggestions": "✅",
+        "request_changes": "⚠️",
+        "block": "❌"
+    }.get(verdict, "❓")
+
+    
lines.append(f"**Verdict:** {verdict_emoji} {verdict.upper().replace('_', ' ')}") + lines.append(f"**Score:** {summary['score']}/100") + lines.append(f"**Rationale:** {summary['rationale']}") + lines.append("") + + # Issue Counts + lines.append("### Issue Summary") + lines.append("") + lines.append("| Severity | Count |") + lines.append("|----------|-------|") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f"| {severity.capitalize()} | {count} |") + lines.append("") + + # PR Statistics (if available) + if "pr_summary" in report: + pr = report["pr_summary"] + lines.append("### Change Statistics") + lines.append("") + lines.append(f"- **Files Changed:** {pr.get('files_changed', 'N/A')}") + lines.append(f"- **Lines Added:** +{pr.get('total_additions', 0)}") + lines.append(f"- **Lines Removed:** -{pr.get('total_deletions', 0)}") + lines.append(f"- **Complexity:** {pr.get('complexity_label', 'N/A')}") + lines.append("") + + # Action Items + if report.get("action_items"): + lines.append("## Action Items") + lines.append("") + for i, item in enumerate(report["action_items"], 1): + priority = item["priority"] + emoji = "🔴" if priority == "P0" else "🟠" if priority == "P1" else "🟡" + lines.append(f"{i}. 
{emoji} **[{priority}]** {item['action']}") + if item.get("files_affected"): + lines.append(f" - Files: {', '.join(item['files_affected'][:3])}") + lines.append("") + + # Critical Findings + critical_findings = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical_findings: + lines.append("## Critical Issues (Must Fix)") + lines.append("") + for finding in critical_findings: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # High Priority Findings + high_findings = [f for f in report.get("findings", []) if f["severity"] == "high"] + if high_findings: + lines.append("## High Priority Issues") + lines.append("") + for finding in high_findings[:10]: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # Review Order (if available) + if "review_order" in report: + lines.append("## Suggested Review Order") + lines.append("") + for i, filepath in enumerate(report["review_order"][:10], 1): + lines.append(f"{i}. 
`{filepath}`") + lines.append("") + + # Footer + lines.append("---") + lines.append("*Generated by Code Reviewer*") + + return "\n".join(lines) + + +def format_text_report(report: Dict) -> str: + """Generate plain text report.""" + lines = [] + + lines.append("=" * 60) + lines.append("CODE REVIEW REPORT") + lines.append("=" * 60) + lines.append("") + lines.append(f"Generated: {report['metadata']['generated_at']}") + lines.append(f"Repository: {report['metadata']['repository']}") + lines.append("") + + summary = report["summary"] + verdict = summary["verdict"].upper().replace("_", " ") + lines.append(f"VERDICT: {verdict}") + lines.append(f"SCORE: {summary['score']}/100") + lines.append(f"RATIONALE: {summary['rationale']}") + lines.append("") + + lines.append("--- ISSUE SUMMARY ---") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f" {severity.capitalize()}: {count}") + lines.append("") + + if report.get("action_items"): + lines.append("--- ACTION ITEMS ---") + for i, item in enumerate(report["action_items"][:10], 1): + lines.append(f" {i}. 
[{item['priority']}] {item['action']}") + lines.append("") + + critical = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical: + lines.append("--- CRITICAL ISSUES ---") + for f in critical: + lines.append(f" [{f.get('file', 'unknown')}] {f['message']}") + lines.append("") + + lines.append("=" * 60) + + return "\n".join(lines) + + +def generate_report( + repo_path: Path, + pr_analysis: Optional[Dict] = None, + quality_analysis: Optional[Dict] = None +) -> Dict: + """Generate comprehensive review report.""" + # Run analyses if not provided + if pr_analysis is None: + pr_analysis = run_pr_analyzer(repo_path) + + if quality_analysis is None: + quality_analysis = run_quality_checker(repo_path) + + # Generate findings + findings = generate_findings_list(pr_analysis, quality_analysis) + + # Count issues by severity + issue_counts = { + "critical": len([f for f in findings if f["severity"] == "critical"]), + "high": len([f for f in findings if f["severity"] == "high"]), + "medium": len([f for f in findings if f["severity"] == "medium"]), + "low": len([f for f in findings if f["severity"] == "low"]) + } + + # Calculate score and verdict + score = calculate_review_score(pr_analysis, quality_analysis) + verdict, rationale = determine_verdict( + score, + issue_counts["critical"], + issue_counts["high"] + ) + + # Generate action items + action_items = generate_action_items(findings) + + # Build report + report = { + "metadata": { + "generated_at": datetime.now().isoformat(), + "repository": str(repo_path), + "version": "1.0.0" + }, + "summary": { + "score": score, + "verdict": verdict, + "rationale": rationale, + "issue_counts": issue_counts + }, + "findings": findings, + "action_items": action_items + } + + # Add PR summary if available + if pr_analysis.get("status") == "analyzed": + report["pr_summary"] = pr_analysis.get("summary", {}) + report["review_order"] = pr_analysis.get("review_order", []) + + # Add quality summary if available + if 
quality_analysis.get("status") == "analyzed": + report["quality_summary"] = quality_analysis.get("summary", {}) + + return report + + +def main(): + parser = argparse.ArgumentParser( + description="Generate comprehensive code review reports" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to repository (default: current directory)" + ) + parser.add_argument( + "--pr-analysis", + help="Path to pre-computed PR analysis JSON" + ) + parser.add_argument( + "--quality-analysis", + help="Path to pre-computed quality analysis JSON" + ) + parser.add_argument( + "--format", "-f", + choices=["text", "markdown", "json"], + default="text", + help="Output format (default: text)" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output as JSON (shortcut for --format json)" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + if not repo_path.exists(): + print(f"Error: Path does not exist: {repo_path}", file=sys.stderr) + sys.exit(1) + + # Load pre-computed analyses if provided + pr_analysis = None + quality_analysis = None + + if args.pr_analysis: + pr_analysis = load_json_file(args.pr_analysis) + if not pr_analysis: + print(f"Warning: Could not load PR analysis from {args.pr_analysis}") + + if args.quality_analysis: + quality_analysis = load_json_file(args.quality_analysis) + if not quality_analysis: + print(f"Warning: Could not load quality analysis from {args.quality_analysis}") + + # Generate report + report = generate_report(repo_path, pr_analysis, quality_analysis) + + # Format output + output_format = "json" if args.json else args.format + + if output_format == "json": + output = json.dumps(report, indent=2) + elif output_format == "markdown": + output = format_markdown_report(report) + else: + output = format_text_report(report) + + # Write or print output + if args.output: + with open(args.output, "w") as f: + 
f.write(output) + print(f"Report written to {args.output}") + else: + print(output) + + +if __name__ == "__main__": + main() diff --git a/.cursor/skills/code-reviewer/SKILL.md b/.cursor/skills/code-reviewer/SKILL.md new file mode 100644 index 00000000..471e895f --- /dev/null +++ b/.cursor/skills/code-reviewer/SKILL.md @@ -0,0 +1,177 @@ +--- +name: code-reviewer +description: Code review automation for TypeScript, JavaScript, Python, Go, Swift, Kotlin. Analyzes PRs for complexity and risk, checks code quality for SOLID violations and code smells, generates review reports. Use when reviewing pull requests, analyzing code quality, identifying issues, generating review checklists. +--- + +# Code Reviewer + +Automated code review tools for analyzing pull requests, detecting code quality issues, and generating review reports. + +--- + +## Table of Contents + +- [Tools](#tools) + - [PR Analyzer](#pr-analyzer) + - [Code Quality Checker](#code-quality-checker) + - [Review Report Generator](#review-report-generator) +- [Reference Guides](#reference-guides) +- [Languages Supported](#languages-supported) + +--- + +## Tools + +### PR Analyzer + +Analyzes git diff between branches to assess review complexity and identify risks. + +```bash +# Analyze current branch against main +python scripts/pr_analyzer.py /path/to/repo + +# Compare specific branches +python scripts/pr_analyzer.py . 
--base main --head feature-branch + +# JSON output for integration +python scripts/pr_analyzer.py /path/to/repo --json +``` + +**What it detects:** +- Hardcoded secrets (passwords, API keys, tokens) +- SQL injection patterns (string concatenation in queries) +- Debug statements (debugger, console.log) +- ESLint rule disabling +- TypeScript `any` types +- TODO/FIXME comments + +**Output includes:** +- Complexity score (1-10) +- Risk categorization (critical, high, medium, low) +- File prioritization for review order +- Commit message validation + +--- + +### Code Quality Checker + +Analyzes source code for structural issues, code smells, and SOLID violations. + +```bash +# Analyze a directory +python scripts/code_quality_checker.py /path/to/code + +# Analyze specific language +python scripts/code_quality_checker.py . --language python + +# JSON output +python scripts/code_quality_checker.py /path/to/code --json +``` + +**What it detects:** +- Long functions (>50 lines) +- Large files (>500 lines) +- God classes (>20 methods) +- Deep nesting (>4 levels) +- Too many parameters (>5) +- High cyclomatic complexity +- Missing error handling +- Unused imports +- Magic numbers + +**Thresholds:** + +| Issue | Threshold | +|-------|-----------| +| Long function | >50 lines | +| Large file | >500 lines | +| God class | >20 methods | +| Too many params | >5 | +| Deep nesting | >4 levels | +| High complexity | >10 branches | + +--- + +### Review Report Generator + +Combines PR analysis and code quality findings into structured review reports. + +```bash +# Generate report for current repo +python scripts/review_report_generator.py /path/to/repo + +# Markdown output +python scripts/review_report_generator.py . --format markdown --output review.md + +# Use pre-computed analyses +python scripts/review_report_generator.py . 
\ + --pr-analysis pr_results.json \ + --quality-analysis quality_results.json +``` + +**Report includes:** +- Review verdict (approve, request changes, block) +- Score (0-100) +- Prioritized action items +- Issue summary by severity +- Suggested review order + +**Verdicts:** + +| Score | Verdict | +|-------|---------| +| 90+ with no high issues | Approve | +| 75+ with ≤2 high issues | Approve with suggestions | +| 50-74 | Request changes | +| <50 or critical issues | Block | + +--- + +## Reference Guides + +### Code Review Checklist +`.cursor/skills/code-reviewer/references/code_review_checklist.md` + +Systematic checklists covering: +- Pre-review checks (build, tests, PR hygiene) +- Correctness (logic, data handling, error handling) +- Security (input validation, injection prevention) +- Performance (efficiency, caching, scalability) +- Maintainability (code quality, naming, structure) +- Testing (coverage, quality, mocking) +- Language-specific checks + +### Coding Standards +`.cursor/skills/code-reviewer/references/coding_standards.md` + +Language-specific standards for: +- TypeScript (type annotations, null safety, async/await) +- JavaScript (declarations, patterns, modules) +- Python (type hints, exceptions, class design) +- Go (error handling, structs, concurrency) +- Swift (optionals, protocols, errors) +- Kotlin (null safety, data classes, coroutines) + +### Common Antipatterns +`.cursor/skills/code-reviewer/references/common_antipatterns.md` + +Antipattern catalog with examples and fixes: +- Structural (god class, long method, deep nesting) +- Logic (boolean blindness, stringly typed code) +- Security (SQL injection, hardcoded credentials) +- Performance (N+1 queries, unbounded collections) +- Testing (duplication, testing implementation) +- Async (floating promises, callback hell) + +--- + +## Languages Supported + +| Language | Extensions | +|----------|------------| +| Python | `.py` | +| TypeScript | `.ts`, `.tsx` | +| JavaScript | `.js`, `.jsx`, 
`.mjs` | +| Go | `.go` | +| Swift | `.swift` | +| Kotlin | `.kt`, `.kts` | \ No newline at end of file diff --git a/.cursor/skills/code-reviewer/references/code_review_checklist.md b/.cursor/skills/code-reviewer/references/code_review_checklist.md new file mode 100644 index 00000000..b7bd0867 --- /dev/null +++ b/.cursor/skills/code-reviewer/references/code_review_checklist.md @@ -0,0 +1,270 @@ +# Code Review Checklist + +Structured checklists for systematic code review across different aspects. + +--- + +## Table of Contents + +- [Pre-Review Checks](#pre-review-checks) +- [Correctness](#correctness) +- [Security](#security) +- [Performance](#performance) +- [Maintainability](#maintainability) +- [Testing](#testing) +- [Documentation](#documentation) +- [Language-Specific Checks](#language-specific-checks) + +--- + +## Pre-Review Checks + +Before diving into code, verify these basics: + +### Build and Tests +- [ ] Code compiles without errors +- [ ] All existing tests pass +- [ ] New tests are included for new functionality +- [ ] No unintended files included (build artifacts, IDE configs) + +### PR Hygiene +- [ ] PR has clear title and description +- [ ] Changes are scoped appropriately (not too large) +- [ ] Commits follow conventional commit format +- [ ] Branch is up to date with base branch + +### Scope Verification +- [ ] Changes match the stated purpose +- [ ] No unrelated changes bundled in +- [ ] Breaking changes are documented +- [ ] Migration path provided if needed + +--- + +## Correctness + +### Logic +- [ ] Algorithm implements requirements correctly +- [ ] Edge cases handled (null, empty, boundary values) +- [ ] Off-by-one errors checked +- [ ] Correct operators used (== vs ===, & vs &&) +- [ ] Loop termination conditions correct +- [ ] Recursion has proper base cases + +### Data Handling +- [ ] Data types appropriate for the use case +- [ ] Numeric overflow/underflow considered +- [ ] Date/time handling accounts for timezones +- [ ] Unicode and 
internationalization handled +- [ ] Data validation at entry points + +### State Management +- [ ] State transitions are valid +- [ ] Race conditions addressed +- [ ] Concurrent access handled correctly +- [ ] State cleanup on errors/exit + +### Error Handling +- [ ] Errors caught at appropriate levels +- [ ] Error messages are actionable +- [ ] Errors don't expose sensitive information +- [ ] Recovery or graceful degradation implemented +- [ ] Resources cleaned up in error paths + +--- + +## Security + +### Input Validation +- [ ] All user input validated and sanitized +- [ ] Input length limits enforced +- [ ] File uploads validated (type, size, content) +- [ ] URL parameters validated + +### Injection Prevention +- [ ] SQL queries parameterized +- [ ] Command execution uses safe APIs +- [ ] HTML output escaped to prevent XSS +- [ ] LDAP queries properly escaped +- [ ] XML parsing disables external entities + +### Authentication & Authorization +- [ ] Authentication required for protected resources +- [ ] Authorization checked before operations +- [ ] Session management secure +- [ ] Password handling follows best practices +- [ ] Token expiration implemented + +### Data Protection +- [ ] Sensitive data encrypted at rest +- [ ] Sensitive data encrypted in transit +- [ ] PII handled according to policy +- [ ] Secrets not hardcoded +- [ ] Logs don't contain sensitive data + +### API Security +- [ ] Rate limiting implemented +- [ ] CORS configured correctly +- [ ] CSRF protection in place +- [ ] API keys/tokens secured +- [ ] Endpoints use HTTPS + +--- + +## Performance + +### Efficiency +- [ ] Appropriate data structures used +- [ ] Algorithms have acceptable complexity +- [ ] Database queries are optimized +- [ ] N+1 query problems avoided +- [ ] Indexes used where beneficial + +### Resource Usage +- [ ] Memory usage bounded +- [ ] No memory leaks +- [ ] File handles properly closed +- [ ] Database connections pooled +- [ ] Network calls minimized + +### Caching 
+- [ ] Appropriate caching strategy +- [ ] Cache invalidation handled +- [ ] Cache keys are unique and predictable +- [ ] TTL values appropriate + +### Scalability +- [ ] Horizontal scaling considered +- [ ] Bottlenecks identified +- [ ] Async processing for long operations +- [ ] Batch operations where appropriate + +--- + +## Maintainability + +### Code Quality +- [ ] Functions/methods have single responsibility +- [ ] Classes follow SOLID principles +- [ ] Code is DRY (Don't Repeat Yourself) +- [ ] No dead code or commented-out code +- [ ] Magic numbers replaced with constants + +### Naming +- [ ] Names are descriptive and consistent +- [ ] Naming follows project conventions +- [ ] No abbreviations that obscure meaning +- [ ] Boolean variables/functions have is/has/can prefix + +### Structure +- [ ] Functions are appropriately sized (<50 lines preferred) +- [ ] Nesting depth is reasonable (<4 levels) +- [ ] Related code is grouped together +- [ ] Dependencies are minimal and explicit + +### Readability +- [ ] Code is self-documenting where possible +- [ ] Complex logic has explanatory comments +- [ ] Formatting is consistent +- [ ] No overly clever or obscure code + +--- + +## Testing + +### Coverage +- [ ] New code has unit tests +- [ ] Critical paths have integration tests +- [ ] Edge cases are tested +- [ ] Error conditions are tested + +### Quality +- [ ] Tests are independent +- [ ] Tests have clear assertions +- [ ] Test names describe what is tested +- [ ] Tests don't depend on external state + +### Mocking +- [ ] External dependencies are mocked +- [ ] Mocks are realistic +- [ ] Mock setup is not excessive + +--- + +## Documentation + +### Code Documentation +- [ ] Public APIs are documented +- [ ] Complex algorithms explained +- [ ] Non-obvious decisions documented +- [ ] TODO/FIXME comments have context + +### External Documentation +- [ ] README updated if needed +- [ ] API documentation updated +- [ ] Changelog updated +- [ ] Migration guides 
provided + +--- + +## Language-Specific Checks + +### TypeScript/JavaScript +- [ ] Types are explicit (avoid `any`) +- [ ] Null checks present (`?.`, `??`) +- [ ] Async/await errors handled +- [ ] No floating promises +- [ ] Memory leaks from closures checked + +### Python +- [ ] Type hints used for public APIs +- [ ] Context managers for resources (`with` statements) +- [ ] Exception handling is specific (not bare `except`) +- [ ] No mutable default arguments +- [ ] List comprehensions used appropriately + +### Go +- [ ] Errors checked and handled +- [ ] Goroutine leaks prevented +- [ ] Context propagation correct +- [ ] Defer statements in right order +- [ ] Interfaces minimal + +### Swift +- [ ] Optionals handled safely +- [ ] Memory management correct (weak/unowned) +- [ ] Error handling uses Result or throws +- [ ] Access control appropriate +- [ ] Codable implementation correct + +### Kotlin +- [ ] Null safety leveraged +- [ ] Coroutine cancellation handled +- [ ] Data classes used appropriately +- [ ] Extension functions don't obscure behavior +- [ ] Sealed classes for state + +--- + +## Review Process Tips + +### Before Approving +1. Verify all critical checks passed +2. Confirm tests are adequate +3. Consider deployment impact +4. Check for any security concerns +5. 
Ensure documentation is updated + +### Providing Feedback +- Be specific about issues +- Explain why something is problematic +- Suggest alternatives when possible +- Distinguish blockers from suggestions +- Acknowledge good patterns + +### When to Block +- Security vulnerabilities present +- Critical logic errors +- No tests for risky changes +- Breaking changes without migration +- Significant performance regressions diff --git a/.cursor/skills/code-reviewer/references/coding_standards.md b/.cursor/skills/code-reviewer/references/coding_standards.md new file mode 100644 index 00000000..9fbc6a06 --- /dev/null +++ b/.cursor/skills/code-reviewer/references/coding_standards.md @@ -0,0 +1,555 @@ +# Coding Standards + +Language-specific coding standards and conventions for code review. + +--- + +## Table of Contents + +- [Universal Principles](#universal-principles) +- [TypeScript Standards](#typescript-standards) +- [JavaScript Standards](#javascript-standards) +- [Python Standards](#python-standards) +- [Go Standards](#go-standards) +- [Swift Standards](#swift-standards) +- [Kotlin Standards](#kotlin-standards) + +--- + +## Universal Principles + +These apply across all languages. 
+ +### Naming Conventions + +| Element | Convention | Example | +|---------|------------|---------| +| Variables | camelCase (JS/TS), snake_case (Python/Go) | `userName`, `user_name` | +| Constants | SCREAMING_SNAKE_CASE | `MAX_RETRY_COUNT` | +| Functions | camelCase (JS/TS), snake_case (Python) | `getUserById`, `get_user_by_id` | +| Classes | PascalCase | `UserRepository` | +| Interfaces | PascalCase, optionally prefixed | `IUserService` or `UserService` | +| Private members | Prefix with underscore or use access modifiers | `_internalState` | + +### Function Design + +``` +Good functions: +- Do one thing well +- Have descriptive names (verb + noun) +- Take 3 or fewer parameters +- Return early for error cases +- Stay under 50 lines +``` + +### Error Handling + +``` +Good error handling: +- Catch specific errors, not generic exceptions +- Log with context (what, where, why) +- Clean up resources in error paths +- Don't swallow errors silently +- Provide actionable error messages +``` + +--- + +## TypeScript Standards + +### Type Annotations + +```typescript +// Avoid 'any' - use unknown for truly unknown types +function processData(data: unknown): ProcessedResult { + if (isValidData(data)) { + return transform(data); + } + throw new Error('Invalid data format'); +} + +// Use explicit return types for public APIs +export function calculateTotal(items: CartItem[]): number { + return items.reduce((sum, item) => sum + item.price, 0); +} + +// Use type guards for runtime checks +function isUser(obj: unknown): obj is User { + return ( + typeof obj === 'object' && + obj !== null && + 'id' in obj && + 'email' in obj + ); +} +``` + +### Null Safety + +```typescript +// Use optional chaining and nullish coalescing +const userName = user?.profile?.name ?? 
'Anonymous'; + +// Be explicit about nullable types +interface Config { + timeout: number; + retries?: number; // Optional + fallbackUrl: string | null; // Explicitly nullable +} + +// Use assertion functions for validation +function assertDefined<T>(value: T | null | undefined): asserts value is T { + if (value === null || value === undefined) { + throw new Error('Value is not defined'); + } +} +``` + +### Async/Await + +```typescript +// Always handle errors in async functions +async function fetchUser(id: string): Promise<User> { + try { + const response = await api.get(`/users/${id}`); + return response.data; + } catch (error) { + logger.error('Failed to fetch user', { id, error }); + throw new UserFetchError(id, error); + } +} + +// Use Promise.all for parallel operations +async function loadDashboard(userId: string): Promise<DashboardData> { + const [profile, stats, notifications] = await Promise.all([ + fetchProfile(userId), + fetchStats(userId), + fetchNotifications(userId) + ]); + return { profile, stats, notifications }; +} +``` + +### React/Component Standards + +```typescript +// Use explicit prop types +interface ButtonProps { + label: string; + onClick: () => void; + variant?: 'primary' | 'secondary'; + disabled?: boolean; +} + +// Prefer functional components with hooks +function Button({ label, onClick, variant = 'primary', disabled = false }: ButtonProps) { + return ( + <button className={variant} onClick={onClick} disabled={disabled}> + {label} + </button> + ); +} + +// Use custom hooks for reusable logic +function useDebounce<T>(value: T, delay: number): T { + const [debouncedValue, setDebouncedValue] = useState<T>(value); + + useEffect(() => { + const timer = setTimeout(() => setDebouncedValue(value), delay); + return () => clearTimeout(timer); + }, [value, delay]); + + return debouncedValue; +} +``` + +--- + +## JavaScript Standards + +### Variable Declarations + +```javascript +// Use const by default, let when reassignment needed +const MAX_ITEMS = 100; +let currentCount = 0; + +// Never use var +// var is function-scoped and hoisted, leading to bugs +``` + 
+### Object and Array Patterns + +```javascript +// Use object destructuring +const { name, email, role = 'user' } = user; + +// Use spread for immutable updates +const updatedUser = { ...user, lastLogin: new Date() }; +const updatedList = [...items, newItem]; + +// Use array methods over loops +const activeUsers = users.filter(u => u.isActive); +const emails = users.map(u => u.email); +const total = orders.reduce((sum, o) => sum + o.amount, 0); +``` + +### Module Patterns + +```javascript +// Use named exports for utilities +export function formatDate(date) { ... } +export function parseDate(str) { ... } + +// Use default export for main component/class +export default class UserService { ... } + +// Group related exports +export { formatDate, parseDate, isValidDate } from './dateUtils'; +``` + +--- + +## Python Standards + +### Type Hints (PEP 484) + +```python +from typing import Optional, List, Dict, Union + +def get_user(user_id: int) -> Optional[User]: + """Fetch user by ID, returns None if not found.""" + return db.query(User).filter(User.id == user_id).first() + +def process_items(items: List[str]) -> Dict[str, int]: + """Count occurrences of each item.""" + return {item: items.count(item) for item in set(items)} + +def send_notification( + user: User, + message: str, + *, + priority: str = "normal", + channels: Optional[List[str]] = None +) -> bool: + """Send notification to user via specified channels.""" + channels = channels or ["email"] + # Implementation +``` + +### Exception Handling + +```python +# Catch specific exceptions +try: + result = api_client.fetch_data(endpoint) +except ConnectionError as e: + logger.warning(f"Connection failed: {e}") + return cached_data +except TimeoutError as e: + logger.error(f"Request timed out: {e}") + raise ServiceUnavailableError() from e + +# Use context managers for resources +with open(filepath, 'r') as f: + data = json.load(f) + +# Custom exceptions should be informative +class ValidationError(Exception): + def 
__init__(self, field: str, message: str): + self.field = field + self.message = message + super().__init__(f"{field}: {message}") +``` + +### Class Design + +```python +from dataclasses import dataclass +from abc import ABC, abstractmethod + +# Use dataclasses for data containers +@dataclass +class UserDTO: + id: int + email: str + name: str + is_active: bool = True + +# Use ABC for interfaces +class Repository(ABC): + @abstractmethod + def find_by_id(self, id: int) -> Optional[Entity]: + pass + + @abstractmethod + def save(self, entity: Entity) -> Entity: + pass + +# Use properties for computed attributes +class Order: + def __init__(self, items: List[OrderItem]): + self._items = items + + @property + def total(self) -> Decimal: + return sum(item.price * item.quantity for item in self._items) +``` + +--- + +## Go Standards + +### Error Handling + +```go +// Always check errors +file, err := os.Open(filename) +if err != nil { + return fmt.Errorf("failed to open %s: %w", filename, err) +} +defer file.Close() + +// Use custom error types for specific cases +type ValidationError struct { + Field string + Message string +} + +func (e *ValidationError) Error() string { + return fmt.Sprintf("%s: %s", e.Field, e.Message) +} + +// Wrap errors with context +if err := db.Query(query); err != nil { + return fmt.Errorf("query failed for user %d: %w", userID, err) +} +``` + +### Struct Design + +```go +// Use unexported fields with exported methods +type UserService struct { + repo UserRepository + cache Cache + logger Logger +} + +// Constructor functions for initialization +func NewUserService(repo UserRepository, cache Cache, logger Logger) *UserService { + return &UserService{ + repo: repo, + cache: cache, + logger: logger, + } +} + +// Keep interfaces small +type Reader interface { + Read(p []byte) (n int, err error) +} + +type Writer interface { + Write(p []byte) (n int, err error) +} +``` + +### Concurrency + +```go +// Use context for cancellation +func fetchData(ctx 
context.Context, url string) ([]byte, error) { + req, err := http.NewRequestWithContext(ctx, "GET", url, nil) + if err != nil { + return nil, err + } + // ... +} + +// Use channels for communication +func worker(jobs <-chan Job, results chan<- Result) { + for job := range jobs { + result := process(job) + results <- result + } +} + +// Use sync.WaitGroup for coordination +var wg sync.WaitGroup +for _, item := range items { + wg.Add(1) + go func(i Item) { + defer wg.Done() + processItem(i) + }(item) +} +wg.Wait() +``` + +--- + +## Swift Standards + +### Optionals + +```swift +// Use optional binding +if let user = fetchUser(id: userId) { + displayProfile(user) +} + +// Use guard for early exit +guard let data = response.data else { + throw NetworkError.noData +} + +// Use nil coalescing for defaults +let displayName = user.nickname ?? user.email + +// Avoid force unwrapping except in tests +// BAD: let name = user.name! +// GOOD: guard let name = user.name else { return } +``` + +### Protocol-Oriented Design + +```swift +// Define protocols with minimal requirements +protocol Identifiable { + var id: String { get } +} + +protocol Persistable: Identifiable { + func save() throws + static func find(by id: String) -> Self? 
+} + +// Use protocol extensions for default implementations +extension Persistable { + func save() throws { + try Storage.shared.save(self) + } +} + +// Prefer composition over inheritance +struct User: Identifiable, Codable { + let id: String + var name: String + var email: String +} +``` + +### Error Handling + +```swift +// Define domain-specific errors +enum AuthError: Error { + case invalidCredentials + case tokenExpired + case networkFailure(underlying: Error) +} + +// Use Result type for async operations +func authenticate( + email: String, + password: String, + completion: @escaping (Result<User, AuthError>) -> Void +) + +// Use throws for synchronous operations +func validate(_ input: String) throws -> ValidatedInput { + guard !input.isEmpty else { + throw ValidationError.emptyInput + } + return ValidatedInput(value: input) +} +``` + +--- + +## Kotlin Standards + +### Null Safety + +```kotlin +// Use nullable types explicitly +fun findUser(id: Int): User? { + return userRepository.find(id) +} + +// Use safe calls and elvis operator +val name = user?.profile?.name ?: "Unknown" + +// Use let for null checks with side effects +user?.let { activeUser -> + sendWelcomeEmail(activeUser.email) + logActivity(activeUser.id) +} + +// Use require/check for validation +fun processPayment(amount: Double) { + require(amount > 0) { "Amount must be positive: $amount" } + // Process +} +``` + +### Data Classes and Sealed Classes + +```kotlin +// Use data classes for DTOs +data class UserDTO( + val id: Int, + val email: String, + val name: String, + val isActive: Boolean = true +) + +// Use sealed classes for state +sealed class Result<out T> { + data class Success<T>(val data: T) : Result<T>() + data class Error(val message: String, val cause: Throwable?
= null) : Result() + object Loading : Result() +} + +// Pattern matching with when +fun handleResult(result: Result) = when (result) { + is Result.Success -> showUser(result.data) + is Result.Error -> showError(result.message) + Result.Loading -> showLoading() +} +``` + +### Coroutines + +```kotlin +// Use structured concurrency +suspend fun loadDashboard(): Dashboard = coroutineScope { + val profile = async { fetchProfile() } + val stats = async { fetchStats() } + val notifications = async { fetchNotifications() } + + Dashboard( + profile = profile.await(), + stats = stats.await(), + notifications = notifications.await() + ) +} + +// Handle cancellation +suspend fun fetchWithRetry(url: String): Response { + repeat(3) { attempt -> + try { + return httpClient.get(url) + } catch (e: IOException) { + if (attempt == 2) throw e + delay(1000L * (attempt + 1)) + } + } + throw IllegalStateException("Unreachable") +} +``` diff --git a/.cursor/skills/code-reviewer/references/common_antipatterns.md b/.cursor/skills/code-reviewer/references/common_antipatterns.md new file mode 100644 index 00000000..26045452 --- /dev/null +++ b/.cursor/skills/code-reviewer/references/common_antipatterns.md @@ -0,0 +1,739 @@ +# Common Antipatterns + +Code antipatterns to identify during review, with examples and fixes. + +--- + +## Table of Contents + +- [Structural Antipatterns](#structural-antipatterns) +- [Logic Antipatterns](#logic-antipatterns) +- [Security Antipatterns](#security-antipatterns) +- [Performance Antipatterns](#performance-antipatterns) +- [Testing Antipatterns](#testing-antipatterns) +- [Async Antipatterns](#async-antipatterns) + +--- + +## Structural Antipatterns + +### God Class + +A class that does too much and knows too much. + +```typescript +// BAD: God class handling everything +class UserManager { + createUser(data: UserData) { ... } + updateUser(id: string, data: UserData) { ... } + deleteUser(id: string) { ... } + sendEmail(userId: string, content: string) { ... 
}
+  generateReport(userId: string) { ... }
+  validatePassword(password: string) { ... }
+  hashPassword(password: string) { ... }
+  uploadAvatar(userId: string, file: File) { ... }
+  resizeImage(file: File) { ... }
+  logActivity(userId: string, action: string) { ... }
+  // 50 more methods...
+}
+
+// GOOD: Single responsibility classes
+class UserRepository {
+  create(data: UserData): User { ... }
+  update(id: string, data: Partial<UserData>): User { ... }
+  delete(id: string): void { ... }
+}
+
+class EmailService {
+  send(to: string, content: string): void { ... }
+}
+
+class PasswordService {
+  validate(password: string): ValidationResult { ... }
+  hash(password: string): string { ... }
+}
+```
+
+**Detection:** Class has >20 methods, >500 lines, or handles unrelated concerns.
+
+---
+
+### Long Method
+
+Functions that do too much and are hard to understand.
+
+```python
+# BAD: Long method doing everything
+def process_order(order_data):
+    # Validate order (20 lines)
+    if not order_data.get('items'):
+        raise ValueError('No items')
+    if not order_data.get('customer_id'):
+        raise ValueError('No customer')
+    # ... more validation
+
+    # Calculate totals (30 lines)
+    subtotal = 0
+    for item in order_data['items']:
+        price = get_product_price(item['product_id'])
+        subtotal += price * item['quantity']
+    # ... tax calculation, discounts
+
+    # Process payment (40 lines)
+    payment_result = payment_gateway.charge(...)
+    # ... handle payment errors
+
+    # Create order record (20 lines)
+    order = Order.create(...)
+
+    # Send notifications (20 lines)
+    send_order_confirmation(...)
+    notify_warehouse(...)
+ + return order + +# GOOD: Composed of focused functions +def process_order(order_data): + validate_order(order_data) + totals = calculate_order_totals(order_data) + payment = process_payment(order_data['customer_id'], totals) + order = create_order_record(order_data, totals, payment) + send_order_notifications(order) + return order +``` + +**Detection:** Function >50 lines or requires scrolling to read. + +--- + +### Deep Nesting + +Excessive indentation making code hard to follow. + +```javascript +// BAD: Deep nesting +function processData(data) { + if (data) { + if (data.items) { + if (data.items.length > 0) { + for (const item of data.items) { + if (item.isValid) { + if (item.type === 'premium') { + if (item.price > 100) { + // Finally do something + processItem(item); + } + } + } + } + } + } + } +} + +// GOOD: Early returns and guard clauses +function processData(data) { + if (!data?.items?.length) { + return; + } + + const premiumItems = data.items.filter( + item => item.isValid && item.type === 'premium' && item.price > 100 + ); + + premiumItems.forEach(processItem); +} +``` + +**Detection:** Indentation >4 levels deep. + +--- + +### Magic Numbers and Strings + +Hard-coded values without explanation. + +```go +// BAD: Magic numbers +func calculateDiscount(total float64, userType int) float64 { + if userType == 1 { + return total * 0.15 + } else if userType == 2 { + return total * 0.25 + } + return total * 0.05 +} + +// GOOD: Named constants +const ( + UserTypeRegular = 1 + UserTypePremium = 2 + + DiscountRegular = 0.05 + DiscountStandard = 0.15 + DiscountPremium = 0.25 +) + +func calculateDiscount(total float64, userType int) float64 { + switch userType { + case UserTypePremium: + return total * DiscountPremium + case UserTypeRegular: + return total * DiscountStandard + default: + return total * DiscountRegular + } +} +``` + +**Detection:** Literal numbers (except 0, 1) or repeated string literals. 
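The magic-number detection rule above is mechanical enough to automate with a regex pass. A minimal sketch in the spirit of `scripts/code_quality_checker.py` (the helper name, pattern, and comment handling are illustrative, not the shipped checker):

```python
import re

# Two-or-more-digit literals not embedded in identifiers or floats.
# Pattern is an illustrative approximation, not the shipped checker's rule.
MAGIC_NUMBER = re.compile(r"(?<![\w.])\d{2,}(?![\w.])")

def find_magic_numbers(source: str) -> list:
    """Return 'line N: literal' strings for suspected magic numbers."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if line.strip().startswith(("#", "//")):  # skip whole-line comments
            continue
        for match in MAGIC_NUMBER.finditer(line):
            hits.append(f"line {lineno}: {match.group()}")
    return hits

sample = "def discount(total):\n    return total * 15 / 100\n"
print(find_magic_numbers(sample))  # → ['line 2: 15', 'line 2: 100']
```

The lookarounds keep the pass from flagging digits inside identifiers or float fractions; single digits are ignored to cut noise from 0/1-style literals.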
+
+---
+
+### Primitive Obsession
+
+Using primitives instead of small objects.
+
+```typescript
+// BAD: Primitives everywhere
+function createUser(
+  name: string,
+  email: string,
+  phone: string,
+  street: string,
+  city: string,
+  zipCode: string,
+  country: string
+): User { ... }
+
+// GOOD: Value objects
+interface Address {
+  street: string;
+  city: string;
+  zipCode: string;
+  country: string;
+}
+
+interface ContactInfo {
+  email: string;
+  phone: string;
+}
+
+function createUser(
+  name: string,
+  contact: ContactInfo,
+  address: Address
+): User { ... }
+```
+
+**Detection:** Functions with >4 parameters of same type, or related primitives always passed together.
+
+---
+
+## Logic Antipatterns
+
+### Boolean Blindness
+
+Passing booleans that make code unreadable at call sites.
+
+```swift
+// BAD: What do these booleans mean?
+user.configure(true, false, true, false)
+
+// GOOD: Named parameters or option objects
+user.configure(
+    sendWelcomeEmail: true,
+    requireVerification: false,
+    enableNotifications: true,
+    isAdmin: false
+)
+
+// Or use an options struct
+struct UserConfiguration {
+    var sendWelcomeEmail: Bool = true
+    var requireVerification: Bool = false
+    var enableNotifications: Bool = true
+    var isAdmin: Bool = false
+}
+
+user.configure(UserConfiguration())
+```
+
+**Detection:** Function calls with multiple boolean literals.
+
+---
+
+### Null Returns for Collections
+
+Returning null instead of empty collections.
+
+```kotlin
+// BAD: Returning null
+fun findUsersByRole(role: String): List<User>? {
+    val users = repository.findByRole(role)
+    return if (users.isEmpty()) null else users
+}
+
+// Caller must handle null
+val users = findUsersByRole("admin")
+if (users != null) {
+    users.forEach { ... }
+}
+
+// GOOD: Return empty collection
+fun findUsersByRole(role: String): List<User> {
+    return repository.findByRole(role)
+}
+
+// Caller can iterate directly
+findUsersByRole("admin").forEach { ...
} +``` + +**Detection:** Functions returning nullable collections. + +--- + +### Stringly Typed Code + +Using strings where enums or types should be used. + +```python +# BAD: String-based logic +def handle_event(event_type: str, data: dict): + if event_type == "user_created": + handle_user_created(data) + elif event_type == "user_updated": + handle_user_updated(data) + elif event_type == "user_dleted": # Typo won't be caught + handle_user_deleted(data) + +# GOOD: Enum-based +from enum import Enum + +class EventType(Enum): + USER_CREATED = "user_created" + USER_UPDATED = "user_updated" + USER_DELETED = "user_deleted" + +def handle_event(event_type: EventType, data: dict): + handlers = { + EventType.USER_CREATED: handle_user_created, + EventType.USER_UPDATED: handle_user_updated, + EventType.USER_DELETED: handle_user_deleted, + } + handlers[event_type](data) +``` + +**Detection:** String comparisons for type/status/category values. + +--- + +## Security Antipatterns + +### SQL Injection + +String concatenation in SQL queries. + +```javascript +// BAD: String concatenation +const query = `SELECT * FROM users WHERE id = ${userId}`; +db.query(query); + +// BAD: String templates still vulnerable +const query = `SELECT * FROM users WHERE name = '${userName}'`; + +// GOOD: Parameterized queries +const query = 'SELECT * FROM users WHERE id = $1'; +db.query(query, [userId]); + +// GOOD: Using ORM safely +User.findOne({ where: { id: userId } }); +``` + +**Detection:** String concatenation or template literals with SQL keywords. + +--- + +### Hardcoded Credentials + +Secrets in source code. 
+ +```python +# BAD: Hardcoded secrets +API_KEY = "sk-abc123xyz789" +DATABASE_URL = "postgresql://admin:password123@prod-db.internal:5432/app" + +# GOOD: Environment variables +import os + +API_KEY = os.environ["API_KEY"] +DATABASE_URL = os.environ["DATABASE_URL"] + +# GOOD: Secrets manager +from aws_secretsmanager import get_secret + +API_KEY = get_secret("api-key") +``` + +**Detection:** Variables named `password`, `secret`, `key`, `token` with string literals. + +--- + +### Unsafe Deserialization + +Deserializing untrusted data without validation. + +```python +# BAD: Binary serialization from untrusted source can execute arbitrary code +# Examples: Python's binary serialization, yaml.load without SafeLoader + +# GOOD: Use safe alternatives +import json + +def load_data(file_path): + with open(file_path, 'r') as f: + return json.load(f) + +# GOOD: Use SafeLoader for YAML +import yaml + +with open('config.yaml') as f: + config = yaml.safe_load(f) +``` + +**Detection:** Binary deserialization functions, yaml.load without safe loader, dynamic code execution on external data. + +--- + +### Missing Input Validation + +Trusting user input without validation. + +```typescript +// BAD: No validation +app.post('/user', (req, res) => { + const user = db.create({ + name: req.body.name, + email: req.body.email, + role: req.body.role // User can set themselves as admin! + }); + res.json(user); +}); + +// GOOD: Validate and sanitize +import { z } from 'zod'; + +const CreateUserSchema = z.object({ + name: z.string().min(1).max(100), + email: z.string().email(), + // role is NOT accepted from input +}); + +app.post('/user', (req, res) => { + const validated = CreateUserSchema.parse(req.body); + const user = db.create({ + ...validated, + role: 'user' // Default role, not from input + }); + res.json(user); +}); +``` + +**Detection:** Request body/params used directly without validation schema. 
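Each of the security antipatterns above can be pre-screened with a cheap regex pass over a diff's added lines before a human looks at it. A minimal sketch (patterns abbreviated and names hypothetical; the real `pr_analyzer.py` carries a larger pattern list):

```python
import re

# Abbreviated versions of two checks; names and severities are illustrative.
SECURITY_CHECKS = [
    ("hardcoded_secret",
     re.compile(r"(password|secret|api_key|token)\s*[=:]\s*['\"][^'\"]+['\"]", re.IGNORECASE)),
    ("sql_concatenation",
     re.compile(r"(SELECT|INSERT|UPDATE|DELETE)[^\n]*\+[^\n]*['\"]", re.IGNORECASE)),
]

def scan_diff(diff_text: str) -> list:
    """Return (check_name, offending_line) pairs for added lines only."""
    findings = []
    for line in diff_text.splitlines():
        if not line.startswith("+") or line.startswith("+++"):
            continue  # only inspect added lines, skip file headers
        for name, pattern in SECURITY_CHECKS:
            if pattern.search(line):
                findings.append((name, line[1:].strip()))
    return findings

diff = (
    '+API_KEY = "sk-live-123"\n'
    "-old_query = None\n"
    "+query = \"SELECT * FROM users WHERE name='\" + name + \"'\"\n"
)
for name, line in scan_diff(diff):
    print(name, "->", line)
```

Regex screening is deliberately noisy: it flags candidates for the reviewer rather than proving a vulnerability, so false positives on removed or test code are acceptable as long as added secrets and concatenated queries never slip through silently.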
+ +--- + +## Performance Antipatterns + +### N+1 Query Problem + +Loading related data one record at a time. + +```python +# BAD: N+1 queries +def get_orders_with_items(): + orders = Order.query.all() # 1 query + for order in orders: + items = OrderItem.query.filter_by(order_id=order.id).all() # N queries + order.items = items + return orders + +# GOOD: Eager loading +def get_orders_with_items(): + return Order.query.options( + joinedload(Order.items) + ).all() # 1 query with JOIN + +# GOOD: Batch loading +def get_orders_with_items(): + orders = Order.query.all() + order_ids = [o.id for o in orders] + items = OrderItem.query.filter( + OrderItem.order_id.in_(order_ids) + ).all() # 2 queries total + # Group items by order_id... +``` + +**Detection:** Database queries inside loops. + +--- + +### Unbounded Collections + +Loading unlimited data into memory. + +```go +// BAD: Load all records +func GetAllUsers() ([]User, error) { + return db.Find(&[]User{}) // Could be millions +} + +// GOOD: Pagination +func GetUsers(page, pageSize int) ([]User, error) { + offset := (page - 1) * pageSize + return db.Limit(pageSize).Offset(offset).Find(&[]User{}) +} + +// GOOD: Streaming for large datasets +func ProcessAllUsers(handler func(User) error) error { + rows, err := db.Model(&User{}).Rows() + if err != nil { + return err + } + defer rows.Close() + + for rows.Next() { + var user User + db.ScanRows(rows, &user) + if err := handler(user); err != nil { + return err + } + } + return nil +} +``` + +**Detection:** `findAll()`, `find({})`, or queries without `LIMIT`. + +--- + +### Synchronous I/O in Hot Paths + +Blocking operations in request handlers. 
+ +```javascript +// BAD: Sync file read on every request +app.get('/config', (req, res) => { + const config = fs.readFileSync('./config.json'); // Blocks event loop + res.json(JSON.parse(config)); +}); + +// GOOD: Load once at startup +const config = JSON.parse(fs.readFileSync('./config.json')); + +app.get('/config', (req, res) => { + res.json(config); +}); + +// GOOD: Async with caching +let configCache = null; + +app.get('/config', async (req, res) => { + if (!configCache) { + configCache = JSON.parse(await fs.promises.readFile('./config.json')); + } + res.json(configCache); +}); +``` + +**Detection:** `readFileSync`, `execSync`, or blocking calls in request handlers. + +--- + +## Testing Antipatterns + +### Test Code Duplication + +Repeating setup in every test. + +```typescript +// BAD: Duplicate setup +describe('UserService', () => { + it('should create user', async () => { + const db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + const service = new UserService(userRepo, emailService); + + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); + + it('should update user', async () => { + const db = await createTestDatabase(); // Duplicated + const userRepo = new UserRepository(db); // Duplicated + const emailService = new MockEmailService(); // Duplicated + const service = new UserService(userRepo, emailService); // Duplicated + + // ... 
+ }); +}); + +// GOOD: Shared setup +describe('UserService', () => { + let service: UserService; + let db: TestDatabase; + + beforeEach(async () => { + db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + service = new UserService(userRepo, emailService); + }); + + afterEach(async () => { + await db.cleanup(); + }); + + it('should create user', async () => { + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); +}); +``` + +--- + +### Testing Implementation Instead of Behavior + +Tests coupled to internal implementation. + +```python +# BAD: Testing implementation details +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing internal structure + assert cart._items[0].name == "Apple" + assert cart._total == 1.00 + +# GOOD: Testing behavior +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing public behavior + assert cart.item_count == 1 + assert cart.total == 1.00 + assert cart.contains("Apple") +``` + +--- + +## Async Antipatterns + +### Floating Promises + +Promises without await or catch. + +```typescript +// BAD: Floating promise +async function saveUser(user: User) { + db.save(user); // Not awaited, errors lost + logger.info('User saved'); // Logs before save completes +} + +// BAD: Fire and forget in loop +for (const item of items) { + processItem(item); // All run in parallel, no error handling +} + +// GOOD: Await the promise +async function saveUser(user: User) { + await db.save(user); + logger.info('User saved'); +} + +// GOOD: Process with proper handling +await Promise.all(items.map(item => processItem(item))); + +// Or sequentially +for (const item of items) { + await processItem(item); +} +``` + +**Detection:** Async function calls without `await` or `.then()`. + +--- + +### Callback Hell + +Deeply nested callbacks. 
+
+```javascript
+// BAD: Callback hell
+getUser(userId, (err, user) => {
+  if (err) return handleError(err);
+  getOrders(user.id, (err, orders) => {
+    if (err) return handleError(err);
+    getProducts(orders[0].productIds, (err, products) => {
+      if (err) return handleError(err);
+      renderPage(user, orders, products, (err) => {
+        if (err) return handleError(err);
+        console.log('Done');
+      });
+    });
+  });
+});
+
+// GOOD: Async/await
+async function loadPage(userId) {
+  try {
+    const user = await getUser(userId);
+    const orders = await getOrders(user.id);
+    const products = await getProducts(orders[0].productIds);
+    await renderPage(user, orders, products);
+    console.log('Done');
+  } catch (err) {
+    handleError(err);
+  }
+}
+```
+
+**Detection:** >2 levels of callback nesting.
+
+---
+
+### Async in Constructor
+
+Async operations in constructors.
+
+```typescript
+// BAD: Async in constructor
+class DatabaseConnection {
+  constructor(url: string) {
+    this.connect(url); // Fire-and-forget async
+  }
+
+  private async connect(url: string) {
+    this.client = await createClient(url);
+  }
+}
+
+// GOOD: Factory method
+class DatabaseConnection {
+  private constructor(private client: Client) {}
+
+  static async create(url: string): Promise<DatabaseConnection> {
+    const client = await createClient(url);
+    return new DatabaseConnection(client);
+  }
+}
+
+// Usage
+const db = await DatabaseConnection.create(url);
+```
+
+**Detection:** `async` calls or `.then()` in constructor.
diff --git a/.cursor/skills/code-reviewer/scripts/code_quality_checker.py b/.cursor/skills/code-reviewer/scripts/code_quality_checker.py
new file mode 100644
index 00000000..2a420a92
--- /dev/null
+++ b/.cursor/skills/code-reviewer/scripts/code_quality_checker.py
@@ -0,0 +1,560 @@
+#!/usr/bin/env python3
+"""
+Code Quality Checker
+
+Analyzes source code for quality issues, code smells, complexity metrics,
+and SOLID principle violations.
+ +Usage: + python .cursor/skills/code-reviewer/scripts/code_quality_checker.py /path/to/file.py + python .cursor/skills/code-reviewer/scripts/code_quality_checker.py /path/to/directory --recursive + python .cursor/skills/code-reviewer/scripts/code_quality_checker.py . --language typescript --json +""" + +import argparse +import json +import re +import sys +from pathlib import Path +from typing import Dict, List, Optional + + +# Language-specific file extensions +LANGUAGE_EXTENSIONS = { + "python": [".py"], + "typescript": [".ts", ".tsx"], + "javascript": [".js", ".jsx", ".mjs"], + "go": [".go"], + "swift": [".swift"], + "kotlin": [".kt", ".kts"] +} + +# Code smell thresholds +THRESHOLDS = { + "long_function_lines": 50, + "too_many_parameters": 5, + "high_complexity": 10, + "god_class_methods": 20, + "max_imports": 15 +} + + +def get_file_extension(filepath: Path) -> str: + """Get file extension.""" + return filepath.suffix.lower() + + +def detect_language(filepath: Path) -> Optional[str]: + """Detect programming language from file extension.""" + ext = get_file_extension(filepath) + for lang, extensions in LANGUAGE_EXTENSIONS.items(): + if ext in extensions: + return lang + return None + + +def read_file_content(filepath: Path) -> str: + """Read file content safely.""" + try: + with open(filepath, "r", encoding="utf-8", errors="ignore") as f: + return f.read() + except Exception: + return "" + + +def calculate_cyclomatic_complexity(content: str) -> int: + """ + Estimate cyclomatic complexity based on control flow keywords. 
+ """ + complexity = 1 # Base complexity + + # Control flow patterns that increase complexity + patterns = [ + r"\bif\b", + r"\belif\b", + r"\belse\b", + r"\bfor\b", + r"\bwhile\b", + r"\bcase\b", + r"\bcatch\b", + r"\bexcept\b", + r"\band\b", + r"\bor\b", + r"\|\|", + r"&&" + ] + + for pattern in patterns: + matches = re.findall(pattern, content, re.IGNORECASE) + complexity += len(matches) + + return complexity + + +def count_lines(content: str) -> Dict[str, int]: + """Count different types of lines in code.""" + lines = content.split("\n") + total = len(lines) + blank = sum(1 for line in lines if not line.strip()) + comment = 0 + + for line in lines: + stripped = line.strip() + if stripped.startswith("#") or stripped.startswith("//"): + comment += 1 + elif stripped.startswith("/*") or stripped.startswith("'''") or stripped.startswith('"""'): + comment += 1 + + code = total - blank - comment + + return { + "total": total, + "code": code, + "blank": blank, + "comment": comment + } + + +def find_functions(content: str, language: str) -> List[Dict]: + """Find function definitions and their metrics.""" + functions = [] + + # Language-specific function patterns + patterns = { + "python": r"def\s+(\w+)\s*\(([^)]*)\)", + "typescript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "javascript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "go": r"func\s+(?:\([^)]+\)\s+)?(\w+)\s*\(([^)]*)\)", + "swift": r"func\s+(\w+)\s*\(([^)]*)\)", + "kotlin": r"fun\s+(\w+)\s*\(([^)]*)\)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content, re.MULTILINE) + + for match in matches: + name = next((g for g in match.groups() if g), "anonymous") + params_str = match.group(2) if len(match.groups()) > 1 and match.group(2) else "" + + # Count parameters + params = [p.strip() for p in params_str.split(",") if p.strip()] + param_count = len(params) + + # Estimate 
function length + start_pos = match.end() + remaining = content[start_pos:] + + next_func = re.search(pattern, remaining) + if next_func: + func_body = remaining[:next_func.start()] + else: + func_body = remaining[:min(2000, len(remaining))] + + line_count = len(func_body.split("\n")) + complexity = calculate_cyclomatic_complexity(func_body) + + functions.append({ + "name": name, + "parameters": param_count, + "lines": line_count, + "complexity": complexity + }) + + return functions + + +def find_classes(content: str, language: str) -> List[Dict]: + """Find class definitions and their metrics.""" + classes = [] + + patterns = { + "python": r"class\s+(\w+)", + "typescript": r"class\s+(\w+)", + "javascript": r"class\s+(\w+)", + "go": r"type\s+(\w+)\s+struct", + "swift": r"class\s+(\w+)", + "kotlin": r"class\s+(\w+)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content) + + for match in matches: + name = match.group(1) + + start_pos = match.end() + remaining = content[start_pos:] + + next_class = re.search(pattern, remaining) + if next_class: + class_body = remaining[:next_class.start()] + else: + class_body = remaining + + # Count methods + method_patterns = { + "python": r"def\s+\w+\s*\(", + "typescript": r"(?:public|private|protected)?\s*\w+\s*\([^)]*\)\s*[:{]", + "javascript": r"\w+\s*\([^)]*\)\s*\{", + "go": r"func\s+\(", + "swift": r"func\s+\w+", + "kotlin": r"fun\s+\w+" + } + method_pattern = method_patterns.get(language, method_patterns["python"]) + methods = len(re.findall(method_pattern, class_body)) + + classes.append({ + "name": name, + "methods": methods, + "lines": len(class_body.split("\n")) + }) + + return classes + + +def check_code_smells(content: str, functions: List[Dict], classes: List[Dict]) -> List[Dict]: + """Check for code smells in the content.""" + smells = [] + + # Long functions + for func in functions: + if func["lines"] > THRESHOLDS["long_function_lines"]: + smells.append({ + "type": 
"long_function",
+                "severity": "medium",
+                "message": f"Function '{func['name']}' has {func['lines']} lines (max: {THRESHOLDS['long_function_lines']})",
+                "location": func["name"]
+            })
+
+    # Too many parameters
+    for func in functions:
+        if func["parameters"] > THRESHOLDS["too_many_parameters"]:
+            smells.append({
+                "type": "too_many_parameters",
+                "severity": "low",
+                "message": f"Function '{func['name']}' has {func['parameters']} parameters (max: {THRESHOLDS['too_many_parameters']})",
+                "location": func["name"]
+            })
+
+    # High complexity
+    for func in functions:
+        if func["complexity"] > THRESHOLDS["high_complexity"]:
+            severity = "high" if func["complexity"] > 20 else "medium"
+            smells.append({
+                "type": "high_complexity",
+                "severity": severity,
+                "message": f"Function '{func['name']}' has complexity {func['complexity']} (max: {THRESHOLDS['high_complexity']})",
+                "location": func["name"]
+            })
+
+    # God classes
+    for cls in classes:
+        if cls["methods"] > THRESHOLDS["god_class_methods"]:
+            smells.append({
+                "type": "god_class",
+                "severity": "high",
+                "message": f"Class '{cls['name']}' has {cls['methods']} methods (max: {THRESHOLDS['god_class_methods']})",
+                "location": cls["name"]
+            })
+
+    # Magic numbers
+    magic_numbers = len(re.findall(r"(?<![\w.])\d{2,}(?![\w.])", content))
+    if magic_numbers > 5:
+        smells.append({
+            "type": "magic_numbers",
+            "severity": "low",
+            "message": f"Found {magic_numbers} potential magic numbers - consider named constants",
+            "location": "file"
+        })
+
+    return smells
+
+
+def check_solid_violations(content: str) ->
List[Dict]: + """Check for potential SOLID principle violations.""" + violations = [] + + # OCP: Type checking instead of polymorphism + type_checks = len(re.findall(r"isinstance\(|type\(.*\)\s*==|typeof\s+\w+\s*===", content)) + if type_checks > 2: + violations.append({ + "principle": "OCP", + "name": "Open/Closed Principle", + "severity": "medium", + "message": f"Found {type_checks} type checks - consider using polymorphism" + }) + + # LSP/ISP: NotImplementedError + not_impl = len(re.findall(r"raise\s+NotImplementedError|not\s+implemented", content, re.IGNORECASE)) + if not_impl: + violations.append({ + "principle": "LSP/ISP", + "name": "Liskov/Interface Segregation", + "severity": "low", + "message": f"Found {not_impl} unimplemented methods - may indicate oversized interface" + }) + + # DIP: Too many direct imports + imports = len(re.findall(r"^(?:import|from)\s+", content, re.MULTILINE)) + if imports > THRESHOLDS["max_imports"]: + violations.append({ + "principle": "DIP", + "name": "Dependency Inversion Principle", + "severity": "low", + "message": f"File has {imports} imports - consider dependency injection" + }) + + return violations + + +def calculate_quality_score( + line_metrics: Dict, + functions: List[Dict], + classes: List[Dict], + smells: List[Dict], + violations: List[Dict] +) -> int: + """Calculate overall quality score (0-100).""" + score = 100 + + # Deduct for code smells + for smell in smells: + if smell["severity"] == "high": + score -= 10 + elif smell["severity"] == "medium": + score -= 5 + elif smell["severity"] == "low": + score -= 2 + + # Deduct for SOLID violations + for violation in violations: + if violation["severity"] == "high": + score -= 8 + elif violation["severity"] == "medium": + score -= 4 + elif violation["severity"] == "low": + score -= 2 + + # Bonus for good comment ratio (10-30%) + if line_metrics["total"] > 0: + comment_ratio = line_metrics["comment"] / line_metrics["total"] + if 0.1 <= comment_ratio <= 0.3: + score += 5 + + # 
Bonus for reasonable function sizes + if functions: + avg_lines = sum(f["lines"] for f in functions) / len(functions) + if avg_lines < 30: + score += 5 + + return max(0, min(100, score)) + + +def get_grade(score: int) -> str: + """Convert score to letter grade.""" + if score >= 90: + return "A" + elif score >= 80: + return "B" + elif score >= 70: + return "C" + elif score >= 60: + return "D" + else: + return "F" + + +def analyze_file(filepath: Path) -> Dict: + """Analyze a single file for code quality.""" + language = detect_language(filepath) + if not language: + return {"error": f"Unsupported file type: {filepath.suffix}"} + + content = read_file_content(filepath) + if not content: + return {"error": f"Could not read file: {filepath}"} + + line_metrics = count_lines(content) + functions = find_functions(content, language) + classes = find_classes(content, language) + smells = check_code_smells(content, functions, classes) + violations = check_solid_violations(content) + score = calculate_quality_score(line_metrics, functions, classes, smells, violations) + + return { + "file": str(filepath), + "language": language, + "metrics": { + "lines": line_metrics, + "functions": len(functions), + "classes": len(classes), + "avg_complexity": round(sum(f["complexity"] for f in functions) / max(1, len(functions)), 1) + }, + "quality_score": score, + "grade": get_grade(score), + "smells": smells, + "solid_violations": violations, + "function_details": functions[:10], + "class_details": classes[:10] + } + + +def analyze_directory( + dir_path: Path, + recursive: bool = True, + language: Optional[str] = None +) -> Dict: + """Analyze all files in a directory.""" + results = [] + extensions = [] + + if language: + extensions = LANGUAGE_EXTENSIONS.get(language, []) + else: + for exts in LANGUAGE_EXTENSIONS.values(): + extensions.extend(exts) + + pattern = "**/*" if recursive else "*" + + for ext in extensions: + for filepath in dir_path.glob(f"{pattern}{ext}"): + if "node_modules" 
in str(filepath) or ".git" in str(filepath): + continue + result = analyze_file(filepath) + if "error" not in result: + results.append(result) + + if not results: + return {"error": "No supported files found"} + + total_score = sum(r["quality_score"] for r in results) + avg_score = total_score / len(results) + total_smells = sum(len(r["smells"]) for r in results) + total_violations = sum(len(r["solid_violations"]) for r in results) + + return { + "directory": str(dir_path), + "files_analyzed": len(results), + "average_score": round(avg_score, 1), + "overall_grade": get_grade(int(avg_score)), + "total_code_smells": total_smells, + "total_solid_violations": total_violations, + "files": sorted(results, key=lambda x: x["quality_score"]) + } + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if "error" in analysis: + print(f"Error: {analysis['error']}") + return + + print("=" * 60) + print("CODE QUALITY REPORT") + print("=" * 60) + + if "file" in analysis: + print(f"\nFile: {analysis['file']}") + print(f"Language: {analysis['language']}") + print(f"Quality Score: {analysis['quality_score']}/100 ({analysis['grade']})") + + metrics = analysis["metrics"] + print(f"\nLines: {metrics['lines']['total']} ({metrics['lines']['code']} code, {metrics['lines']['comment']} comments)") + print(f"Functions: {metrics['functions']}") + print(f"Classes: {metrics['classes']}") + print(f"Avg Complexity: {metrics['avg_complexity']}") + + if analysis["smells"]: + print("\n--- CODE SMELLS ---") + for smell in analysis["smells"][:10]: + print(f" [{smell['severity'].upper()}] {smell['message']} ({smell['location']})") + + if analysis["solid_violations"]: + print("\n--- SOLID VIOLATIONS ---") + for v in analysis["solid_violations"]: + print(f" [{v['principle']}] {v['message']}") + else: + print(f"\nDirectory: {analysis['directory']}") + print(f"Files Analyzed: {analysis['files_analyzed']}") + print(f"Average Score: {analysis['average_score']}/100 
({analysis['overall_grade']})") + print(f"Total Code Smells: {analysis['total_code_smells']}") + print(f"Total SOLID Violations: {analysis['total_solid_violations']}") + + print("\n--- FILES BY QUALITY ---") + for f in analysis["files"][:10]: + print(f" {f['quality_score']:3d}/100 [{f['grade']}] {f['file']}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze code quality, smells, and SOLID violations" + ) + parser.add_argument( + "path", + help="File or directory to analyze" + ) + parser.add_argument( + "--recursive", "-r", + action="store_true", + default=True, + help="Recursively analyze directories (default: true)" + ) + parser.add_argument( + "--language", "-l", + choices=list(LANGUAGE_EXTENSIONS.keys()), + help="Filter by programming language" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + target = Path(args.path).resolve() + + if not target.exists(): + print(f"Error: Path does not exist: {target}", file=sys.stderr) + sys.exit(1) + + if target.is_file(): + analysis = analyze_file(target) + else: + analysis = analyze_directory(target, args.recursive, args.language) + + if args.json: + output = json.dumps(analysis, indent=2, default=str) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.cursor/skills/code-reviewer/scripts/pr_analyzer.py b/.cursor/skills/code-reviewer/scripts/pr_analyzer.py new file mode 100644 index 00000000..9145adcc --- /dev/null +++ b/.cursor/skills/code-reviewer/scripts/pr_analyzer.py @@ -0,0 +1,495 @@ +#!/usr/bin/env python3 +""" +PR Analyzer + +Analyzes pull request changes for review complexity, risk assessment, +and generates review priorities. 
+ +Usage: + python .cursor/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo + python .cursor/skills/code-reviewer/scripts/pr_analyzer.py . --base main --head feature-branch + python .cursor/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo --json +""" + +import argparse +import json +import os +import re +import subprocess +import sys +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# File categories for review prioritization +FILE_CATEGORIES = { + "critical": { + "patterns": [ + r"auth", r"security", r"password", r"token", r"secret", + r"payment", r"billing", r"crypto", r"encrypt" + ], + "weight": 5, + "description": "Security-sensitive files requiring careful review" + }, + "high": { + "patterns": [ + r"api", r"database", r"migration", r"schema", r"model", + r"config", r"env", r"middleware" + ], + "weight": 4, + "description": "Core infrastructure files" + }, + "medium": { + "patterns": [ + r"service", r"controller", r"handler", r"util", r"helper" + ], + "weight": 3, + "description": "Business logic files" + }, + "low": { + "patterns": [ + r"test", r"spec", r"mock", r"fixture", r"story", + r"readme", r"docs", r"\.md$" + ], + "weight": 1, + "description": "Tests and documentation" + } +} + +# Risky patterns to flag +RISK_PATTERNS = [ + { + "name": "hardcoded_secrets", + "pattern": r"(password|secret|api_key|token)\s*[=:]\s*['\"][^'\"]+['\"]", + "severity": "critical", + "message": "Potential hardcoded secret detected" + }, + { + "name": "todo_fixme", + "pattern": r"(TODO|FIXME|HACK|XXX):", + "severity": "low", + "message": "TODO/FIXME comment found" + }, + { + "name": "console_log", + "pattern": r"console\.(log|debug|info|warn|error)\(", + "severity": "medium", + "message": "Console statement found (remove for production)" + }, + { + "name": "debugger", + "pattern": r"\bdebugger\b", + "severity": "high", + "message": "Debugger statement found" + }, + { + "name": "disable_eslint", + "pattern": r"eslint-disable", + 
"severity": "medium", + "message": "ESLint rule disabled" + }, + { + "name": "any_type", + "pattern": r":\s*any\b", + "severity": "medium", + "message": "TypeScript 'any' type used" + }, + { + "name": "sql_concatenation", + "pattern": r"(SELECT|INSERT|UPDATE|DELETE).*\+.*['\"]", + "severity": "critical", + "message": "Potential SQL injection (string concatenation in query)" + } +] + + +def run_git_command(cmd: List[str], cwd: Path) -> Tuple[bool, str]: + """Run a git command and return success status and output.""" + try: + result = subprocess.run( + cmd, + cwd=cwd, + capture_output=True, + text=True, + timeout=30 + ) + return result.returncode == 0, result.stdout.strip() + except subprocess.TimeoutExpired: + return False, "Command timed out" + except Exception as e: + return False, str(e) + + +def get_changed_files(repo_path: Path, base: str, head: str) -> List[Dict]: + """Get list of changed files between two refs.""" + success, output = run_git_command( + ["git", "diff", "--name-status", f"{base}...{head}"], + repo_path + ) + + if not success: + # Try without the triple dot (for uncommitted changes) + success, output = run_git_command( + ["git", "diff", "--name-status", base, head], + repo_path + ) + + if not success or not output: + # Fall back to staged changes + success, output = run_git_command( + ["git", "diff", "--name-status", "--cached"], + repo_path + ) + + files = [] + for line in output.split("\n"): + if not line.strip(): + continue + parts = line.split("\t") + if len(parts) >= 2: + status = parts[0][0] # First character of status + filepath = parts[-1] # Handle renames (R100\told\tnew) + status_map = { + "A": "added", + "M": "modified", + "D": "deleted", + "R": "renamed", + "C": "copied" + } + files.append({ + "path": filepath, + "status": status_map.get(status, "modified") + }) + + return files + + +def get_file_diff(repo_path: Path, filepath: str, base: str, head: str) -> str: + """Get diff content for a specific file.""" + success, output = 
run_git_command( + ["git", "diff", f"{base}...{head}", "--", filepath], + repo_path + ) + if not success: + success, output = run_git_command( + ["git", "diff", "--cached", "--", filepath], + repo_path + ) + return output if success else "" + + +def categorize_file(filepath: str) -> Tuple[str, int]: + """Categorize a file based on its path and name.""" + filepath_lower = filepath.lower() + + for category, info in FILE_CATEGORIES.items(): + for pattern in info["patterns"]: + if re.search(pattern, filepath_lower): + return category, info["weight"] + + return "medium", 2 # Default category + + +def analyze_diff_for_risks(diff_content: str, filepath: str) -> List[Dict]: + """Analyze diff content for risky patterns.""" + risks = [] + + # Only analyze added lines (starting with +) + added_lines = [ + line[1:] for line in diff_content.split("\n") + if line.startswith("+") and not line.startswith("+++") + ] + + content = "\n".join(added_lines) + + for risk in RISK_PATTERNS: + matches = re.findall(risk["pattern"], content, re.IGNORECASE) + if matches: + risks.append({ + "name": risk["name"], + "severity": risk["severity"], + "message": risk["message"], + "file": filepath, + "count": len(matches) + }) + + return risks + + +def count_changes(diff_content: str) -> Dict[str, int]: + """Count additions and deletions in diff.""" + additions = 0 + deletions = 0 + + for line in diff_content.split("\n"): + if line.startswith("+") and not line.startswith("+++"): + additions += 1 + elif line.startswith("-") and not line.startswith("---"): + deletions += 1 + + return {"additions": additions, "deletions": deletions} + + +def calculate_complexity_score(files: List[Dict], all_risks: List[Dict]) -> int: + """Calculate overall PR complexity score (1-10).""" + score = 0 + + # File count contribution (max 3 points) + file_count = len(files) + if file_count > 20: + score += 3 + elif file_count > 10: + score += 2 + elif file_count > 5: + score += 1 + + # Total changes contribution (max 3 
points)
+    total_changes = sum(f.get("additions", 0) + f.get("deletions", 0) for f in files)
+    if total_changes > 500:
+        score += 3
+    elif total_changes > 200:
+        score += 2
+    elif total_changes > 50:
+        score += 1
+
+    # Risk severity contribution (max 4 points)
+    critical_risks = sum(1 for r in all_risks if r["severity"] == "critical")
+    high_risks = sum(1 for r in all_risks if r["severity"] == "high")
+
+    score += min(2, critical_risks)
+    score += min(2, high_risks)
+
+    return min(10, max(1, score))
+
+
+def analyze_commit_messages(repo_path: Path, base: str, head: str) -> Dict:
+    """Analyze commit messages in the PR."""
+    success, output = run_git_command(
+        ["git", "log", "--oneline", f"{base}...{head}"],
+        repo_path
+    )
+
+    if not success or not output:
+        return {"commits": 0, "issues": []}
+
+    commits = output.strip().split("\n")
+    issues = []
+
+    for commit in commits:
+        if len(commit) < 10:
+            continue
+
+        # Split "abc1234 message" into hash and message; abbreviated
+        # hashes can be longer than 7 characters in large repos
+        commit_hash, _, message = commit.partition(" ")
+        if not message:
+            message = commit
+
+        # Check for conventional commit format
+        if not re.match(r"^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\(.+\))?:", message):
+            issues.append({
+                "commit": commit_hash,
+                "issue": "Does not follow conventional commit format"
+            })
+
+        if len(message) > 72:
+            issues.append({
+                "commit": commit_hash,
+                "issue": "Commit message exceeds 72 characters"
+            })
+
+    return {
+        "commits": len(commits),
+        "issues": issues
+    }
+
+
+def analyze_pr(
+    repo_path: Path,
+    base: str = "main",
+    head: str = "HEAD"
+) -> Dict:
+    """Perform complete PR analysis."""
+    # Get changed files
+    changed_files = get_changed_files(repo_path, base, head)
+
+    if not changed_files:
+        return {
+            "status": "no_changes",
+            "message": "No changes detected between branches"
+        }
+
+    # Analyze each file
+    all_risks = []
+    file_analyses = []
+
+    for file_info in changed_files:
+        filepath = file_info["path"]
+        category, weight = categorize_file(filepath)
+
+        # Get diff for the file
+        diff = get_file_diff(repo_path, 
filepath, base, head) + changes = count_changes(diff) + risks = analyze_diff_for_risks(diff, filepath) + + all_risks.extend(risks) + + file_analyses.append({ + "path": filepath, + "status": file_info["status"], + "category": category, + "priority_weight": weight, + "additions": changes["additions"], + "deletions": changes["deletions"], + "risks": risks + }) + + # Sort by priority (highest first) + file_analyses.sort(key=lambda x: (-x["priority_weight"], x["path"])) + + # Analyze commits + commit_analysis = analyze_commit_messages(repo_path, base, head) + + # Calculate metrics + complexity = calculate_complexity_score(file_analyses, all_risks) + + total_additions = sum(f["additions"] for f in file_analyses) + total_deletions = sum(f["deletions"] for f in file_analyses) + + return { + "status": "analyzed", + "summary": { + "files_changed": len(file_analyses), + "total_additions": total_additions, + "total_deletions": total_deletions, + "complexity_score": complexity, + "complexity_label": get_complexity_label(complexity), + "commits": commit_analysis["commits"] + }, + "risks": { + "critical": [r for r in all_risks if r["severity"] == "critical"], + "high": [r for r in all_risks if r["severity"] == "high"], + "medium": [r for r in all_risks if r["severity"] == "medium"], + "low": [r for r in all_risks if r["severity"] == "low"] + }, + "files": file_analyses, + "commit_issues": commit_analysis["issues"], + "review_order": [f["path"] for f in file_analyses[:10]] # Top 10 priority files + } + + +def get_complexity_label(score: int) -> str: + """Get human-readable complexity label.""" + if score <= 2: + return "Simple" + elif score <= 4: + return "Moderate" + elif score <= 6: + return "Complex" + elif score <= 8: + return "Very Complex" + else: + return "Critical" + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if analysis["status"] == "no_changes": + print("No changes detected.") + return + + summary = analysis["summary"] + 
risks = analysis["risks"] + + print("=" * 60) + print("PR ANALYSIS REPORT") + print("=" * 60) + + print(f"\nComplexity: {summary['complexity_score']}/10 ({summary['complexity_label']})") + print(f"Files Changed: {summary['files_changed']}") + print(f"Lines: +{summary['total_additions']} / -{summary['total_deletions']}") + print(f"Commits: {summary['commits']}") + + # Risk summary + print("\n--- RISK SUMMARY ---") + print(f"Critical: {len(risks['critical'])}") + print(f"High: {len(risks['high'])}") + print(f"Medium: {len(risks['medium'])}") + print(f"Low: {len(risks['low'])}") + + # Critical and high risks details + if risks["critical"]: + print("\n--- CRITICAL RISKS ---") + for risk in risks["critical"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + if risks["high"]: + print("\n--- HIGH RISKS ---") + for risk in risks["high"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + # Commit message issues + if analysis["commit_issues"]: + print("\n--- COMMIT MESSAGE ISSUES ---") + for issue in analysis["commit_issues"][:5]: + print(f" {issue['commit']}: {issue['issue']}") + + # Review order + print("\n--- SUGGESTED REVIEW ORDER ---") + for i, filepath in enumerate(analysis["review_order"], 1): + file_info = next(f for f in analysis["files"] if f["path"] == filepath) + print(f" {i}. 
[{file_info['category'].upper()}] {filepath}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze pull request for review complexity and risks" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to git repository (default: current directory)" + ) + parser.add_argument( + "--base", "-b", + default="main", + help="Base branch for comparison (default: main)" + ) + parser.add_argument( + "--head", + default="HEAD", + help="Head branch/commit for comparison (default: HEAD)" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + + if not (repo_path / ".git").exists(): + print(f"Error: {repo_path} is not a git repository", file=sys.stderr) + sys.exit(1) + + analysis = analyze_pr(repo_path, args.base, args.head) + + if args.json: + output = json.dumps(analysis, indent=2) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.cursor/skills/code-reviewer/scripts/review_report_generator.py b/.cursor/skills/code-reviewer/scripts/review_report_generator.py new file mode 100644 index 00000000..a9e38910 --- /dev/null +++ b/.cursor/skills/code-reviewer/scripts/review_report_generator.py @@ -0,0 +1,505 @@ +#!/usr/bin/env python3 +""" +Review Report Generator + +Generates comprehensive code review reports by combining PR analysis +and code quality findings into structured, actionable reports. + +Usage: + python .cursor/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo + python .cursor/skills/code-reviewer/scripts/review_report_generator.py . 
--pr-analysis pr_results.json --quality-analysis quality_results.json
+    python .cursor/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo --format markdown --output review.md
+"""
+
+import argparse
+import json
+import subprocess
+import sys
+from datetime import datetime
+from pathlib import Path
+from typing import Dict, List, Optional, Tuple
+
+
+# Severity weights for prioritization
+SEVERITY_WEIGHTS = {
+    "critical": 100,
+    "high": 75,
+    "medium": 50,
+    "low": 25,
+    "info": 10
+}
+
+# Review verdict thresholds
+VERDICT_THRESHOLDS = {
+    "approve": {"max_critical": 0, "max_high": 0, "max_score": 100},
+    "approve_with_suggestions": {"max_critical": 0, "max_high": 2, "max_score": 85},
+    "request_changes": {"max_critical": 0, "max_high": 5, "max_score": 70},
+    "block": {"max_critical": float("inf"), "max_high": float("inf"), "max_score": 0}
+}
+
+
+def load_json_file(filepath: str) -> Optional[Dict]:
+    """Load JSON file if it exists."""
+    try:
+        with open(filepath, "r") as f:
+            return json.load(f)
+    except (FileNotFoundError, json.JSONDecodeError):
+        return None
+
+
+def run_pr_analyzer(repo_path: Path) -> Dict:
+    """Run the sibling pr_analyzer.py script and return its JSON results."""
+    script_path = Path(__file__).parent / "pr_analyzer.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "pr_analyzer.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=120
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def run_quality_checker(repo_path: Path) -> Dict:
+    """Run the sibling code_quality_checker.py script and return its JSON results."""
+    script_path = Path(__file__).parent / "code_quality_checker.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "code_quality_checker.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=300
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def calculate_review_score(pr_analysis: Dict, quality_analysis: Dict) -> int:
+    """Calculate overall review score (0-100)."""
+    score = 100
+
+    # Deduct for PR risks
+    if "risks" in pr_analysis:
+        risks = pr_analysis["risks"]
+        score -= len(risks.get("critical", [])) * 15
+        score -= len(risks.get("high", [])) * 10
+        score -= len(risks.get("medium", [])) * 5
+        score -= len(risks.get("low", [])) * 2
+
+    # Deduct for code quality issues
+    if "issues" in quality_analysis:
+        issues = quality_analysis["issues"]
+        score -= len([i for i in issues if i.get("severity") == "critical"]) * 12
+        score -= len([i for i in issues if i.get("severity") == "high"]) * 8
+        score -= len([i for i in issues if i.get("severity") == "medium"]) * 4
+        score -= len([i for i in issues if i.get("severity") == "low"]) * 1
+    elif "total_code_smells" in quality_analysis:
+        # Fall back to the aggregate counts that code_quality_checker.py
+        # actually emits (approximate weighting)
+        score -= quality_analysis.get("total_solid_violations", 0) * 4
+        score -= quality_analysis.get("total_code_smells", 0) * 2
+
+    # Deduct for complexity
+    if "summary" in pr_analysis:
+        complexity = pr_analysis["summary"].get("complexity_score", 0)
+        if complexity > 7:
+            score -= 10
+        elif complexity > 5:
+            score -= 5
+
+    return max(0, min(100, score))
+
+
+def determine_verdict(score: int, critical_count: int, high_count: int) -> Tuple[str, str]:
+    """Determine review verdict based on score and issue counts."""
+    if critical_count > 0:
+        return "block", "Critical issues must be resolved before merge"
+
+    if score >= 90 and high_count == 0:
+        return "approve", "Code meets quality standards"
+
+    if score >= 75 and high_count <= 2:
+ return "approve_with_suggestions", "Minor improvements recommended" + + if score >= 50: + return "request_changes", "Several issues need to be addressed" + + return "block", "Significant issues prevent approval" + + +def generate_findings_list(pr_analysis: Dict, quality_analysis: Dict) -> List[Dict]: + """Combine and prioritize all findings.""" + findings = [] + + # Add PR risk findings + if "risks" in pr_analysis: + for severity, items in pr_analysis["risks"].items(): + for item in items: + findings.append({ + "source": "pr_analysis", + "severity": severity, + "category": item.get("name", "unknown"), + "message": item.get("message", ""), + "file": item.get("file", ""), + "count": item.get("count", 1) + }) + + # Add code quality findings + if "issues" in quality_analysis: + for issue in quality_analysis["issues"]: + findings.append({ + "source": "quality_analysis", + "severity": issue.get("severity", "medium"), + "category": issue.get("type", "unknown"), + "message": issue.get("message", ""), + "file": issue.get("file", ""), + "line": issue.get("line", 0) + }) + + # Sort by severity weight + findings.sort( + key=lambda x: -SEVERITY_WEIGHTS.get(x["severity"], 0) + ) + + return findings + + +def generate_action_items(findings: List[Dict]) -> List[Dict]: + """Generate prioritized action items from findings.""" + action_items = [] + seen_categories = set() + + for finding in findings: + category = finding["category"] + severity = finding["severity"] + + # Group similar issues + if category in seen_categories and severity not in ["critical", "high"]: + continue + + action = { + "priority": "P0" if severity == "critical" else "P1" if severity == "high" else "P2", + "action": get_action_for_category(category, finding), + "severity": severity, + "files_affected": [finding["file"]] if finding.get("file") else [] + } + action_items.append(action) + seen_categories.add(category) + + return action_items[:15] # Top 15 actions + + +def get_action_for_category(category: str, 
finding: Dict) -> str: + """Get actionable recommendation for issue category.""" + actions = { + "hardcoded_secrets": "Remove hardcoded credentials and use environment variables or a secrets manager", + "sql_concatenation": "Use parameterized queries to prevent SQL injection", + "debugger": "Remove debugger statements before merging", + "console_log": "Remove or replace console statements with proper logging", + "todo_fixme": "Address TODO/FIXME comments or create tracking issues", + "disable_eslint": "Address the underlying issue instead of disabling lint rules", + "any_type": "Replace 'any' types with proper type definitions", + "long_function": "Break down function into smaller, focused units", + "god_class": "Split class into smaller, single-responsibility classes", + "too_many_params": "Use parameter objects or builder pattern", + "deep_nesting": "Refactor using early returns, guard clauses, or extraction", + "high_complexity": "Reduce cyclomatic complexity through refactoring", + "missing_error_handling": "Add proper error handling and recovery logic", + "duplicate_code": "Extract duplicate code into shared functions", + "magic_numbers": "Replace magic numbers with named constants", + "large_file": "Consider splitting into multiple smaller modules" + } + return actions.get(category, f"Review and address: {finding.get('message', category)}") + + +def format_markdown_report(report: Dict) -> str: + """Generate markdown-formatted report.""" + lines = [] + + # Header + lines.append("# Code Review Report") + lines.append("") + lines.append(f"**Generated:** {report['metadata']['generated_at']}") + lines.append(f"**Repository:** {report['metadata']['repository']}") + lines.append("") + + # Executive Summary + lines.append("## Executive Summary") + lines.append("") + summary = report["summary"] + verdict = summary["verdict"] + verdict_emoji = { + "approve": "✅", + "approve_with_suggestions": "✅", + "request_changes": "⚠️", + "block": "❌" + }.get(verdict, "❓") + + 
lines.append(f"**Verdict:** {verdict_emoji} {verdict.upper().replace('_', ' ')}") + lines.append(f"**Score:** {summary['score']}/100") + lines.append(f"**Rationale:** {summary['rationale']}") + lines.append("") + + # Issue Counts + lines.append("### Issue Summary") + lines.append("") + lines.append("| Severity | Count |") + lines.append("|----------|-------|") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f"| {severity.capitalize()} | {count} |") + lines.append("") + + # PR Statistics (if available) + if "pr_summary" in report: + pr = report["pr_summary"] + lines.append("### Change Statistics") + lines.append("") + lines.append(f"- **Files Changed:** {pr.get('files_changed', 'N/A')}") + lines.append(f"- **Lines Added:** +{pr.get('total_additions', 0)}") + lines.append(f"- **Lines Removed:** -{pr.get('total_deletions', 0)}") + lines.append(f"- **Complexity:** {pr.get('complexity_label', 'N/A')}") + lines.append("") + + # Action Items + if report.get("action_items"): + lines.append("## Action Items") + lines.append("") + for i, item in enumerate(report["action_items"], 1): + priority = item["priority"] + emoji = "🔴" if priority == "P0" else "🟠" if priority == "P1" else "🟡" + lines.append(f"{i}. 
{emoji} **[{priority}]** {item['action']}") + if item.get("files_affected"): + lines.append(f" - Files: {', '.join(item['files_affected'][:3])}") + lines.append("") + + # Critical Findings + critical_findings = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical_findings: + lines.append("## Critical Issues (Must Fix)") + lines.append("") + for finding in critical_findings: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # High Priority Findings + high_findings = [f for f in report.get("findings", []) if f["severity"] == "high"] + if high_findings: + lines.append("## High Priority Issues") + lines.append("") + for finding in high_findings[:10]: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # Review Order (if available) + if "review_order" in report: + lines.append("## Suggested Review Order") + lines.append("") + for i, filepath in enumerate(report["review_order"][:10], 1): + lines.append(f"{i}. 
`{filepath}`") + lines.append("") + + # Footer + lines.append("---") + lines.append("*Generated by Code Reviewer*") + + return "\n".join(lines) + + +def format_text_report(report: Dict) -> str: + """Generate plain text report.""" + lines = [] + + lines.append("=" * 60) + lines.append("CODE REVIEW REPORT") + lines.append("=" * 60) + lines.append("") + lines.append(f"Generated: {report['metadata']['generated_at']}") + lines.append(f"Repository: {report['metadata']['repository']}") + lines.append("") + + summary = report["summary"] + verdict = summary["verdict"].upper().replace("_", " ") + lines.append(f"VERDICT: {verdict}") + lines.append(f"SCORE: {summary['score']}/100") + lines.append(f"RATIONALE: {summary['rationale']}") + lines.append("") + + lines.append("--- ISSUE SUMMARY ---") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f" {severity.capitalize()}: {count}") + lines.append("") + + if report.get("action_items"): + lines.append("--- ACTION ITEMS ---") + for i, item in enumerate(report["action_items"][:10], 1): + lines.append(f" {i}. 
[{item['priority']}] {item['action']}") + lines.append("") + + critical = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical: + lines.append("--- CRITICAL ISSUES ---") + for f in critical: + lines.append(f" [{f.get('file', 'unknown')}] {f['message']}") + lines.append("") + + lines.append("=" * 60) + + return "\n".join(lines) + + +def generate_report( + repo_path: Path, + pr_analysis: Optional[Dict] = None, + quality_analysis: Optional[Dict] = None +) -> Dict: + """Generate comprehensive review report.""" + # Run analyses if not provided + if pr_analysis is None: + pr_analysis = run_pr_analyzer(repo_path) + + if quality_analysis is None: + quality_analysis = run_quality_checker(repo_path) + + # Generate findings + findings = generate_findings_list(pr_analysis, quality_analysis) + + # Count issues by severity + issue_counts = { + "critical": len([f for f in findings if f["severity"] == "critical"]), + "high": len([f for f in findings if f["severity"] == "high"]), + "medium": len([f for f in findings if f["severity"] == "medium"]), + "low": len([f for f in findings if f["severity"] == "low"]) + } + + # Calculate score and verdict + score = calculate_review_score(pr_analysis, quality_analysis) + verdict, rationale = determine_verdict( + score, + issue_counts["critical"], + issue_counts["high"] + ) + + # Generate action items + action_items = generate_action_items(findings) + + # Build report + report = { + "metadata": { + "generated_at": datetime.now().isoformat(), + "repository": str(repo_path), + "version": "1.0.0" + }, + "summary": { + "score": score, + "verdict": verdict, + "rationale": rationale, + "issue_counts": issue_counts + }, + "findings": findings, + "action_items": action_items + } + + # Add PR summary if available + if pr_analysis.get("status") == "analyzed": + report["pr_summary"] = pr_analysis.get("summary", {}) + report["review_order"] = pr_analysis.get("review_order", []) + + # Add quality summary if available + if 
quality_analysis.get("status") == "analyzed": + report["quality_summary"] = quality_analysis.get("summary", {}) + + return report + + +def main(): + parser = argparse.ArgumentParser( + description="Generate comprehensive code review reports" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to repository (default: current directory)" + ) + parser.add_argument( + "--pr-analysis", + help="Path to pre-computed PR analysis JSON" + ) + parser.add_argument( + "--quality-analysis", + help="Path to pre-computed quality analysis JSON" + ) + parser.add_argument( + "--format", "-f", + choices=["text", "markdown", "json"], + default="text", + help="Output format (default: text)" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output as JSON (shortcut for --format json)" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + if not repo_path.exists(): + print(f"Error: Path does not exist: {repo_path}", file=sys.stderr) + sys.exit(1) + + # Load pre-computed analyses if provided + pr_analysis = None + quality_analysis = None + + if args.pr_analysis: + pr_analysis = load_json_file(args.pr_analysis) + if not pr_analysis: + print(f"Warning: Could not load PR analysis from {args.pr_analysis}") + + if args.quality_analysis: + quality_analysis = load_json_file(args.quality_analysis) + if not quality_analysis: + print(f"Warning: Could not load quality analysis from {args.quality_analysis}") + + # Generate report + report = generate_report(repo_path, pr_analysis, quality_analysis) + + # Format output + output_format = "json" if args.json else args.format + + if output_format == "json": + output = json.dumps(report, indent=2) + elif output_format == "markdown": + output = format_markdown_report(report) + else: + output = format_text_report(report) + + # Write or print output + if args.output: + with open(args.output, "w") as f: + 
f.write(output) + print(f"Report written to {args.output}") + else: + print(output) + + +if __name__ == "__main__": + main() diff --git a/.gemini/skills/code-reviewer/SKILL.md b/.gemini/skills/code-reviewer/SKILL.md new file mode 100644 index 00000000..37c6c7e3 --- /dev/null +++ b/.gemini/skills/code-reviewer/SKILL.md @@ -0,0 +1,177 @@ +--- +name: code-reviewer +description: Code review automation for TypeScript, JavaScript, Python, Go, Swift, Kotlin. Analyzes PRs for complexity and risk, checks code quality for SOLID violations and code smells, generates review reports. Use when reviewing pull requests, analyzing code quality, identifying issues, generating review checklists. +--- + +# Code Reviewer + +Automated code review tools for analyzing pull requests, detecting code quality issues, and generating review reports. + +--- + +## Table of Contents + +- [Tools](#tools) + - [PR Analyzer](#pr-analyzer) + - [Code Quality Checker](#code-quality-checker) + - [Review Report Generator](#review-report-generator) +- [Reference Guides](#reference-guides) +- [Languages Supported](#languages-supported) + +--- + +## Tools + +### PR Analyzer + +Analyzes git diff between branches to assess review complexity and identify risks. + +```bash +# Analyze current branch against main +python scripts/pr_analyzer.py /path/to/repo + +# Compare specific branches +python scripts/pr_analyzer.py . 
--base main --head feature-branch + +# JSON output for integration +python scripts/pr_analyzer.py /path/to/repo --json +``` + +**What it detects:** +- Hardcoded secrets (passwords, API keys, tokens) +- SQL injection patterns (string concatenation in queries) +- Debug statements (debugger, console.log) +- ESLint rule disabling +- TypeScript `any` types +- TODO/FIXME comments + +**Output includes:** +- Complexity score (1-10) +- Risk categorization (critical, high, medium, low) +- File prioritization for review order +- Commit message validation + +--- + +### Code Quality Checker + +Analyzes source code for structural issues, code smells, and SOLID violations. + +```bash +# Analyze a directory +python scripts/code_quality_checker.py /path/to/code + +# Analyze specific language +python scripts/code_quality_checker.py . --language python + +# JSON output +python scripts/code_quality_checker.py /path/to/code --json +``` + +**What it detects:** +- Long functions (>50 lines) +- Large files (>500 lines) +- God classes (>20 methods) +- Deep nesting (>4 levels) +- Too many parameters (>5) +- High cyclomatic complexity +- Missing error handling +- Unused imports +- Magic numbers + +**Thresholds:** + +| Issue | Threshold | +|-------|-----------| +| Long function | >50 lines | +| Large file | >500 lines | +| God class | >20 methods | +| Too many params | >5 | +| Deep nesting | >4 levels | +| High complexity | >10 branches | + +--- + +### Review Report Generator + +Combines PR analysis and code quality findings into structured review reports. + +```bash +# Generate report for current repo +python scripts/review_report_generator.py /path/to/repo + +# Markdown output +python scripts/review_report_generator.py . --format markdown --output review.md + +# Use pre-computed analyses +python scripts/review_report_generator.py . 
\ + --pr-analysis pr_results.json \ + --quality-analysis quality_results.json +``` + +**Report includes:** +- Review verdict (approve, request changes, block) +- Score (0-100) +- Prioritized action items +- Issue summary by severity +- Suggested review order + +**Verdicts:** + +| Score | Verdict | +|-------|---------| +| 90+ with no high issues | Approve | +| 75+ with ≤2 high issues | Approve with suggestions | +| 50-74 | Request changes | +| <50 or critical issues | Block | + +--- + +## Reference Guides + +### Code Review Checklist +`.gemini/skills/code-reviewer/references/code_review_checklist.md` + +Systematic checklists covering: +- Pre-review checks (build, tests, PR hygiene) +- Correctness (logic, data handling, error handling) +- Security (input validation, injection prevention) +- Performance (efficiency, caching, scalability) +- Maintainability (code quality, naming, structure) +- Testing (coverage, quality, mocking) +- Language-specific checks + +### Coding Standards +`.gemini/skills/code-reviewer/references/coding_standards.md` + +Language-specific standards for: +- TypeScript (type annotations, null safety, async/await) +- JavaScript (declarations, patterns, modules) +- Python (type hints, exceptions, class design) +- Go (error handling, structs, concurrency) +- Swift (optionals, protocols, errors) +- Kotlin (null safety, data classes, coroutines) + +### Common Antipatterns +`.gemini/skills/code-reviewer/references/common_antipatterns.md` + +Antipattern catalog with examples and fixes: +- Structural (god class, long method, deep nesting) +- Logic (boolean blindness, stringly typed code) +- Security (SQL injection, hardcoded credentials) +- Performance (N+1 queries, unbounded collections) +- Testing (duplication, testing implementation) +- Async (floating promises, callback hell) + +--- + +## Languages Supported + +| Language | Extensions | +|----------|------------| +| Python | `.py` | +| TypeScript | `.ts`, `.tsx` | +| JavaScript | `.js`, `.jsx`, 
`.mjs` | +| Go | `.go` | +| Swift | `.swift` | +| Kotlin | `.kt`, `.kts` | \ No newline at end of file diff --git a/.gemini/skills/code-reviewer/references/code_review_checklist.md b/.gemini/skills/code-reviewer/references/code_review_checklist.md new file mode 100644 index 00000000..b7bd0867 --- /dev/null +++ b/.gemini/skills/code-reviewer/references/code_review_checklist.md @@ -0,0 +1,270 @@ +# Code Review Checklist + +Structured checklists for systematic code review across different aspects. + +--- + +## Table of Contents + +- [Pre-Review Checks](#pre-review-checks) +- [Correctness](#correctness) +- [Security](#security) +- [Performance](#performance) +- [Maintainability](#maintainability) +- [Testing](#testing) +- [Documentation](#documentation) +- [Language-Specific Checks](#language-specific-checks) + +--- + +## Pre-Review Checks + +Before diving into code, verify these basics: + +### Build and Tests +- [ ] Code compiles without errors +- [ ] All existing tests pass +- [ ] New tests are included for new functionality +- [ ] No unintended files included (build artifacts, IDE configs) + +### PR Hygiene +- [ ] PR has clear title and description +- [ ] Changes are scoped appropriately (not too large) +- [ ] Commits follow conventional commit format +- [ ] Branch is up to date with base branch + +### Scope Verification +- [ ] Changes match the stated purpose +- [ ] No unrelated changes bundled in +- [ ] Breaking changes are documented +- [ ] Migration path provided if needed + +--- + +## Correctness + +### Logic +- [ ] Algorithm implements requirements correctly +- [ ] Edge cases handled (null, empty, boundary values) +- [ ] Off-by-one errors checked +- [ ] Correct operators used (== vs ===, & vs &&) +- [ ] Loop termination conditions correct +- [ ] Recursion has proper base cases + +### Data Handling +- [ ] Data types appropriate for the use case +- [ ] Numeric overflow/underflow considered +- [ ] Date/time handling accounts for timezones +- [ ] Unicode and 
internationalization handled +- [ ] Data validation at entry points + +### State Management +- [ ] State transitions are valid +- [ ] Race conditions addressed +- [ ] Concurrent access handled correctly +- [ ] State cleanup on errors/exit + +### Error Handling +- [ ] Errors caught at appropriate levels +- [ ] Error messages are actionable +- [ ] Errors don't expose sensitive information +- [ ] Recovery or graceful degradation implemented +- [ ] Resources cleaned up in error paths + +--- + +## Security + +### Input Validation +- [ ] All user input validated and sanitized +- [ ] Input length limits enforced +- [ ] File uploads validated (type, size, content) +- [ ] URL parameters validated + +### Injection Prevention +- [ ] SQL queries parameterized +- [ ] Command execution uses safe APIs +- [ ] HTML output escaped to prevent XSS +- [ ] LDAP queries properly escaped +- [ ] XML parsing disables external entities + +### Authentication & Authorization +- [ ] Authentication required for protected resources +- [ ] Authorization checked before operations +- [ ] Session management secure +- [ ] Password handling follows best practices +- [ ] Token expiration implemented + +### Data Protection +- [ ] Sensitive data encrypted at rest +- [ ] Sensitive data encrypted in transit +- [ ] PII handled according to policy +- [ ] Secrets not hardcoded +- [ ] Logs don't contain sensitive data + +### API Security +- [ ] Rate limiting implemented +- [ ] CORS configured correctly +- [ ] CSRF protection in place +- [ ] API keys/tokens secured +- [ ] Endpoints use HTTPS + +--- + +## Performance + +### Efficiency +- [ ] Appropriate data structures used +- [ ] Algorithms have acceptable complexity +- [ ] Database queries are optimized +- [ ] N+1 query problems avoided +- [ ] Indexes used where beneficial + +### Resource Usage +- [ ] Memory usage bounded +- [ ] No memory leaks +- [ ] File handles properly closed +- [ ] Database connections pooled +- [ ] Network calls minimized + +### Caching 
+- [ ] Appropriate caching strategy +- [ ] Cache invalidation handled +- [ ] Cache keys are unique and predictable +- [ ] TTL values appropriate + +### Scalability +- [ ] Horizontal scaling considered +- [ ] Bottlenecks identified +- [ ] Async processing for long operations +- [ ] Batch operations where appropriate + +--- + +## Maintainability + +### Code Quality +- [ ] Functions/methods have single responsibility +- [ ] Classes follow SOLID principles +- [ ] Code is DRY (Don't Repeat Yourself) +- [ ] No dead code or commented-out code +- [ ] Magic numbers replaced with constants + +### Naming +- [ ] Names are descriptive and consistent +- [ ] Naming follows project conventions +- [ ] No abbreviations that obscure meaning +- [ ] Boolean variables/functions have is/has/can prefix + +### Structure +- [ ] Functions are appropriately sized (<50 lines preferred) +- [ ] Nesting depth is reasonable (<4 levels) +- [ ] Related code is grouped together +- [ ] Dependencies are minimal and explicit + +### Readability +- [ ] Code is self-documenting where possible +- [ ] Complex logic has explanatory comments +- [ ] Formatting is consistent +- [ ] No overly clever or obscure code + +--- + +## Testing + +### Coverage +- [ ] New code has unit tests +- [ ] Critical paths have integration tests +- [ ] Edge cases are tested +- [ ] Error conditions are tested + +### Quality +- [ ] Tests are independent +- [ ] Tests have clear assertions +- [ ] Test names describe what is tested +- [ ] Tests don't depend on external state + +### Mocking +- [ ] External dependencies are mocked +- [ ] Mocks are realistic +- [ ] Mock setup is not excessive + +--- + +## Documentation + +### Code Documentation +- [ ] Public APIs are documented +- [ ] Complex algorithms explained +- [ ] Non-obvious decisions documented +- [ ] TODO/FIXME comments have context + +### External Documentation +- [ ] README updated if needed +- [ ] API documentation updated +- [ ] Changelog updated +- [ ] Migration guides 
provided + +--- + +## Language-Specific Checks + +### TypeScript/JavaScript +- [ ] Types are explicit (avoid `any`) +- [ ] Null checks present (`?.`, `??`) +- [ ] Async/await errors handled +- [ ] No floating promises +- [ ] Memory leaks from closures checked + +### Python +- [ ] Type hints used for public APIs +- [ ] Context managers for resources (`with` statements) +- [ ] Exception handling is specific (not bare `except`) +- [ ] No mutable default arguments +- [ ] List comprehensions used appropriately + +### Go +- [ ] Errors checked and handled +- [ ] Goroutine leaks prevented +- [ ] Context propagation correct +- [ ] Defer statements in right order +- [ ] Interfaces minimal + +### Swift +- [ ] Optionals handled safely +- [ ] Memory management correct (weak/unowned) +- [ ] Error handling uses Result or throws +- [ ] Access control appropriate +- [ ] Codable implementation correct + +### Kotlin +- [ ] Null safety leveraged +- [ ] Coroutine cancellation handled +- [ ] Data classes used appropriately +- [ ] Extension functions don't obscure behavior +- [ ] Sealed classes for state + +--- + +## Review Process Tips + +### Before Approving +1. Verify all critical checks passed +2. Confirm tests are adequate +3. Consider deployment impact +4. Check for any security concerns +5. 
Ensure documentation is updated + +### Providing Feedback +- Be specific about issues +- Explain why something is problematic +- Suggest alternatives when possible +- Distinguish blockers from suggestions +- Acknowledge good patterns + +### When to Block +- Security vulnerabilities present +- Critical logic errors +- No tests for risky changes +- Breaking changes without migration +- Significant performance regressions diff --git a/.gemini/skills/code-reviewer/references/coding_standards.md b/.gemini/skills/code-reviewer/references/coding_standards.md new file mode 100644 index 00000000..9fbc6a06 --- /dev/null +++ b/.gemini/skills/code-reviewer/references/coding_standards.md @@ -0,0 +1,555 @@ +# Coding Standards + +Language-specific coding standards and conventions for code review. + +--- + +## Table of Contents + +- [Universal Principles](#universal-principles) +- [TypeScript Standards](#typescript-standards) +- [JavaScript Standards](#javascript-standards) +- [Python Standards](#python-standards) +- [Go Standards](#go-standards) +- [Swift Standards](#swift-standards) +- [Kotlin Standards](#kotlin-standards) + +--- + +## Universal Principles + +These apply across all languages. 
+ +### Naming Conventions + +| Element | Convention | Example | +|---------|------------|---------| +| Variables | camelCase (JS/TS), snake_case (Python/Go) | `userName`, `user_name` | +| Constants | SCREAMING_SNAKE_CASE | `MAX_RETRY_COUNT` | +| Functions | camelCase (JS/TS), snake_case (Python) | `getUserById`, `get_user_by_id` | +| Classes | PascalCase | `UserRepository` | +| Interfaces | PascalCase, optionally prefixed | `IUserService` or `UserService` | +| Private members | Prefix with underscore or use access modifiers | `_internalState` | + +### Function Design + +``` +Good functions: +- Do one thing well +- Have descriptive names (verb + noun) +- Take 3 or fewer parameters +- Return early for error cases +- Stay under 50 lines +``` + +### Error Handling + +``` +Good error handling: +- Catch specific errors, not generic exceptions +- Log with context (what, where, why) +- Clean up resources in error paths +- Don't swallow errors silently +- Provide actionable error messages +``` + +--- + +## TypeScript Standards + +### Type Annotations + +```typescript +// Avoid 'any' - use unknown for truly unknown types +function processData(data: unknown): ProcessedResult { + if (isValidData(data)) { + return transform(data); + } + throw new Error('Invalid data format'); +} + +// Use explicit return types for public APIs +export function calculateTotal(items: CartItem[]): number { + return items.reduce((sum, item) => sum + item.price, 0); +} + +// Use type guards for runtime checks +function isUser(obj: unknown): obj is User { + return ( + typeof obj === 'object' && + obj !== null && + 'id' in obj && + 'email' in obj + ); +} +``` + +### Null Safety + +```typescript +// Use optional chaining and nullish coalescing +const userName = user?.profile?.name ?? 
'Anonymous';
+
+// Be explicit about nullable types
+interface Config {
+  timeout: number;
+  retries?: number;           // Optional
+  fallbackUrl: string | null; // Explicitly nullable
+}
+
+// Use assertion functions for validation
+function assertDefined<T>(value: T | null | undefined): asserts value is T {
+  if (value === null || value === undefined) {
+    throw new Error('Value is not defined');
+  }
+}
+```
+
+### Async/Await
+
+```typescript
+// Always handle errors in async functions
+async function fetchUser(id: string): Promise<User> {
+  try {
+    const response = await api.get(`/users/${id}`);
+    return response.data;
+  } catch (error) {
+    logger.error('Failed to fetch user', { id, error });
+    throw new UserFetchError(id, error);
+  }
+}
+
+// Use Promise.all for parallel operations
+async function loadDashboard(userId: string): Promise<Dashboard> {
+  const [profile, stats, notifications] = await Promise.all([
+    fetchProfile(userId),
+    fetchStats(userId),
+    fetchNotifications(userId)
+  ]);
+  return { profile, stats, notifications };
+}
+```
+
+### React/Component Standards
+
+```typescript
+// Use explicit prop types
+interface ButtonProps {
+  label: string;
+  onClick: () => void;
+  variant?: 'primary' | 'secondary';
+  disabled?: boolean;
+}
+
+// Prefer functional components with hooks
+function Button({ label, onClick, variant = 'primary', disabled = false }: ButtonProps) {
+  return (
+    <button className={`btn btn-${variant}`} onClick={onClick} disabled={disabled}>
+      {label}
+    </button>
+  );
+}
+
+// Use custom hooks for reusable logic
+function useDebounce<T>(value: T, delay: number): T {
+  const [debouncedValue, setDebouncedValue] = useState<T>(value);
+
+  useEffect(() => {
+    const timer = setTimeout(() => setDebouncedValue(value), delay);
+    return () => clearTimeout(timer);
+  }, [value, delay]);
+
+  return debouncedValue;
+}
+```
+
+---
+
+## JavaScript Standards
+
+### Variable Declarations
+
+```javascript
+// Use const by default, let when reassignment needed
+const MAX_ITEMS = 100;
+let currentCount = 0;
+
+// Never use var
+// var is function-scoped and hoisted, leading to bugs
+```
+
+### Object and Array Patterns
+
+```javascript
+// Use object destructuring
+const { name, email, role = 'user' } = user;
+
+// Use spread for immutable updates
+const updatedUser = { ...user, lastLogin: new Date() };
+const updatedList = [...items, newItem];
+
+// Use array methods over loops
+const activeUsers = users.filter(u => u.isActive);
+const emails = users.map(u => u.email);
+const total = orders.reduce((sum, o) => sum + o.amount, 0);
+```
+
+### Module Patterns
+
+```javascript
+// Use named exports for utilities
+export function formatDate(date) { ... }
+export function parseDate(str) { ... }
+
+// Use default export for main component/class
+export default class UserService { ... }
+
+// Group related exports
+export { formatDate, parseDate, isValidDate } from './dateUtils';
+```
+
+---
+
+## Python Standards
+
+### Type Hints (PEP 484)
+
+```python
+from typing import Optional, List, Dict, Union
+
+def get_user(user_id: int) -> Optional[User]:
+    """Fetch user by ID, returns None if not found."""
+    return db.query(User).filter(User.id == user_id).first()
+
+def process_items(items: List[str]) -> Dict[str, int]:
+    """Count occurrences of each item."""
+    return {item: items.count(item) for item in set(items)}
+
+def send_notification(
+    user: User,
+    message: str,
+    *,
+    priority: str = "normal",
+    channels: Optional[List[str]] = None
+) -> bool:
+    """Send notification to user via specified channels."""
+    channels = channels or ["email"]
+    # Implementation
+```
+
+### Exception Handling
+
+```python
+# Catch specific exceptions
+try:
+    result = api_client.fetch_data(endpoint)
+except ConnectionError as e:
+    logger.warning(f"Connection failed: {e}")
+    return cached_data
+except TimeoutError as e:
+    logger.error(f"Request timed out: {e}")
+    raise ServiceUnavailableError() from e
+
+# Use context managers for resources
+with open(filepath, 'r') as f:
+    data = json.load(f)
+
+# Custom exceptions should be informative
+class ValidationError(Exception):
+    def 
__init__(self, field: str, message: str): + self.field = field + self.message = message + super().__init__(f"{field}: {message}") +``` + +### Class Design + +```python +from dataclasses import dataclass +from abc import ABC, abstractmethod + +# Use dataclasses for data containers +@dataclass +class UserDTO: + id: int + email: str + name: str + is_active: bool = True + +# Use ABC for interfaces +class Repository(ABC): + @abstractmethod + def find_by_id(self, id: int) -> Optional[Entity]: + pass + + @abstractmethod + def save(self, entity: Entity) -> Entity: + pass + +# Use properties for computed attributes +class Order: + def __init__(self, items: List[OrderItem]): + self._items = items + + @property + def total(self) -> Decimal: + return sum(item.price * item.quantity for item in self._items) +``` + +--- + +## Go Standards + +### Error Handling + +```go +// Always check errors +file, err := os.Open(filename) +if err != nil { + return fmt.Errorf("failed to open %s: %w", filename, err) +} +defer file.Close() + +// Use custom error types for specific cases +type ValidationError struct { + Field string + Message string +} + +func (e *ValidationError) Error() string { + return fmt.Sprintf("%s: %s", e.Field, e.Message) +} + +// Wrap errors with context +if err := db.Query(query); err != nil { + return fmt.Errorf("query failed for user %d: %w", userID, err) +} +``` + +### Struct Design + +```go +// Use unexported fields with exported methods +type UserService struct { + repo UserRepository + cache Cache + logger Logger +} + +// Constructor functions for initialization +func NewUserService(repo UserRepository, cache Cache, logger Logger) *UserService { + return &UserService{ + repo: repo, + cache: cache, + logger: logger, + } +} + +// Keep interfaces small +type Reader interface { + Read(p []byte) (n int, err error) +} + +type Writer interface { + Write(p []byte) (n int, err error) +} +``` + +### Concurrency + +```go +// Use context for cancellation +func fetchData(ctx 
context.Context, url string) ([]byte, error) { + req, err := http.NewRequestWithContext(ctx, "GET", url, nil) + if err != nil { + return nil, err + } + // ... +} + +// Use channels for communication +func worker(jobs <-chan Job, results chan<- Result) { + for job := range jobs { + result := process(job) + results <- result + } +} + +// Use sync.WaitGroup for coordination +var wg sync.WaitGroup +for _, item := range items { + wg.Add(1) + go func(i Item) { + defer wg.Done() + processItem(i) + }(item) +} +wg.Wait() +``` + +--- + +## Swift Standards + +### Optionals + +```swift +// Use optional binding +if let user = fetchUser(id: userId) { + displayProfile(user) +} + +// Use guard for early exit +guard let data = response.data else { + throw NetworkError.noData +} + +// Use nil coalescing for defaults +let displayName = user.nickname ?? user.email + +// Avoid force unwrapping except in tests +// BAD: let name = user.name! +// GOOD: guard let name = user.name else { return } +``` + +### Protocol-Oriented Design + +```swift +// Define protocols with minimal requirements +protocol Identifiable { + var id: String { get } +} + +protocol Persistable: Identifiable { + func save() throws + static func find(by id: String) -> Self? 
+}
+
+// Use protocol extensions for default implementations
+extension Persistable {
+  func save() throws {
+    try Storage.shared.save(self)
+  }
+}
+
+// Prefer composition over inheritance
+struct User: Identifiable, Codable {
+  let id: String
+  var name: String
+  var email: String
+}
+```
+
+### Error Handling
+
+```swift
+// Define domain-specific errors
+enum AuthError: Error {
+  case invalidCredentials
+  case tokenExpired
+  case networkFailure(underlying: Error)
+}
+
+// Use Result type for async operations (AuthToken stands in for the success payload)
+func authenticate(
+  email: String,
+  password: String,
+  completion: @escaping (Result<AuthToken, AuthError>) -> Void
+)
+
+// Use throws for synchronous operations
+func validate(_ input: String) throws -> ValidatedInput {
+  guard !input.isEmpty else {
+    throw ValidationError.emptyInput
+  }
+  return ValidatedInput(value: input)
+}
+```
+
+---
+
+## Kotlin Standards
+
+### Null Safety
+
+```kotlin
+// Use nullable types explicitly
+fun findUser(id: Int): User? {
+    return userRepository.find(id)
+}
+
+// Use safe calls and elvis operator
+val name = user?.profile?.name ?: "Unknown"
+
+// Use let for null checks with side effects
+user?.let { activeUser ->
+    sendWelcomeEmail(activeUser.email)
+    logActivity(activeUser.id)
+}
+
+// Use require/check for validation
+fun processPayment(amount: Double) {
+    require(amount > 0) { "Amount must be positive: $amount" }
+    // Process
+}
+```
+
+### Data Classes and Sealed Classes
+
+```kotlin
+// Use data classes for DTOs
+data class UserDTO(
+    val id: Int,
+    val email: String,
+    val name: String,
+    val isActive: Boolean = true
+)
+
+// Use sealed classes for state
+sealed class Result<out T> {
+    data class Success<T>(val data: T) : Result<T>()
+    data class Error(val message: String, val cause: Throwable? = null) : Result<Nothing>()
+    object Loading : Result<Nothing>()
+}
+
+// Pattern matching with when
+fun handleResult(result: Result<User>) = when (result) {
+    is Result.Success -> showUser(result.data)
+    is Result.Error -> showError(result.message)
+    Result.Loading -> showLoading()
+}
+```
+
+### Coroutines
+
+```kotlin
+// Use structured concurrency
+suspend fun loadDashboard(): Dashboard = coroutineScope {
+    val profile = async { fetchProfile() }
+    val stats = async { fetchStats() }
+    val notifications = async { fetchNotifications() }
+
+    Dashboard(
+        profile = profile.await(),
+        stats = stats.await(),
+        notifications = notifications.await()
+    )
+}
+
+// Handle cancellation
+suspend fun fetchWithRetry(url: String): Response {
+    repeat(3) { attempt ->
+        try {
+            return httpClient.get(url)
+        } catch (e: IOException) {
+            if (attempt == 2) throw e
+            delay(1000L * (attempt + 1))
+        }
+    }
+    throw IllegalStateException("Unreachable")
+}
+```
diff --git a/.gemini/skills/code-reviewer/references/common_antipatterns.md b/.gemini/skills/code-reviewer/references/common_antipatterns.md
new file mode 100644
index 00000000..26045452
--- /dev/null
+++ b/.gemini/skills/code-reviewer/references/common_antipatterns.md
@@ -0,0 +1,739 @@
+# Common Antipatterns
+
+Code antipatterns to identify during review, with examples and fixes.
+
+---
+
+## Table of Contents
+
+- [Structural Antipatterns](#structural-antipatterns)
+- [Logic Antipatterns](#logic-antipatterns)
+- [Security Antipatterns](#security-antipatterns)
+- [Performance Antipatterns](#performance-antipatterns)
+- [Testing Antipatterns](#testing-antipatterns)
+- [Async Antipatterns](#async-antipatterns)
+
+---
+
+## Structural Antipatterns
+
+### God Class
+
+A class that does too much and knows too much.
+
+```typescript
+// BAD: God class handling everything
+class UserManager {
+  createUser(data: UserData) { ... }
+  updateUser(id: string, data: UserData) { ... }
+  deleteUser(id: string) { ... }
+  sendEmail(userId: string, content: string) { ... 
}
+  generateReport(userId: string) { ... }
+  validatePassword(password: string) { ... }
+  hashPassword(password: string) { ... }
+  uploadAvatar(userId: string, file: File) { ... }
+  resizeImage(file: File) { ... }
+  logActivity(userId: string, action: string) { ... }
+  // 50 more methods...
+}
+
+// GOOD: Single responsibility classes
+class UserRepository {
+  create(data: UserData): User { ... }
+  update(id: string, data: Partial<UserData>): User { ... }
+  delete(id: string): void { ... }
+}
+
+class EmailService {
+  send(to: string, content: string): void { ... }
+}
+
+class PasswordService {
+  validate(password: string): ValidationResult { ... }
+  hash(password: string): string { ... }
+}
+```
+
+**Detection:** Class has >20 methods, >500 lines, or handles unrelated concerns.
+
+---
+
+### Long Method
+
+Functions that do too much and are hard to understand.
+
+```python
+# BAD: Long method doing everything
+def process_order(order_data):
+    # Validate order (20 lines)
+    if not order_data.get('items'):
+        raise ValueError('No items')
+    if not order_data.get('customer_id'):
+        raise ValueError('No customer')
+    # ... more validation
+
+    # Calculate totals (30 lines)
+    subtotal = 0
+    for item in order_data['items']:
+        price = get_product_price(item['product_id'])
+        subtotal += price * item['quantity']
+    # ... tax calculation, discounts
+
+    # Process payment (40 lines)
+    payment_result = payment_gateway.charge(...)
+    # ... handle payment errors
+
+    # Create order record (20 lines)
+    order = Order.create(...)
+
+    # Send notifications (20 lines)
+    send_order_confirmation(...)
+    notify_warehouse(...)
+ + return order + +# GOOD: Composed of focused functions +def process_order(order_data): + validate_order(order_data) + totals = calculate_order_totals(order_data) + payment = process_payment(order_data['customer_id'], totals) + order = create_order_record(order_data, totals, payment) + send_order_notifications(order) + return order +``` + +**Detection:** Function >50 lines or requires scrolling to read. + +--- + +### Deep Nesting + +Excessive indentation making code hard to follow. + +```javascript +// BAD: Deep nesting +function processData(data) { + if (data) { + if (data.items) { + if (data.items.length > 0) { + for (const item of data.items) { + if (item.isValid) { + if (item.type === 'premium') { + if (item.price > 100) { + // Finally do something + processItem(item); + } + } + } + } + } + } + } +} + +// GOOD: Early returns and guard clauses +function processData(data) { + if (!data?.items?.length) { + return; + } + + const premiumItems = data.items.filter( + item => item.isValid && item.type === 'premium' && item.price > 100 + ); + + premiumItems.forEach(processItem); +} +``` + +**Detection:** Indentation >4 levels deep. + +--- + +### Magic Numbers and Strings + +Hard-coded values without explanation. + +```go +// BAD: Magic numbers +func calculateDiscount(total float64, userType int) float64 { + if userType == 1 { + return total * 0.15 + } else if userType == 2 { + return total * 0.25 + } + return total * 0.05 +} + +// GOOD: Named constants +const ( + UserTypeRegular = 1 + UserTypePremium = 2 + + DiscountRegular = 0.05 + DiscountStandard = 0.15 + DiscountPremium = 0.25 +) + +func calculateDiscount(total float64, userType int) float64 { + switch userType { + case UserTypePremium: + return total * DiscountPremium + case UserTypeRegular: + return total * DiscountStandard + default: + return total * DiscountRegular + } +} +``` + +**Detection:** Literal numbers (except 0, 1) or repeated string literals. 
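As an illustration of how this detection can be automated — a sketch only, not the actual `code_quality_checker.py` implementation — a short AST walk in Python can flag numeric literals that fall outside a small allow-list:

```python
import ast

ALLOWED_LITERALS = {0, 1, -1}  # values conventionally exempted from the rule

def find_magic_numbers(source: str) -> list:
    """Return (line, value) pairs for numeric literals outside the allow-list."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Constant)
                and isinstance(node.value, (int, float))
                and not isinstance(node.value, bool)  # bool is a subclass of int
                and node.value not in ALLOWED_LITERALS):
            hits.append((node.lineno, node.value))
    return hits
```

For example, `find_magic_numbers("rate = total * 0.15")` flags the `0.15` literal, while loop counters initialized to `0` or `1` pass untouched.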
+
+---
+
+### Primitive Obsession
+
+Using primitives instead of small objects.
+
+```typescript
+// BAD: Primitives everywhere
+function createUser(
+  name: string,
+  email: string,
+  phone: string,
+  street: string,
+  city: string,
+  zipCode: string,
+  country: string
+): User { ... }
+
+// GOOD: Value objects
+interface Address {
+  street: string;
+  city: string;
+  zipCode: string;
+  country: string;
+}
+
+interface ContactInfo {
+  email: string;
+  phone: string;
+}
+
+function createUser(
+  name: string,
+  contact: ContactInfo,
+  address: Address
+): User { ... }
+```
+
+**Detection:** Functions with >4 parameters of same type, or related primitives always passed together.
+
+---
+
+## Logic Antipatterns
+
+### Boolean Blindness
+
+Passing booleans that make code unreadable at call sites.
+
+```swift
+// BAD: What do these booleans mean?
+user.configure(true, false, true, false)
+
+// GOOD: Named parameters or option objects
+user.configure(
+  sendWelcomeEmail: true,
+  requireVerification: false,
+  enableNotifications: true,
+  isAdmin: false
+)
+
+// Or use an options struct
+struct UserConfiguration {
+  var sendWelcomeEmail: Bool = true
+  var requireVerification: Bool = false
+  var enableNotifications: Bool = true
+  var isAdmin: Bool = false
+}
+
+user.configure(UserConfiguration())
+```
+
+**Detection:** Function calls with multiple boolean literals.
+
+---
+
+### Null Returns for Collections
+
+Returning null instead of empty collections.
+
+```kotlin
+// BAD: Returning null
+fun findUsersByRole(role: String): List<User>? {
+    val users = repository.findByRole(role)
+    return if (users.isEmpty()) null else users
+}
+
+// Caller must handle null
+val users = findUsersByRole("admin")
+if (users != null) {
+    users.forEach { ... }
+}
+
+// GOOD: Return empty collection
+fun findUsersByRole(role: String): List<User> {
+    return repository.findByRole(role)
+}
+
+// Caller can iterate directly
+findUsersByRole("admin").forEach { ... 
} +``` + +**Detection:** Functions returning nullable collections. + +--- + +### Stringly Typed Code + +Using strings where enums or types should be used. + +```python +# BAD: String-based logic +def handle_event(event_type: str, data: dict): + if event_type == "user_created": + handle_user_created(data) + elif event_type == "user_updated": + handle_user_updated(data) + elif event_type == "user_dleted": # Typo won't be caught + handle_user_deleted(data) + +# GOOD: Enum-based +from enum import Enum + +class EventType(Enum): + USER_CREATED = "user_created" + USER_UPDATED = "user_updated" + USER_DELETED = "user_deleted" + +def handle_event(event_type: EventType, data: dict): + handlers = { + EventType.USER_CREATED: handle_user_created, + EventType.USER_UPDATED: handle_user_updated, + EventType.USER_DELETED: handle_user_deleted, + } + handlers[event_type](data) +``` + +**Detection:** String comparisons for type/status/category values. + +--- + +## Security Antipatterns + +### SQL Injection + +String concatenation in SQL queries. + +```javascript +// BAD: String concatenation +const query = `SELECT * FROM users WHERE id = ${userId}`; +db.query(query); + +// BAD: String templates still vulnerable +const query = `SELECT * FROM users WHERE name = '${userName}'`; + +// GOOD: Parameterized queries +const query = 'SELECT * FROM users WHERE id = $1'; +db.query(query, [userId]); + +// GOOD: Using ORM safely +User.findOne({ where: { id: userId } }); +``` + +**Detection:** String concatenation or template literals with SQL keywords. + +--- + +### Hardcoded Credentials + +Secrets in source code. 
+ +```python +# BAD: Hardcoded secrets +API_KEY = "sk-abc123xyz789" +DATABASE_URL = "postgresql://admin:password123@prod-db.internal:5432/app" + +# GOOD: Environment variables +import os + +API_KEY = os.environ["API_KEY"] +DATABASE_URL = os.environ["DATABASE_URL"] + +# GOOD: Secrets manager +from aws_secretsmanager import get_secret + +API_KEY = get_secret("api-key") +``` + +**Detection:** Variables named `password`, `secret`, `key`, `token` with string literals. + +--- + +### Unsafe Deserialization + +Deserializing untrusted data without validation. + +```python +# BAD: Binary serialization from untrusted source can execute arbitrary code +# Examples: Python's binary serialization, yaml.load without SafeLoader + +# GOOD: Use safe alternatives +import json + +def load_data(file_path): + with open(file_path, 'r') as f: + return json.load(f) + +# GOOD: Use SafeLoader for YAML +import yaml + +with open('config.yaml') as f: + config = yaml.safe_load(f) +``` + +**Detection:** Binary deserialization functions, yaml.load without safe loader, dynamic code execution on external data. + +--- + +### Missing Input Validation + +Trusting user input without validation. + +```typescript +// BAD: No validation +app.post('/user', (req, res) => { + const user = db.create({ + name: req.body.name, + email: req.body.email, + role: req.body.role // User can set themselves as admin! + }); + res.json(user); +}); + +// GOOD: Validate and sanitize +import { z } from 'zod'; + +const CreateUserSchema = z.object({ + name: z.string().min(1).max(100), + email: z.string().email(), + // role is NOT accepted from input +}); + +app.post('/user', (req, res) => { + const validated = CreateUserSchema.parse(req.body); + const user = db.create({ + ...validated, + role: 'user' // Default role, not from input + }); + res.json(user); +}); +``` + +**Detection:** Request body/params used directly without validation schema. 
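The same discipline applies outside TypeScript. As a hedged sketch in plain Python — the `validate_create_user` helper below is hypothetical and not tied to any framework — the key moves are the same: reject unknown fields, bound input lengths, and assign privileged fields server-side:

```python
import re

ALLOWED_FIELDS = {"name", "email"}  # role is deliberately NOT accepted from input
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_create_user(payload: dict) -> dict:
    """Validate an untrusted request body; raise ValueError on bad input."""
    unknown = set(payload) - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"unexpected fields: {sorted(unknown)}")
    name = payload.get("name", "")
    if not isinstance(name, str) or not 1 <= len(name) <= 100:
        raise ValueError("name must be 1-100 characters")
    email = payload.get("email", "")
    if not isinstance(email, str) or not EMAIL_RE.match(email):
        raise ValueError("invalid email address")
    # role is assigned server-side, never taken from the request
    return {"name": name, "email": email, "role": "user"}
```

A payload that tries to smuggle in `"role": "admin"` is rejected outright by the unknown-field check rather than silently ignored.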
+
+---
+
+## Performance Antipatterns
+
+### N+1 Query Problem
+
+Loading related data one record at a time.
+
+```python
+# BAD: N+1 queries
+def get_orders_with_items():
+    orders = Order.query.all()  # 1 query
+    for order in orders:
+        items = OrderItem.query.filter_by(order_id=order.id).all()  # N queries
+        order.items = items
+    return orders
+
+# GOOD: Eager loading
+def get_orders_with_items():
+    return Order.query.options(
+        joinedload(Order.items)
+    ).all()  # 1 query with JOIN
+
+# GOOD: Batch loading
+def get_orders_with_items():
+    orders = Order.query.all()
+    order_ids = [o.id for o in orders]
+    items = OrderItem.query.filter(
+        OrderItem.order_id.in_(order_ids)
+    ).all()  # 2 queries total
+    # Group items by order_id...
+```
+
+**Detection:** Database queries inside loops.
+
+---
+
+### Unbounded Collections
+
+Loading unlimited data into memory.
+
+```go
+// BAD: Load all records into memory
+func GetAllUsers() ([]User, error) {
+    var users []User
+    err := db.Find(&users).Error // Could be millions of rows
+    return users, err
+}
+
+// GOOD: Pagination
+func GetUsers(page, pageSize int) ([]User, error) {
+    var users []User
+    offset := (page - 1) * pageSize
+    err := db.Limit(pageSize).Offset(offset).Find(&users).Error
+    return users, err
+}
+
+// GOOD: Streaming for large datasets
+func ProcessAllUsers(handler func(User) error) error {
+    rows, err := db.Model(&User{}).Rows()
+    if err != nil {
+        return err
+    }
+    defer rows.Close()
+
+    for rows.Next() {
+        var user User
+        if err := db.ScanRows(rows, &user); err != nil {
+            return err
+        }
+        if err := handler(user); err != nil {
+            return err
+        }
+    }
+    return nil
+}
+```
+
+**Detection:** `findAll()`, `find({})`, or queries without `LIMIT`.
+
+---
+
+### Synchronous I/O in Hot Paths
+
+Blocking operations in request handlers.
+ +```javascript +// BAD: Sync file read on every request +app.get('/config', (req, res) => { + const config = fs.readFileSync('./config.json'); // Blocks event loop + res.json(JSON.parse(config)); +}); + +// GOOD: Load once at startup +const config = JSON.parse(fs.readFileSync('./config.json')); + +app.get('/config', (req, res) => { + res.json(config); +}); + +// GOOD: Async with caching +let configCache = null; + +app.get('/config', async (req, res) => { + if (!configCache) { + configCache = JSON.parse(await fs.promises.readFile('./config.json')); + } + res.json(configCache); +}); +``` + +**Detection:** `readFileSync`, `execSync`, or blocking calls in request handlers. + +--- + +## Testing Antipatterns + +### Test Code Duplication + +Repeating setup in every test. + +```typescript +// BAD: Duplicate setup +describe('UserService', () => { + it('should create user', async () => { + const db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + const service = new UserService(userRepo, emailService); + + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); + + it('should update user', async () => { + const db = await createTestDatabase(); // Duplicated + const userRepo = new UserRepository(db); // Duplicated + const emailService = new MockEmailService(); // Duplicated + const service = new UserService(userRepo, emailService); // Duplicated + + // ... 
+ }); +}); + +// GOOD: Shared setup +describe('UserService', () => { + let service: UserService; + let db: TestDatabase; + + beforeEach(async () => { + db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + service = new UserService(userRepo, emailService); + }); + + afterEach(async () => { + await db.cleanup(); + }); + + it('should create user', async () => { + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); +}); +``` + +--- + +### Testing Implementation Instead of Behavior + +Tests coupled to internal implementation. + +```python +# BAD: Testing implementation details +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing internal structure + assert cart._items[0].name == "Apple" + assert cart._total == 1.00 + +# GOOD: Testing behavior +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing public behavior + assert cart.item_count == 1 + assert cart.total == 1.00 + assert cart.contains("Apple") +``` + +--- + +## Async Antipatterns + +### Floating Promises + +Promises without await or catch. + +```typescript +// BAD: Floating promise +async function saveUser(user: User) { + db.save(user); // Not awaited, errors lost + logger.info('User saved'); // Logs before save completes +} + +// BAD: Fire and forget in loop +for (const item of items) { + processItem(item); // All run in parallel, no error handling +} + +// GOOD: Await the promise +async function saveUser(user: User) { + await db.save(user); + logger.info('User saved'); +} + +// GOOD: Process with proper handling +await Promise.all(items.map(item => processItem(item))); + +// Or sequentially +for (const item of items) { + await processItem(item); +} +``` + +**Detection:** Async function calls without `await` or `.then()`. + +--- + +### Callback Hell + +Deeply nested callbacks. 
+
+```javascript
+// BAD: Callback hell
+getUser(userId, (err, user) => {
+  if (err) return handleError(err);
+  getOrders(user.id, (err, orders) => {
+    if (err) return handleError(err);
+    getProducts(orders[0].productIds, (err, products) => {
+      if (err) return handleError(err);
+      renderPage(user, orders, products, (err) => {
+        if (err) return handleError(err);
+        console.log('Done');
+      });
+    });
+  });
+});
+
+// GOOD: Async/await
+async function loadPage(userId) {
+  try {
+    const user = await getUser(userId);
+    const orders = await getOrders(user.id);
+    const products = await getProducts(orders[0].productIds);
+    await renderPage(user, orders, products);
+    console.log('Done');
+  } catch (err) {
+    handleError(err);
+  }
+}
+```
+
+**Detection:** >2 levels of callback nesting.
+
+---
+
+### Async in Constructor
+
+Async operations in constructors.
+
+```typescript
+// BAD: Async in constructor
+class DatabaseConnection {
+  constructor(url: string) {
+    this.connect(url); // Fire-and-forget async
+  }
+
+  private async connect(url: string) {
+    this.client = await createClient(url);
+  }
+}
+
+// GOOD: Factory method
+class DatabaseConnection {
+  private constructor(private client: Client) {}
+
+  static async create(url: string): Promise<DatabaseConnection> {
+    const client = await createClient(url);
+    return new DatabaseConnection(client);
+  }
+}
+
+// Usage
+const db = await DatabaseConnection.create(url);
+```
+
+**Detection:** `async` calls or `.then()` in constructor.
diff --git a/.gemini/skills/code-reviewer/scripts/code_quality_checker.py b/.gemini/skills/code-reviewer/scripts/code_quality_checker.py
new file mode 100644
index 00000000..b3e1cd8b
--- /dev/null
+++ b/.gemini/skills/code-reviewer/scripts/code_quality_checker.py
@@ -0,0 +1,560 @@
+#!/usr/bin/env python3
+"""
+Code Quality Checker
+
+Analyzes source code for quality issues, code smells, complexity metrics,
+and SOLID principle violations.
+ +Usage: + python .gemini/skills/code-reviewer/scripts/code_quality_checker.py /path/to/file.py + python .gemini/skills/code-reviewer/scripts/code_quality_checker.py /path/to/directory --recursive + python .gemini/skills/code-reviewer/scripts/code_quality_checker.py . --language typescript --json +""" + +import argparse +import json +import re +import sys +from pathlib import Path +from typing import Dict, List, Optional + + +# Language-specific file extensions +LANGUAGE_EXTENSIONS = { + "python": [".py"], + "typescript": [".ts", ".tsx"], + "javascript": [".js", ".jsx", ".mjs"], + "go": [".go"], + "swift": [".swift"], + "kotlin": [".kt", ".kts"] +} + +# Code smell thresholds +THRESHOLDS = { + "long_function_lines": 50, + "too_many_parameters": 5, + "high_complexity": 10, + "god_class_methods": 20, + "max_imports": 15 +} + + +def get_file_extension(filepath: Path) -> str: + """Get file extension.""" + return filepath.suffix.lower() + + +def detect_language(filepath: Path) -> Optional[str]: + """Detect programming language from file extension.""" + ext = get_file_extension(filepath) + for lang, extensions in LANGUAGE_EXTENSIONS.items(): + if ext in extensions: + return lang + return None + + +def read_file_content(filepath: Path) -> str: + """Read file content safely.""" + try: + with open(filepath, "r", encoding="utf-8", errors="ignore") as f: + return f.read() + except Exception: + return "" + + +def calculate_cyclomatic_complexity(content: str) -> int: + """ + Estimate cyclomatic complexity based on control flow keywords. 
+ """ + complexity = 1 # Base complexity + + # Control flow patterns that increase complexity + patterns = [ + r"\bif\b", + r"\belif\b", + r"\belse\b", + r"\bfor\b", + r"\bwhile\b", + r"\bcase\b", + r"\bcatch\b", + r"\bexcept\b", + r"\band\b", + r"\bor\b", + r"\|\|", + r"&&" + ] + + for pattern in patterns: + matches = re.findall(pattern, content, re.IGNORECASE) + complexity += len(matches) + + return complexity + + +def count_lines(content: str) -> Dict[str, int]: + """Count different types of lines in code.""" + lines = content.split("\n") + total = len(lines) + blank = sum(1 for line in lines if not line.strip()) + comment = 0 + + for line in lines: + stripped = line.strip() + if stripped.startswith("#") or stripped.startswith("//"): + comment += 1 + elif stripped.startswith("/*") or stripped.startswith("'''") or stripped.startswith('"""'): + comment += 1 + + code = total - blank - comment + + return { + "total": total, + "code": code, + "blank": blank, + "comment": comment + } + + +def find_functions(content: str, language: str) -> List[Dict]: + """Find function definitions and their metrics.""" + functions = [] + + # Language-specific function patterns + patterns = { + "python": r"def\s+(\w+)\s*\(([^)]*)\)", + "typescript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "javascript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "go": r"func\s+(?:\([^)]+\)\s+)?(\w+)\s*\(([^)]*)\)", + "swift": r"func\s+(\w+)\s*\(([^)]*)\)", + "kotlin": r"fun\s+(\w+)\s*\(([^)]*)\)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content, re.MULTILINE) + + for match in matches: + name = next((g for g in match.groups() if g), "anonymous") + params_str = match.group(2) if len(match.groups()) > 1 and match.group(2) else "" + + # Count parameters + params = [p.strip() for p in params_str.split(",") if p.strip()] + param_count = len(params) + + # Estimate 
function length + start_pos = match.end() + remaining = content[start_pos:] + + next_func = re.search(pattern, remaining) + if next_func: + func_body = remaining[:next_func.start()] + else: + func_body = remaining[:min(2000, len(remaining))] + + line_count = len(func_body.split("\n")) + complexity = calculate_cyclomatic_complexity(func_body) + + functions.append({ + "name": name, + "parameters": param_count, + "lines": line_count, + "complexity": complexity + }) + + return functions + + +def find_classes(content: str, language: str) -> List[Dict]: + """Find class definitions and their metrics.""" + classes = [] + + patterns = { + "python": r"class\s+(\w+)", + "typescript": r"class\s+(\w+)", + "javascript": r"class\s+(\w+)", + "go": r"type\s+(\w+)\s+struct", + "swift": r"class\s+(\w+)", + "kotlin": r"class\s+(\w+)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content) + + for match in matches: + name = match.group(1) + + start_pos = match.end() + remaining = content[start_pos:] + + next_class = re.search(pattern, remaining) + if next_class: + class_body = remaining[:next_class.start()] + else: + class_body = remaining + + # Count methods + method_patterns = { + "python": r"def\s+\w+\s*\(", + "typescript": r"(?:public|private|protected)?\s*\w+\s*\([^)]*\)\s*[:{]", + "javascript": r"\w+\s*\([^)]*\)\s*\{", + "go": r"func\s+\(", + "swift": r"func\s+\w+", + "kotlin": r"fun\s+\w+" + } + method_pattern = method_patterns.get(language, method_patterns["python"]) + methods = len(re.findall(method_pattern, class_body)) + + classes.append({ + "name": name, + "methods": methods, + "lines": len(class_body.split("\n")) + }) + + return classes + + +def check_code_smells(content: str, functions: List[Dict], classes: List[Dict]) -> List[Dict]: + """Check for code smells in the content.""" + smells = [] + + # Long functions + for func in functions: + if func["lines"] > THRESHOLDS["long_function_lines"]: + smells.append({ + "type": 
"long_function", + "severity": "medium", + "message": f"Function '{func['name']}' has {func['lines']} lines (max: {THRESHOLDS['long_function_lines']})", + "location": func["name"] + }) + + # Too many parameters + for func in functions: + if func["parameters"] > THRESHOLDS["too_many_parameters"]: + smells.append({ + "type": "too_many_parameters", + "severity": "low", + "message": f"Function '{func['name']}' has {func['parameters']} parameters (max: {THRESHOLDS['too_many_parameters']})", + "location": func["name"] + }) + + # High complexity + for func in functions: + if func["complexity"] > THRESHOLDS["high_complexity"]: + severity = "high" if func["complexity"] > 20 else "medium" + smells.append({ + "type": "high_complexity", + "severity": severity, + "message": f"Function '{func['name']}' has complexity {func['complexity']} (max: {THRESHOLDS['high_complexity']})", + "location": func["name"] + }) + + # God classes + for cls in classes: + if cls["methods"] > THRESHOLDS["god_class_methods"]: + smells.append({ + "type": "god_class", + "severity": "high", + "message": f"Class '{cls['name']}' has {cls['methods']} methods (max: {THRESHOLDS['god_class_methods']})", + "location": cls["name"] + }) + + # Magic numbers + magic_pattern = r"\b(? 
List[Dict]: + """Check for potential SOLID principle violations.""" + violations = [] + + # OCP: Type checking instead of polymorphism + type_checks = len(re.findall(r"isinstance\(|type\(.*\)\s*==|typeof\s+\w+\s*===", content)) + if type_checks > 2: + violations.append({ + "principle": "OCP", + "name": "Open/Closed Principle", + "severity": "medium", + "message": f"Found {type_checks} type checks - consider using polymorphism" + }) + + # LSP/ISP: NotImplementedError + not_impl = len(re.findall(r"raise\s+NotImplementedError|not\s+implemented", content, re.IGNORECASE)) + if not_impl: + violations.append({ + "principle": "LSP/ISP", + "name": "Liskov/Interface Segregation", + "severity": "low", + "message": f"Found {not_impl} unimplemented methods - may indicate oversized interface" + }) + + # DIP: Too many direct imports + imports = len(re.findall(r"^(?:import|from)\s+", content, re.MULTILINE)) + if imports > THRESHOLDS["max_imports"]: + violations.append({ + "principle": "DIP", + "name": "Dependency Inversion Principle", + "severity": "low", + "message": f"File has {imports} imports - consider dependency injection" + }) + + return violations + + +def calculate_quality_score( + line_metrics: Dict, + functions: List[Dict], + classes: List[Dict], + smells: List[Dict], + violations: List[Dict] +) -> int: + """Calculate overall quality score (0-100).""" + score = 100 + + # Deduct for code smells + for smell in smells: + if smell["severity"] == "high": + score -= 10 + elif smell["severity"] == "medium": + score -= 5 + elif smell["severity"] == "low": + score -= 2 + + # Deduct for SOLID violations + for violation in violations: + if violation["severity"] == "high": + score -= 8 + elif violation["severity"] == "medium": + score -= 4 + elif violation["severity"] == "low": + score -= 2 + + # Bonus for good comment ratio (10-30%) + if line_metrics["total"] > 0: + comment_ratio = line_metrics["comment"] / line_metrics["total"] + if 0.1 <= comment_ratio <= 0.3: + score += 5 + + # 
Bonus for reasonable function sizes + if functions: + avg_lines = sum(f["lines"] for f in functions) / len(functions) + if avg_lines < 30: + score += 5 + + return max(0, min(100, score)) + + +def get_grade(score: int) -> str: + """Convert score to letter grade.""" + if score >= 90: + return "A" + elif score >= 80: + return "B" + elif score >= 70: + return "C" + elif score >= 60: + return "D" + else: + return "F" + + +def analyze_file(filepath: Path) -> Dict: + """Analyze a single file for code quality.""" + language = detect_language(filepath) + if not language: + return {"error": f"Unsupported file type: {filepath.suffix}"} + + content = read_file_content(filepath) + if not content: + return {"error": f"Could not read file: {filepath}"} + + line_metrics = count_lines(content) + functions = find_functions(content, language) + classes = find_classes(content, language) + smells = check_code_smells(content, functions, classes) + violations = check_solid_violations(content) + score = calculate_quality_score(line_metrics, functions, classes, smells, violations) + + return { + "file": str(filepath), + "language": language, + "metrics": { + "lines": line_metrics, + "functions": len(functions), + "classes": len(classes), + "avg_complexity": round(sum(f["complexity"] for f in functions) / max(1, len(functions)), 1) + }, + "quality_score": score, + "grade": get_grade(score), + "smells": smells, + "solid_violations": violations, + "function_details": functions[:10], + "class_details": classes[:10] + } + + +def analyze_directory( + dir_path: Path, + recursive: bool = True, + language: Optional[str] = None +) -> Dict: + """Analyze all files in a directory.""" + results = [] + extensions = [] + + if language: + extensions = LANGUAGE_EXTENSIONS.get(language, []) + else: + for exts in LANGUAGE_EXTENSIONS.values(): + extensions.extend(exts) + + pattern = "**/*" if recursive else "*" + + for ext in extensions: + for filepath in dir_path.glob(f"{pattern}{ext}"): + if "node_modules" 
in str(filepath) or ".git" in str(filepath): + continue + result = analyze_file(filepath) + if "error" not in result: + results.append(result) + + if not results: + return {"error": "No supported files found"} + + total_score = sum(r["quality_score"] for r in results) + avg_score = total_score / len(results) + total_smells = sum(len(r["smells"]) for r in results) + total_violations = sum(len(r["solid_violations"]) for r in results) + + return { + "directory": str(dir_path), + "files_analyzed": len(results), + "average_score": round(avg_score, 1), + "overall_grade": get_grade(int(avg_score)), + "total_code_smells": total_smells, + "total_solid_violations": total_violations, + "files": sorted(results, key=lambda x: x["quality_score"]) + } + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if "error" in analysis: + print(f"Error: {analysis['error']}") + return + + print("=" * 60) + print("CODE QUALITY REPORT") + print("=" * 60) + + if "file" in analysis: + print(f"\nFile: {analysis['file']}") + print(f"Language: {analysis['language']}") + print(f"Quality Score: {analysis['quality_score']}/100 ({analysis['grade']})") + + metrics = analysis["metrics"] + print(f"\nLines: {metrics['lines']['total']} ({metrics['lines']['code']} code, {metrics['lines']['comment']} comments)") + print(f"Functions: {metrics['functions']}") + print(f"Classes: {metrics['classes']}") + print(f"Avg Complexity: {metrics['avg_complexity']}") + + if analysis["smells"]: + print("\n--- CODE SMELLS ---") + for smell in analysis["smells"][:10]: + print(f" [{smell['severity'].upper()}] {smell['message']} ({smell['location']})") + + if analysis["solid_violations"]: + print("\n--- SOLID VIOLATIONS ---") + for v in analysis["solid_violations"]: + print(f" [{v['principle']}] {v['message']}") + else: + print(f"\nDirectory: {analysis['directory']}") + print(f"Files Analyzed: {analysis['files_analyzed']}") + print(f"Average Score: {analysis['average_score']}/100 
({analysis['overall_grade']})") + print(f"Total Code Smells: {analysis['total_code_smells']}") + print(f"Total SOLID Violations: {analysis['total_solid_violations']}") + + print("\n--- FILES BY QUALITY ---") + for f in analysis["files"][:10]: + print(f" {f['quality_score']:3d}/100 [{f['grade']}] {f['file']}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze code quality, smells, and SOLID violations" + ) + parser.add_argument( + "path", + help="File or directory to analyze" + ) + parser.add_argument( + "--recursive", "-r", + action="store_true", + default=True, + help="Recursively analyze directories (default: true)" + ) + parser.add_argument( + "--language", "-l", + choices=list(LANGUAGE_EXTENSIONS.keys()), + help="Filter by programming language" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + target = Path(args.path).resolve() + + if not target.exists(): + print(f"Error: Path does not exist: {target}", file=sys.stderr) + sys.exit(1) + + if target.is_file(): + analysis = analyze_file(target) + else: + analysis = analyze_directory(target, args.recursive, args.language) + + if args.json: + output = json.dumps(analysis, indent=2, default=str) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.gemini/skills/code-reviewer/scripts/pr_analyzer.py b/.gemini/skills/code-reviewer/scripts/pr_analyzer.py new file mode 100644 index 00000000..0456abae --- /dev/null +++ b/.gemini/skills/code-reviewer/scripts/pr_analyzer.py @@ -0,0 +1,495 @@ +#!/usr/bin/env python3 +""" +PR Analyzer + +Analyzes pull request changes for review complexity, risk assessment, +and generates review priorities. 
+ +Usage: + python .gemini/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo + python .gemini/skills/code-reviewer/scripts/pr_analyzer.py . --base main --head feature-branch + python .gemini/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo --json +""" + +import argparse +import json +import os +import re +import subprocess +import sys +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# File categories for review prioritization +FILE_CATEGORIES = { + "critical": { + "patterns": [ + r"auth", r"security", r"password", r"token", r"secret", + r"payment", r"billing", r"crypto", r"encrypt" + ], + "weight": 5, + "description": "Security-sensitive files requiring careful review" + }, + "high": { + "patterns": [ + r"api", r"database", r"migration", r"schema", r"model", + r"config", r"env", r"middleware" + ], + "weight": 4, + "description": "Core infrastructure files" + }, + "medium": { + "patterns": [ + r"service", r"controller", r"handler", r"util", r"helper" + ], + "weight": 3, + "description": "Business logic files" + }, + "low": { + "patterns": [ + r"test", r"spec", r"mock", r"fixture", r"story", + r"readme", r"docs", r"\.md$" + ], + "weight": 1, + "description": "Tests and documentation" + } +} + +# Risky patterns to flag +RISK_PATTERNS = [ + { + "name": "hardcoded_secrets", + "pattern": r"(password|secret|api_key|token)\s*[=:]\s*['\"][^'\"]+['\"]", + "severity": "critical", + "message": "Potential hardcoded secret detected" + }, + { + "name": "todo_fixme", + "pattern": r"(TODO|FIXME|HACK|XXX):", + "severity": "low", + "message": "TODO/FIXME comment found" + }, + { + "name": "console_log", + "pattern": r"console\.(log|debug|info|warn|error)\(", + "severity": "medium", + "message": "Console statement found (remove for production)" + }, + { + "name": "debugger", + "pattern": r"\bdebugger\b", + "severity": "high", + "message": "Debugger statement found" + }, + { + "name": "disable_eslint", + "pattern": r"eslint-disable", + 
"severity": "medium", + "message": "ESLint rule disabled" + }, + { + "name": "any_type", + "pattern": r":\s*any\b", + "severity": "medium", + "message": "TypeScript 'any' type used" + }, + { + "name": "sql_concatenation", + "pattern": r"(SELECT|INSERT|UPDATE|DELETE).*\+.*['\"]", + "severity": "critical", + "message": "Potential SQL injection (string concatenation in query)" + } +] + + +def run_git_command(cmd: List[str], cwd: Path) -> Tuple[bool, str]: + """Run a git command and return success status and output.""" + try: + result = subprocess.run( + cmd, + cwd=cwd, + capture_output=True, + text=True, + timeout=30 + ) + return result.returncode == 0, result.stdout.strip() + except subprocess.TimeoutExpired: + return False, "Command timed out" + except Exception as e: + return False, str(e) + + +def get_changed_files(repo_path: Path, base: str, head: str) -> List[Dict]: + """Get list of changed files between two refs.""" + success, output = run_git_command( + ["git", "diff", "--name-status", f"{base}...{head}"], + repo_path + ) + + if not success: + # Try without the triple dot (for uncommitted changes) + success, output = run_git_command( + ["git", "diff", "--name-status", base, head], + repo_path + ) + + if not success or not output: + # Fall back to staged changes + success, output = run_git_command( + ["git", "diff", "--name-status", "--cached"], + repo_path + ) + + files = [] + for line in output.split("\n"): + if not line.strip(): + continue + parts = line.split("\t") + if len(parts) >= 2: + status = parts[0][0] # First character of status + filepath = parts[-1] # Handle renames (R100\told\tnew) + status_map = { + "A": "added", + "M": "modified", + "D": "deleted", + "R": "renamed", + "C": "copied" + } + files.append({ + "path": filepath, + "status": status_map.get(status, "modified") + }) + + return files + + +def get_file_diff(repo_path: Path, filepath: str, base: str, head: str) -> str: + """Get diff content for a specific file.""" + success, output = 
run_git_command( + ["git", "diff", f"{base}...{head}", "--", filepath], + repo_path + ) + if not success: + success, output = run_git_command( + ["git", "diff", "--cached", "--", filepath], + repo_path + ) + return output if success else "" + + +def categorize_file(filepath: str) -> Tuple[str, int]: + """Categorize a file based on its path and name.""" + filepath_lower = filepath.lower() + + for category, info in FILE_CATEGORIES.items(): + for pattern in info["patterns"]: + if re.search(pattern, filepath_lower): + return category, info["weight"] + + return "medium", 2 # Default category + + +def analyze_diff_for_risks(diff_content: str, filepath: str) -> List[Dict]: + """Analyze diff content for risky patterns.""" + risks = [] + + # Only analyze added lines (starting with +) + added_lines = [ + line[1:] for line in diff_content.split("\n") + if line.startswith("+") and not line.startswith("+++") + ] + + content = "\n".join(added_lines) + + for risk in RISK_PATTERNS: + matches = re.findall(risk["pattern"], content, re.IGNORECASE) + if matches: + risks.append({ + "name": risk["name"], + "severity": risk["severity"], + "message": risk["message"], + "file": filepath, + "count": len(matches) + }) + + return risks + + +def count_changes(diff_content: str) -> Dict[str, int]: + """Count additions and deletions in diff.""" + additions = 0 + deletions = 0 + + for line in diff_content.split("\n"): + if line.startswith("+") and not line.startswith("+++"): + additions += 1 + elif line.startswith("-") and not line.startswith("---"): + deletions += 1 + + return {"additions": additions, "deletions": deletions} + + +def calculate_complexity_score(files: List[Dict], all_risks: List[Dict]) -> int: + """Calculate overall PR complexity score (1-10).""" + score = 0 + + # File count contribution (max 3 points) + file_count = len(files) + if file_count > 20: + score += 3 + elif file_count > 10: + score += 2 + elif file_count > 5: + score += 1 + + # Total changes contribution (max 3 
points) + total_changes = sum(f.get("additions", 0) + f.get("deletions", 0) for f in files) + if total_changes > 500: + score += 3 + elif total_changes > 200: + score += 2 + elif total_changes > 50: + score += 1 + + # Risk severity contribution (max 4 points) + critical_risks = sum(1 for r in all_risks if r["severity"] == "critical") + high_risks = sum(1 for r in all_risks if r["severity"] == "high") + + score += min(2, critical_risks) + score += min(2, high_risks) + + return min(10, max(1, score)) + + +def analyze_commit_messages(repo_path: Path, base: str, head: str) -> Dict: + """Analyze commit messages in the PR.""" + success, output = run_git_command( + ["git", "log", "--oneline", f"{base}...{head}"], + repo_path + ) + + if not success or not output: + return {"commits": 0, "issues": []} + + commits = output.strip().split("\n") + issues = [] + + for commit in commits: + if len(commit) < 10: + continue + + # Check for conventional commit format + message = commit[8:] if len(commit) > 8 else commit # Skip hash + + if not re.match(r"^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\(.+\))?:", message): + issues.append({ + "commit": commit[:7], + "issue": "Does not follow conventional commit format" + }) + + if len(message) > 72: + issues.append({ + "commit": commit[:7], + "issue": "Commit message exceeds 72 characters" + }) + + return { + "commits": len(commits), + "issues": issues + } + + +def analyze_pr( + repo_path: Path, + base: str = "main", + head: str = "HEAD" +) -> Dict: + """Perform complete PR analysis.""" + # Get changed files + changed_files = get_changed_files(repo_path, base, head) + + if not changed_files: + return { + "status": "no_changes", + "message": "No changes detected between branches" + } + + # Analyze each file + all_risks = [] + file_analyses = [] + + for file_info in changed_files: + filepath = file_info["path"] + category, weight = categorize_file(filepath) + + # Get diff for the file + diff = get_file_diff(repo_path, 
filepath, base, head) + changes = count_changes(diff) + risks = analyze_diff_for_risks(diff, filepath) + + all_risks.extend(risks) + + file_analyses.append({ + "path": filepath, + "status": file_info["status"], + "category": category, + "priority_weight": weight, + "additions": changes["additions"], + "deletions": changes["deletions"], + "risks": risks + }) + + # Sort by priority (highest first) + file_analyses.sort(key=lambda x: (-x["priority_weight"], x["path"])) + + # Analyze commits + commit_analysis = analyze_commit_messages(repo_path, base, head) + + # Calculate metrics + complexity = calculate_complexity_score(file_analyses, all_risks) + + total_additions = sum(f["additions"] for f in file_analyses) + total_deletions = sum(f["deletions"] for f in file_analyses) + + return { + "status": "analyzed", + "summary": { + "files_changed": len(file_analyses), + "total_additions": total_additions, + "total_deletions": total_deletions, + "complexity_score": complexity, + "complexity_label": get_complexity_label(complexity), + "commits": commit_analysis["commits"] + }, + "risks": { + "critical": [r for r in all_risks if r["severity"] == "critical"], + "high": [r for r in all_risks if r["severity"] == "high"], + "medium": [r for r in all_risks if r["severity"] == "medium"], + "low": [r for r in all_risks if r["severity"] == "low"] + }, + "files": file_analyses, + "commit_issues": commit_analysis["issues"], + "review_order": [f["path"] for f in file_analyses[:10]] # Top 10 priority files + } + + +def get_complexity_label(score: int) -> str: + """Get human-readable complexity label.""" + if score <= 2: + return "Simple" + elif score <= 4: + return "Moderate" + elif score <= 6: + return "Complex" + elif score <= 8: + return "Very Complex" + else: + return "Critical" + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if analysis["status"] == "no_changes": + print("No changes detected.") + return + + summary = analysis["summary"] + 
risks = analysis["risks"] + + print("=" * 60) + print("PR ANALYSIS REPORT") + print("=" * 60) + + print(f"\nComplexity: {summary['complexity_score']}/10 ({summary['complexity_label']})") + print(f"Files Changed: {summary['files_changed']}") + print(f"Lines: +{summary['total_additions']} / -{summary['total_deletions']}") + print(f"Commits: {summary['commits']}") + + # Risk summary + print("\n--- RISK SUMMARY ---") + print(f"Critical: {len(risks['critical'])}") + print(f"High: {len(risks['high'])}") + print(f"Medium: {len(risks['medium'])}") + print(f"Low: {len(risks['low'])}") + + # Critical and high risks details + if risks["critical"]: + print("\n--- CRITICAL RISKS ---") + for risk in risks["critical"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + if risks["high"]: + print("\n--- HIGH RISKS ---") + for risk in risks["high"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + # Commit message issues + if analysis["commit_issues"]: + print("\n--- COMMIT MESSAGE ISSUES ---") + for issue in analysis["commit_issues"][:5]: + print(f" {issue['commit']}: {issue['issue']}") + + # Review order + print("\n--- SUGGESTED REVIEW ORDER ---") + for i, filepath in enumerate(analysis["review_order"], 1): + file_info = next(f for f in analysis["files"] if f["path"] == filepath) + print(f" {i}. 
[{file_info['category'].upper()}] {filepath}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze pull request for review complexity and risks" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to git repository (default: current directory)" + ) + parser.add_argument( + "--base", "-b", + default="main", + help="Base branch for comparison (default: main)" + ) + parser.add_argument( + "--head", + default="HEAD", + help="Head branch/commit for comparison (default: HEAD)" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + + if not (repo_path / ".git").exists(): + print(f"Error: {repo_path} is not a git repository", file=sys.stderr) + sys.exit(1) + + analysis = analyze_pr(repo_path, args.base, args.head) + + if args.json: + output = json.dumps(analysis, indent=2) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.gemini/skills/code-reviewer/scripts/review_report_generator.py b/.gemini/skills/code-reviewer/scripts/review_report_generator.py new file mode 100644 index 00000000..53e5c1b5 --- /dev/null +++ b/.gemini/skills/code-reviewer/scripts/review_report_generator.py @@ -0,0 +1,505 @@ +#!/usr/bin/env python3 +""" +Review Report Generator + +Generates comprehensive code review reports by combining PR analysis +and code quality findings into structured, actionable reports. + +Usage: + python .gemini/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo + python .gemini/skills/code-reviewer/scripts/review_report_generator.py . 
--pr-analysis pr_results.json --quality-analysis quality_results.json
+    python .gemini/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo --format markdown --output review.md
+"""
+
+import argparse
+import json
+import subprocess
+import sys
+from datetime import datetime
+from pathlib import Path
+from typing import Dict, List, Optional, Tuple
+
+
+# Severity weights for prioritization
+SEVERITY_WEIGHTS = {
+    "critical": 100,
+    "high": 75,
+    "medium": 50,
+    "low": 25,
+    "info": 10
+}
+
+# Review verdict thresholds
+VERDICT_THRESHOLDS = {
+    "approve": {"max_critical": 0, "max_high": 0, "max_score": 100},
+    "approve_with_suggestions": {"max_critical": 0, "max_high": 2, "max_score": 85},
+    "request_changes": {"max_critical": 0, "max_high": 5, "max_score": 70},
+    "block": {"max_critical": float("inf"), "max_high": float("inf"), "max_score": 0}
+}
+
+
+def load_json_file(filepath: str) -> Optional[Dict]:
+    """Load a JSON file, returning None if it is missing or malformed."""
+    try:
+        with open(filepath, "r") as f:
+            return json.load(f)
+    except (FileNotFoundError, json.JSONDecodeError):
+        return None
+
+
+def run_pr_analyzer(repo_path: Path) -> Dict:
+    """Run pr_analyzer.py (located alongside this script) and return its results."""
+    script_path = Path(__file__).parent / "pr_analyzer.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "pr_analyzer.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=120
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def run_quality_checker(repo_path: Path) -> Dict:
+    """Run code_quality_checker.py (located alongside this script) and return its results."""
+    script_path = Path(__file__).parent / "code_quality_checker.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "code_quality_checker.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=300
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def calculate_review_score(pr_analysis: Dict, quality_analysis: Dict) -> int:
+    """Calculate overall review score (0-100)."""
+    score = 100
+
+    # Deduct for PR risks
+    if "risks" in pr_analysis:
+        risks = pr_analysis["risks"]
+        score -= len(risks.get("critical", [])) * 15
+        score -= len(risks.get("high", [])) * 10
+        score -= len(risks.get("medium", [])) * 5
+        score -= len(risks.get("low", [])) * 2
+
+    # Deduct for code quality issues
+    if "issues" in quality_analysis:
+        issues = quality_analysis["issues"]
+        score -= len([i for i in issues if i.get("severity") == "critical"]) * 12
+        score -= len([i for i in issues if i.get("severity") == "high"]) * 8
+        score -= len([i for i in issues if i.get("severity") == "medium"]) * 4
+        score -= len([i for i in issues if i.get("severity") == "low"]) * 1
+
+    # Deduct for complexity
+    if "summary" in pr_analysis:
+        complexity = pr_analysis["summary"].get("complexity_score", 0)
+        if complexity > 7:
+            score -= 10
+        elif complexity > 5:
+            score -= 5
+
+    return max(0, min(100, score))
+
+
+def determine_verdict(score: int, critical_count: int, high_count: int) -> Tuple[str, str]:
+    """Determine review verdict based on score and issue counts."""
+    if critical_count > 0:
+        return "block", "Critical issues must be resolved before merge"
+
+    if score >= 90 and high_count == 0:
+        return "approve", "Code meets quality standards"
+
+    if score >= 75 and high_count <= 2:
+ return "approve_with_suggestions", "Minor improvements recommended" + + if score >= 50: + return "request_changes", "Several issues need to be addressed" + + return "block", "Significant issues prevent approval" + + +def generate_findings_list(pr_analysis: Dict, quality_analysis: Dict) -> List[Dict]: + """Combine and prioritize all findings.""" + findings = [] + + # Add PR risk findings + if "risks" in pr_analysis: + for severity, items in pr_analysis["risks"].items(): + for item in items: + findings.append({ + "source": "pr_analysis", + "severity": severity, + "category": item.get("name", "unknown"), + "message": item.get("message", ""), + "file": item.get("file", ""), + "count": item.get("count", 1) + }) + + # Add code quality findings + if "issues" in quality_analysis: + for issue in quality_analysis["issues"]: + findings.append({ + "source": "quality_analysis", + "severity": issue.get("severity", "medium"), + "category": issue.get("type", "unknown"), + "message": issue.get("message", ""), + "file": issue.get("file", ""), + "line": issue.get("line", 0) + }) + + # Sort by severity weight + findings.sort( + key=lambda x: -SEVERITY_WEIGHTS.get(x["severity"], 0) + ) + + return findings + + +def generate_action_items(findings: List[Dict]) -> List[Dict]: + """Generate prioritized action items from findings.""" + action_items = [] + seen_categories = set() + + for finding in findings: + category = finding["category"] + severity = finding["severity"] + + # Group similar issues + if category in seen_categories and severity not in ["critical", "high"]: + continue + + action = { + "priority": "P0" if severity == "critical" else "P1" if severity == "high" else "P2", + "action": get_action_for_category(category, finding), + "severity": severity, + "files_affected": [finding["file"]] if finding.get("file") else [] + } + action_items.append(action) + seen_categories.add(category) + + return action_items[:15] # Top 15 actions + + +def get_action_for_category(category: str, 
finding: Dict) -> str: + """Get actionable recommendation for issue category.""" + actions = { + "hardcoded_secrets": "Remove hardcoded credentials and use environment variables or a secrets manager", + "sql_concatenation": "Use parameterized queries to prevent SQL injection", + "debugger": "Remove debugger statements before merging", + "console_log": "Remove or replace console statements with proper logging", + "todo_fixme": "Address TODO/FIXME comments or create tracking issues", + "disable_eslint": "Address the underlying issue instead of disabling lint rules", + "any_type": "Replace 'any' types with proper type definitions", + "long_function": "Break down function into smaller, focused units", + "god_class": "Split class into smaller, single-responsibility classes", + "too_many_params": "Use parameter objects or builder pattern", + "deep_nesting": "Refactor using early returns, guard clauses, or extraction", + "high_complexity": "Reduce cyclomatic complexity through refactoring", + "missing_error_handling": "Add proper error handling and recovery logic", + "duplicate_code": "Extract duplicate code into shared functions", + "magic_numbers": "Replace magic numbers with named constants", + "large_file": "Consider splitting into multiple smaller modules" + } + return actions.get(category, f"Review and address: {finding.get('message', category)}") + + +def format_markdown_report(report: Dict) -> str: + """Generate markdown-formatted report.""" + lines = [] + + # Header + lines.append("# Code Review Report") + lines.append("") + lines.append(f"**Generated:** {report['metadata']['generated_at']}") + lines.append(f"**Repository:** {report['metadata']['repository']}") + lines.append("") + + # Executive Summary + lines.append("## Executive Summary") + lines.append("") + summary = report["summary"] + verdict = summary["verdict"] + verdict_emoji = { + "approve": "✅", + "approve_with_suggestions": "✅", + "request_changes": "⚠️", + "block": "❌" + }.get(verdict, "❓") + + 
lines.append(f"**Verdict:** {verdict_emoji} {verdict.upper().replace('_', ' ')}") + lines.append(f"**Score:** {summary['score']}/100") + lines.append(f"**Rationale:** {summary['rationale']}") + lines.append("") + + # Issue Counts + lines.append("### Issue Summary") + lines.append("") + lines.append("| Severity | Count |") + lines.append("|----------|-------|") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f"| {severity.capitalize()} | {count} |") + lines.append("") + + # PR Statistics (if available) + if "pr_summary" in report: + pr = report["pr_summary"] + lines.append("### Change Statistics") + lines.append("") + lines.append(f"- **Files Changed:** {pr.get('files_changed', 'N/A')}") + lines.append(f"- **Lines Added:** +{pr.get('total_additions', 0)}") + lines.append(f"- **Lines Removed:** -{pr.get('total_deletions', 0)}") + lines.append(f"- **Complexity:** {pr.get('complexity_label', 'N/A')}") + lines.append("") + + # Action Items + if report.get("action_items"): + lines.append("## Action Items") + lines.append("") + for i, item in enumerate(report["action_items"], 1): + priority = item["priority"] + emoji = "🔴" if priority == "P0" else "🟠" if priority == "P1" else "🟡" + lines.append(f"{i}. 
{emoji} **[{priority}]** {item['action']}") + if item.get("files_affected"): + lines.append(f" - Files: {', '.join(item['files_affected'][:3])}") + lines.append("") + + # Critical Findings + critical_findings = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical_findings: + lines.append("## Critical Issues (Must Fix)") + lines.append("") + for finding in critical_findings: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # High Priority Findings + high_findings = [f for f in report.get("findings", []) if f["severity"] == "high"] + if high_findings: + lines.append("## High Priority Issues") + lines.append("") + for finding in high_findings[:10]: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # Review Order (if available) + if "review_order" in report: + lines.append("## Suggested Review Order") + lines.append("") + for i, filepath in enumerate(report["review_order"][:10], 1): + lines.append(f"{i}. 
`{filepath}`") + lines.append("") + + # Footer + lines.append("---") + lines.append("*Generated by Code Reviewer*") + + return "\n".join(lines) + + +def format_text_report(report: Dict) -> str: + """Generate plain text report.""" + lines = [] + + lines.append("=" * 60) + lines.append("CODE REVIEW REPORT") + lines.append("=" * 60) + lines.append("") + lines.append(f"Generated: {report['metadata']['generated_at']}") + lines.append(f"Repository: {report['metadata']['repository']}") + lines.append("") + + summary = report["summary"] + verdict = summary["verdict"].upper().replace("_", " ") + lines.append(f"VERDICT: {verdict}") + lines.append(f"SCORE: {summary['score']}/100") + lines.append(f"RATIONALE: {summary['rationale']}") + lines.append("") + + lines.append("--- ISSUE SUMMARY ---") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f" {severity.capitalize()}: {count}") + lines.append("") + + if report.get("action_items"): + lines.append("--- ACTION ITEMS ---") + for i, item in enumerate(report["action_items"][:10], 1): + lines.append(f" {i}. 
[{item['priority']}] {item['action']}") + lines.append("") + + critical = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical: + lines.append("--- CRITICAL ISSUES ---") + for f in critical: + lines.append(f" [{f.get('file', 'unknown')}] {f['message']}") + lines.append("") + + lines.append("=" * 60) + + return "\n".join(lines) + + +def generate_report( + repo_path: Path, + pr_analysis: Optional[Dict] = None, + quality_analysis: Optional[Dict] = None +) -> Dict: + """Generate comprehensive review report.""" + # Run analyses if not provided + if pr_analysis is None: + pr_analysis = run_pr_analyzer(repo_path) + + if quality_analysis is None: + quality_analysis = run_quality_checker(repo_path) + + # Generate findings + findings = generate_findings_list(pr_analysis, quality_analysis) + + # Count issues by severity + issue_counts = { + "critical": len([f for f in findings if f["severity"] == "critical"]), + "high": len([f for f in findings if f["severity"] == "high"]), + "medium": len([f for f in findings if f["severity"] == "medium"]), + "low": len([f for f in findings if f["severity"] == "low"]) + } + + # Calculate score and verdict + score = calculate_review_score(pr_analysis, quality_analysis) + verdict, rationale = determine_verdict( + score, + issue_counts["critical"], + issue_counts["high"] + ) + + # Generate action items + action_items = generate_action_items(findings) + + # Build report + report = { + "metadata": { + "generated_at": datetime.now().isoformat(), + "repository": str(repo_path), + "version": "1.0.0" + }, + "summary": { + "score": score, + "verdict": verdict, + "rationale": rationale, + "issue_counts": issue_counts + }, + "findings": findings, + "action_items": action_items + } + + # Add PR summary if available + if pr_analysis.get("status") == "analyzed": + report["pr_summary"] = pr_analysis.get("summary", {}) + report["review_order"] = pr_analysis.get("review_order", []) + + # Add quality summary if available + if 
quality_analysis.get("status") == "analyzed": + report["quality_summary"] = quality_analysis.get("summary", {}) + + return report + + +def main(): + parser = argparse.ArgumentParser( + description="Generate comprehensive code review reports" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to repository (default: current directory)" + ) + parser.add_argument( + "--pr-analysis", + help="Path to pre-computed PR analysis JSON" + ) + parser.add_argument( + "--quality-analysis", + help="Path to pre-computed quality analysis JSON" + ) + parser.add_argument( + "--format", "-f", + choices=["text", "markdown", "json"], + default="text", + help="Output format (default: text)" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output as JSON (shortcut for --format json)" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + if not repo_path.exists(): + print(f"Error: Path does not exist: {repo_path}", file=sys.stderr) + sys.exit(1) + + # Load pre-computed analyses if provided + pr_analysis = None + quality_analysis = None + + if args.pr_analysis: + pr_analysis = load_json_file(args.pr_analysis) + if not pr_analysis: + print(f"Warning: Could not load PR analysis from {args.pr_analysis}") + + if args.quality_analysis: + quality_analysis = load_json_file(args.quality_analysis) + if not quality_analysis: + print(f"Warning: Could not load quality analysis from {args.quality_analysis}") + + # Generate report + report = generate_report(repo_path, pr_analysis, quality_analysis) + + # Format output + output_format = "json" if args.json else args.format + + if output_format == "json": + output = json.dumps(report, indent=2) + elif output_format == "markdown": + output = format_markdown_report(report) + else: + output = format_text_report(report) + + # Write or print output + if args.output: + with open(args.output, "w") as f: + 
f.write(output) + print(f"Report written to {args.output}") + else: + print(output) + + +if __name__ == "__main__": + main() diff --git a/.github/skills/code-reviewer/SKILL.md b/.github/skills/code-reviewer/SKILL.md new file mode 100644 index 00000000..1c182296 --- /dev/null +++ b/.github/skills/code-reviewer/SKILL.md @@ -0,0 +1,177 @@ +--- +name: code-reviewer +description: Code review automation for TypeScript, JavaScript, Python, Go, Swift, Kotlin. Analyzes PRs for complexity and risk, checks code quality for SOLID violations and code smells, generates review reports. Use when reviewing pull requests, analyzing code quality, identifying issues, generating review checklists. +--- + +# Code Reviewer + +Automated code review tools for analyzing pull requests, detecting code quality issues, and generating review reports. + +--- + +## Table of Contents + +- [Tools](#tools) + - [PR Analyzer](#pr-analyzer) + - [Code Quality Checker](#code-quality-checker) + - [Review Report Generator](#review-report-generator) +- [Reference Guides](#reference-guides) +- [Languages Supported](#languages-supported) + +--- + +## Tools + +### PR Analyzer + +Analyzes git diff between branches to assess review complexity and identify risks. + +```bash +# Analyze current branch against main +python scripts/pr_analyzer.py /path/to/repo + +# Compare specific branches +python scripts/pr_analyzer.py . 
--base main --head feature-branch + +# JSON output for integration +python scripts/pr_analyzer.py /path/to/repo --json +``` + +**What it detects:** +- Hardcoded secrets (passwords, API keys, tokens) +- SQL injection patterns (string concatenation in queries) +- Debug statements (debugger, console.log) +- ESLint rule disabling +- TypeScript `any` types +- TODO/FIXME comments + +**Output includes:** +- Complexity score (1-10) +- Risk categorization (critical, high, medium, low) +- File prioritization for review order +- Commit message validation + +--- + +### Code Quality Checker + +Analyzes source code for structural issues, code smells, and SOLID violations. + +```bash +# Analyze a directory +python scripts/code_quality_checker.py /path/to/code + +# Analyze specific language +python scripts/code_quality_checker.py . --language python + +# JSON output +python scripts/code_quality_checker.py /path/to/code --json +``` + +**What it detects:** +- Long functions (>50 lines) +- Large files (>500 lines) +- God classes (>20 methods) +- Deep nesting (>4 levels) +- Too many parameters (>5) +- High cyclomatic complexity +- Missing error handling +- Unused imports +- Magic numbers + +**Thresholds:** + +| Issue | Threshold | +|-------|-----------| +| Long function | >50 lines | +| Large file | >500 lines | +| God class | >20 methods | +| Too many params | >5 | +| Deep nesting | >4 levels | +| High complexity | >10 branches | + +--- + +### Review Report Generator + +Combines PR analysis and code quality findings into structured review reports. + +```bash +# Generate report for current repo +python scripts/review_report_generator.py /path/to/repo + +# Markdown output +python scripts/review_report_generator.py . --format markdown --output review.md + +# Use pre-computed analyses +python scripts/review_report_generator.py . 
\ + --pr-analysis pr_results.json \ + --quality-analysis quality_results.json +``` + +**Report includes:** +- Review verdict (approve, request changes, block) +- Score (0-100) +- Prioritized action items +- Issue summary by severity +- Suggested review order + +**Verdicts:** + +| Score | Verdict | +|-------|---------| +| 90+ with no high issues | Approve | +| 75+ with ≤2 high issues | Approve with suggestions | +| 50-74 | Request changes | +| <50 or critical issues | Block | + +--- + +## Reference Guides + +### Code Review Checklist +`.github/skills/code-reviewer/references/code_review_checklist.md` + +Systematic checklists covering: +- Pre-review checks (build, tests, PR hygiene) +- Correctness (logic, data handling, error handling) +- Security (input validation, injection prevention) +- Performance (efficiency, caching, scalability) +- Maintainability (code quality, naming, structure) +- Testing (coverage, quality, mocking) +- Language-specific checks + +### Coding Standards +`.github/skills/code-reviewer/references/coding_standards.md` + +Language-specific standards for: +- TypeScript (type annotations, null safety, async/await) +- JavaScript (declarations, patterns, modules) +- Python (type hints, exceptions, class design) +- Go (error handling, structs, concurrency) +- Swift (optionals, protocols, errors) +- Kotlin (null safety, data classes, coroutines) + +### Common Antipatterns +`.github/skills/code-reviewer/references/common_antipatterns.md` + +Antipattern catalog with examples and fixes: +- Structural (god class, long method, deep nesting) +- Logic (boolean blindness, stringly typed code) +- Security (SQL injection, hardcoded credentials) +- Performance (N+1 queries, unbounded collections) +- Testing (duplication, testing implementation) +- Async (floating promises, callback hell) + +--- + +## Languages Supported + +| Language | Extensions | +|----------|------------| +| Python | `.py` | +| TypeScript | `.ts`, `.tsx` | +| JavaScript | `.js`, `.jsx`, 
`.mjs` | +| Go | `.go` | +| Swift | `.swift` | +| Kotlin | `.kt`, `.kts` | \ No newline at end of file diff --git a/.github/skills/code-reviewer/references/code_review_checklist.md b/.github/skills/code-reviewer/references/code_review_checklist.md new file mode 100644 index 00000000..b7bd0867 --- /dev/null +++ b/.github/skills/code-reviewer/references/code_review_checklist.md @@ -0,0 +1,270 @@ +# Code Review Checklist + +Structured checklists for systematic code review across different aspects. + +--- + +## Table of Contents + +- [Pre-Review Checks](#pre-review-checks) +- [Correctness](#correctness) +- [Security](#security) +- [Performance](#performance) +- [Maintainability](#maintainability) +- [Testing](#testing) +- [Documentation](#documentation) +- [Language-Specific Checks](#language-specific-checks) + +--- + +## Pre-Review Checks + +Before diving into code, verify these basics: + +### Build and Tests +- [ ] Code compiles without errors +- [ ] All existing tests pass +- [ ] New tests are included for new functionality +- [ ] No unintended files included (build artifacts, IDE configs) + +### PR Hygiene +- [ ] PR has clear title and description +- [ ] Changes are scoped appropriately (not too large) +- [ ] Commits follow conventional commit format +- [ ] Branch is up to date with base branch + +### Scope Verification +- [ ] Changes match the stated purpose +- [ ] No unrelated changes bundled in +- [ ] Breaking changes are documented +- [ ] Migration path provided if needed + +--- + +## Correctness + +### Logic +- [ ] Algorithm implements requirements correctly +- [ ] Edge cases handled (null, empty, boundary values) +- [ ] Off-by-one errors checked +- [ ] Correct operators used (== vs ===, & vs &&) +- [ ] Loop termination conditions correct +- [ ] Recursion has proper base cases + +### Data Handling +- [ ] Data types appropriate for the use case +- [ ] Numeric overflow/underflow considered +- [ ] Date/time handling accounts for timezones +- [ ] Unicode and 
internationalization handled +- [ ] Data validation at entry points + +### State Management +- [ ] State transitions are valid +- [ ] Race conditions addressed +- [ ] Concurrent access handled correctly +- [ ] State cleanup on errors/exit + +### Error Handling +- [ ] Errors caught at appropriate levels +- [ ] Error messages are actionable +- [ ] Errors don't expose sensitive information +- [ ] Recovery or graceful degradation implemented +- [ ] Resources cleaned up in error paths + +--- + +## Security + +### Input Validation +- [ ] All user input validated and sanitized +- [ ] Input length limits enforced +- [ ] File uploads validated (type, size, content) +- [ ] URL parameters validated + +### Injection Prevention +- [ ] SQL queries parameterized +- [ ] Command execution uses safe APIs +- [ ] HTML output escaped to prevent XSS +- [ ] LDAP queries properly escaped +- [ ] XML parsing disables external entities + +### Authentication & Authorization +- [ ] Authentication required for protected resources +- [ ] Authorization checked before operations +- [ ] Session management secure +- [ ] Password handling follows best practices +- [ ] Token expiration implemented + +### Data Protection +- [ ] Sensitive data encrypted at rest +- [ ] Sensitive data encrypted in transit +- [ ] PII handled according to policy +- [ ] Secrets not hardcoded +- [ ] Logs don't contain sensitive data + +### API Security +- [ ] Rate limiting implemented +- [ ] CORS configured correctly +- [ ] CSRF protection in place +- [ ] API keys/tokens secured +- [ ] Endpoints use HTTPS + +--- + +## Performance + +### Efficiency +- [ ] Appropriate data structures used +- [ ] Algorithms have acceptable complexity +- [ ] Database queries are optimized +- [ ] N+1 query problems avoided +- [ ] Indexes used where beneficial + +### Resource Usage +- [ ] Memory usage bounded +- [ ] No memory leaks +- [ ] File handles properly closed +- [ ] Database connections pooled +- [ ] Network calls minimized + +### Caching 
+- [ ] Appropriate caching strategy +- [ ] Cache invalidation handled +- [ ] Cache keys are unique and predictable +- [ ] TTL values appropriate + +### Scalability +- [ ] Horizontal scaling considered +- [ ] Bottlenecks identified +- [ ] Async processing for long operations +- [ ] Batch operations where appropriate + +--- + +## Maintainability + +### Code Quality +- [ ] Functions/methods have single responsibility +- [ ] Classes follow SOLID principles +- [ ] Code is DRY (Don't Repeat Yourself) +- [ ] No dead code or commented-out code +- [ ] Magic numbers replaced with constants + +### Naming +- [ ] Names are descriptive and consistent +- [ ] Naming follows project conventions +- [ ] No abbreviations that obscure meaning +- [ ] Boolean variables/functions have is/has/can prefix + +### Structure +- [ ] Functions are appropriately sized (<50 lines preferred) +- [ ] Nesting depth is reasonable (<4 levels) +- [ ] Related code is grouped together +- [ ] Dependencies are minimal and explicit + +### Readability +- [ ] Code is self-documenting where possible +- [ ] Complex logic has explanatory comments +- [ ] Formatting is consistent +- [ ] No overly clever or obscure code + +--- + +## Testing + +### Coverage +- [ ] New code has unit tests +- [ ] Critical paths have integration tests +- [ ] Edge cases are tested +- [ ] Error conditions are tested + +### Quality +- [ ] Tests are independent +- [ ] Tests have clear assertions +- [ ] Test names describe what is tested +- [ ] Tests don't depend on external state + +### Mocking +- [ ] External dependencies are mocked +- [ ] Mocks are realistic +- [ ] Mock setup is not excessive + +--- + +## Documentation + +### Code Documentation +- [ ] Public APIs are documented +- [ ] Complex algorithms explained +- [ ] Non-obvious decisions documented +- [ ] TODO/FIXME comments have context + +### External Documentation +- [ ] README updated if needed +- [ ] API documentation updated +- [ ] Changelog updated +- [ ] Migration guides 
provided + +--- + +## Language-Specific Checks + +### TypeScript/JavaScript +- [ ] Types are explicit (avoid `any`) +- [ ] Null checks present (`?.`, `??`) +- [ ] Async/await errors handled +- [ ] No floating promises +- [ ] Memory leaks from closures checked + +### Python +- [ ] Type hints used for public APIs +- [ ] Context managers for resources (`with` statements) +- [ ] Exception handling is specific (not bare `except`) +- [ ] No mutable default arguments +- [ ] List comprehensions used appropriately + +### Go +- [ ] Errors checked and handled +- [ ] Goroutine leaks prevented +- [ ] Context propagation correct +- [ ] Defer statements in right order +- [ ] Interfaces minimal + +### Swift +- [ ] Optionals handled safely +- [ ] Memory management correct (weak/unowned) +- [ ] Error handling uses Result or throws +- [ ] Access control appropriate +- [ ] Codable implementation correct + +### Kotlin +- [ ] Null safety leveraged +- [ ] Coroutine cancellation handled +- [ ] Data classes used appropriately +- [ ] Extension functions don't obscure behavior +- [ ] Sealed classes for state + +--- + +## Review Process Tips + +### Before Approving +1. Verify all critical checks passed +2. Confirm tests are adequate +3. Consider deployment impact +4. Check for any security concerns +5. 
Ensure documentation is updated + +### Providing Feedback +- Be specific about issues +- Explain why something is problematic +- Suggest alternatives when possible +- Distinguish blockers from suggestions +- Acknowledge good patterns + +### When to Block +- Security vulnerabilities present +- Critical logic errors +- No tests for risky changes +- Breaking changes without migration +- Significant performance regressions diff --git a/.github/skills/code-reviewer/references/coding_standards.md b/.github/skills/code-reviewer/references/coding_standards.md new file mode 100644 index 00000000..9fbc6a06 --- /dev/null +++ b/.github/skills/code-reviewer/references/coding_standards.md @@ -0,0 +1,555 @@ +# Coding Standards + +Language-specific coding standards and conventions for code review. + +--- + +## Table of Contents + +- [Universal Principles](#universal-principles) +- [TypeScript Standards](#typescript-standards) +- [JavaScript Standards](#javascript-standards) +- [Python Standards](#python-standards) +- [Go Standards](#go-standards) +- [Swift Standards](#swift-standards) +- [Kotlin Standards](#kotlin-standards) + +--- + +## Universal Principles + +These apply across all languages. 
+ +### Naming Conventions + +| Element | Convention | Example | +|---------|------------|---------| +| Variables | camelCase (JS/TS), snake_case (Python/Go) | `userName`, `user_name` | +| Constants | SCREAMING_SNAKE_CASE | `MAX_RETRY_COUNT` | +| Functions | camelCase (JS/TS), snake_case (Python) | `getUserById`, `get_user_by_id` | +| Classes | PascalCase | `UserRepository` | +| Interfaces | PascalCase, optionally prefixed | `IUserService` or `UserService` | +| Private members | Prefix with underscore or use access modifiers | `_internalState` | + +### Function Design + +``` +Good functions: +- Do one thing well +- Have descriptive names (verb + noun) +- Take 3 or fewer parameters +- Return early for error cases +- Stay under 50 lines +``` + +### Error Handling + +``` +Good error handling: +- Catch specific errors, not generic exceptions +- Log with context (what, where, why) +- Clean up resources in error paths +- Don't swallow errors silently +- Provide actionable error messages +``` + +--- + +## TypeScript Standards + +### Type Annotations + +```typescript +// Avoid 'any' - use unknown for truly unknown types +function processData(data: unknown): ProcessedResult { + if (isValidData(data)) { + return transform(data); + } + throw new Error('Invalid data format'); +} + +// Use explicit return types for public APIs +export function calculateTotal(items: CartItem[]): number { + return items.reduce((sum, item) => sum + item.price, 0); +} + +// Use type guards for runtime checks +function isUser(obj: unknown): obj is User { + return ( + typeof obj === 'object' && + obj !== null && + 'id' in obj && + 'email' in obj + ); +} +``` + +### Null Safety + +```typescript +// Use optional chaining and nullish coalescing +const userName = user?.profile?.name ?? 
'Anonymous'; + +// Be explicit about nullable types +interface Config { + timeout: number; + retries?: number; // Optional + fallbackUrl: string | null; // Explicitly nullable +} + +// Use assertion functions for validation +function assertDefined<T>(value: T | null | undefined): asserts value is T { + if (value === null || value === undefined) { + throw new Error('Value is not defined'); + } +} +``` + +### Async/Await + +```typescript +// Always handle errors in async functions +async function fetchUser(id: string): Promise<User> { + try { + const response = await api.get(`/users/${id}`); + return response.data; + } catch (error) { + logger.error('Failed to fetch user', { id, error }); + throw new UserFetchError(id, error); + } +} + +// Use Promise.all for parallel operations +async function loadDashboard(userId: string): Promise<Dashboard> { + const [profile, stats, notifications] = await Promise.all([ + fetchProfile(userId), + fetchStats(userId), + fetchNotifications(userId) + ]); + return { profile, stats, notifications }; +} +``` + +### React/Component Standards + +```typescript +// Use explicit prop types +interface ButtonProps { + label: string; + onClick: () => void; + variant?: 'primary' | 'secondary'; + disabled?: boolean; +} + +// Prefer functional components with hooks +function Button({ label, onClick, variant = 'primary', disabled = false }: ButtonProps) { + return ( + <button className={variant} onClick={onClick} disabled={disabled}> + {label} + </button> + ); +} + +// Use custom hooks for reusable logic +function useDebounce<T>(value: T, delay: number): T { + const [debouncedValue, setDebouncedValue] = useState(value); + + useEffect(() => { + const timer = setTimeout(() => setDebouncedValue(value), delay); + return () => clearTimeout(timer); + }, [value, delay]); + + return debouncedValue; +} +``` + +--- + +## JavaScript Standards + +### Variable Declarations + +```javascript +// Use const by default, let when reassignment needed +const MAX_ITEMS = 100; +let currentCount = 0; + +// Never use var +// var is function-scoped and hoisted, leading to bugs +``` + 
+### Object and Array Patterns + +```javascript +// Use object destructuring +const { name, email, role = 'user' } = user; + +// Use spread for immutable updates +const updatedUser = { ...user, lastLogin: new Date() }; +const updatedList = [...items, newItem]; + +// Use array methods over loops +const activeUsers = users.filter(u => u.isActive); +const emails = users.map(u => u.email); +const total = orders.reduce((sum, o) => sum + o.amount, 0); +``` + +### Module Patterns + +```javascript +// Use named exports for utilities +export function formatDate(date) { ... } +export function parseDate(str) { ... } + +// Use default export for main component/class +export default class UserService { ... } + +// Group related exports +export { formatDate, parseDate, isValidDate } from './dateUtils'; +``` + +--- + +## Python Standards + +### Type Hints (PEP 484) + +```python +from typing import Optional, List, Dict + +def get_user(user_id: int) -> Optional[User]: + """Fetch user by ID, returns None if not found.""" + return db.query(User).filter(User.id == user_id).first() + +def process_items(items: List[str]) -> Dict[str, int]: + """Count occurrences of each item.""" + return {item: items.count(item) for item in set(items)} + +def send_notification( + user: User, + message: str, + *, + priority: str = "normal", + channels: Optional[List[str]] = None +) -> bool: + """Send notification to user via specified channels.""" + channels = channels or ["email"] + # Implementation +``` + +### Exception Handling + +```python +# Catch specific exceptions +try: + result = api_client.fetch_data(endpoint) +except ConnectionError as e: + logger.warning(f"Connection failed: {e}") + return cached_data +except TimeoutError as e: + logger.error(f"Request timed out: {e}") + raise ServiceUnavailableError() from e + +# Use context managers for resources +with open(filepath, 'r') as f: + data = json.load(f) + +# Custom exceptions should be informative +class ValidationError(Exception): + def 
__init__(self, field: str, message: str): + self.field = field + self.message = message + super().__init__(f"{field}: {message}") +``` + +### Class Design + +```python +from dataclasses import dataclass +from abc import ABC, abstractmethod + +# Use dataclasses for data containers +@dataclass +class UserDTO: + id: int + email: str + name: str + is_active: bool = True + +# Use ABC for interfaces +class Repository(ABC): + @abstractmethod + def find_by_id(self, id: int) -> Optional[Entity]: + pass + + @abstractmethod + def save(self, entity: Entity) -> Entity: + pass + +# Use properties for computed attributes +class Order: + def __init__(self, items: List[OrderItem]): + self._items = items + + @property + def total(self) -> Decimal: + return sum(item.price * item.quantity for item in self._items) +``` + +--- + +## Go Standards + +### Error Handling + +```go +// Always check errors +file, err := os.Open(filename) +if err != nil { + return fmt.Errorf("failed to open %s: %w", filename, err) +} +defer file.Close() + +// Use custom error types for specific cases +type ValidationError struct { + Field string + Message string +} + +func (e *ValidationError) Error() string { + return fmt.Sprintf("%s: %s", e.Field, e.Message) +} + +// Wrap errors with context +if err := db.Query(query); err != nil { + return fmt.Errorf("query failed for user %d: %w", userID, err) +} +``` + +### Struct Design + +```go +// Use unexported fields with exported methods +type UserService struct { + repo UserRepository + cache Cache + logger Logger +} + +// Constructor functions for initialization +func NewUserService(repo UserRepository, cache Cache, logger Logger) *UserService { + return &UserService{ + repo: repo, + cache: cache, + logger: logger, + } +} + +// Keep interfaces small +type Reader interface { + Read(p []byte) (n int, err error) +} + +type Writer interface { + Write(p []byte) (n int, err error) +} +``` + +### Concurrency + +```go +// Use context for cancellation +func fetchData(ctx 
context.Context, url string) ([]byte, error) { + req, err := http.NewRequestWithContext(ctx, "GET", url, nil) + if err != nil { + return nil, err + } + // ... +} + +// Use channels for communication +func worker(jobs <-chan Job, results chan<- Result) { + for job := range jobs { + result := process(job) + results <- result + } +} + +// Use sync.WaitGroup for coordination +var wg sync.WaitGroup +for _, item := range items { + wg.Add(1) + go func(i Item) { + defer wg.Done() + processItem(i) + }(item) +} +wg.Wait() +``` + +--- + +## Swift Standards + +### Optionals + +```swift +// Use optional binding +if let user = fetchUser(id: userId) { + displayProfile(user) +} + +// Use guard for early exit +guard let data = response.data else { + throw NetworkError.noData +} + +// Use nil coalescing for defaults +let displayName = user.nickname ?? user.email + +// Avoid force unwrapping except in tests +// BAD: let name = user.name! +// GOOD: guard let name = user.name else { return } +``` + +### Protocol-Oriented Design + +```swift +// Define protocols with minimal requirements +protocol Identifiable { + var id: String { get } +} + +protocol Persistable: Identifiable { + func save() throws + static func find(by id: String) -> Self? 
+} + +// Use protocol extensions for default implementations +extension Persistable { + func save() throws { + try Storage.shared.save(self) + } +} + +// Prefer composition over inheritance +struct User: Identifiable, Codable { + let id: String + var name: String + var email: String +} +``` + +### Error Handling + +```swift +// Define domain-specific errors +enum AuthError: Error { + case invalidCredentials + case tokenExpired + case networkFailure(underlying: Error) +} + +// Use Result type for async operations +func authenticate( + email: String, + password: String, + completion: @escaping (Result<User, AuthError>) -> Void +) + +// Use throws for synchronous operations +func validate(_ input: String) throws -> ValidatedInput { + guard !input.isEmpty else { + throw ValidationError.emptyInput + } + return ValidatedInput(value: input) +} +``` + +--- + +## Kotlin Standards + +### Null Safety + +```kotlin +// Use nullable types explicitly +fun findUser(id: Int): User? { + return userRepository.find(id) +} + +// Use safe calls and elvis operator +val name = user?.profile?.name ?: "Unknown" + +// Use let for null checks with side effects +user?.let { activeUser -> + sendWelcomeEmail(activeUser.email) + logActivity(activeUser.id) +} + +// Use require/check for validation +fun processPayment(amount: Double) { + require(amount > 0) { "Amount must be positive: $amount" } + // Process +} +``` + +### Data Classes and Sealed Classes + +```kotlin +// Use data classes for DTOs +data class UserDTO( + val id: Int, + val email: String, + val name: String, + val isActive: Boolean = true +) + +// Use sealed classes for state +sealed class Result<out T> { + data class Success<T>(val data: T) : Result<T>() + data class Error(val message: String, val cause: Throwable? = null) : Result<Nothing>() + object Loading : Result<Nothing>() +} + +// Pattern matching with when +fun handleResult(result: Result<User>) = when (result) { + is Result.Success -> showUser(result.data) + is Result.Error -> showError(result.message) + Result.Loading -> showLoading() +} +``` + +### Coroutines + +```kotlin +// Use structured concurrency +suspend fun loadDashboard(): Dashboard = coroutineScope { + val profile = async { fetchProfile() } + val stats = async { fetchStats() } + val notifications = async { fetchNotifications() } + + Dashboard( + profile = profile.await(), + stats = stats.await(), + notifications = notifications.await() + ) +} + +// Handle cancellation +suspend fun fetchWithRetry(url: String): Response { + repeat(3) { attempt -> + try { + return httpClient.get(url) + } catch (e: IOException) { + if (attempt == 2) throw e + delay(1000L * (attempt + 1)) + } + } + throw IllegalStateException("Unreachable") +} +``` diff --git a/.github/skills/code-reviewer/references/common_antipatterns.md b/.github/skills/code-reviewer/references/common_antipatterns.md new file mode 100644 index 00000000..26045452 --- /dev/null +++ b/.github/skills/code-reviewer/references/common_antipatterns.md @@ -0,0 +1,739 @@ +# Common Antipatterns + +Code antipatterns to identify during review, with examples and fixes. + +--- + +## Table of Contents + +- [Structural Antipatterns](#structural-antipatterns) +- [Logic Antipatterns](#logic-antipatterns) +- [Security Antipatterns](#security-antipatterns) +- [Performance Antipatterns](#performance-antipatterns) +- [Testing Antipatterns](#testing-antipatterns) +- [Async Antipatterns](#async-antipatterns) + +--- + +## Structural Antipatterns + +### God Class + +A class that does too much and knows too much. + +```typescript +// BAD: God class handling everything +class UserManager { + createUser(data: UserData) { ... } + updateUser(id: string, data: UserData) { ... } + deleteUser(id: string) { ... } + sendEmail(userId: string, content: string) { ... 
} + generateReport(userId: string) { ... } + validatePassword(password: string) { ... } + hashPassword(password: string) { ... } + uploadAvatar(userId: string, file: File) { ... } + resizeImage(file: File) { ... } + logActivity(userId: string, action: string) { ... } + // 50 more methods... +} + +// GOOD: Single responsibility classes +class UserRepository { + create(data: UserData): User { ... } + update(id: string, data: Partial<UserData>): User { ... } + delete(id: string): void { ... } +} + +class EmailService { + send(to: string, content: string): void { ... } +} + +class PasswordService { + validate(password: string): ValidationResult { ... } + hash(password: string): string { ... } +} +``` + +**Detection:** Class has >20 methods, >500 lines, or handles unrelated concerns. + +--- + +### Long Method + +Functions that do too much and are hard to understand. + +```python +# BAD: Long method doing everything +def process_order(order_data): + # Validate order (20 lines) + if not order_data.get('items'): + raise ValueError('No items') + if not order_data.get('customer_id'): + raise ValueError('No customer') + # ... more validation + + # Calculate totals (30 lines) + subtotal = 0 + for item in order_data['items']: + price = get_product_price(item['product_id']) + subtotal += price * item['quantity'] + # ... tax calculation, discounts + + # Process payment (40 lines) + payment_result = payment_gateway.charge(...) + # ... handle payment errors + + # Create order record (20 lines) + order = Order.create(...) + + # Send notifications (20 lines) + send_order_confirmation(...) + notify_warehouse(...)
+ + return order + +# GOOD: Composed of focused functions +def process_order(order_data): + validate_order(order_data) + totals = calculate_order_totals(order_data) + payment = process_payment(order_data['customer_id'], totals) + order = create_order_record(order_data, totals, payment) + send_order_notifications(order) + return order +``` + +**Detection:** Function >50 lines or requires scrolling to read. + +--- + +### Deep Nesting + +Excessive indentation making code hard to follow. + +```javascript +// BAD: Deep nesting +function processData(data) { + if (data) { + if (data.items) { + if (data.items.length > 0) { + for (const item of data.items) { + if (item.isValid) { + if (item.type === 'premium') { + if (item.price > 100) { + // Finally do something + processItem(item); + } + } + } + } + } + } + } +} + +// GOOD: Early returns and guard clauses +function processData(data) { + if (!data?.items?.length) { + return; + } + + const premiumItems = data.items.filter( + item => item.isValid && item.type === 'premium' && item.price > 100 + ); + + premiumItems.forEach(processItem); +} +``` + +**Detection:** Indentation >4 levels deep. + +--- + +### Magic Numbers and Strings + +Hard-coded values without explanation. + +```go +// BAD: Magic numbers +func calculateDiscount(total float64, userType int) float64 { + if userType == 1 { + return total * 0.15 + } else if userType == 2 { + return total * 0.25 + } + return total * 0.05 +} + +// GOOD: Named constants +const ( + UserTypeRegular = 1 + UserTypePremium = 2 + + DiscountDefault = 0.05 + DiscountRegular = 0.15 + DiscountPremium = 0.25 +) + +func calculateDiscount(total float64, userType int) float64 { + switch userType { + case UserTypePremium: + return total * DiscountPremium + case UserTypeRegular: + return total * DiscountRegular + default: + return total * DiscountDefault + } +} +``` + +**Detection:** Literal numbers (except 0, 1) or repeated string literals. 
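A standalone sketch of this detection heuristic in Python, in the style of this skill's checker scripts (the regex, the crude comment stripping, and the two-digit cutoff are illustrative assumptions, not the shipped checker's exact logic):

```python
import re

# Numeric literals of two or more digits that are not part of an
# identifier or a decimal fraction; 0 and 1 are implicitly excluded.
MAGIC_NUMBER_PATTERN = re.compile(r"(?<![\w.])\d{2,}(?![\w.])")

def find_magic_numbers(source: str) -> list[str]:
    """Return numeric literals that should probably be named constants."""
    hits = []
    for line in source.splitlines():
        code = line.split("#", 1)[0]  # crude: drop Python-style comments
        hits.extend(MAGIC_NUMBER_PATTERN.findall(code))
    return hits

example = "timeout = 30  # seconds\nretries = 3\nlimit = 500\n"
print(find_magic_numbers(example))  # → ['30', '500']
```

A review bot would report each hit with its line number; flagging repeated string literals works the same way with a counter over quoted strings.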
+ +--- + +### Primitive Obsession + +Using primitives instead of small objects. + +```typescript +// BAD: Primitives everywhere +function createUser( + name: string, + email: string, + phone: string, + street: string, + city: string, + zipCode: string, + country: string +): User { ... } + +// GOOD: Value objects +interface Address { + street: string; + city: string; + zipCode: string; + country: string; +} + +interface ContactInfo { + email: string; + phone: string; +} + +function createUser( + name: string, + contact: ContactInfo, + address: Address +): User { ... } +``` + +**Detection:** Functions with >4 parameters of same type, or related primitives always passed together. + +--- + +## Logic Antipatterns + +### Boolean Blindness + +Passing booleans that make code unreadable at call sites. + +```swift +// BAD: What do these booleans mean? +user.configure(true, false, true, false) + +// GOOD: Named parameters or option objects +user.configure( + sendWelcomeEmail: true, + requireVerification: false, + enableNotifications: true, + isAdmin: false +) + +// Or use an options struct +struct UserConfiguration { + var sendWelcomeEmail: Bool = true + var requireVerification: Bool = false + var enableNotifications: Bool = true + var isAdmin: Bool = false +} + +user.configure(UserConfiguration()) +``` + +**Detection:** Function calls with multiple boolean literals. + +--- + +### Null Returns for Collections + +Returning null instead of empty collections. + +```kotlin +// BAD: Returning null +fun findUsersByRole(role: String): List<User>? { + val users = repository.findByRole(role) + return if (users.isEmpty()) null else users +} + +// Caller must handle null +val users = findUsersByRole("admin") +if (users != null) { + users.forEach { ... } +} + +// GOOD: Return empty collection +fun findUsersByRole(role: String): List<User> { + return repository.findByRole(role) +} + +// Caller can iterate directly +findUsersByRole("admin").forEach { ... 
} +``` + +**Detection:** Functions returning nullable collections. + +--- + +### Stringly Typed Code + +Using strings where enums or types should be used. + +```python +# BAD: String-based logic +def handle_event(event_type: str, data: dict): + if event_type == "user_created": + handle_user_created(data) + elif event_type == "user_updated": + handle_user_updated(data) + elif event_type == "user_dleted": # Typo won't be caught + handle_user_deleted(data) + +# GOOD: Enum-based +from enum import Enum + +class EventType(Enum): + USER_CREATED = "user_created" + USER_UPDATED = "user_updated" + USER_DELETED = "user_deleted" + +def handle_event(event_type: EventType, data: dict): + handlers = { + EventType.USER_CREATED: handle_user_created, + EventType.USER_UPDATED: handle_user_updated, + EventType.USER_DELETED: handle_user_deleted, + } + handlers[event_type](data) +``` + +**Detection:** String comparisons for type/status/category values. + +--- + +## Security Antipatterns + +### SQL Injection + +String concatenation in SQL queries. + +```javascript +// BAD: String concatenation +const query = `SELECT * FROM users WHERE id = ${userId}`; +db.query(query); + +// BAD: String templates still vulnerable +const query = `SELECT * FROM users WHERE name = '${userName}'`; + +// GOOD: Parameterized queries +const query = 'SELECT * FROM users WHERE id = $1'; +db.query(query, [userId]); + +// GOOD: Using ORM safely +User.findOne({ where: { id: userId } }); +``` + +**Detection:** String concatenation or template literals with SQL keywords. + +--- + +### Hardcoded Credentials + +Secrets in source code. 
+ +```python +# BAD: Hardcoded secrets +API_KEY = "sk-abc123xyz789" +DATABASE_URL = "postgresql://admin:password123@prod-db.internal:5432/app" + +# GOOD: Environment variables +import os + +API_KEY = os.environ["API_KEY"] +DATABASE_URL = os.environ["DATABASE_URL"] + +# GOOD: Secrets manager +import boto3 + +secrets = boto3.client("secretsmanager") +API_KEY = secrets.get_secret_value(SecretId="api-key")["SecretString"] +``` + +**Detection:** Variables named `password`, `secret`, `key`, `token` with string literals. + +--- + +### Unsafe Deserialization + +Deserializing untrusted data without validation. + +```python +# BAD: Binary serialization from untrusted source can execute arbitrary code +# Examples: Python's binary serialization, yaml.load without SafeLoader + +# GOOD: Use safe alternatives +import json + +def load_data(file_path): + with open(file_path, 'r') as f: + return json.load(f) + +# GOOD: Use SafeLoader for YAML +import yaml + +with open('config.yaml') as f: + config = yaml.safe_load(f) +``` + +**Detection:** Binary deserialization functions, yaml.load without safe loader, dynamic code execution on external data. + +--- + +### Missing Input Validation + +Trusting user input without validation. + +```typescript +// BAD: No validation +app.post('/user', (req, res) => { + const user = db.create({ + name: req.body.name, + email: req.body.email, + role: req.body.role // User can set themselves as admin! + }); + res.json(user); +}); + +// GOOD: Validate and sanitize +import { z } from 'zod'; + +const CreateUserSchema = z.object({ + name: z.string().min(1).max(100), + email: z.string().email(), + // role is NOT accepted from input +}); + +app.post('/user', (req, res) => { + const validated = CreateUserSchema.parse(req.body); + const user = db.create({ + ...validated, + role: 'user' // Default role, not from input + }); + res.json(user); +}); +``` + +**Detection:** Request body/params used directly without validation schema. 
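The same allow-list idea can be sketched in Python with only the standard library (the field names, length limits, and `validate_create_user` helper are hypothetical; in practice a schema library such as pydantic fills the role zod plays in the TypeScript example):

```python
import re

ALLOWED_FIELDS = {"name", "email"}  # role is deliberately NOT accepted
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_create_user(body: dict) -> dict:
    """Validate and sanitize an untrusted request body."""
    # Drop any field the client should not control (e.g. role)
    data = {k: v for k, v in body.items() if k in ALLOWED_FIELDS}
    name = data.get("name")
    email = data.get("email")
    if not isinstance(name, str) or not 1 <= len(name) <= 100:
        raise ValueError("invalid name")
    if not isinstance(email, str) or not EMAIL_RE.match(email):
        raise ValueError("invalid email")
    return {"name": name, "email": email, "role": "user"}  # role set server-side

print(validate_create_user({"name": "Ada", "email": "ada@example.com", "role": "admin"}))
# → {'name': 'Ada', 'email': 'ada@example.com', 'role': 'user'}
```

Note that the attacker-supplied `role: "admin"` is silently discarded and replaced with the server-side default.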
+ +--- + +## Performance Antipatterns + +### N+1 Query Problem + +Loading related data one record at a time. + +```python +# BAD: N+1 queries +def get_orders_with_items(): + orders = Order.query.all() # 1 query + for order in orders: + items = OrderItem.query.filter_by(order_id=order.id).all() # N queries + order.items = items + return orders + +# GOOD: Eager loading +def get_orders_with_items(): + return Order.query.options( + joinedload(Order.items) + ).all() # 1 query with JOIN + +# GOOD: Batch loading +def get_orders_with_items(): + orders = Order.query.all() + order_ids = [o.id for o in orders] + items = OrderItem.query.filter( + OrderItem.order_id.in_(order_ids) + ).all() # 2 queries total + # Group items by order_id... +``` + +**Detection:** Database queries inside loops. + +--- + +### Unbounded Collections + +Loading unlimited data into memory. + +```go +// BAD: Load all records +func GetAllUsers() ([]User, error) { + var users []User + err := db.Find(&users).Error // Could be millions + return users, err +} + +// GOOD: Pagination +func GetUsers(page, pageSize int) ([]User, error) { + var users []User + offset := (page - 1) * pageSize + err := db.Limit(pageSize).Offset(offset).Find(&users).Error + return users, err +} + +// GOOD: Streaming for large datasets +func ProcessAllUsers(handler func(User) error) error { + rows, err := db.Model(&User{}).Rows() + if err != nil { + return err + } + defer rows.Close() + + for rows.Next() { + var user User + db.ScanRows(rows, &user) + if err := handler(user); err != nil { + return err + } + } + return nil +} +``` + +**Detection:** `findAll()`, `find({})`, or queries without `LIMIT`. + +--- + +### Synchronous I/O in Hot Paths + +Blocking operations in request handlers. 
+ +```javascript +// BAD: Sync file read on every request +app.get('/config', (req, res) => { + const config = fs.readFileSync('./config.json'); // Blocks event loop + res.json(JSON.parse(config)); +}); + +// GOOD: Load once at startup +const config = JSON.parse(fs.readFileSync('./config.json')); + +app.get('/config', (req, res) => { + res.json(config); +}); + +// GOOD: Async with caching +let configCache = null; + +app.get('/config', async (req, res) => { + if (!configCache) { + configCache = JSON.parse(await fs.promises.readFile('./config.json')); + } + res.json(configCache); +}); +``` + +**Detection:** `readFileSync`, `execSync`, or blocking calls in request handlers. + +--- + +## Testing Antipatterns + +### Test Code Duplication + +Repeating setup in every test. + +```typescript +// BAD: Duplicate setup +describe('UserService', () => { + it('should create user', async () => { + const db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + const service = new UserService(userRepo, emailService); + + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); + + it('should update user', async () => { + const db = await createTestDatabase(); // Duplicated + const userRepo = new UserRepository(db); // Duplicated + const emailService = new MockEmailService(); // Duplicated + const service = new UserService(userRepo, emailService); // Duplicated + + // ... 
+ }); +}); + +// GOOD: Shared setup +describe('UserService', () => { + let service: UserService; + let db: TestDatabase; + + beforeEach(async () => { + db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + service = new UserService(userRepo, emailService); + }); + + afterEach(async () => { + await db.cleanup(); + }); + + it('should create user', async () => { + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); +}); +``` + +--- + +### Testing Implementation Instead of Behavior + +Tests coupled to internal implementation. + +```python +# BAD: Testing implementation details +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing internal structure + assert cart._items[0].name == "Apple" + assert cart._total == 1.00 + +# GOOD: Testing behavior +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing public behavior + assert cart.item_count == 1 + assert cart.total == 1.00 + assert cart.contains("Apple") +``` + +--- + +## Async Antipatterns + +### Floating Promises + +Promises without await or catch. + +```typescript +// BAD: Floating promise +async function saveUser(user: User) { + db.save(user); // Not awaited, errors lost + logger.info('User saved'); // Logs before save completes +} + +// BAD: Fire and forget in loop +for (const item of items) { + processItem(item); // All run in parallel, no error handling +} + +// GOOD: Await the promise +async function saveUser(user: User) { + await db.save(user); + logger.info('User saved'); +} + +// GOOD: Process with proper handling +await Promise.all(items.map(item => processItem(item))); + +// Or sequentially +for (const item of items) { + await processItem(item); +} +``` + +**Detection:** Async function calls without `await` or `.then()`. + +--- + +### Callback Hell + +Deeply nested callbacks. 
+ +```javascript +// BAD: Callback hell +getUser(userId, (err, user) => { + if (err) return handleError(err); + getOrders(user.id, (err, orders) => { + if (err) return handleError(err); + getProducts(orders[0].productIds, (err, products) => { + if (err) return handleError(err); + renderPage(user, orders, products, (err) => { + if (err) return handleError(err); + console.log('Done'); + }); + }); + }); +}); + +// GOOD: Async/await +async function loadPage(userId) { + try { + const user = await getUser(userId); + const orders = await getOrders(user.id); + const products = await getProducts(orders[0].productIds); + await renderPage(user, orders, products); + console.log('Done'); + } catch (err) { + handleError(err); + } +} +``` + +**Detection:** >2 levels of callback nesting. + +--- + +### Async in Constructor + +Async operations in constructors. + +```typescript +// BAD: Async in constructor +class DatabaseConnection { + constructor(url: string) { + this.connect(url); // Fire-and-forget async + } + + private async connect(url: string) { + this.client = await createClient(url); + } +} + +// GOOD: Factory method +class DatabaseConnection { + private constructor(private client: Client) {} + + static async create(url: string): Promise<DatabaseConnection> { + const client = await createClient(url); + return new DatabaseConnection(client); + } +} + +// Usage +const db = await DatabaseConnection.create(url); +``` + +**Detection:** `async` calls or `.then()` in constructor. diff --git a/.github/skills/code-reviewer/scripts/code_quality_checker.py b/.github/skills/code-reviewer/scripts/code_quality_checker.py new file mode 100644 index 00000000..8edaabcd --- /dev/null +++ b/.github/skills/code-reviewer/scripts/code_quality_checker.py @@ -0,0 +1,560 @@ +#!/usr/bin/env python3 +""" +Code Quality Checker + +Analyzes source code for quality issues, code smells, complexity metrics, +and SOLID principle violations. 
+ +Usage: + python .github/skills/code-reviewer/scripts/code_quality_checker.py /path/to/file.py + python .github/skills/code-reviewer/scripts/code_quality_checker.py /path/to/directory --recursive + python .github/skills/code-reviewer/scripts/code_quality_checker.py . --language typescript --json +""" + +import argparse +import json +import re +import sys +from pathlib import Path +from typing import Dict, List, Optional + + +# Language-specific file extensions +LANGUAGE_EXTENSIONS = { + "python": [".py"], + "typescript": [".ts", ".tsx"], + "javascript": [".js", ".jsx", ".mjs"], + "go": [".go"], + "swift": [".swift"], + "kotlin": [".kt", ".kts"] +} + +# Code smell thresholds +THRESHOLDS = { + "long_function_lines": 50, + "too_many_parameters": 5, + "high_complexity": 10, + "god_class_methods": 20, + "max_imports": 15 +} + + +def get_file_extension(filepath: Path) -> str: + """Get file extension.""" + return filepath.suffix.lower() + + +def detect_language(filepath: Path) -> Optional[str]: + """Detect programming language from file extension.""" + ext = get_file_extension(filepath) + for lang, extensions in LANGUAGE_EXTENSIONS.items(): + if ext in extensions: + return lang + return None + + +def read_file_content(filepath: Path) -> str: + """Read file content safely.""" + try: + with open(filepath, "r", encoding="utf-8", errors="ignore") as f: + return f.read() + except Exception: + return "" + + +def calculate_cyclomatic_complexity(content: str) -> int: + """ + Estimate cyclomatic complexity based on control flow keywords. 
+ """ + complexity = 1 # Base complexity + + # Control flow patterns that increase complexity + patterns = [ + r"\bif\b", + r"\belif\b", + r"\belse\b", + r"\bfor\b", + r"\bwhile\b", + r"\bcase\b", + r"\bcatch\b", + r"\bexcept\b", + r"\band\b", + r"\bor\b", + r"\|\|", + r"&&" + ] + + for pattern in patterns: + matches = re.findall(pattern, content, re.IGNORECASE) + complexity += len(matches) + + return complexity + + +def count_lines(content: str) -> Dict[str, int]: + """Count different types of lines in code.""" + lines = content.split("\n") + total = len(lines) + blank = sum(1 for line in lines if not line.strip()) + comment = 0 + + for line in lines: + stripped = line.strip() + if stripped.startswith("#") or stripped.startswith("//"): + comment += 1 + elif stripped.startswith("/*") or stripped.startswith("'''") or stripped.startswith('"""'): + comment += 1 + + code = total - blank - comment + + return { + "total": total, + "code": code, + "blank": blank, + "comment": comment + } + + +def find_functions(content: str, language: str) -> List[Dict]: + """Find function definitions and their metrics.""" + functions = [] + + # Language-specific function patterns + patterns = { + "python": r"def\s+(\w+)\s*\(([^)]*)\)", + "typescript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "javascript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "go": r"func\s+(?:\([^)]+\)\s+)?(\w+)\s*\(([^)]*)\)", + "swift": r"func\s+(\w+)\s*\(([^)]*)\)", + "kotlin": r"fun\s+(\w+)\s*\(([^)]*)\)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content, re.MULTILINE) + + for match in matches: + name = next((g for g in match.groups() if g), "anonymous") + params_str = match.group(2) if len(match.groups()) > 1 and match.group(2) else "" + + # Count parameters + params = [p.strip() for p in params_str.split(",") if p.strip()] + param_count = len(params) + + # Estimate 
function length + start_pos = match.end() + remaining = content[start_pos:] + + next_func = re.search(pattern, remaining) + if next_func: + func_body = remaining[:next_func.start()] + else: + func_body = remaining[:min(2000, len(remaining))] + + line_count = len(func_body.split("\n")) + complexity = calculate_cyclomatic_complexity(func_body) + + functions.append({ + "name": name, + "parameters": param_count, + "lines": line_count, + "complexity": complexity + }) + + return functions + + +def find_classes(content: str, language: str) -> List[Dict]: + """Find class definitions and their metrics.""" + classes = [] + + patterns = { + "python": r"class\s+(\w+)", + "typescript": r"class\s+(\w+)", + "javascript": r"class\s+(\w+)", + "go": r"type\s+(\w+)\s+struct", + "swift": r"class\s+(\w+)", + "kotlin": r"class\s+(\w+)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content) + + for match in matches: + name = match.group(1) + + start_pos = match.end() + remaining = content[start_pos:] + + next_class = re.search(pattern, remaining) + if next_class: + class_body = remaining[:next_class.start()] + else: + class_body = remaining + + # Count methods + method_patterns = { + "python": r"def\s+\w+\s*\(", + "typescript": r"(?:public|private|protected)?\s*\w+\s*\([^)]*\)\s*[:{]", + "javascript": r"\w+\s*\([^)]*\)\s*\{", + "go": r"func\s+\(", + "swift": r"func\s+\w+", + "kotlin": r"fun\s+\w+" + } + method_pattern = method_patterns.get(language, method_patterns["python"]) + methods = len(re.findall(method_pattern, class_body)) + + classes.append({ + "name": name, + "methods": methods, + "lines": len(class_body.split("\n")) + }) + + return classes + + +def check_code_smells(content: str, functions: List[Dict], classes: List[Dict]) -> List[Dict]: + """Check for code smells in the content.""" + smells = [] + + # Long functions + for func in functions: + if func["lines"] > THRESHOLDS["long_function_lines"]: + smells.append({ + "type": 
"long_function", + "severity": "medium", + "message": f"Function '{func['name']}' has {func['lines']} lines (max: {THRESHOLDS['long_function_lines']})", + "location": func["name"] + }) + + # Too many parameters + for func in functions: + if func["parameters"] > THRESHOLDS["too_many_parameters"]: + smells.append({ + "type": "too_many_parameters", + "severity": "low", + "message": f"Function '{func['name']}' has {func['parameters']} parameters (max: {THRESHOLDS['too_many_parameters']})", + "location": func["name"] + }) + + # High complexity + for func in functions: + if func["complexity"] > THRESHOLDS["high_complexity"]: + severity = "high" if func["complexity"] > 20 else "medium" + smells.append({ + "type": "high_complexity", + "severity": severity, + "message": f"Function '{func['name']}' has complexity {func['complexity']} (max: {THRESHOLDS['high_complexity']})", + "location": func["name"] + }) + + # God classes + for cls in classes: + if cls["methods"] > THRESHOLDS["god_class_methods"]: + smells.append({ + "type": "god_class", + "severity": "high", + "message": f"Class '{cls['name']}' has {cls['methods']} methods (max: {THRESHOLDS['god_class_methods']})", + "location": cls["name"] + }) + + # Magic numbers + magic_pattern = r"\b(? 
List[Dict]: + """Check for potential SOLID principle violations.""" + violations = [] + + # OCP: Type checking instead of polymorphism + type_checks = len(re.findall(r"isinstance\(|type\(.*\)\s*==|typeof\s+\w+\s*===", content)) + if type_checks > 2: + violations.append({ + "principle": "OCP", + "name": "Open/Closed Principle", + "severity": "medium", + "message": f"Found {type_checks} type checks - consider using polymorphism" + }) + + # LSP/ISP: NotImplementedError + not_impl = len(re.findall(r"raise\s+NotImplementedError|not\s+implemented", content, re.IGNORECASE)) + if not_impl: + violations.append({ + "principle": "LSP/ISP", + "name": "Liskov/Interface Segregation", + "severity": "low", + "message": f"Found {not_impl} unimplemented methods - may indicate oversized interface" + }) + + # DIP: Too many direct imports + imports = len(re.findall(r"^(?:import|from)\s+", content, re.MULTILINE)) + if imports > THRESHOLDS["max_imports"]: + violations.append({ + "principle": "DIP", + "name": "Dependency Inversion Principle", + "severity": "low", + "message": f"File has {imports} imports - consider dependency injection" + }) + + return violations + + +def calculate_quality_score( + line_metrics: Dict, + functions: List[Dict], + classes: List[Dict], + smells: List[Dict], + violations: List[Dict] +) -> int: + """Calculate overall quality score (0-100).""" + score = 100 + + # Deduct for code smells + for smell in smells: + if smell["severity"] == "high": + score -= 10 + elif smell["severity"] == "medium": + score -= 5 + elif smell["severity"] == "low": + score -= 2 + + # Deduct for SOLID violations + for violation in violations: + if violation["severity"] == "high": + score -= 8 + elif violation["severity"] == "medium": + score -= 4 + elif violation["severity"] == "low": + score -= 2 + + # Bonus for good comment ratio (10-30%) + if line_metrics["total"] > 0: + comment_ratio = line_metrics["comment"] / line_metrics["total"] + if 0.1 <= comment_ratio <= 0.3: + score += 5 + + # 
Bonus for reasonable function sizes + if functions: + avg_lines = sum(f["lines"] for f in functions) / len(functions) + if avg_lines < 30: + score += 5 + + return max(0, min(100, score)) + + +def get_grade(score: int) -> str: + """Convert score to letter grade.""" + if score >= 90: + return "A" + elif score >= 80: + return "B" + elif score >= 70: + return "C" + elif score >= 60: + return "D" + else: + return "F" + + +def analyze_file(filepath: Path) -> Dict: + """Analyze a single file for code quality.""" + language = detect_language(filepath) + if not language: + return {"error": f"Unsupported file type: {filepath.suffix}"} + + content = read_file_content(filepath) + if not content: + return {"error": f"Could not read file: {filepath}"} + + line_metrics = count_lines(content) + functions = find_functions(content, language) + classes = find_classes(content, language) + smells = check_code_smells(content, functions, classes) + violations = check_solid_violations(content) + score = calculate_quality_score(line_metrics, functions, classes, smells, violations) + + return { + "file": str(filepath), + "language": language, + "metrics": { + "lines": line_metrics, + "functions": len(functions), + "classes": len(classes), + "avg_complexity": round(sum(f["complexity"] for f in functions) / max(1, len(functions)), 1) + }, + "quality_score": score, + "grade": get_grade(score), + "smells": smells, + "solid_violations": violations, + "function_details": functions[:10], + "class_details": classes[:10] + } + + +def analyze_directory( + dir_path: Path, + recursive: bool = True, + language: Optional[str] = None +) -> Dict: + """Analyze all files in a directory.""" + results = [] + extensions = [] + + if language: + extensions = LANGUAGE_EXTENSIONS.get(language, []) + else: + for exts in LANGUAGE_EXTENSIONS.values(): + extensions.extend(exts) + + pattern = "**/*" if recursive else "*" + + for ext in extensions: + for filepath in dir_path.glob(f"{pattern}{ext}"): + if "node_modules" 
in str(filepath) or ".git" in str(filepath): + continue + result = analyze_file(filepath) + if "error" not in result: + results.append(result) + + if not results: + return {"error": "No supported files found"} + + total_score = sum(r["quality_score"] for r in results) + avg_score = total_score / len(results) + total_smells = sum(len(r["smells"]) for r in results) + total_violations = sum(len(r["solid_violations"]) for r in results) + + return { + "directory": str(dir_path), + "files_analyzed": len(results), + "average_score": round(avg_score, 1), + "overall_grade": get_grade(int(avg_score)), + "total_code_smells": total_smells, + "total_solid_violations": total_violations, + "files": sorted(results, key=lambda x: x["quality_score"]) + } + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if "error" in analysis: + print(f"Error: {analysis['error']}") + return + + print("=" * 60) + print("CODE QUALITY REPORT") + print("=" * 60) + + if "file" in analysis: + print(f"\nFile: {analysis['file']}") + print(f"Language: {analysis['language']}") + print(f"Quality Score: {analysis['quality_score']}/100 ({analysis['grade']})") + + metrics = analysis["metrics"] + print(f"\nLines: {metrics['lines']['total']} ({metrics['lines']['code']} code, {metrics['lines']['comment']} comments)") + print(f"Functions: {metrics['functions']}") + print(f"Classes: {metrics['classes']}") + print(f"Avg Complexity: {metrics['avg_complexity']}") + + if analysis["smells"]: + print("\n--- CODE SMELLS ---") + for smell in analysis["smells"][:10]: + print(f" [{smell['severity'].upper()}] {smell['message']} ({smell['location']})") + + if analysis["solid_violations"]: + print("\n--- SOLID VIOLATIONS ---") + for v in analysis["solid_violations"]: + print(f" [{v['principle']}] {v['message']}") + else: + print(f"\nDirectory: {analysis['directory']}") + print(f"Files Analyzed: {analysis['files_analyzed']}") + print(f"Average Score: {analysis['average_score']}/100 
({analysis['overall_grade']})") + print(f"Total Code Smells: {analysis['total_code_smells']}") + print(f"Total SOLID Violations: {analysis['total_solid_violations']}") + + print("\n--- FILES BY QUALITY ---") + for f in analysis["files"][:10]: + print(f" {f['quality_score']:3d}/100 [{f['grade']}] {f['file']}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze code quality, smells, and SOLID violations" + ) + parser.add_argument( + "path", + help="File or directory to analyze" + ) + parser.add_argument( + "--recursive", "-r", + action="store_true", + default=True, + help="Recursively analyze directories (default: true)" + ) + parser.add_argument( + "--language", "-l", + choices=list(LANGUAGE_EXTENSIONS.keys()), + help="Filter by programming language" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + target = Path(args.path).resolve() + + if not target.exists(): + print(f"Error: Path does not exist: {target}", file=sys.stderr) + sys.exit(1) + + if target.is_file(): + analysis = analyze_file(target) + else: + analysis = analyze_directory(target, args.recursive, args.language) + + if args.json: + output = json.dumps(analysis, indent=2, default=str) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.github/skills/code-reviewer/scripts/pr_analyzer.py b/.github/skills/code-reviewer/scripts/pr_analyzer.py new file mode 100644 index 00000000..49e09b99 --- /dev/null +++ b/.github/skills/code-reviewer/scripts/pr_analyzer.py @@ -0,0 +1,495 @@ +#!/usr/bin/env python3 +""" +PR Analyzer + +Analyzes pull request changes for review complexity, risk assessment, +and generates review priorities. 
+ +Usage: + python .github/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo + python .github/skills/code-reviewer/scripts/pr_analyzer.py . --base main --head feature-branch + python .github/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo --json +""" + +import argparse +import json +import os +import re +import subprocess +import sys +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# File categories for review prioritization +FILE_CATEGORIES = { + "critical": { + "patterns": [ + r"auth", r"security", r"password", r"token", r"secret", + r"payment", r"billing", r"crypto", r"encrypt" + ], + "weight": 5, + "description": "Security-sensitive files requiring careful review" + }, + "high": { + "patterns": [ + r"api", r"database", r"migration", r"schema", r"model", + r"config", r"env", r"middleware" + ], + "weight": 4, + "description": "Core infrastructure files" + }, + "medium": { + "patterns": [ + r"service", r"controller", r"handler", r"util", r"helper" + ], + "weight": 3, + "description": "Business logic files" + }, + "low": { + "patterns": [ + r"test", r"spec", r"mock", r"fixture", r"story", + r"readme", r"docs", r"\.md$" + ], + "weight": 1, + "description": "Tests and documentation" + } +} + +# Risky patterns to flag +RISK_PATTERNS = [ + { + "name": "hardcoded_secrets", + "pattern": r"(password|secret|api_key|token)\s*[=:]\s*['\"][^'\"]+['\"]", + "severity": "critical", + "message": "Potential hardcoded secret detected" + }, + { + "name": "todo_fixme", + "pattern": r"(TODO|FIXME|HACK|XXX):", + "severity": "low", + "message": "TODO/FIXME comment found" + }, + { + "name": "console_log", + "pattern": r"console\.(log|debug|info|warn|error)\(", + "severity": "medium", + "message": "Console statement found (remove for production)" + }, + { + "name": "debugger", + "pattern": r"\bdebugger\b", + "severity": "high", + "message": "Debugger statement found" + }, + { + "name": "disable_eslint", + "pattern": r"eslint-disable", + 
"severity": "medium", + "message": "ESLint rule disabled" + }, + { + "name": "any_type", + "pattern": r":\s*any\b", + "severity": "medium", + "message": "TypeScript 'any' type used" + }, + { + "name": "sql_concatenation", + "pattern": r"(SELECT|INSERT|UPDATE|DELETE).*\+.*['\"]", + "severity": "critical", + "message": "Potential SQL injection (string concatenation in query)" + } +] + + +def run_git_command(cmd: List[str], cwd: Path) -> Tuple[bool, str]: + """Run a git command and return success status and output.""" + try: + result = subprocess.run( + cmd, + cwd=cwd, + capture_output=True, + text=True, + timeout=30 + ) + return result.returncode == 0, result.stdout.strip() + except subprocess.TimeoutExpired: + return False, "Command timed out" + except Exception as e: + return False, str(e) + + +def get_changed_files(repo_path: Path, base: str, head: str) -> List[Dict]: + """Get list of changed files between two refs.""" + success, output = run_git_command( + ["git", "diff", "--name-status", f"{base}...{head}"], + repo_path + ) + + if not success: + # Try without the triple dot (for uncommitted changes) + success, output = run_git_command( + ["git", "diff", "--name-status", base, head], + repo_path + ) + + if not success or not output: + # Fall back to staged changes + success, output = run_git_command( + ["git", "diff", "--name-status", "--cached"], + repo_path + ) + + files = [] + for line in output.split("\n"): + if not line.strip(): + continue + parts = line.split("\t") + if len(parts) >= 2: + status = parts[0][0] # First character of status + filepath = parts[-1] # Handle renames (R100\told\tnew) + status_map = { + "A": "added", + "M": "modified", + "D": "deleted", + "R": "renamed", + "C": "copied" + } + files.append({ + "path": filepath, + "status": status_map.get(status, "modified") + }) + + return files + + +def get_file_diff(repo_path: Path, filepath: str, base: str, head: str) -> str: + """Get diff content for a specific file.""" + success, output = 
run_git_command( + ["git", "diff", f"{base}...{head}", "--", filepath], + repo_path + ) + if not success: + success, output = run_git_command( + ["git", "diff", "--cached", "--", filepath], + repo_path + ) + return output if success else "" + + +def categorize_file(filepath: str) -> Tuple[str, int]: + """Categorize a file based on its path and name.""" + filepath_lower = filepath.lower() + + for category, info in FILE_CATEGORIES.items(): + for pattern in info["patterns"]: + if re.search(pattern, filepath_lower): + return category, info["weight"] + + return "medium", 2 # Default category + + +def analyze_diff_for_risks(diff_content: str, filepath: str) -> List[Dict]: + """Analyze diff content for risky patterns.""" + risks = [] + + # Only analyze added lines (starting with +) + added_lines = [ + line[1:] for line in diff_content.split("\n") + if line.startswith("+") and not line.startswith("+++") + ] + + content = "\n".join(added_lines) + + for risk in RISK_PATTERNS: + matches = re.findall(risk["pattern"], content, re.IGNORECASE) + if matches: + risks.append({ + "name": risk["name"], + "severity": risk["severity"], + "message": risk["message"], + "file": filepath, + "count": len(matches) + }) + + return risks + + +def count_changes(diff_content: str) -> Dict[str, int]: + """Count additions and deletions in diff.""" + additions = 0 + deletions = 0 + + for line in diff_content.split("\n"): + if line.startswith("+") and not line.startswith("+++"): + additions += 1 + elif line.startswith("-") and not line.startswith("---"): + deletions += 1 + + return {"additions": additions, "deletions": deletions} + + +def calculate_complexity_score(files: List[Dict], all_risks: List[Dict]) -> int: + """Calculate overall PR complexity score (1-10).""" + score = 0 + + # File count contribution (max 3 points) + file_count = len(files) + if file_count > 20: + score += 3 + elif file_count > 10: + score += 2 + elif file_count > 5: + score += 1 + + # Total changes contribution (max 3 
points)
+    total_changes = sum(f.get("additions", 0) + f.get("deletions", 0) for f in files)
+    if total_changes > 500:
+        score += 3
+    elif total_changes > 200:
+        score += 2
+    elif total_changes > 50:
+        score += 1
+
+    # Risk severity contribution (max 4 points)
+    critical_risks = sum(1 for r in all_risks if r["severity"] == "critical")
+    high_risks = sum(1 for r in all_risks if r["severity"] == "high")
+
+    score += min(2, critical_risks)
+    score += min(2, high_risks)
+
+    return min(10, max(1, score))
+
+
+def analyze_commit_messages(repo_path: Path, base: str, head: str) -> Dict:
+    """Analyze commit messages in the PR."""
+    success, output = run_git_command(
+        ["git", "log", "--oneline", f"{base}...{head}"],
+        repo_path
+    )
+
+    if not success or not output:
+        return {"commits": 0, "issues": []}
+
+    commits = output.strip().split("\n")
+    issues = []
+
+    for commit in commits:
+        if len(commit) < 10:
+            continue
+
+        # Check for conventional commit format
+        message = commit.split(" ", 1)[1] if " " in commit else commit  # Drop abbreviated hash (its length varies)
+
+        if not re.match(r"^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\(.+\))?:", message):
+            issues.append({
+                "commit": commit[:7],
+                "issue": "Does not follow conventional commit format"
+            })
+
+        if len(message) > 72:
+            issues.append({
+                "commit": commit[:7],
+                "issue": "Commit message exceeds 72 characters"
+            })
+
+    return {
+        "commits": len(commits),
+        "issues": issues
+    }
+
+
+def analyze_pr(
+    repo_path: Path,
+    base: str = "main",
+    head: str = "HEAD"
+) -> Dict:
+    """Perform complete PR analysis."""
+    # Get changed files
+    changed_files = get_changed_files(repo_path, base, head)
+
+    if not changed_files:
+        return {
+            "status": "no_changes",
+            "message": "No changes detected between branches"
+        }
+
+    # Analyze each file
+    all_risks = []
+    file_analyses = []
+
+    for file_info in changed_files:
+        filepath = file_info["path"]
+        category, weight = categorize_file(filepath)
+
+        # Get diff for the file
+        diff = get_file_diff(repo_path, 
filepath, base, head) + changes = count_changes(diff) + risks = analyze_diff_for_risks(diff, filepath) + + all_risks.extend(risks) + + file_analyses.append({ + "path": filepath, + "status": file_info["status"], + "category": category, + "priority_weight": weight, + "additions": changes["additions"], + "deletions": changes["deletions"], + "risks": risks + }) + + # Sort by priority (highest first) + file_analyses.sort(key=lambda x: (-x["priority_weight"], x["path"])) + + # Analyze commits + commit_analysis = analyze_commit_messages(repo_path, base, head) + + # Calculate metrics + complexity = calculate_complexity_score(file_analyses, all_risks) + + total_additions = sum(f["additions"] for f in file_analyses) + total_deletions = sum(f["deletions"] for f in file_analyses) + + return { + "status": "analyzed", + "summary": { + "files_changed": len(file_analyses), + "total_additions": total_additions, + "total_deletions": total_deletions, + "complexity_score": complexity, + "complexity_label": get_complexity_label(complexity), + "commits": commit_analysis["commits"] + }, + "risks": { + "critical": [r for r in all_risks if r["severity"] == "critical"], + "high": [r for r in all_risks if r["severity"] == "high"], + "medium": [r for r in all_risks if r["severity"] == "medium"], + "low": [r for r in all_risks if r["severity"] == "low"] + }, + "files": file_analyses, + "commit_issues": commit_analysis["issues"], + "review_order": [f["path"] for f in file_analyses[:10]] # Top 10 priority files + } + + +def get_complexity_label(score: int) -> str: + """Get human-readable complexity label.""" + if score <= 2: + return "Simple" + elif score <= 4: + return "Moderate" + elif score <= 6: + return "Complex" + elif score <= 8: + return "Very Complex" + else: + return "Critical" + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if analysis["status"] == "no_changes": + print("No changes detected.") + return + + summary = analysis["summary"] + 
risks = analysis["risks"] + + print("=" * 60) + print("PR ANALYSIS REPORT") + print("=" * 60) + + print(f"\nComplexity: {summary['complexity_score']}/10 ({summary['complexity_label']})") + print(f"Files Changed: {summary['files_changed']}") + print(f"Lines: +{summary['total_additions']} / -{summary['total_deletions']}") + print(f"Commits: {summary['commits']}") + + # Risk summary + print("\n--- RISK SUMMARY ---") + print(f"Critical: {len(risks['critical'])}") + print(f"High: {len(risks['high'])}") + print(f"Medium: {len(risks['medium'])}") + print(f"Low: {len(risks['low'])}") + + # Critical and high risks details + if risks["critical"]: + print("\n--- CRITICAL RISKS ---") + for risk in risks["critical"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + if risks["high"]: + print("\n--- HIGH RISKS ---") + for risk in risks["high"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + # Commit message issues + if analysis["commit_issues"]: + print("\n--- COMMIT MESSAGE ISSUES ---") + for issue in analysis["commit_issues"][:5]: + print(f" {issue['commit']}: {issue['issue']}") + + # Review order + print("\n--- SUGGESTED REVIEW ORDER ---") + for i, filepath in enumerate(analysis["review_order"], 1): + file_info = next(f for f in analysis["files"] if f["path"] == filepath) + print(f" {i}. 
[{file_info['category'].upper()}] {filepath}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze pull request for review complexity and risks" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to git repository (default: current directory)" + ) + parser.add_argument( + "--base", "-b", + default="main", + help="Base branch for comparison (default: main)" + ) + parser.add_argument( + "--head", + default="HEAD", + help="Head branch/commit for comparison (default: HEAD)" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + + if not (repo_path / ".git").exists(): + print(f"Error: {repo_path} is not a git repository", file=sys.stderr) + sys.exit(1) + + analysis = analyze_pr(repo_path, args.base, args.head) + + if args.json: + output = json.dumps(analysis, indent=2) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.github/skills/code-reviewer/scripts/review_report_generator.py b/.github/skills/code-reviewer/scripts/review_report_generator.py new file mode 100644 index 00000000..00f62619 --- /dev/null +++ b/.github/skills/code-reviewer/scripts/review_report_generator.py @@ -0,0 +1,505 @@ +#!/usr/bin/env python3 +""" +Review Report Generator + +Generates comprehensive code review reports by combining PR analysis +and code quality findings into structured, actionable reports. + +Usage: + python .github/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo + python .github/skills/code-reviewer/scripts/review_report_generator.py . 
--pr-analysis pr_results.json --quality-analysis quality_results.json
+    python .github/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo --format markdown --output review.md
+"""
+
+import argparse
+import json
+import os
+import subprocess
+import sys
+from datetime import datetime
+from pathlib import Path
+from typing import Dict, List, Optional, Tuple
+
+
+# Severity weights for prioritization
+SEVERITY_WEIGHTS = {
+    "critical": 100,
+    "high": 75,
+    "medium": 50,
+    "low": 25,
+    "info": 10
+}
+
+# Review verdict thresholds
+VERDICT_THRESHOLDS = {
+    "approve": {"max_critical": 0, "max_high": 0, "max_score": 100},
+    "approve_with_suggestions": {"max_critical": 0, "max_high": 2, "max_score": 85},
+    "request_changes": {"max_critical": 0, "max_high": 5, "max_score": 70},
+    "block": {"max_critical": float("inf"), "max_high": float("inf"), "max_score": 0}
+}
+
+
+def load_json_file(filepath: str) -> Optional[Dict]:
+    """Load JSON file if it exists."""
+    try:
+        with open(filepath, "r") as f:
+            return json.load(f)
+    except (FileNotFoundError, json.JSONDecodeError):
+        return None
+
+
+def run_pr_analyzer(repo_path: Path) -> Dict:
+    """Run the sibling pr_analyzer.py script and return its results."""
+    script_path = Path(__file__).parent / "pr_analyzer.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "pr_analyzer.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=120
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def run_quality_checker(repo_path: Path) -> Dict:
+    """Run the sibling code_quality_checker.py script and return its results."""
+    script_path = 
Path(__file__).parent / ".github/skills/code-reviewer/scripts/code_quality_checker.py" + if not script_path.exists(): + return {"status": "error", "message": ".github/skills/code-reviewer/scripts/code_quality_checker.py not found"} + + try: + result = subprocess.run( + [sys.executable, str(script_path), str(repo_path), "--json"], + capture_output=True, + text=True, + timeout=300 + ) + if result.returncode == 0: + return json.loads(result.stdout) + return {"status": "error", "message": result.stderr} + except Exception as e: + return {"status": "error", "message": str(e)} + + +def calculate_review_score(pr_analysis: Dict, quality_analysis: Dict) -> int: + """Calculate overall review score (0-100).""" + score = 100 + + # Deduct for PR risks + if "risks" in pr_analysis: + risks = pr_analysis["risks"] + score -= len(risks.get("critical", [])) * 15 + score -= len(risks.get("high", [])) * 10 + score -= len(risks.get("medium", [])) * 5 + score -= len(risks.get("low", [])) * 2 + + # Deduct for code quality issues + if "issues" in quality_analysis: + issues = quality_analysis["issues"] + score -= len([i for i in issues if i.get("severity") == "critical"]) * 12 + score -= len([i for i in issues if i.get("severity") == "high"]) * 8 + score -= len([i for i in issues if i.get("severity") == "medium"]) * 4 + score -= len([i for i in issues if i.get("severity") == "low"]) * 1 + + # Deduct for complexity + if "summary" in pr_analysis: + complexity = pr_analysis["summary"].get("complexity_score", 0) + if complexity > 7: + score -= 10 + elif complexity > 5: + score -= 5 + + return max(0, min(100, score)) + + +def determine_verdict(score: int, critical_count: int, high_count: int) -> Tuple[str, str]: + """Determine review verdict based on score and issue counts.""" + if critical_count > 0: + return "block", "Critical issues must be resolved before merge" + + if score >= 90 and high_count == 0: + return "approve", "Code meets quality standards" + + if score >= 75 and high_count <= 2: 
+ return "approve_with_suggestions", "Minor improvements recommended" + + if score >= 50: + return "request_changes", "Several issues need to be addressed" + + return "block", "Significant issues prevent approval" + + +def generate_findings_list(pr_analysis: Dict, quality_analysis: Dict) -> List[Dict]: + """Combine and prioritize all findings.""" + findings = [] + + # Add PR risk findings + if "risks" in pr_analysis: + for severity, items in pr_analysis["risks"].items(): + for item in items: + findings.append({ + "source": "pr_analysis", + "severity": severity, + "category": item.get("name", "unknown"), + "message": item.get("message", ""), + "file": item.get("file", ""), + "count": item.get("count", 1) + }) + + # Add code quality findings + if "issues" in quality_analysis: + for issue in quality_analysis["issues"]: + findings.append({ + "source": "quality_analysis", + "severity": issue.get("severity", "medium"), + "category": issue.get("type", "unknown"), + "message": issue.get("message", ""), + "file": issue.get("file", ""), + "line": issue.get("line", 0) + }) + + # Sort by severity weight + findings.sort( + key=lambda x: -SEVERITY_WEIGHTS.get(x["severity"], 0) + ) + + return findings + + +def generate_action_items(findings: List[Dict]) -> List[Dict]: + """Generate prioritized action items from findings.""" + action_items = [] + seen_categories = set() + + for finding in findings: + category = finding["category"] + severity = finding["severity"] + + # Group similar issues + if category in seen_categories and severity not in ["critical", "high"]: + continue + + action = { + "priority": "P0" if severity == "critical" else "P1" if severity == "high" else "P2", + "action": get_action_for_category(category, finding), + "severity": severity, + "files_affected": [finding["file"]] if finding.get("file") else [] + } + action_items.append(action) + seen_categories.add(category) + + return action_items[:15] # Top 15 actions + + +def get_action_for_category(category: str, 
finding: Dict) -> str: + """Get actionable recommendation for issue category.""" + actions = { + "hardcoded_secrets": "Remove hardcoded credentials and use environment variables or a secrets manager", + "sql_concatenation": "Use parameterized queries to prevent SQL injection", + "debugger": "Remove debugger statements before merging", + "console_log": "Remove or replace console statements with proper logging", + "todo_fixme": "Address TODO/FIXME comments or create tracking issues", + "disable_eslint": "Address the underlying issue instead of disabling lint rules", + "any_type": "Replace 'any' types with proper type definitions", + "long_function": "Break down function into smaller, focused units", + "god_class": "Split class into smaller, single-responsibility classes", + "too_many_params": "Use parameter objects or builder pattern", + "deep_nesting": "Refactor using early returns, guard clauses, or extraction", + "high_complexity": "Reduce cyclomatic complexity through refactoring", + "missing_error_handling": "Add proper error handling and recovery logic", + "duplicate_code": "Extract duplicate code into shared functions", + "magic_numbers": "Replace magic numbers with named constants", + "large_file": "Consider splitting into multiple smaller modules" + } + return actions.get(category, f"Review and address: {finding.get('message', category)}") + + +def format_markdown_report(report: Dict) -> str: + """Generate markdown-formatted report.""" + lines = [] + + # Header + lines.append("# Code Review Report") + lines.append("") + lines.append(f"**Generated:** {report['metadata']['generated_at']}") + lines.append(f"**Repository:** {report['metadata']['repository']}") + lines.append("") + + # Executive Summary + lines.append("## Executive Summary") + lines.append("") + summary = report["summary"] + verdict = summary["verdict"] + verdict_emoji = { + "approve": "✅", + "approve_with_suggestions": "✅", + "request_changes": "⚠️", + "block": "❌" + }.get(verdict, "❓") + + 
lines.append(f"**Verdict:** {verdict_emoji} {verdict.upper().replace('_', ' ')}") + lines.append(f"**Score:** {summary['score']}/100") + lines.append(f"**Rationale:** {summary['rationale']}") + lines.append("") + + # Issue Counts + lines.append("### Issue Summary") + lines.append("") + lines.append("| Severity | Count |") + lines.append("|----------|-------|") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f"| {severity.capitalize()} | {count} |") + lines.append("") + + # PR Statistics (if available) + if "pr_summary" in report: + pr = report["pr_summary"] + lines.append("### Change Statistics") + lines.append("") + lines.append(f"- **Files Changed:** {pr.get('files_changed', 'N/A')}") + lines.append(f"- **Lines Added:** +{pr.get('total_additions', 0)}") + lines.append(f"- **Lines Removed:** -{pr.get('total_deletions', 0)}") + lines.append(f"- **Complexity:** {pr.get('complexity_label', 'N/A')}") + lines.append("") + + # Action Items + if report.get("action_items"): + lines.append("## Action Items") + lines.append("") + for i, item in enumerate(report["action_items"], 1): + priority = item["priority"] + emoji = "🔴" if priority == "P0" else "🟠" if priority == "P1" else "🟡" + lines.append(f"{i}. 
{emoji} **[{priority}]** {item['action']}") + if item.get("files_affected"): + lines.append(f" - Files: {', '.join(item['files_affected'][:3])}") + lines.append("") + + # Critical Findings + critical_findings = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical_findings: + lines.append("## Critical Issues (Must Fix)") + lines.append("") + for finding in critical_findings: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # High Priority Findings + high_findings = [f for f in report.get("findings", []) if f["severity"] == "high"] + if high_findings: + lines.append("## High Priority Issues") + lines.append("") + for finding in high_findings[:10]: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # Review Order (if available) + if "review_order" in report: + lines.append("## Suggested Review Order") + lines.append("") + for i, filepath in enumerate(report["review_order"][:10], 1): + lines.append(f"{i}. 
`{filepath}`") + lines.append("") + + # Footer + lines.append("---") + lines.append("*Generated by Code Reviewer*") + + return "\n".join(lines) + + +def format_text_report(report: Dict) -> str: + """Generate plain text report.""" + lines = [] + + lines.append("=" * 60) + lines.append("CODE REVIEW REPORT") + lines.append("=" * 60) + lines.append("") + lines.append(f"Generated: {report['metadata']['generated_at']}") + lines.append(f"Repository: {report['metadata']['repository']}") + lines.append("") + + summary = report["summary"] + verdict = summary["verdict"].upper().replace("_", " ") + lines.append(f"VERDICT: {verdict}") + lines.append(f"SCORE: {summary['score']}/100") + lines.append(f"RATIONALE: {summary['rationale']}") + lines.append("") + + lines.append("--- ISSUE SUMMARY ---") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f" {severity.capitalize()}: {count}") + lines.append("") + + if report.get("action_items"): + lines.append("--- ACTION ITEMS ---") + for i, item in enumerate(report["action_items"][:10], 1): + lines.append(f" {i}. 
[{item['priority']}] {item['action']}") + lines.append("") + + critical = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical: + lines.append("--- CRITICAL ISSUES ---") + for f in critical: + lines.append(f" [{f.get('file', 'unknown')}] {f['message']}") + lines.append("") + + lines.append("=" * 60) + + return "\n".join(lines) + + +def generate_report( + repo_path: Path, + pr_analysis: Optional[Dict] = None, + quality_analysis: Optional[Dict] = None +) -> Dict: + """Generate comprehensive review report.""" + # Run analyses if not provided + if pr_analysis is None: + pr_analysis = run_pr_analyzer(repo_path) + + if quality_analysis is None: + quality_analysis = run_quality_checker(repo_path) + + # Generate findings + findings = generate_findings_list(pr_analysis, quality_analysis) + + # Count issues by severity + issue_counts = { + "critical": len([f for f in findings if f["severity"] == "critical"]), + "high": len([f for f in findings if f["severity"] == "high"]), + "medium": len([f for f in findings if f["severity"] == "medium"]), + "low": len([f for f in findings if f["severity"] == "low"]) + } + + # Calculate score and verdict + score = calculate_review_score(pr_analysis, quality_analysis) + verdict, rationale = determine_verdict( + score, + issue_counts["critical"], + issue_counts["high"] + ) + + # Generate action items + action_items = generate_action_items(findings) + + # Build report + report = { + "metadata": { + "generated_at": datetime.now().isoformat(), + "repository": str(repo_path), + "version": "1.0.0" + }, + "summary": { + "score": score, + "verdict": verdict, + "rationale": rationale, + "issue_counts": issue_counts + }, + "findings": findings, + "action_items": action_items + } + + # Add PR summary if available + if pr_analysis.get("status") == "analyzed": + report["pr_summary"] = pr_analysis.get("summary", {}) + report["review_order"] = pr_analysis.get("review_order", []) + + # Add quality summary if available + if 
quality_analysis.get("status") == "analyzed": + report["quality_summary"] = quality_analysis.get("summary", {}) + + return report + + +def main(): + parser = argparse.ArgumentParser( + description="Generate comprehensive code review reports" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to repository (default: current directory)" + ) + parser.add_argument( + "--pr-analysis", + help="Path to pre-computed PR analysis JSON" + ) + parser.add_argument( + "--quality-analysis", + help="Path to pre-computed quality analysis JSON" + ) + parser.add_argument( + "--format", "-f", + choices=["text", "markdown", "json"], + default="text", + help="Output format (default: text)" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output as JSON (shortcut for --format json)" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + if not repo_path.exists(): + print(f"Error: Path does not exist: {repo_path}", file=sys.stderr) + sys.exit(1) + + # Load pre-computed analyses if provided + pr_analysis = None + quality_analysis = None + + if args.pr_analysis: + pr_analysis = load_json_file(args.pr_analysis) + if not pr_analysis: + print(f"Warning: Could not load PR analysis from {args.pr_analysis}") + + if args.quality_analysis: + quality_analysis = load_json_file(args.quality_analysis) + if not quality_analysis: + print(f"Warning: Could not load quality analysis from {args.quality_analysis}") + + # Generate report + report = generate_report(repo_path, pr_analysis, quality_analysis) + + # Format output + output_format = "json" if args.json else args.format + + if output_format == "json": + output = json.dumps(report, indent=2) + elif output_format == "markdown": + output = format_markdown_report(report) + else: + output = format_text_report(report) + + # Write or print output + if args.output: + with open(args.output, "w") as f: + 
f.write(output) + print(f"Report written to {args.output}") + else: + print(output) + + +if __name__ == "__main__": + main() diff --git a/.junie/skills/code-reviewer/SKILL.md b/.junie/skills/code-reviewer/SKILL.md new file mode 100644 index 00000000..263c01ac --- /dev/null +++ b/.junie/skills/code-reviewer/SKILL.md @@ -0,0 +1,177 @@ +--- +name: code-reviewer +description: Code review automation for TypeScript, JavaScript, Python, Go, Swift, Kotlin. Analyzes PRs for complexity and risk, checks code quality for SOLID violations and code smells, generates review reports. Use when reviewing pull requests, analyzing code quality, identifying issues, generating review checklists. +--- + +# Code Reviewer + +Automated code review tools for analyzing pull requests, detecting code quality issues, and generating review reports. + +--- + +## Table of Contents + +- [Tools](#tools) + - [PR Analyzer](#pr-analyzer) + - [Code Quality Checker](#code-quality-checker) + - [Review Report Generator](#review-report-generator) +- [Reference Guides](#reference-guides) +- [Languages Supported](#languages-supported) + +--- + +## Tools + +### PR Analyzer + +Analyzes git diff between branches to assess review complexity and identify risks. + +```bash +# Analyze current branch against main +python scripts/pr_analyzer.py /path/to/repo + +# Compare specific branches +python scripts/pr_analyzer.py . 
--base main --head feature-branch + +# JSON output for integration +python scripts/pr_analyzer.py /path/to/repo --json +``` + +**What it detects:** +- Hardcoded secrets (passwords, API keys, tokens) +- SQL injection patterns (string concatenation in queries) +- Debug statements (debugger, console.log) +- ESLint rule disabling +- TypeScript `any` types +- TODO/FIXME comments + +**Output includes:** +- Complexity score (1-10) +- Risk categorization (critical, high, medium, low) +- File prioritization for review order +- Commit message validation + +--- + +### Code Quality Checker + +Analyzes source code for structural issues, code smells, and SOLID violations. + +```bash +# Analyze a directory +python scripts/code_quality_checker.py /path/to/code + +# Analyze specific language +python scripts/code_quality_checker.py . --language python + +# JSON output +python scripts/code_quality_checker.py /path/to/code --json +``` + +**What it detects:** +- Long functions (>50 lines) +- Large files (>500 lines) +- God classes (>20 methods) +- Deep nesting (>4 levels) +- Too many parameters (>5) +- High cyclomatic complexity +- Missing error handling +- Unused imports +- Magic numbers + +**Thresholds:** + +| Issue | Threshold | +|-------|-----------| +| Long function | >50 lines | +| Large file | >500 lines | +| God class | >20 methods | +| Too many params | >5 | +| Deep nesting | >4 levels | +| High complexity | >10 branches | + +--- + +### Review Report Generator + +Combines PR analysis and code quality findings into structured review reports. + +```bash +# Generate report for current repo +python scripts/review_report_generator.py /path/to/repo + +# Markdown output +python scripts/review_report_generator.py . --format markdown --output review.md + +# Use pre-computed analyses +python scripts/review_report_generator.py . 
\ + --pr-analysis pr_results.json \ + --quality-analysis quality_results.json +``` + +**Report includes:** +- Review verdict (approve, request changes, block) +- Score (0-100) +- Prioritized action items +- Issue summary by severity +- Suggested review order + +**Verdicts:** + +| Score | Verdict | +|-------|---------| +| 90+ with no high issues | Approve | +| 75+ with ≤2 high issues | Approve with suggestions | +| 50-74 | Request changes | +| <50 or critical issues | Block | + +--- + +## Reference Guides + +### Code Review Checklist +`.junie/skills/code-reviewer/references/code_review_checklist.md` + +Systematic checklists covering: +- Pre-review checks (build, tests, PR hygiene) +- Correctness (logic, data handling, error handling) +- Security (input validation, injection prevention) +- Performance (efficiency, caching, scalability) +- Maintainability (code quality, naming, structure) +- Testing (coverage, quality, mocking) +- Language-specific checks + +### Coding Standards +`.junie/skills/code-reviewer/references/coding_standards.md` + +Language-specific standards for: +- TypeScript (type annotations, null safety, async/await) +- JavaScript (declarations, patterns, modules) +- Python (type hints, exceptions, class design) +- Go (error handling, structs, concurrency) +- Swift (optionals, protocols, errors) +- Kotlin (null safety, data classes, coroutines) + +### Common Antipatterns +`.junie/skills/code-reviewer/references/common_antipatterns.md` + +Antipattern catalog with examples and fixes: +- Structural (god class, long method, deep nesting) +- Logic (boolean blindness, stringly typed code) +- Security (SQL injection, hardcoded credentials) +- Performance (N+1 queries, unbounded collections) +- Testing (duplication, testing implementation) +- Async (floating promises, callback hell) + +--- + +## Languages Supported + +| Language | Extensions | +|----------|------------| +| Python | `.py` | +| TypeScript | `.ts`, `.tsx` | +| JavaScript | `.js`, `.jsx`, `.mjs` | 
+| Go | `.go` | +| Swift | `.swift` | +| Kotlin | `.kt`, `.kts` | \ No newline at end of file diff --git a/.junie/skills/code-reviewer/references/code_review_checklist.md b/.junie/skills/code-reviewer/references/code_review_checklist.md new file mode 100644 index 00000000..b7bd0867 --- /dev/null +++ b/.junie/skills/code-reviewer/references/code_review_checklist.md @@ -0,0 +1,270 @@ +# Code Review Checklist + +Structured checklists for systematic code review across different aspects. + +--- + +## Table of Contents + +- [Pre-Review Checks](#pre-review-checks) +- [Correctness](#correctness) +- [Security](#security) +- [Performance](#performance) +- [Maintainability](#maintainability) +- [Testing](#testing) +- [Documentation](#documentation) +- [Language-Specific Checks](#language-specific-checks) + +--- + +## Pre-Review Checks + +Before diving into code, verify these basics: + +### Build and Tests +- [ ] Code compiles without errors +- [ ] All existing tests pass +- [ ] New tests are included for new functionality +- [ ] No unintended files included (build artifacts, IDE configs) + +### PR Hygiene +- [ ] PR has clear title and description +- [ ] Changes are scoped appropriately (not too large) +- [ ] Commits follow conventional commit format +- [ ] Branch is up to date with base branch + +### Scope Verification +- [ ] Changes match the stated purpose +- [ ] No unrelated changes bundled in +- [ ] Breaking changes are documented +- [ ] Migration path provided if needed + +--- + +## Correctness + +### Logic +- [ ] Algorithm implements requirements correctly +- [ ] Edge cases handled (null, empty, boundary values) +- [ ] Off-by-one errors checked +- [ ] Correct operators used (== vs ===, & vs &&) +- [ ] Loop termination conditions correct +- [ ] Recursion has proper base cases + +### Data Handling +- [ ] Data types appropriate for the use case +- [ ] Numeric overflow/underflow considered +- [ ] Date/time handling accounts for timezones +- [ ] Unicode and 
internationalization handled +- [ ] Data validation at entry points + +### State Management +- [ ] State transitions are valid +- [ ] Race conditions addressed +- [ ] Concurrent access handled correctly +- [ ] State cleanup on errors/exit + +### Error Handling +- [ ] Errors caught at appropriate levels +- [ ] Error messages are actionable +- [ ] Errors don't expose sensitive information +- [ ] Recovery or graceful degradation implemented +- [ ] Resources cleaned up in error paths + +--- + +## Security + +### Input Validation +- [ ] All user input validated and sanitized +- [ ] Input length limits enforced +- [ ] File uploads validated (type, size, content) +- [ ] URL parameters validated + +### Injection Prevention +- [ ] SQL queries parameterized +- [ ] Command execution uses safe APIs +- [ ] HTML output escaped to prevent XSS +- [ ] LDAP queries properly escaped +- [ ] XML parsing disables external entities + +### Authentication & Authorization +- [ ] Authentication required for protected resources +- [ ] Authorization checked before operations +- [ ] Session management secure +- [ ] Password handling follows best practices +- [ ] Token expiration implemented + +### Data Protection +- [ ] Sensitive data encrypted at rest +- [ ] Sensitive data encrypted in transit +- [ ] PII handled according to policy +- [ ] Secrets not hardcoded +- [ ] Logs don't contain sensitive data + +### API Security +- [ ] Rate limiting implemented +- [ ] CORS configured correctly +- [ ] CSRF protection in place +- [ ] API keys/tokens secured +- [ ] Endpoints use HTTPS + +--- + +## Performance + +### Efficiency +- [ ] Appropriate data structures used +- [ ] Algorithms have acceptable complexity +- [ ] Database queries are optimized +- [ ] N+1 query problems avoided +- [ ] Indexes used where beneficial + +### Resource Usage +- [ ] Memory usage bounded +- [ ] No memory leaks +- [ ] File handles properly closed +- [ ] Database connections pooled +- [ ] Network calls minimized + +### Caching 
+- [ ] Appropriate caching strategy +- [ ] Cache invalidation handled +- [ ] Cache keys are unique and predictable +- [ ] TTL values appropriate + +### Scalability +- [ ] Horizontal scaling considered +- [ ] Bottlenecks identified +- [ ] Async processing for long operations +- [ ] Batch operations where appropriate + +--- + +## Maintainability + +### Code Quality +- [ ] Functions/methods have single responsibility +- [ ] Classes follow SOLID principles +- [ ] Code is DRY (Don't Repeat Yourself) +- [ ] No dead code or commented-out code +- [ ] Magic numbers replaced with constants + +### Naming +- [ ] Names are descriptive and consistent +- [ ] Naming follows project conventions +- [ ] No abbreviations that obscure meaning +- [ ] Boolean variables/functions have is/has/can prefix + +### Structure +- [ ] Functions are appropriately sized (<50 lines preferred) +- [ ] Nesting depth is reasonable (<4 levels) +- [ ] Related code is grouped together +- [ ] Dependencies are minimal and explicit + +### Readability +- [ ] Code is self-documenting where possible +- [ ] Complex logic has explanatory comments +- [ ] Formatting is consistent +- [ ] No overly clever or obscure code + +--- + +## Testing + +### Coverage +- [ ] New code has unit tests +- [ ] Critical paths have integration tests +- [ ] Edge cases are tested +- [ ] Error conditions are tested + +### Quality +- [ ] Tests are independent +- [ ] Tests have clear assertions +- [ ] Test names describe what is tested +- [ ] Tests don't depend on external state + +### Mocking +- [ ] External dependencies are mocked +- [ ] Mocks are realistic +- [ ] Mock setup is not excessive + +--- + +## Documentation + +### Code Documentation +- [ ] Public APIs are documented +- [ ] Complex algorithms explained +- [ ] Non-obvious decisions documented +- [ ] TODO/FIXME comments have context + +### External Documentation +- [ ] README updated if needed +- [ ] API documentation updated +- [ ] Changelog updated +- [ ] Migration guides 
provided + +--- + +## Language-Specific Checks + +### TypeScript/JavaScript +- [ ] Types are explicit (avoid `any`) +- [ ] Null checks present (`?.`, `??`) +- [ ] Async/await errors handled +- [ ] No floating promises +- [ ] Memory leaks from closures checked + +### Python +- [ ] Type hints used for public APIs +- [ ] Context managers for resources (`with` statements) +- [ ] Exception handling is specific (not bare `except`) +- [ ] No mutable default arguments +- [ ] List comprehensions used appropriately + +### Go +- [ ] Errors checked and handled +- [ ] Goroutine leaks prevented +- [ ] Context propagation correct +- [ ] Defer statements in right order +- [ ] Interfaces minimal + +### Swift +- [ ] Optionals handled safely +- [ ] Memory management correct (weak/unowned) +- [ ] Error handling uses Result or throws +- [ ] Access control appropriate +- [ ] Codable implementation correct + +### Kotlin +- [ ] Null safety leveraged +- [ ] Coroutine cancellation handled +- [ ] Data classes used appropriately +- [ ] Extension functions don't obscure behavior +- [ ] Sealed classes for state + +--- + +## Review Process Tips + +### Before Approving +1. Verify all critical checks passed +2. Confirm tests are adequate +3. Consider deployment impact +4. Check for any security concerns +5. 
Ensure documentation is updated + +### Providing Feedback +- Be specific about issues +- Explain why something is problematic +- Suggest alternatives when possible +- Distinguish blockers from suggestions +- Acknowledge good patterns + +### When to Block +- Security vulnerabilities present +- Critical logic errors +- No tests for risky changes +- Breaking changes without migration +- Significant performance regressions diff --git a/.junie/skills/code-reviewer/references/coding_standards.md b/.junie/skills/code-reviewer/references/coding_standards.md new file mode 100644 index 00000000..9fbc6a06 --- /dev/null +++ b/.junie/skills/code-reviewer/references/coding_standards.md @@ -0,0 +1,555 @@ +# Coding Standards + +Language-specific coding standards and conventions for code review. + +--- + +## Table of Contents + +- [Universal Principles](#universal-principles) +- [TypeScript Standards](#typescript-standards) +- [JavaScript Standards](#javascript-standards) +- [Python Standards](#python-standards) +- [Go Standards](#go-standards) +- [Swift Standards](#swift-standards) +- [Kotlin Standards](#kotlin-standards) + +--- + +## Universal Principles + +These apply across all languages. 
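The universal principles in this section can be condensed into a short Python sketch. This is a hedged illustration only; the identifiers below are invented for the example and are not taken from the skill's scripts:

```python
from typing import Optional

MAX_NAME_LENGTH = 80  # named constant instead of a magic number


def get_display_name(profile: Optional[dict]) -> str:
    """Return a trimmed display name, using early returns for error cases."""
    if profile is None:  # guard clause instead of nesting
        return "Anonymous"
    name = profile.get("name")
    if not name:  # second guard, still flat
        return "Anonymous"
    return name.strip()[:MAX_NAME_LENGTH]
```

The function does one thing, takes a single parameter, returns early, and uses snake_case naming with a SCREAMING_SNAKE_CASE constant, matching the conventions tabulated in this section.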
+
+### Naming Conventions
+
+| Element | Convention | Example |
+|---------|------------|---------|
+| Variables | camelCase (JS/TS/Go), snake_case (Python) | `userName`, `user_name` |
+| Constants | SCREAMING_SNAKE_CASE | `MAX_RETRY_COUNT` |
+| Functions | camelCase (JS/TS), snake_case (Python) | `getUserById`, `get_user_by_id` |
+| Classes | PascalCase | `UserRepository` |
+| Interfaces | PascalCase, optionally prefixed | `IUserService` or `UserService` |
+| Private members | Prefix with underscore or use access modifiers | `_internalState` |
+
+### Function Design
+
+```
+Good functions:
+- Do one thing well
+- Have descriptive names (verb + noun)
+- Take 3 or fewer parameters
+- Return early for error cases
+- Stay under 50 lines
+```
+
+### Error Handling
+
+```
+Good error handling:
+- Catch specific errors, not generic exceptions
+- Log with context (what, where, why)
+- Clean up resources in error paths
+- Don't swallow errors silently
+- Provide actionable error messages
+```
+
+---
+
+## TypeScript Standards
+
+### Type Annotations
+
+```typescript
+// Avoid 'any' - use unknown for truly unknown types
+function processData(data: unknown): ProcessedResult {
+  if (isValidData(data)) {
+    return transform(data);
+  }
+  throw new Error('Invalid data format');
+}
+
+// Use explicit return types for public APIs
+export function calculateTotal(items: CartItem[]): number {
+  return items.reduce((sum, item) => sum + item.price, 0);
+}
+
+// Use type guards for runtime checks
+function isUser(obj: unknown): obj is User {
+  return (
+    typeof obj === 'object' &&
+    obj !== null &&
+    'id' in obj &&
+    'email' in obj
+  );
+}
+```
+
+### Null Safety
+
+```typescript
+// Use optional chaining and nullish coalescing
+const userName = user?.profile?.name ?? 
'Anonymous';
+
+// Be explicit about nullable types
+interface Config {
+  timeout: number;
+  retries?: number; // Optional
+  fallbackUrl: string | null; // Explicitly nullable
+}
+
+// Use assertion functions for validation
+function assertDefined<T>(value: T | null | undefined): asserts value is T {
+  if (value === null || value === undefined) {
+    throw new Error('Value is not defined');
+  }
+}
+```
+
+### Async/Await
+
+```typescript
+// Always handle errors in async functions
+async function fetchUser(id: string): Promise<User> {
+  try {
+    const response = await api.get(`/users/${id}`);
+    return response.data;
+  } catch (error) {
+    logger.error('Failed to fetch user', { id, error });
+    throw new UserFetchError(id, error);
+  }
+}
+
+// Use Promise.all for parallel operations
+async function loadDashboard(userId: string): Promise<Dashboard> {
+  const [profile, stats, notifications] = await Promise.all([
+    fetchProfile(userId),
+    fetchStats(userId),
+    fetchNotifications(userId)
+  ]);
+  return { profile, stats, notifications };
+}
+```
+
+### React/Component Standards
+
+```typescript
+// Use explicit prop types
+interface ButtonProps {
+  label: string;
+  onClick: () => void;
+  variant?: 'primary' | 'secondary';
+  disabled?: boolean;
+}
+
+// Prefer functional components with hooks
+function Button({ label, onClick, variant = 'primary', disabled = false }: ButtonProps) {
+  return (
+    <button className={variant} onClick={onClick} disabled={disabled}>
+      {label}
+    </button>
+  );
+}
+
+// Use custom hooks for reusable logic
+function useDebounce<T>(value: T, delay: number): T {
+  const [debouncedValue, setDebouncedValue] = useState<T>(value);
+
+  useEffect(() => {
+    const timer = setTimeout(() => setDebouncedValue(value), delay);
+    return () => clearTimeout(timer);
+  }, [value, delay]);
+
+  return debouncedValue;
+}
+```
+
+---
+
+## JavaScript Standards
+
+### Variable Declarations
+
+```javascript
+// Use const by default, let when reassignment needed
+const MAX_ITEMS = 100;
+let currentCount = 0;
+
+// Never use var
+// var is function-scoped and hoisted, leading to bugs
+```
+
+### Object and Array Patterns + +```javascript +// Use object destructuring +const { name, email, role = 'user' } = user; + +// Use spread for immutable updates +const updatedUser = { ...user, lastLogin: new Date() }; +const updatedList = [...items, newItem]; + +// Use array methods over loops +const activeUsers = users.filter(u => u.isActive); +const emails = users.map(u => u.email); +const total = orders.reduce((sum, o) => sum + o.amount, 0); +``` + +### Module Patterns + +```javascript +// Use named exports for utilities +export function formatDate(date) { ... } +export function parseDate(str) { ... } + +// Use default export for main component/class +export default class UserService { ... } + +// Group related exports +export { formatDate, parseDate, isValidDate } from './dateUtils'; +``` + +--- + +## Python Standards + +### Type Hints (PEP 484) + +```python +from typing import Optional, List, Dict, Union + +def get_user(user_id: int) -> Optional[User]: + """Fetch user by ID, returns None if not found.""" + return db.query(User).filter(User.id == user_id).first() + +def process_items(items: List[str]) -> Dict[str, int]: + """Count occurrences of each item.""" + return {item: items.count(item) for item in set(items)} + +def send_notification( + user: User, + message: str, + *, + priority: str = "normal", + channels: List[str] = None +) -> bool: + """Send notification to user via specified channels.""" + channels = channels or ["email"] + # Implementation +``` + +### Exception Handling + +```python +# Catch specific exceptions +try: + result = api_client.fetch_data(endpoint) +except ConnectionError as e: + logger.warning(f"Connection failed: {e}") + return cached_data +except TimeoutError as e: + logger.error(f"Request timed out: {e}") + raise ServiceUnavailableError() from e + +# Use context managers for resources +with open(filepath, 'r') as f: + data = json.load(f) + +# Custom exceptions should be informative +class ValidationError(Exception): + def 
__init__(self, field: str, message: str): + self.field = field + self.message = message + super().__init__(f"{field}: {message}") +``` + +### Class Design + +```python +from dataclasses import dataclass +from abc import ABC, abstractmethod + +# Use dataclasses for data containers +@dataclass +class UserDTO: + id: int + email: str + name: str + is_active: bool = True + +# Use ABC for interfaces +class Repository(ABC): + @abstractmethod + def find_by_id(self, id: int) -> Optional[Entity]: + pass + + @abstractmethod + def save(self, entity: Entity) -> Entity: + pass + +# Use properties for computed attributes +class Order: + def __init__(self, items: List[OrderItem]): + self._items = items + + @property + def total(self) -> Decimal: + return sum(item.price * item.quantity for item in self._items) +``` + +--- + +## Go Standards + +### Error Handling + +```go +// Always check errors +file, err := os.Open(filename) +if err != nil { + return fmt.Errorf("failed to open %s: %w", filename, err) +} +defer file.Close() + +// Use custom error types for specific cases +type ValidationError struct { + Field string + Message string +} + +func (e *ValidationError) Error() string { + return fmt.Sprintf("%s: %s", e.Field, e.Message) +} + +// Wrap errors with context +if err := db.Query(query); err != nil { + return fmt.Errorf("query failed for user %d: %w", userID, err) +} +``` + +### Struct Design + +```go +// Use unexported fields with exported methods +type UserService struct { + repo UserRepository + cache Cache + logger Logger +} + +// Constructor functions for initialization +func NewUserService(repo UserRepository, cache Cache, logger Logger) *UserService { + return &UserService{ + repo: repo, + cache: cache, + logger: logger, + } +} + +// Keep interfaces small +type Reader interface { + Read(p []byte) (n int, err error) +} + +type Writer interface { + Write(p []byte) (n int, err error) +} +``` + +### Concurrency + +```go +// Use context for cancellation +func fetchData(ctx 
context.Context, url string) ([]byte, error) { + req, err := http.NewRequestWithContext(ctx, "GET", url, nil) + if err != nil { + return nil, err + } + // ... +} + +// Use channels for communication +func worker(jobs <-chan Job, results chan<- Result) { + for job := range jobs { + result := process(job) + results <- result + } +} + +// Use sync.WaitGroup for coordination +var wg sync.WaitGroup +for _, item := range items { + wg.Add(1) + go func(i Item) { + defer wg.Done() + processItem(i) + }(item) +} +wg.Wait() +``` + +--- + +## Swift Standards + +### Optionals + +```swift +// Use optional binding +if let user = fetchUser(id: userId) { + displayProfile(user) +} + +// Use guard for early exit +guard let data = response.data else { + throw NetworkError.noData +} + +// Use nil coalescing for defaults +let displayName = user.nickname ?? user.email + +// Avoid force unwrapping except in tests +// BAD: let name = user.name! +// GOOD: guard let name = user.name else { return } +``` + +### Protocol-Oriented Design + +```swift +// Define protocols with minimal requirements +protocol Identifiable { + var id: String { get } +} + +protocol Persistable: Identifiable { + func save() throws + static func find(by id: String) -> Self? 
+}
+
+// Use protocol extensions for default implementations
+extension Persistable {
+    func save() throws {
+        try Storage.shared.save(self)
+    }
+}
+
+// Prefer composition over inheritance
+struct User: Identifiable, Codable {
+    let id: String
+    var name: String
+    var email: String
+}
+```
+
+### Error Handling
+
+```swift
+// Define domain-specific errors
+enum AuthError: Error {
+    case invalidCredentials
+    case tokenExpired
+    case networkFailure(underlying: Error)
+}
+
+// Use Result type for async operations
+func authenticate(
+    email: String,
+    password: String,
+    completion: @escaping (Result<User, AuthError>) -> Void
+)
+
+// Use throws for synchronous operations
+func validate(_ input: String) throws -> ValidatedInput {
+    guard !input.isEmpty else {
+        throw ValidationError.emptyInput
+    }
+    return ValidatedInput(value: input)
+}
+```
+
+---
+
+## Kotlin Standards
+
+### Null Safety
+
+```kotlin
+// Use nullable types explicitly
+fun findUser(id: Int): User? {
+    return userRepository.find(id)
+}
+
+// Use safe calls and elvis operator
+val name = user?.profile?.name ?: "Unknown"
+
+// Use let for null checks with side effects
+user?.let { activeUser ->
+    sendWelcomeEmail(activeUser.email)
+    logActivity(activeUser.id)
+}
+
+// Use require/check for validation
+fun processPayment(amount: Double) {
+    require(amount > 0) { "Amount must be positive: $amount" }
+    // Process
+}
+```
+
+### Data Classes and Sealed Classes
+
+```kotlin
+// Use data classes for DTOs
+data class UserDTO(
+    val id: Int,
+    val email: String,
+    val name: String,
+    val isActive: Boolean = true
+)
+
+// Use sealed classes for state
+sealed class Result<out T> {
+    data class Success<T>(val data: T) : Result<T>()
+    data class Error(val message: String, val cause: Throwable? = null) : Result<Nothing>()
+    object Loading : Result<Nothing>()
+}
+
+// Pattern matching with when
+fun handleResult(result: Result<User>) = when (result) {
+    is Result.Success -> showUser(result.data)
+    is Result.Error -> showError(result.message)
+    Result.Loading -> showLoading()
+}
+```
+
+### Coroutines
+
+```kotlin
+// Use structured concurrency
+suspend fun loadDashboard(): Dashboard = coroutineScope {
+    val profile = async { fetchProfile() }
+    val stats = async { fetchStats() }
+    val notifications = async { fetchNotifications() }
+
+    Dashboard(
+        profile = profile.await(),
+        stats = stats.await(),
+        notifications = notifications.await()
+    )
+}
+
+// Handle cancellation
+suspend fun fetchWithRetry(url: String): Response {
+    repeat(3) { attempt ->
+        try {
+            return httpClient.get(url)
+        } catch (e: IOException) {
+            if (attempt == 2) throw e
+            delay(1000L * (attempt + 1))
+        }
+    }
+    throw IllegalStateException("Unreachable")
+}
+```
diff --git a/.junie/skills/code-reviewer/references/common_antipatterns.md b/.junie/skills/code-reviewer/references/common_antipatterns.md
new file mode 100644
index 00000000..26045452
--- /dev/null
+++ b/.junie/skills/code-reviewer/references/common_antipatterns.md
@@ -0,0 +1,739 @@
+# Common Antipatterns
+
+Code antipatterns to identify during review, with examples and fixes.
+
+---
+
+## Table of Contents
+
+- [Structural Antipatterns](#structural-antipatterns)
+- [Logic Antipatterns](#logic-antipatterns)
+- [Security Antipatterns](#security-antipatterns)
+- [Performance Antipatterns](#performance-antipatterns)
+- [Testing Antipatterns](#testing-antipatterns)
+- [Async Antipatterns](#async-antipatterns)
+
+---
+
+## Structural Antipatterns
+
+### God Class
+
+A class that does too much and knows too much.
+
+```typescript
+// BAD: God class handling everything
+class UserManager {
+  createUser(data: UserData) { ... }
+  updateUser(id: string, data: UserData) { ... }
+  deleteUser(id: string) { ... }
+  sendEmail(userId: string, content: string) { ... 
}
+  generateReport(userId: string) { ... }
+  validatePassword(password: string) { ... }
+  hashPassword(password: string) { ... }
+  uploadAvatar(userId: string, file: File) { ... }
+  resizeImage(file: File) { ... }
+  logActivity(userId: string, action: string) { ... }
+  // 50 more methods...
+}
+
+// GOOD: Single responsibility classes
+class UserRepository {
+  create(data: UserData): User { ... }
+  update(id: string, data: Partial<UserData>): User { ... }
+  delete(id: string): void { ... }
+}
+
+class EmailService {
+  send(to: string, content: string): void { ... }
+}
+
+class PasswordService {
+  validate(password: string): ValidationResult { ... }
+  hash(password: string): string { ... }
+}
+```
+
+**Detection:** Class has >20 methods, >500 lines, or handles unrelated concerns.
+
+---
+
+### Long Method
+
+Functions that do too much and are hard to understand.
+
+```python
+# BAD: Long method doing everything
+def process_order(order_data):
+    # Validate order (20 lines)
+    if not order_data.get('items'):
+        raise ValueError('No items')
+    if not order_data.get('customer_id'):
+        raise ValueError('No customer')
+    # ... more validation
+
+    # Calculate totals (30 lines)
+    subtotal = 0
+    for item in order_data['items']:
+        price = get_product_price(item['product_id'])
+        subtotal += price * item['quantity']
+    # ... tax calculation, discounts
+
+    # Process payment (40 lines)
+    payment_result = payment_gateway.charge(...)
+    # ... handle payment errors
+
+    # Create order record (20 lines)
+    order = Order.create(...)
+
+    # Send notifications (20 lines)
+    send_order_confirmation(...)
+    notify_warehouse(...)
+ + return order + +# GOOD: Composed of focused functions +def process_order(order_data): + validate_order(order_data) + totals = calculate_order_totals(order_data) + payment = process_payment(order_data['customer_id'], totals) + order = create_order_record(order_data, totals, payment) + send_order_notifications(order) + return order +``` + +**Detection:** Function >50 lines or requires scrolling to read. + +--- + +### Deep Nesting + +Excessive indentation making code hard to follow. + +```javascript +// BAD: Deep nesting +function processData(data) { + if (data) { + if (data.items) { + if (data.items.length > 0) { + for (const item of data.items) { + if (item.isValid) { + if (item.type === 'premium') { + if (item.price > 100) { + // Finally do something + processItem(item); + } + } + } + } + } + } + } +} + +// GOOD: Early returns and guard clauses +function processData(data) { + if (!data?.items?.length) { + return; + } + + const premiumItems = data.items.filter( + item => item.isValid && item.type === 'premium' && item.price > 100 + ); + + premiumItems.forEach(processItem); +} +``` + +**Detection:** Indentation >4 levels deep. + +--- + +### Magic Numbers and Strings + +Hard-coded values without explanation. + +```go +// BAD: Magic numbers +func calculateDiscount(total float64, userType int) float64 { + if userType == 1 { + return total * 0.15 + } else if userType == 2 { + return total * 0.25 + } + return total * 0.05 +} + +// GOOD: Named constants +const ( + UserTypeRegular = 1 + UserTypePremium = 2 + + DiscountRegular = 0.05 + DiscountStandard = 0.15 + DiscountPremium = 0.25 +) + +func calculateDiscount(total float64, userType int) float64 { + switch userType { + case UserTypePremium: + return total * DiscountPremium + case UserTypeRegular: + return total * DiscountStandard + default: + return total * DiscountRegular + } +} +``` + +**Detection:** Literal numbers (except 0, 1) or repeated string literals. 
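A reviewer can approximate this check mechanically; the sketch below is illustrative (the regex and the comment handling are simplified assumptions, not the skill's actual implementation):

```python
import re

# Illustrative heuristic: flag numeric literals other than 0 and 1,
# ignoring anything after a '#' comment marker.
MAGIC = re.compile(r"(?<![\w.])(?!0\b|1\b)\d+(?:\.\d+)?(?![\w.])")

def find_magic_numbers(source: str) -> list:
    hits = []
    for line in source.splitlines():
        code = line.split("#", 1)[0]  # strip trailing comment
        hits.extend(MAGIC.findall(code))
    return hits

print(find_magic_numbers("timeout = 30\nretries = 1"))  # ['30']
```

A real linter would also skip string literals and named constants; this only shows the shape of the rule.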
+
+---
+
+### Primitive Obsession
+
+Using primitives instead of small objects.
+
+```typescript
+// BAD: Primitives everywhere
+function createUser(
+  name: string,
+  email: string,
+  phone: string,
+  street: string,
+  city: string,
+  zipCode: string,
+  country: string
+): User { ... }
+
+// GOOD: Value objects
+interface Address {
+  street: string;
+  city: string;
+  zipCode: string;
+  country: string;
+}
+
+interface ContactInfo {
+  email: string;
+  phone: string;
+}
+
+function createUser(
+  name: string,
+  contact: ContactInfo,
+  address: Address
+): User { ... }
+```
+
+**Detection:** Functions with >4 parameters of same type, or related primitives always passed together.
+
+---
+
+## Logic Antipatterns
+
+### Boolean Blindness
+
+Passing booleans that make code unreadable at call sites.
+
+```swift
+// BAD: What do these booleans mean?
+user.configure(true, false, true, false)
+
+// GOOD: Named parameters or option objects
+user.configure(
+    sendWelcomeEmail: true,
+    requireVerification: false,
+    enableNotifications: true,
+    isAdmin: false
+)
+
+// Or use an options struct
+struct UserConfiguration {
+    var sendWelcomeEmail: Bool = true
+    var requireVerification: Bool = false
+    var enableNotifications: Bool = true
+    var isAdmin: Bool = false
+}
+
+user.configure(UserConfiguration())
+```
+
+**Detection:** Function calls with multiple boolean literals.
+
+---
+
+### Null Returns for Collections
+
+Returning null instead of empty collections.
+
+```kotlin
+// BAD: Returning null
+fun findUsersByRole(role: String): List<User>? {
+    val users = repository.findByRole(role)
+    return if (users.isEmpty()) null else users
+}
+
+// Caller must handle null
+val users = findUsersByRole("admin")
+if (users != null) {
+    users.forEach { ... }
+}
+
+// GOOD: Return empty collection
+fun findUsersByRole(role: String): List<User> {
+    return repository.findByRole(role)
+}
+
+// Caller can iterate directly
+findUsersByRole("admin").forEach { ...
} +``` + +**Detection:** Functions returning nullable collections. + +--- + +### Stringly Typed Code + +Using strings where enums or types should be used. + +```python +# BAD: String-based logic +def handle_event(event_type: str, data: dict): + if event_type == "user_created": + handle_user_created(data) + elif event_type == "user_updated": + handle_user_updated(data) + elif event_type == "user_dleted": # Typo won't be caught + handle_user_deleted(data) + +# GOOD: Enum-based +from enum import Enum + +class EventType(Enum): + USER_CREATED = "user_created" + USER_UPDATED = "user_updated" + USER_DELETED = "user_deleted" + +def handle_event(event_type: EventType, data: dict): + handlers = { + EventType.USER_CREATED: handle_user_created, + EventType.USER_UPDATED: handle_user_updated, + EventType.USER_DELETED: handle_user_deleted, + } + handlers[event_type](data) +``` + +**Detection:** String comparisons for type/status/category values. + +--- + +## Security Antipatterns + +### SQL Injection + +String concatenation in SQL queries. + +```javascript +// BAD: String concatenation +const query = `SELECT * FROM users WHERE id = ${userId}`; +db.query(query); + +// BAD: String templates still vulnerable +const query = `SELECT * FROM users WHERE name = '${userName}'`; + +// GOOD: Parameterized queries +const query = 'SELECT * FROM users WHERE id = $1'; +db.query(query, [userId]); + +// GOOD: Using ORM safely +User.findOne({ where: { id: userId } }); +``` + +**Detection:** String concatenation or template literals with SQL keywords. + +--- + +### Hardcoded Credentials + +Secrets in source code. 
+ +```python +# BAD: Hardcoded secrets +API_KEY = "sk-abc123xyz789" +DATABASE_URL = "postgresql://admin:password123@prod-db.internal:5432/app" + +# GOOD: Environment variables +import os + +API_KEY = os.environ["API_KEY"] +DATABASE_URL = os.environ["DATABASE_URL"] + +# GOOD: Secrets manager +from aws_secretsmanager import get_secret + +API_KEY = get_secret("api-key") +``` + +**Detection:** Variables named `password`, `secret`, `key`, `token` with string literals. + +--- + +### Unsafe Deserialization + +Deserializing untrusted data without validation. + +```python +# BAD: Binary serialization from untrusted source can execute arbitrary code +# Examples: Python's binary serialization, yaml.load without SafeLoader + +# GOOD: Use safe alternatives +import json + +def load_data(file_path): + with open(file_path, 'r') as f: + return json.load(f) + +# GOOD: Use SafeLoader for YAML +import yaml + +with open('config.yaml') as f: + config = yaml.safe_load(f) +``` + +**Detection:** Binary deserialization functions, yaml.load without safe loader, dynamic code execution on external data. + +--- + +### Missing Input Validation + +Trusting user input without validation. + +```typescript +// BAD: No validation +app.post('/user', (req, res) => { + const user = db.create({ + name: req.body.name, + email: req.body.email, + role: req.body.role // User can set themselves as admin! + }); + res.json(user); +}); + +// GOOD: Validate and sanitize +import { z } from 'zod'; + +const CreateUserSchema = z.object({ + name: z.string().min(1).max(100), + email: z.string().email(), + // role is NOT accepted from input +}); + +app.post('/user', (req, res) => { + const validated = CreateUserSchema.parse(req.body); + const user = db.create({ + ...validated, + role: 'user' // Default role, not from input + }); + res.json(user); +}); +``` + +**Detection:** Request body/params used directly without validation schema. 
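The security rules above reduce to pattern scanning over source lines; a minimal, illustrative sketch (these regexes are simplified stand-ins for the checks described in this section, and will miss obfuscated cases):

```python
import re

# Simplified, illustrative versions of the detection rules above.
SECURITY_CHECKS = [
    ("hardcoded_secret", r"(?i)(password|secret|api_key|token)\s*[=:]\s*['\"][^'\"]+['\"]"),
    ("sql_concatenation", r"(?i)(SELECT|INSERT|UPDATE|DELETE)\b.*['\"]?\s*\+"),
    ("unsafe_yaml_load", r"yaml\.load\((?![^)]*SafeLoader)"),
]

def scan_line(line: str) -> list:
    """Return the names of all checks a source line trips."""
    return [name for name, pattern in SECURITY_CHECKS if re.search(pattern, line)]
```

For example, `scan_line('API_KEY = "sk-123"')` flags a hardcoded secret, while `yaml.safe_load(f)` passes clean. Real scanners layer entropy analysis and AST inspection on top of patterns like these.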
+
+---
+
+## Performance Antipatterns
+
+### N+1 Query Problem
+
+Loading related data one record at a time.
+
+```python
+# BAD: N+1 queries
+def get_orders_with_items():
+    orders = Order.query.all()  # 1 query
+    for order in orders:
+        items = OrderItem.query.filter_by(order_id=order.id).all()  # N queries
+        order.items = items
+    return orders
+
+# GOOD: Eager loading
+def get_orders_with_items():
+    return Order.query.options(
+        joinedload(Order.items)
+    ).all()  # 1 query with JOIN
+
+# GOOD: Batch loading
+def get_orders_with_items():
+    orders = Order.query.all()
+    order_ids = [o.id for o in orders]
+    items = OrderItem.query.filter(
+        OrderItem.order_id.in_(order_ids)
+    ).all()  # 2 queries total
+    # Group items by order_id...
+```
+
+**Detection:** Database queries inside loops.
+
+---
+
+### Unbounded Collections
+
+Loading unlimited data into memory.
+
+```go
+// BAD: Load all records
+func GetAllUsers() ([]User, error) {
+    var users []User
+    err := db.Find(&users).Error // Could be millions of rows
+    return users, err
+}
+
+// GOOD: Pagination
+func GetUsers(page, pageSize int) ([]User, error) {
+    var users []User
+    offset := (page - 1) * pageSize
+    err := db.Limit(pageSize).Offset(offset).Find(&users).Error
+    return users, err
+}
+
+// GOOD: Streaming for large datasets
+func ProcessAllUsers(handler func(User) error) error {
+    rows, err := db.Model(&User{}).Rows()
+    if err != nil {
+        return err
+    }
+    defer rows.Close()
+
+    for rows.Next() {
+        var user User
+        db.ScanRows(rows, &user)
+        if err := handler(user); err != nil {
+            return err
+        }
+    }
+    return nil
+}
+```
+
+**Detection:** `findAll()`, `find({})`, or queries without `LIMIT`.
+
+---
+
+### Synchronous I/O in Hot Paths
+
+Blocking operations in request handlers.
+ +```javascript +// BAD: Sync file read on every request +app.get('/config', (req, res) => { + const config = fs.readFileSync('./config.json'); // Blocks event loop + res.json(JSON.parse(config)); +}); + +// GOOD: Load once at startup +const config = JSON.parse(fs.readFileSync('./config.json')); + +app.get('/config', (req, res) => { + res.json(config); +}); + +// GOOD: Async with caching +let configCache = null; + +app.get('/config', async (req, res) => { + if (!configCache) { + configCache = JSON.parse(await fs.promises.readFile('./config.json')); + } + res.json(configCache); +}); +``` + +**Detection:** `readFileSync`, `execSync`, or blocking calls in request handlers. + +--- + +## Testing Antipatterns + +### Test Code Duplication + +Repeating setup in every test. + +```typescript +// BAD: Duplicate setup +describe('UserService', () => { + it('should create user', async () => { + const db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + const service = new UserService(userRepo, emailService); + + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); + + it('should update user', async () => { + const db = await createTestDatabase(); // Duplicated + const userRepo = new UserRepository(db); // Duplicated + const emailService = new MockEmailService(); // Duplicated + const service = new UserService(userRepo, emailService); // Duplicated + + // ... 
+ }); +}); + +// GOOD: Shared setup +describe('UserService', () => { + let service: UserService; + let db: TestDatabase; + + beforeEach(async () => { + db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + service = new UserService(userRepo, emailService); + }); + + afterEach(async () => { + await db.cleanup(); + }); + + it('should create user', async () => { + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); +}); +``` + +--- + +### Testing Implementation Instead of Behavior + +Tests coupled to internal implementation. + +```python +# BAD: Testing implementation details +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing internal structure + assert cart._items[0].name == "Apple" + assert cart._total == 1.00 + +# GOOD: Testing behavior +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing public behavior + assert cart.item_count == 1 + assert cart.total == 1.00 + assert cart.contains("Apple") +``` + +--- + +## Async Antipatterns + +### Floating Promises + +Promises without await or catch. + +```typescript +// BAD: Floating promise +async function saveUser(user: User) { + db.save(user); // Not awaited, errors lost + logger.info('User saved'); // Logs before save completes +} + +// BAD: Fire and forget in loop +for (const item of items) { + processItem(item); // All run in parallel, no error handling +} + +// GOOD: Await the promise +async function saveUser(user: User) { + await db.save(user); + logger.info('User saved'); +} + +// GOOD: Process with proper handling +await Promise.all(items.map(item => processItem(item))); + +// Or sequentially +for (const item of items) { + await processItem(item); +} +``` + +**Detection:** Async function calls without `await` or `.then()`. + +--- + +### Callback Hell + +Deeply nested callbacks. 
+
+```javascript
+// BAD: Callback hell
+getUser(userId, (err, user) => {
+  if (err) return handleError(err);
+  getOrders(user.id, (err, orders) => {
+    if (err) return handleError(err);
+    getProducts(orders[0].productIds, (err, products) => {
+      if (err) return handleError(err);
+      renderPage(user, orders, products, (err) => {
+        if (err) return handleError(err);
+        console.log('Done');
+      });
+    });
+  });
+});
+
+// GOOD: Async/await
+async function loadPage(userId) {
+  try {
+    const user = await getUser(userId);
+    const orders = await getOrders(user.id);
+    const products = await getProducts(orders[0].productIds);
+    await renderPage(user, orders, products);
+    console.log('Done');
+  } catch (err) {
+    handleError(err);
+  }
+}
+```
+
+**Detection:** >2 levels of callback nesting.
+
+---
+
+### Async in Constructor
+
+Async operations in constructors.
+
+```typescript
+// BAD: Async in constructor
+class DatabaseConnection {
+  constructor(url: string) {
+    this.connect(url); // Fire-and-forget async
+  }
+
+  private async connect(url: string) {
+    this.client = await createClient(url);
+  }
+}
+
+// GOOD: Factory method
+class DatabaseConnection {
+  private constructor(private client: Client) {}
+
+  static async create(url: string): Promise<DatabaseConnection> {
+    const client = await createClient(url);
+    return new DatabaseConnection(client);
+  }
+}
+
+// Usage
+const db = await DatabaseConnection.create(url);
+```
+
+**Detection:** `async` calls or `.then()` in constructor.
diff --git a/.junie/skills/code-reviewer/scripts/code_quality_checker.py b/.junie/skills/code-reviewer/scripts/code_quality_checker.py
new file mode 100644
index 00000000..d9f10df7
--- /dev/null
+++ b/.junie/skills/code-reviewer/scripts/code_quality_checker.py
@@ -0,0 +1,560 @@
+#!/usr/bin/env python3
+"""
+Code Quality Checker
+
+Analyzes source code for quality issues, code smells, complexity metrics,
+and SOLID principle violations.
+ +Usage: + python .junie/skills/code-reviewer/scripts/code_quality_checker.py /path/to/file.py + python .junie/skills/code-reviewer/scripts/code_quality_checker.py /path/to/directory --recursive + python .junie/skills/code-reviewer/scripts/code_quality_checker.py . --language typescript --json +""" + +import argparse +import json +import re +import sys +from pathlib import Path +from typing import Dict, List, Optional + + +# Language-specific file extensions +LANGUAGE_EXTENSIONS = { + "python": [".py"], + "typescript": [".ts", ".tsx"], + "javascript": [".js", ".jsx", ".mjs"], + "go": [".go"], + "swift": [".swift"], + "kotlin": [".kt", ".kts"] +} + +# Code smell thresholds +THRESHOLDS = { + "long_function_lines": 50, + "too_many_parameters": 5, + "high_complexity": 10, + "god_class_methods": 20, + "max_imports": 15 +} + + +def get_file_extension(filepath: Path) -> str: + """Get file extension.""" + return filepath.suffix.lower() + + +def detect_language(filepath: Path) -> Optional[str]: + """Detect programming language from file extension.""" + ext = get_file_extension(filepath) + for lang, extensions in LANGUAGE_EXTENSIONS.items(): + if ext in extensions: + return lang + return None + + +def read_file_content(filepath: Path) -> str: + """Read file content safely.""" + try: + with open(filepath, "r", encoding="utf-8", errors="ignore") as f: + return f.read() + except Exception: + return "" + + +def calculate_cyclomatic_complexity(content: str) -> int: + """ + Estimate cyclomatic complexity based on control flow keywords. 
+ """ + complexity = 1 # Base complexity + + # Control flow patterns that increase complexity + patterns = [ + r"\bif\b", + r"\belif\b", + r"\belse\b", + r"\bfor\b", + r"\bwhile\b", + r"\bcase\b", + r"\bcatch\b", + r"\bexcept\b", + r"\band\b", + r"\bor\b", + r"\|\|", + r"&&" + ] + + for pattern in patterns: + matches = re.findall(pattern, content, re.IGNORECASE) + complexity += len(matches) + + return complexity + + +def count_lines(content: str) -> Dict[str, int]: + """Count different types of lines in code.""" + lines = content.split("\n") + total = len(lines) + blank = sum(1 for line in lines if not line.strip()) + comment = 0 + + for line in lines: + stripped = line.strip() + if stripped.startswith("#") or stripped.startswith("//"): + comment += 1 + elif stripped.startswith("/*") or stripped.startswith("'''") or stripped.startswith('"""'): + comment += 1 + + code = total - blank - comment + + return { + "total": total, + "code": code, + "blank": blank, + "comment": comment + } + + +def find_functions(content: str, language: str) -> List[Dict]: + """Find function definitions and their metrics.""" + functions = [] + + # Language-specific function patterns + patterns = { + "python": r"def\s+(\w+)\s*\(([^)]*)\)", + "typescript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "javascript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "go": r"func\s+(?:\([^)]+\)\s+)?(\w+)\s*\(([^)]*)\)", + "swift": r"func\s+(\w+)\s*\(([^)]*)\)", + "kotlin": r"fun\s+(\w+)\s*\(([^)]*)\)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content, re.MULTILINE) + + for match in matches: + name = next((g for g in match.groups() if g), "anonymous") + params_str = match.group(2) if len(match.groups()) > 1 and match.group(2) else "" + + # Count parameters + params = [p.strip() for p in params_str.split(",") if p.strip()] + param_count = len(params) + + # Estimate 
function length + start_pos = match.end() + remaining = content[start_pos:] + + next_func = re.search(pattern, remaining) + if next_func: + func_body = remaining[:next_func.start()] + else: + func_body = remaining[:min(2000, len(remaining))] + + line_count = len(func_body.split("\n")) + complexity = calculate_cyclomatic_complexity(func_body) + + functions.append({ + "name": name, + "parameters": param_count, + "lines": line_count, + "complexity": complexity + }) + + return functions + + +def find_classes(content: str, language: str) -> List[Dict]: + """Find class definitions and their metrics.""" + classes = [] + + patterns = { + "python": r"class\s+(\w+)", + "typescript": r"class\s+(\w+)", + "javascript": r"class\s+(\w+)", + "go": r"type\s+(\w+)\s+struct", + "swift": r"class\s+(\w+)", + "kotlin": r"class\s+(\w+)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content) + + for match in matches: + name = match.group(1) + + start_pos = match.end() + remaining = content[start_pos:] + + next_class = re.search(pattern, remaining) + if next_class: + class_body = remaining[:next_class.start()] + else: + class_body = remaining + + # Count methods + method_patterns = { + "python": r"def\s+\w+\s*\(", + "typescript": r"(?:public|private|protected)?\s*\w+\s*\([^)]*\)\s*[:{]", + "javascript": r"\w+\s*\([^)]*\)\s*\{", + "go": r"func\s+\(", + "swift": r"func\s+\w+", + "kotlin": r"fun\s+\w+" + } + method_pattern = method_patterns.get(language, method_patterns["python"]) + methods = len(re.findall(method_pattern, class_body)) + + classes.append({ + "name": name, + "methods": methods, + "lines": len(class_body.split("\n")) + }) + + return classes + + +def check_code_smells(content: str, functions: List[Dict], classes: List[Dict]) -> List[Dict]: + """Check for code smells in the content.""" + smells = [] + + # Long functions + for func in functions: + if func["lines"] > THRESHOLDS["long_function_lines"]: + smells.append({ + "type": 
"long_function", + "severity": "medium", + "message": f"Function '{func['name']}' has {func['lines']} lines (max: {THRESHOLDS['long_function_lines']})", + "location": func["name"] + }) + + # Too many parameters + for func in functions: + if func["parameters"] > THRESHOLDS["too_many_parameters"]: + smells.append({ + "type": "too_many_parameters", + "severity": "low", + "message": f"Function '{func['name']}' has {func['parameters']} parameters (max: {THRESHOLDS['too_many_parameters']})", + "location": func["name"] + }) + + # High complexity + for func in functions: + if func["complexity"] > THRESHOLDS["high_complexity"]: + severity = "high" if func["complexity"] > 20 else "medium" + smells.append({ + "type": "high_complexity", + "severity": severity, + "message": f"Function '{func['name']}' has complexity {func['complexity']} (max: {THRESHOLDS['high_complexity']})", + "location": func["name"] + }) + + # God classes + for cls in classes: + if cls["methods"] > THRESHOLDS["god_class_methods"]: + smells.append({ + "type": "god_class", + "severity": "high", + "message": f"Class '{cls['name']}' has {cls['methods']} methods (max: {THRESHOLDS['god_class_methods']})", + "location": cls["name"] + }) + + # Magic numbers + magic_pattern = r"\b(? 
List[Dict]: + """Check for potential SOLID principle violations.""" + violations = [] + + # OCP: Type checking instead of polymorphism + type_checks = len(re.findall(r"isinstance\(|type\(.*\)\s*==|typeof\s+\w+\s*===", content)) + if type_checks > 2: + violations.append({ + "principle": "OCP", + "name": "Open/Closed Principle", + "severity": "medium", + "message": f"Found {type_checks} type checks - consider using polymorphism" + }) + + # LSP/ISP: NotImplementedError + not_impl = len(re.findall(r"raise\s+NotImplementedError|not\s+implemented", content, re.IGNORECASE)) + if not_impl: + violations.append({ + "principle": "LSP/ISP", + "name": "Liskov/Interface Segregation", + "severity": "low", + "message": f"Found {not_impl} unimplemented methods - may indicate oversized interface" + }) + + # DIP: Too many direct imports + imports = len(re.findall(r"^(?:import|from)\s+", content, re.MULTILINE)) + if imports > THRESHOLDS["max_imports"]: + violations.append({ + "principle": "DIP", + "name": "Dependency Inversion Principle", + "severity": "low", + "message": f"File has {imports} imports - consider dependency injection" + }) + + return violations + + +def calculate_quality_score( + line_metrics: Dict, + functions: List[Dict], + classes: List[Dict], + smells: List[Dict], + violations: List[Dict] +) -> int: + """Calculate overall quality score (0-100).""" + score = 100 + + # Deduct for code smells + for smell in smells: + if smell["severity"] == "high": + score -= 10 + elif smell["severity"] == "medium": + score -= 5 + elif smell["severity"] == "low": + score -= 2 + + # Deduct for SOLID violations + for violation in violations: + if violation["severity"] == "high": + score -= 8 + elif violation["severity"] == "medium": + score -= 4 + elif violation["severity"] == "low": + score -= 2 + + # Bonus for good comment ratio (10-30%) + if line_metrics["total"] > 0: + comment_ratio = line_metrics["comment"] / line_metrics["total"] + if 0.1 <= comment_ratio <= 0.3: + score += 5 + + # 
Bonus for reasonable function sizes + if functions: + avg_lines = sum(f["lines"] for f in functions) / len(functions) + if avg_lines < 30: + score += 5 + + return max(0, min(100, score)) + + +def get_grade(score: int) -> str: + """Convert score to letter grade.""" + if score >= 90: + return "A" + elif score >= 80: + return "B" + elif score >= 70: + return "C" + elif score >= 60: + return "D" + else: + return "F" + + +def analyze_file(filepath: Path) -> Dict: + """Analyze a single file for code quality.""" + language = detect_language(filepath) + if not language: + return {"error": f"Unsupported file type: {filepath.suffix}"} + + content = read_file_content(filepath) + if not content: + return {"error": f"Could not read file: {filepath}"} + + line_metrics = count_lines(content) + functions = find_functions(content, language) + classes = find_classes(content, language) + smells = check_code_smells(content, functions, classes) + violations = check_solid_violations(content) + score = calculate_quality_score(line_metrics, functions, classes, smells, violations) + + return { + "file": str(filepath), + "language": language, + "metrics": { + "lines": line_metrics, + "functions": len(functions), + "classes": len(classes), + "avg_complexity": round(sum(f["complexity"] for f in functions) / max(1, len(functions)), 1) + }, + "quality_score": score, + "grade": get_grade(score), + "smells": smells, + "solid_violations": violations, + "function_details": functions[:10], + "class_details": classes[:10] + } + + +def analyze_directory( + dir_path: Path, + recursive: bool = True, + language: Optional[str] = None +) -> Dict: + """Analyze all files in a directory.""" + results = [] + extensions = [] + + if language: + extensions = LANGUAGE_EXTENSIONS.get(language, []) + else: + for exts in LANGUAGE_EXTENSIONS.values(): + extensions.extend(exts) + + pattern = "**/*" if recursive else "*" + + for ext in extensions: + for filepath in dir_path.glob(f"{pattern}{ext}"): + if "node_modules" 
in str(filepath) or ".git" in str(filepath): + continue + result = analyze_file(filepath) + if "error" not in result: + results.append(result) + + if not results: + return {"error": "No supported files found"} + + total_score = sum(r["quality_score"] for r in results) + avg_score = total_score / len(results) + total_smells = sum(len(r["smells"]) for r in results) + total_violations = sum(len(r["solid_violations"]) for r in results) + + return { + "directory": str(dir_path), + "files_analyzed": len(results), + "average_score": round(avg_score, 1), + "overall_grade": get_grade(int(avg_score)), + "total_code_smells": total_smells, + "total_solid_violations": total_violations, + "files": sorted(results, key=lambda x: x["quality_score"]) + } + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if "error" in analysis: + print(f"Error: {analysis['error']}") + return + + print("=" * 60) + print("CODE QUALITY REPORT") + print("=" * 60) + + if "file" in analysis: + print(f"\nFile: {analysis['file']}") + print(f"Language: {analysis['language']}") + print(f"Quality Score: {analysis['quality_score']}/100 ({analysis['grade']})") + + metrics = analysis["metrics"] + print(f"\nLines: {metrics['lines']['total']} ({metrics['lines']['code']} code, {metrics['lines']['comment']} comments)") + print(f"Functions: {metrics['functions']}") + print(f"Classes: {metrics['classes']}") + print(f"Avg Complexity: {metrics['avg_complexity']}") + + if analysis["smells"]: + print("\n--- CODE SMELLS ---") + for smell in analysis["smells"][:10]: + print(f" [{smell['severity'].upper()}] {smell['message']} ({smell['location']})") + + if analysis["solid_violations"]: + print("\n--- SOLID VIOLATIONS ---") + for v in analysis["solid_violations"]: + print(f" [{v['principle']}] {v['message']}") + else: + print(f"\nDirectory: {analysis['directory']}") + print(f"Files Analyzed: {analysis['files_analyzed']}") + print(f"Average Score: {analysis['average_score']}/100 
({analysis['overall_grade']})")
+        print(f"Total Code Smells: {analysis['total_code_smells']}")
+        print(f"Total SOLID Violations: {analysis['total_solid_violations']}")
+
+        print("\n--- FILES BY QUALITY ---")
+        for f in analysis["files"][:10]:
+            print(f"  {f['quality_score']:3d}/100 [{f['grade']}] {f['file']}")
+
+    print("\n" + "=" * 60)
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Analyze code quality, smells, and SOLID violations"
+    )
+    parser.add_argument(
+        "path",
+        help="File or directory to analyze"
+    )
+    parser.add_argument(
+        "--recursive", "-r",
+        action=argparse.BooleanOptionalAction,
+        default=True,
+        help="Recursively analyze directories (default: true; use --no-recursive to disable)"
+    )
+    parser.add_argument(
+        "--language", "-l",
+        choices=list(LANGUAGE_EXTENSIONS.keys()),
+        help="Filter by programming language"
+    )
+    parser.add_argument(
+        "--json",
+        action="store_true",
+        help="Output in JSON format"
+    )
+    parser.add_argument(
+        "--output", "-o",
+        help="Write output to file"
+    )
+
+    args = parser.parse_args()
+
+    target = Path(args.path).resolve()
+
+    if not target.exists():
+        print(f"Error: Path does not exist: {target}", file=sys.stderr)
+        sys.exit(1)
+
+    if target.is_file():
+        analysis = analyze_file(target)
+    else:
+        analysis = analyze_directory(target, args.recursive, args.language)
+
+    if args.json:
+        output = json.dumps(analysis, indent=2, default=str)
+        if args.output:
+            with open(args.output, "w") as f:
+                f.write(output)
+            print(f"Results written to {args.output}")
+        else:
+            print(output)
+    else:
+        print_report(analysis)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/.junie/skills/code-reviewer/scripts/pr_analyzer.py b/.junie/skills/code-reviewer/scripts/pr_analyzer.py
new file mode 100644
index 00000000..915c4573
--- /dev/null
+++ b/.junie/skills/code-reviewer/scripts/pr_analyzer.py
@@ -0,0 +1,495 @@
+#!/usr/bin/env python3
+"""
+PR Analyzer
+
+Analyzes pull request changes for review complexity, risk assessment,
+and generates review priorities.
+ +Usage: + python .junie/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo + python .junie/skills/code-reviewer/scripts/pr_analyzer.py . --base main --head feature-branch + python .junie/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo --json +""" + +import argparse +import json +import os +import re +import subprocess +import sys +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# File categories for review prioritization +FILE_CATEGORIES = { + "critical": { + "patterns": [ + r"auth", r"security", r"password", r"token", r"secret", + r"payment", r"billing", r"crypto", r"encrypt" + ], + "weight": 5, + "description": "Security-sensitive files requiring careful review" + }, + "high": { + "patterns": [ + r"api", r"database", r"migration", r"schema", r"model", + r"config", r"env", r"middleware" + ], + "weight": 4, + "description": "Core infrastructure files" + }, + "medium": { + "patterns": [ + r"service", r"controller", r"handler", r"util", r"helper" + ], + "weight": 3, + "description": "Business logic files" + }, + "low": { + "patterns": [ + r"test", r"spec", r"mock", r"fixture", r"story", + r"readme", r"docs", r"\.md$" + ], + "weight": 1, + "description": "Tests and documentation" + } +} + +# Risky patterns to flag +RISK_PATTERNS = [ + { + "name": "hardcoded_secrets", + "pattern": r"(password|secret|api_key|token)\s*[=:]\s*['\"][^'\"]+['\"]", + "severity": "critical", + "message": "Potential hardcoded secret detected" + }, + { + "name": "todo_fixme", + "pattern": r"(TODO|FIXME|HACK|XXX):", + "severity": "low", + "message": "TODO/FIXME comment found" + }, + { + "name": "console_log", + "pattern": r"console\.(log|debug|info|warn|error)\(", + "severity": "medium", + "message": "Console statement found (remove for production)" + }, + { + "name": "debugger", + "pattern": r"\bdebugger\b", + "severity": "high", + "message": "Debugger statement found" + }, + { + "name": "disable_eslint", + "pattern": r"eslint-disable", + 
"severity": "medium", + "message": "ESLint rule disabled" + }, + { + "name": "any_type", + "pattern": r":\s*any\b", + "severity": "medium", + "message": "TypeScript 'any' type used" + }, + { + "name": "sql_concatenation", + "pattern": r"(SELECT|INSERT|UPDATE|DELETE).*\+.*['\"]", + "severity": "critical", + "message": "Potential SQL injection (string concatenation in query)" + } +] + + +def run_git_command(cmd: List[str], cwd: Path) -> Tuple[bool, str]: + """Run a git command and return success status and output.""" + try: + result = subprocess.run( + cmd, + cwd=cwd, + capture_output=True, + text=True, + timeout=30 + ) + return result.returncode == 0, result.stdout.strip() + except subprocess.TimeoutExpired: + return False, "Command timed out" + except Exception as e: + return False, str(e) + + +def get_changed_files(repo_path: Path, base: str, head: str) -> List[Dict]: + """Get list of changed files between two refs.""" + success, output = run_git_command( + ["git", "diff", "--name-status", f"{base}...{head}"], + repo_path + ) + + if not success: + # Try without the triple dot (for uncommitted changes) + success, output = run_git_command( + ["git", "diff", "--name-status", base, head], + repo_path + ) + + if not success or not output: + # Fall back to staged changes + success, output = run_git_command( + ["git", "diff", "--name-status", "--cached"], + repo_path + ) + + files = [] + for line in output.split("\n"): + if not line.strip(): + continue + parts = line.split("\t") + if len(parts) >= 2: + status = parts[0][0] # First character of status + filepath = parts[-1] # Handle renames (R100\told\tnew) + status_map = { + "A": "added", + "M": "modified", + "D": "deleted", + "R": "renamed", + "C": "copied" + } + files.append({ + "path": filepath, + "status": status_map.get(status, "modified") + }) + + return files + + +def get_file_diff(repo_path: Path, filepath: str, base: str, head: str) -> str: + """Get diff content for a specific file.""" + success, output = 
run_git_command( + ["git", "diff", f"{base}...{head}", "--", filepath], + repo_path + ) + if not success: + success, output = run_git_command( + ["git", "diff", "--cached", "--", filepath], + repo_path + ) + return output if success else "" + + +def categorize_file(filepath: str) -> Tuple[str, int]: + """Categorize a file based on its path and name.""" + filepath_lower = filepath.lower() + + for category, info in FILE_CATEGORIES.items(): + for pattern in info["patterns"]: + if re.search(pattern, filepath_lower): + return category, info["weight"] + + return "medium", 2 # Default category + + +def analyze_diff_for_risks(diff_content: str, filepath: str) -> List[Dict]: + """Analyze diff content for risky patterns.""" + risks = [] + + # Only analyze added lines (starting with +) + added_lines = [ + line[1:] for line in diff_content.split("\n") + if line.startswith("+") and not line.startswith("+++") + ] + + content = "\n".join(added_lines) + + for risk in RISK_PATTERNS: + matches = re.findall(risk["pattern"], content, re.IGNORECASE) + if matches: + risks.append({ + "name": risk["name"], + "severity": risk["severity"], + "message": risk["message"], + "file": filepath, + "count": len(matches) + }) + + return risks + + +def count_changes(diff_content: str) -> Dict[str, int]: + """Count additions and deletions in diff.""" + additions = 0 + deletions = 0 + + for line in diff_content.split("\n"): + if line.startswith("+") and not line.startswith("+++"): + additions += 1 + elif line.startswith("-") and not line.startswith("---"): + deletions += 1 + + return {"additions": additions, "deletions": deletions} + + +def calculate_complexity_score(files: List[Dict], all_risks: List[Dict]) -> int: + """Calculate overall PR complexity score (1-10).""" + score = 0 + + # File count contribution (max 3 points) + file_count = len(files) + if file_count > 20: + score += 3 + elif file_count > 10: + score += 2 + elif file_count > 5: + score += 1 + + # Total changes contribution (max 3 
points)
+    total_changes = sum(f.get("additions", 0) + f.get("deletions", 0) for f in files)
+    if total_changes > 500:
+        score += 3
+    elif total_changes > 200:
+        score += 2
+    elif total_changes > 50:
+        score += 1
+
+    # Risk severity contribution (max 4 points)
+    critical_risks = sum(1 for r in all_risks if r["severity"] == "critical")
+    high_risks = sum(1 for r in all_risks if r["severity"] == "high")
+
+    score += min(2, critical_risks)
+    score += min(2, high_risks)
+
+    return min(10, max(1, score))
+
+
+def analyze_commit_messages(repo_path: Path, base: str, head: str) -> Dict:
+    """Analyze commit messages in the PR."""
+    success, output = run_git_command(
+        ["git", "log", "--oneline", f"{base}...{head}"],
+        repo_path
+    )
+
+    if not success or not output:
+        return {"commits": 0, "issues": []}
+
+    commits = output.strip().split("\n")
+    issues = []
+
+    for commit in commits:
+        # Split "<hash> <subject>" on the first space; the abbreviated
+        # hash length varies by repo, so a fixed slice offset is unreliable.
+        parts = commit.split(" ", 1)
+        if len(parts) < 2:
+            continue
+        short_hash, message = parts[0][:7], parts[1]
+
+        # Check for conventional commit format
+        if not re.match(r"^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\(.+\))?:", message):
+            issues.append({
+                "commit": short_hash,
+                "issue": "Does not follow conventional commit format"
+            })
+
+        if len(message) > 72:
+            issues.append({
+                "commit": short_hash,
+                "issue": "Commit subject exceeds 72 characters"
+            })
+
+    return {
+        "commits": len(commits),
+        "issues": issues
+    }
+
+
+def analyze_pr(
+    repo_path: Path,
+    base: str = "main",
+    head: str = "HEAD"
+) -> Dict:
+    """Perform complete PR analysis."""
+    # Get changed files
+    changed_files = get_changed_files(repo_path, base, head)
+
+    if not changed_files:
+        return {
+            "status": "no_changes",
+            "message": "No changes detected between branches"
+        }
+
+    # Analyze each file
+    all_risks = []
+    file_analyses = []
+
+    for file_info in changed_files:
+        filepath = file_info["path"]
+        category, weight = categorize_file(filepath)
+
+        # Get diff for the file
+        diff = get_file_diff(repo_path,
filepath, base, head) + changes = count_changes(diff) + risks = analyze_diff_for_risks(diff, filepath) + + all_risks.extend(risks) + + file_analyses.append({ + "path": filepath, + "status": file_info["status"], + "category": category, + "priority_weight": weight, + "additions": changes["additions"], + "deletions": changes["deletions"], + "risks": risks + }) + + # Sort by priority (highest first) + file_analyses.sort(key=lambda x: (-x["priority_weight"], x["path"])) + + # Analyze commits + commit_analysis = analyze_commit_messages(repo_path, base, head) + + # Calculate metrics + complexity = calculate_complexity_score(file_analyses, all_risks) + + total_additions = sum(f["additions"] for f in file_analyses) + total_deletions = sum(f["deletions"] for f in file_analyses) + + return { + "status": "analyzed", + "summary": { + "files_changed": len(file_analyses), + "total_additions": total_additions, + "total_deletions": total_deletions, + "complexity_score": complexity, + "complexity_label": get_complexity_label(complexity), + "commits": commit_analysis["commits"] + }, + "risks": { + "critical": [r for r in all_risks if r["severity"] == "critical"], + "high": [r for r in all_risks if r["severity"] == "high"], + "medium": [r for r in all_risks if r["severity"] == "medium"], + "low": [r for r in all_risks if r["severity"] == "low"] + }, + "files": file_analyses, + "commit_issues": commit_analysis["issues"], + "review_order": [f["path"] for f in file_analyses[:10]] # Top 10 priority files + } + + +def get_complexity_label(score: int) -> str: + """Get human-readable complexity label.""" + if score <= 2: + return "Simple" + elif score <= 4: + return "Moderate" + elif score <= 6: + return "Complex" + elif score <= 8: + return "Very Complex" + else: + return "Critical" + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if analysis["status"] == "no_changes": + print("No changes detected.") + return + + summary = analysis["summary"] + 
risks = analysis["risks"] + + print("=" * 60) + print("PR ANALYSIS REPORT") + print("=" * 60) + + print(f"\nComplexity: {summary['complexity_score']}/10 ({summary['complexity_label']})") + print(f"Files Changed: {summary['files_changed']}") + print(f"Lines: +{summary['total_additions']} / -{summary['total_deletions']}") + print(f"Commits: {summary['commits']}") + + # Risk summary + print("\n--- RISK SUMMARY ---") + print(f"Critical: {len(risks['critical'])}") + print(f"High: {len(risks['high'])}") + print(f"Medium: {len(risks['medium'])}") + print(f"Low: {len(risks['low'])}") + + # Critical and high risks details + if risks["critical"]: + print("\n--- CRITICAL RISKS ---") + for risk in risks["critical"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + if risks["high"]: + print("\n--- HIGH RISKS ---") + for risk in risks["high"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + # Commit message issues + if analysis["commit_issues"]: + print("\n--- COMMIT MESSAGE ISSUES ---") + for issue in analysis["commit_issues"][:5]: + print(f" {issue['commit']}: {issue['issue']}") + + # Review order + print("\n--- SUGGESTED REVIEW ORDER ---") + for i, filepath in enumerate(analysis["review_order"], 1): + file_info = next(f for f in analysis["files"] if f["path"] == filepath) + print(f" {i}. 
[{file_info['category'].upper()}] {filepath}")
+
+    print("\n" + "=" * 60)
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Analyze pull request for review complexity and risks"
+    )
+    parser.add_argument(
+        "repo_path",
+        nargs="?",
+        default=".",
+        help="Path to git repository (default: current directory)"
+    )
+    parser.add_argument(
+        "--base", "-b",
+        default="main",
+        help="Base branch for comparison (default: main)"
+    )
+    parser.add_argument(
+        "--head",
+        default="HEAD",
+        help="Head branch/commit for comparison (default: HEAD)"
+    )
+    parser.add_argument(
+        "--json",
+        action="store_true",
+        help="Output in JSON format"
+    )
+    parser.add_argument(
+        "--output", "-o",
+        help="Write output to file"
+    )
+
+    args = parser.parse_args()
+
+    repo_path = Path(args.repo_path).resolve()
+
+    if not (repo_path / ".git").exists():
+        print(f"Error: {repo_path} is not a git repository", file=sys.stderr)
+        sys.exit(1)
+
+    analysis = analyze_pr(repo_path, args.base, args.head)
+
+    if args.json:
+        output = json.dumps(analysis, indent=2)
+    else:
+        # Capture the printed report so --output works for text as well as JSON
+        import io
+        from contextlib import redirect_stdout
+        buffer = io.StringIO()
+        with redirect_stdout(buffer):
+            print_report(analysis)
+        output = buffer.getvalue().rstrip("\n")
+
+    if args.output:
+        with open(args.output, "w") as f:
+            f.write(output + "\n")
+        print(f"Results written to {args.output}")
+    else:
+        print(output)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/.junie/skills/code-reviewer/scripts/review_report_generator.py b/.junie/skills/code-reviewer/scripts/review_report_generator.py
new file mode 100644
index 00000000..20d4c188
--- /dev/null
+++ b/.junie/skills/code-reviewer/scripts/review_report_generator.py
@@ -0,0 +1,505 @@
+#!/usr/bin/env python3
+"""
+Review Report Generator
+
+Generates comprehensive code review reports by combining PR analysis
+and code quality findings into structured, actionable reports.
+
+Usage:
+    python .junie/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo
+    python .junie/skills/code-reviewer/scripts/review_report_generator.py .
--pr-analysis pr_results.json --quality-analysis quality_results.json
+    python .junie/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo --format markdown --output review.md
+"""
+
+import argparse
+import json
+import subprocess
+import sys
+from datetime import datetime
+from pathlib import Path
+from typing import Dict, List, Optional, Tuple
+
+
+# Severity weights for prioritization
+SEVERITY_WEIGHTS = {
+    "critical": 100,
+    "high": 75,
+    "medium": 50,
+    "low": 25,
+    "info": 10
+}
+
+# Review verdict thresholds
+VERDICT_THRESHOLDS = {
+    "approve": {"max_critical": 0, "max_high": 0, "max_score": 100},
+    "approve_with_suggestions": {"max_critical": 0, "max_high": 2, "max_score": 85},
+    "request_changes": {"max_critical": 0, "max_high": 5, "max_score": 70},
+    "block": {"max_critical": float("inf"), "max_high": float("inf"), "max_score": 0}
+}
+
+
+def load_json_file(filepath: str) -> Optional[Dict]:
+    """Load JSON file if it exists."""
+    try:
+        with open(filepath, "r") as f:
+            return json.load(f)
+    except (FileNotFoundError, json.JSONDecodeError):
+        return None
+
+
+def run_pr_analyzer(repo_path: Path) -> Dict:
+    """Run pr_analyzer.py (located alongside this script) and return results."""
+    script_path = Path(__file__).parent / "pr_analyzer.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "pr_analyzer.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=120
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def run_quality_checker(repo_path: Path) -> Dict:
+    """Run code_quality_checker.py (located alongside this script) and return results."""
+    script_path = Path(__file__).parent / "code_quality_checker.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "code_quality_checker.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=300
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def calculate_review_score(pr_analysis: Dict, quality_analysis: Dict) -> int:
+    """Calculate overall review score (0-100)."""
+    score = 100
+
+    # Deduct for PR risks
+    if "risks" in pr_analysis:
+        risks = pr_analysis["risks"]
+        score -= len(risks.get("critical", [])) * 15
+        score -= len(risks.get("high", [])) * 10
+        score -= len(risks.get("medium", [])) * 5
+        score -= len(risks.get("low", [])) * 2
+
+    # Deduct for code quality issues
+    if "issues" in quality_analysis:
+        issues = quality_analysis["issues"]
+        score -= len([i for i in issues if i.get("severity") == "critical"]) * 12
+        score -= len([i for i in issues if i.get("severity") == "high"]) * 8
+        score -= len([i for i in issues if i.get("severity") == "medium"]) * 4
+        score -= len([i for i in issues if i.get("severity") == "low"]) * 1
+
+    # Deduct for complexity
+    if "summary" in pr_analysis:
+        complexity = pr_analysis["summary"].get("complexity_score", 0)
+        if complexity > 7:
+            score -= 10
+        elif complexity > 5:
+            score -= 5
+
+    return max(0, min(100, score))
+
+
+def determine_verdict(score: int, critical_count: int, high_count: int) -> Tuple[str, str]:
+    """Determine review verdict based on score and issue counts."""
+    if critical_count > 0:
+        return "block", "Critical issues must be resolved before merge"
+
+    if score >= 90 and high_count == 0:
+        return "approve", "Code meets quality standards"
+
+    if score >= 75 and high_count <= 2:
return "approve_with_suggestions", "Minor improvements recommended" + + if score >= 50: + return "request_changes", "Several issues need to be addressed" + + return "block", "Significant issues prevent approval" + + +def generate_findings_list(pr_analysis: Dict, quality_analysis: Dict) -> List[Dict]: + """Combine and prioritize all findings.""" + findings = [] + + # Add PR risk findings + if "risks" in pr_analysis: + for severity, items in pr_analysis["risks"].items(): + for item in items: + findings.append({ + "source": "pr_analysis", + "severity": severity, + "category": item.get("name", "unknown"), + "message": item.get("message", ""), + "file": item.get("file", ""), + "count": item.get("count", 1) + }) + + # Add code quality findings + if "issues" in quality_analysis: + for issue in quality_analysis["issues"]: + findings.append({ + "source": "quality_analysis", + "severity": issue.get("severity", "medium"), + "category": issue.get("type", "unknown"), + "message": issue.get("message", ""), + "file": issue.get("file", ""), + "line": issue.get("line", 0) + }) + + # Sort by severity weight + findings.sort( + key=lambda x: -SEVERITY_WEIGHTS.get(x["severity"], 0) + ) + + return findings + + +def generate_action_items(findings: List[Dict]) -> List[Dict]: + """Generate prioritized action items from findings.""" + action_items = [] + seen_categories = set() + + for finding in findings: + category = finding["category"] + severity = finding["severity"] + + # Group similar issues + if category in seen_categories and severity not in ["critical", "high"]: + continue + + action = { + "priority": "P0" if severity == "critical" else "P1" if severity == "high" else "P2", + "action": get_action_for_category(category, finding), + "severity": severity, + "files_affected": [finding["file"]] if finding.get("file") else [] + } + action_items.append(action) + seen_categories.add(category) + + return action_items[:15] # Top 15 actions + + +def get_action_for_category(category: str, 
finding: Dict) -> str: + """Get actionable recommendation for issue category.""" + actions = { + "hardcoded_secrets": "Remove hardcoded credentials and use environment variables or a secrets manager", + "sql_concatenation": "Use parameterized queries to prevent SQL injection", + "debugger": "Remove debugger statements before merging", + "console_log": "Remove or replace console statements with proper logging", + "todo_fixme": "Address TODO/FIXME comments or create tracking issues", + "disable_eslint": "Address the underlying issue instead of disabling lint rules", + "any_type": "Replace 'any' types with proper type definitions", + "long_function": "Break down function into smaller, focused units", + "god_class": "Split class into smaller, single-responsibility classes", + "too_many_params": "Use parameter objects or builder pattern", + "deep_nesting": "Refactor using early returns, guard clauses, or extraction", + "high_complexity": "Reduce cyclomatic complexity through refactoring", + "missing_error_handling": "Add proper error handling and recovery logic", + "duplicate_code": "Extract duplicate code into shared functions", + "magic_numbers": "Replace magic numbers with named constants", + "large_file": "Consider splitting into multiple smaller modules" + } + return actions.get(category, f"Review and address: {finding.get('message', category)}") + + +def format_markdown_report(report: Dict) -> str: + """Generate markdown-formatted report.""" + lines = [] + + # Header + lines.append("# Code Review Report") + lines.append("") + lines.append(f"**Generated:** {report['metadata']['generated_at']}") + lines.append(f"**Repository:** {report['metadata']['repository']}") + lines.append("") + + # Executive Summary + lines.append("## Executive Summary") + lines.append("") + summary = report["summary"] + verdict = summary["verdict"] + verdict_emoji = { + "approve": "✅", + "approve_with_suggestions": "✅", + "request_changes": "⚠️", + "block": "❌" + }.get(verdict, "❓") + + 
lines.append(f"**Verdict:** {verdict_emoji} {verdict.upper().replace('_', ' ')}") + lines.append(f"**Score:** {summary['score']}/100") + lines.append(f"**Rationale:** {summary['rationale']}") + lines.append("") + + # Issue Counts + lines.append("### Issue Summary") + lines.append("") + lines.append("| Severity | Count |") + lines.append("|----------|-------|") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f"| {severity.capitalize()} | {count} |") + lines.append("") + + # PR Statistics (if available) + if "pr_summary" in report: + pr = report["pr_summary"] + lines.append("### Change Statistics") + lines.append("") + lines.append(f"- **Files Changed:** {pr.get('files_changed', 'N/A')}") + lines.append(f"- **Lines Added:** +{pr.get('total_additions', 0)}") + lines.append(f"- **Lines Removed:** -{pr.get('total_deletions', 0)}") + lines.append(f"- **Complexity:** {pr.get('complexity_label', 'N/A')}") + lines.append("") + + # Action Items + if report.get("action_items"): + lines.append("## Action Items") + lines.append("") + for i, item in enumerate(report["action_items"], 1): + priority = item["priority"] + emoji = "🔴" if priority == "P0" else "🟠" if priority == "P1" else "🟡" + lines.append(f"{i}. 
{emoji} **[{priority}]** {item['action']}") + if item.get("files_affected"): + lines.append(f" - Files: {', '.join(item['files_affected'][:3])}") + lines.append("") + + # Critical Findings + critical_findings = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical_findings: + lines.append("## Critical Issues (Must Fix)") + lines.append("") + for finding in critical_findings: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # High Priority Findings + high_findings = [f for f in report.get("findings", []) if f["severity"] == "high"] + if high_findings: + lines.append("## High Priority Issues") + lines.append("") + for finding in high_findings[:10]: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # Review Order (if available) + if "review_order" in report: + lines.append("## Suggested Review Order") + lines.append("") + for i, filepath in enumerate(report["review_order"][:10], 1): + lines.append(f"{i}. 
`{filepath}`") + lines.append("") + + # Footer + lines.append("---") + lines.append("*Generated by Code Reviewer*") + + return "\n".join(lines) + + +def format_text_report(report: Dict) -> str: + """Generate plain text report.""" + lines = [] + + lines.append("=" * 60) + lines.append("CODE REVIEW REPORT") + lines.append("=" * 60) + lines.append("") + lines.append(f"Generated: {report['metadata']['generated_at']}") + lines.append(f"Repository: {report['metadata']['repository']}") + lines.append("") + + summary = report["summary"] + verdict = summary["verdict"].upper().replace("_", " ") + lines.append(f"VERDICT: {verdict}") + lines.append(f"SCORE: {summary['score']}/100") + lines.append(f"RATIONALE: {summary['rationale']}") + lines.append("") + + lines.append("--- ISSUE SUMMARY ---") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f" {severity.capitalize()}: {count}") + lines.append("") + + if report.get("action_items"): + lines.append("--- ACTION ITEMS ---") + for i, item in enumerate(report["action_items"][:10], 1): + lines.append(f" {i}. 
[{item['priority']}] {item['action']}") + lines.append("") + + critical = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical: + lines.append("--- CRITICAL ISSUES ---") + for f in critical: + lines.append(f" [{f.get('file', 'unknown')}] {f['message']}") + lines.append("") + + lines.append("=" * 60) + + return "\n".join(lines) + + +def generate_report( + repo_path: Path, + pr_analysis: Optional[Dict] = None, + quality_analysis: Optional[Dict] = None +) -> Dict: + """Generate comprehensive review report.""" + # Run analyses if not provided + if pr_analysis is None: + pr_analysis = run_pr_analyzer(repo_path) + + if quality_analysis is None: + quality_analysis = run_quality_checker(repo_path) + + # Generate findings + findings = generate_findings_list(pr_analysis, quality_analysis) + + # Count issues by severity + issue_counts = { + "critical": len([f for f in findings if f["severity"] == "critical"]), + "high": len([f for f in findings if f["severity"] == "high"]), + "medium": len([f for f in findings if f["severity"] == "medium"]), + "low": len([f for f in findings if f["severity"] == "low"]) + } + + # Calculate score and verdict + score = calculate_review_score(pr_analysis, quality_analysis) + verdict, rationale = determine_verdict( + score, + issue_counts["critical"], + issue_counts["high"] + ) + + # Generate action items + action_items = generate_action_items(findings) + + # Build report + report = { + "metadata": { + "generated_at": datetime.now().isoformat(), + "repository": str(repo_path), + "version": "1.0.0" + }, + "summary": { + "score": score, + "verdict": verdict, + "rationale": rationale, + "issue_counts": issue_counts + }, + "findings": findings, + "action_items": action_items + } + + # Add PR summary if available + if pr_analysis.get("status") == "analyzed": + report["pr_summary"] = pr_analysis.get("summary", {}) + report["review_order"] = pr_analysis.get("review_order", []) + + # Add quality summary if available + if 
quality_analysis.get("status") == "analyzed": + report["quality_summary"] = quality_analysis.get("summary", {}) + + return report + + +def main(): + parser = argparse.ArgumentParser( + description="Generate comprehensive code review reports" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to repository (default: current directory)" + ) + parser.add_argument( + "--pr-analysis", + help="Path to pre-computed PR analysis JSON" + ) + parser.add_argument( + "--quality-analysis", + help="Path to pre-computed quality analysis JSON" + ) + parser.add_argument( + "--format", "-f", + choices=["text", "markdown", "json"], + default="text", + help="Output format (default: text)" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output as JSON (shortcut for --format json)" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + if not repo_path.exists(): + print(f"Error: Path does not exist: {repo_path}", file=sys.stderr) + sys.exit(1) + + # Load pre-computed analyses if provided + pr_analysis = None + quality_analysis = None + + if args.pr_analysis: + pr_analysis = load_json_file(args.pr_analysis) + if not pr_analysis: + print(f"Warning: Could not load PR analysis from {args.pr_analysis}") + + if args.quality_analysis: + quality_analysis = load_json_file(args.quality_analysis) + if not quality_analysis: + print(f"Warning: Could not load quality analysis from {args.quality_analysis}") + + # Generate report + report = generate_report(repo_path, pr_analysis, quality_analysis) + + # Format output + output_format = "json" if args.json else args.format + + if output_format == "json": + output = json.dumps(report, indent=2) + elif output_format == "markdown": + output = format_markdown_report(report) + else: + output = format_text_report(report) + + # Write or print output + if args.output: + with open(args.output, "w") as f: + 
f.write(output) + print(f"Report written to {args.output}") + else: + print(output) + + +if __name__ == "__main__": + main() diff --git a/.roo/skills/code-reviewer/SKILL.md b/.roo/skills/code-reviewer/SKILL.md new file mode 100644 index 00000000..ec8c75c5 --- /dev/null +++ b/.roo/skills/code-reviewer/SKILL.md @@ -0,0 +1,177 @@ +--- +name: code-reviewer +description: Code review automation for TypeScript, JavaScript, Python, Go, Swift, Kotlin. Analyzes PRs for complexity and risk, checks code quality for SOLID violations and code smells, generates review reports. Use when reviewing pull requests, analyzing code quality, identifying issues, generating review checklists. +--- + +# Code Reviewer + +Automated code review tools for analyzing pull requests, detecting code quality issues, and generating review reports. + +--- + +## Table of Contents + +- [Tools](#tools) + - [PR Analyzer](#pr-analyzer) + - [Code Quality Checker](#code-quality-checker) + - [Review Report Generator](#review-report-generator) +- [Reference Guides](#reference-guides) +- [Languages Supported](#languages-supported) + +--- + +## Tools + +### PR Analyzer + +Analyzes git diff between branches to assess review complexity and identify risks. + +```bash +# Analyze current branch against main +python scripts/pr_analyzer.py /path/to/repo + +# Compare specific branches +python scripts/pr_analyzer.py . 
--base main --head feature-branch + +# JSON output for integration +python scripts/pr_analyzer.py /path/to/repo --json +``` + +**What it detects:** +- Hardcoded secrets (passwords, API keys, tokens) +- SQL injection patterns (string concatenation in queries) +- Debug statements (debugger, console.log) +- ESLint rule disabling +- TypeScript `any` types +- TODO/FIXME comments + +**Output includes:** +- Complexity score (1-10) +- Risk categorization (critical, high, medium, low) +- File prioritization for review order +- Commit message validation + +--- + +### Code Quality Checker + +Analyzes source code for structural issues, code smells, and SOLID violations. + +```bash +# Analyze a directory +python scripts/code_quality_checker.py /path/to/code + +# Analyze specific language +python scripts/code_quality_checker.py . --language python + +# JSON output +python scripts/code_quality_checker.py /path/to/code --json +``` + +**What it detects:** +- Long functions (>50 lines) +- Large files (>500 lines) +- God classes (>20 methods) +- Deep nesting (>4 levels) +- Too many parameters (>5) +- High cyclomatic complexity +- Missing error handling +- Unused imports +- Magic numbers + +**Thresholds:** + +| Issue | Threshold | +|-------|-----------| +| Long function | >50 lines | +| Large file | >500 lines | +| God class | >20 methods | +| Too many params | >5 | +| Deep nesting | >4 levels | +| High complexity | >10 branches | + +--- + +### Review Report Generator + +Combines PR analysis and code quality findings into structured review reports. + +```bash +# Generate report for current repo +python scripts/review_report_generator.py /path/to/repo + +# Markdown output +python scripts/review_report_generator.py . --format markdown --output review.md + +# Use pre-computed analyses +python scripts/review_report_generator.py . 
\ + --pr-analysis pr_results.json \ + --quality-analysis quality_results.json +``` + +**Report includes:** +- Review verdict (approve, request changes, block) +- Score (0-100) +- Prioritized action items +- Issue summary by severity +- Suggested review order + +**Verdicts:** + +| Score | Verdict | +|-------|---------| +| 90+ with no high issues | Approve | +| 75+ with ≤2 high issues | Approve with suggestions | +| 50-74 | Request changes | +| <50 or critical issues | Block | + +--- + +## Reference Guides + +### Code Review Checklist +`.roo/skills/code-reviewer/references/code_review_checklist.md` + +Systematic checklists covering: +- Pre-review checks (build, tests, PR hygiene) +- Correctness (logic, data handling, error handling) +- Security (input validation, injection prevention) +- Performance (efficiency, caching, scalability) +- Maintainability (code quality, naming, structure) +- Testing (coverage, quality, mocking) +- Language-specific checks + +### Coding Standards +`.roo/skills/code-reviewer/references/coding_standards.md` + +Language-specific standards for: +- TypeScript (type annotations, null safety, async/await) +- JavaScript (declarations, patterns, modules) +- Python (type hints, exceptions, class design) +- Go (error handling, structs, concurrency) +- Swift (optionals, protocols, errors) +- Kotlin (null safety, data classes, coroutines) + +### Common Antipatterns +`.roo/skills/code-reviewer/references/common_antipatterns.md` + +Antipattern catalog with examples and fixes: +- Structural (god class, long method, deep nesting) +- Logic (boolean blindness, stringly typed code) +- Security (SQL injection, hardcoded credentials) +- Performance (N+1 queries, unbounded collections) +- Testing (duplication, testing implementation) +- Async (floating promises, callback hell) + +--- + +## Languages Supported + +| Language | Extensions | +|----------|------------| +| Python | `.py` | +| TypeScript | `.ts`, `.tsx` | +| JavaScript | `.js`, `.jsx`, `.mjs` | +| Go 
| `.go` | +| Swift | `.swift` | +| Kotlin | `.kt`, `.kts` | \ No newline at end of file diff --git a/.roo/skills/code-reviewer/references/code_review_checklist.md b/.roo/skills/code-reviewer/references/code_review_checklist.md new file mode 100644 index 00000000..b7bd0867 --- /dev/null +++ b/.roo/skills/code-reviewer/references/code_review_checklist.md @@ -0,0 +1,270 @@ +# Code Review Checklist + +Structured checklists for systematic code review across different aspects. + +--- + +## Table of Contents + +- [Pre-Review Checks](#pre-review-checks) +- [Correctness](#correctness) +- [Security](#security) +- [Performance](#performance) +- [Maintainability](#maintainability) +- [Testing](#testing) +- [Documentation](#documentation) +- [Language-Specific Checks](#language-specific-checks) + +--- + +## Pre-Review Checks + +Before diving into code, verify these basics: + +### Build and Tests +- [ ] Code compiles without errors +- [ ] All existing tests pass +- [ ] New tests are included for new functionality +- [ ] No unintended files included (build artifacts, IDE configs) + +### PR Hygiene +- [ ] PR has clear title and description +- [ ] Changes are scoped appropriately (not too large) +- [ ] Commits follow conventional commit format +- [ ] Branch is up to date with base branch + +### Scope Verification +- [ ] Changes match the stated purpose +- [ ] No unrelated changes bundled in +- [ ] Breaking changes are documented +- [ ] Migration path provided if needed + +--- + +## Correctness + +### Logic +- [ ] Algorithm implements requirements correctly +- [ ] Edge cases handled (null, empty, boundary values) +- [ ] Off-by-one errors checked +- [ ] Correct operators used (== vs ===, & vs &&) +- [ ] Loop termination conditions correct +- [ ] Recursion has proper base cases + +### Data Handling +- [ ] Data types appropriate for the use case +- [ ] Numeric overflow/underflow considered +- [ ] Date/time handling accounts for timezones +- [ ] Unicode and internationalization handled 
+- [ ] Data validation at entry points + +### State Management +- [ ] State transitions are valid +- [ ] Race conditions addressed +- [ ] Concurrent access handled correctly +- [ ] State cleanup on errors/exit + +### Error Handling +- [ ] Errors caught at appropriate levels +- [ ] Error messages are actionable +- [ ] Errors don't expose sensitive information +- [ ] Recovery or graceful degradation implemented +- [ ] Resources cleaned up in error paths + +--- + +## Security + +### Input Validation +- [ ] All user input validated and sanitized +- [ ] Input length limits enforced +- [ ] File uploads validated (type, size, content) +- [ ] URL parameters validated + +### Injection Prevention +- [ ] SQL queries parameterized +- [ ] Command execution uses safe APIs +- [ ] HTML output escaped to prevent XSS +- [ ] LDAP queries properly escaped +- [ ] XML parsing disables external entities + +### Authentication & Authorization +- [ ] Authentication required for protected resources +- [ ] Authorization checked before operations +- [ ] Session management secure +- [ ] Password handling follows best practices +- [ ] Token expiration implemented + +### Data Protection +- [ ] Sensitive data encrypted at rest +- [ ] Sensitive data encrypted in transit +- [ ] PII handled according to policy +- [ ] Secrets not hardcoded +- [ ] Logs don't contain sensitive data + +### API Security +- [ ] Rate limiting implemented +- [ ] CORS configured correctly +- [ ] CSRF protection in place +- [ ] API keys/tokens secured +- [ ] Endpoints use HTTPS + +--- + +## Performance + +### Efficiency +- [ ] Appropriate data structures used +- [ ] Algorithms have acceptable complexity +- [ ] Database queries are optimized +- [ ] N+1 query problems avoided +- [ ] Indexes used where beneficial + +### Resource Usage +- [ ] Memory usage bounded +- [ ] No memory leaks +- [ ] File handles properly closed +- [ ] Database connections pooled +- [ ] Network calls minimized + +### Caching +- [ ] Appropriate caching 
strategy +- [ ] Cache invalidation handled +- [ ] Cache keys are unique and predictable +- [ ] TTL values appropriate + +### Scalability +- [ ] Horizontal scaling considered +- [ ] Bottlenecks identified +- [ ] Async processing for long operations +- [ ] Batch operations where appropriate + +--- + +## Maintainability + +### Code Quality +- [ ] Functions/methods have single responsibility +- [ ] Classes follow SOLID principles +- [ ] Code is DRY (Don't Repeat Yourself) +- [ ] No dead code or commented-out code +- [ ] Magic numbers replaced with constants + +### Naming +- [ ] Names are descriptive and consistent +- [ ] Naming follows project conventions +- [ ] No abbreviations that obscure meaning +- [ ] Boolean variables/functions have is/has/can prefix + +### Structure +- [ ] Functions are appropriately sized (<50 lines preferred) +- [ ] Nesting depth is reasonable (<4 levels) +- [ ] Related code is grouped together +- [ ] Dependencies are minimal and explicit + +### Readability +- [ ] Code is self-documenting where possible +- [ ] Complex logic has explanatory comments +- [ ] Formatting is consistent +- [ ] No overly clever or obscure code + +--- + +## Testing + +### Coverage +- [ ] New code has unit tests +- [ ] Critical paths have integration tests +- [ ] Edge cases are tested +- [ ] Error conditions are tested + +### Quality +- [ ] Tests are independent +- [ ] Tests have clear assertions +- [ ] Test names describe what is tested +- [ ] Tests don't depend on external state + +### Mocking +- [ ] External dependencies are mocked +- [ ] Mocks are realistic +- [ ] Mock setup is not excessive + +--- + +## Documentation + +### Code Documentation +- [ ] Public APIs are documented +- [ ] Complex algorithms explained +- [ ] Non-obvious decisions documented +- [ ] TODO/FIXME comments have context + +### External Documentation +- [ ] README updated if needed +- [ ] API documentation updated +- [ ] Changelog updated +- [ ] Migration guides provided + +--- + +## 
Language-Specific Checks + +### TypeScript/JavaScript +- [ ] Types are explicit (avoid `any`) +- [ ] Null checks present (`?.`, `??`) +- [ ] Async/await errors handled +- [ ] No floating promises +- [ ] Memory leaks from closures checked + +### Python +- [ ] Type hints used for public APIs +- [ ] Context managers for resources (`with` statements) +- [ ] Exception handling is specific (not bare `except`) +- [ ] No mutable default arguments +- [ ] List comprehensions used appropriately + +### Go +- [ ] Errors checked and handled +- [ ] Goroutine leaks prevented +- [ ] Context propagation correct +- [ ] Defer statements in right order +- [ ] Interfaces minimal + +### Swift +- [ ] Optionals handled safely +- [ ] Memory management correct (weak/unowned) +- [ ] Error handling uses Result or throws +- [ ] Access control appropriate +- [ ] Codable implementation correct + +### Kotlin +- [ ] Null safety leveraged +- [ ] Coroutine cancellation handled +- [ ] Data classes used appropriately +- [ ] Extension functions don't obscure behavior +- [ ] Sealed classes for state + +--- + +## Review Process Tips + +### Before Approving +1. Verify all critical checks passed +2. Confirm tests are adequate +3. Consider deployment impact +4. Check for any security concerns +5. 
Ensure documentation is updated + +### Providing Feedback +- Be specific about issues +- Explain why something is problematic +- Suggest alternatives when possible +- Distinguish blockers from suggestions +- Acknowledge good patterns + +### When to Block +- Security vulnerabilities present +- Critical logic errors +- No tests for risky changes +- Breaking changes without migration +- Significant performance regressions diff --git a/.roo/skills/code-reviewer/references/coding_standards.md b/.roo/skills/code-reviewer/references/coding_standards.md new file mode 100644 index 00000000..9fbc6a06 --- /dev/null +++ b/.roo/skills/code-reviewer/references/coding_standards.md @@ -0,0 +1,555 @@ +# Coding Standards + +Language-specific coding standards and conventions for code review. + +--- + +## Table of Contents + +- [Universal Principles](#universal-principles) +- [TypeScript Standards](#typescript-standards) +- [JavaScript Standards](#javascript-standards) +- [Python Standards](#python-standards) +- [Go Standards](#go-standards) +- [Swift Standards](#swift-standards) +- [Kotlin Standards](#kotlin-standards) + +--- + +## Universal Principles + +These apply across all languages. 
+ +### Naming Conventions + +| Element | Convention | Example | +|---------|------------|---------| +| Variables | camelCase (JS/TS), snake_case (Python/Go) | `userName`, `user_name` | +| Constants | SCREAMING_SNAKE_CASE | `MAX_RETRY_COUNT` | +| Functions | camelCase (JS/TS), snake_case (Python) | `getUserById`, `get_user_by_id` | +| Classes | PascalCase | `UserRepository` | +| Interfaces | PascalCase, optionally prefixed | `IUserService` or `UserService` | +| Private members | Prefix with underscore or use access modifiers | `_internalState` | + +### Function Design + +``` +Good functions: +- Do one thing well +- Have descriptive names (verb + noun) +- Take 3 or fewer parameters +- Return early for error cases +- Stay under 50 lines +``` + +### Error Handling + +``` +Good error handling: +- Catch specific errors, not generic exceptions +- Log with context (what, where, why) +- Clean up resources in error paths +- Don't swallow errors silently +- Provide actionable error messages +``` + +--- + +## TypeScript Standards + +### Type Annotations + +```typescript +// Avoid 'any' - use unknown for truly unknown types +function processData(data: unknown): ProcessedResult { + if (isValidData(data)) { + return transform(data); + } + throw new Error('Invalid data format'); +} + +// Use explicit return types for public APIs +export function calculateTotal(items: CartItem[]): number { + return items.reduce((sum, item) => sum + item.price, 0); +} + +// Use type guards for runtime checks +function isUser(obj: unknown): obj is User { + return ( + typeof obj === 'object' && + obj !== null && + 'id' in obj && + 'email' in obj + ); +} +``` + +### Null Safety + +```typescript +// Use optional chaining and nullish coalescing +const userName = user?.profile?.name ?? 
'Anonymous';
+
+// Be explicit about nullable types
+interface Config {
+  timeout: number;
+  retries?: number; // Optional
+  fallbackUrl: string | null; // Explicitly nullable
+}
+
+// Use assertion functions for validation
+function assertDefined<T>(value: T | null | undefined): asserts value is T {
+  if (value === null || value === undefined) {
+    throw new Error('Value is not defined');
+  }
+}
+```
+
+### Async/Await
+
+```typescript
+// Always handle errors in async functions
+async function fetchUser(id: string): Promise<User> {
+  try {
+    const response = await api.get(`/users/${id}`);
+    return response.data;
+  } catch (error) {
+    logger.error('Failed to fetch user', { id, error });
+    throw new UserFetchError(id, error);
+  }
+}
+
+// Use Promise.all for parallel operations
+async function loadDashboard(userId: string): Promise<Dashboard> {
+  const [profile, stats, notifications] = await Promise.all([
+    fetchProfile(userId),
+    fetchStats(userId),
+    fetchNotifications(userId)
+  ]);
+  return { profile, stats, notifications };
+}
+```
+
+### React/Component Standards
+
+```typescript
+// Use explicit prop types
+interface ButtonProps {
+  label: string;
+  onClick: () => void;
+  variant?: 'primary' | 'secondary';
+  disabled?: boolean;
+}
+
+// Prefer functional components with hooks
+function Button({ label, onClick, variant = 'primary', disabled = false }: ButtonProps) {
+  return (
+    <button className={variant} onClick={onClick} disabled={disabled}>
+      {label}
+    </button>
+  );
+}
+
+// Use custom hooks for reusable logic
+function useDebounce<T>(value: T, delay: number): T {
+  const [debouncedValue, setDebouncedValue] = useState<T>(value);
+
+  useEffect(() => {
+    const timer = setTimeout(() => setDebouncedValue(value), delay);
+    return () => clearTimeout(timer);
+  }, [value, delay]);
+
+  return debouncedValue;
+}
+```
+
+---
+
+## JavaScript Standards
+
+### Variable Declarations
+
+```javascript
+// Use const by default, let when reassignment needed
+const MAX_ITEMS = 100;
+let currentCount = 0;
+
+// Never use var
+// var is function-scoped and hoisted, leading to bugs
+```
+
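The `var` warning above is worth a concrete illustration; a minimal sketch of the classic scoping pitfall (function names are invented for the example):

```javascript
// var is function-scoped: every closure captures the SAME binding,
// which has already advanced to 3 by the time the callbacks run.
function collectWithVar() {
  const callbacks = [];
  for (var i = 0; i < 3; i++) {
    callbacks.push(() => i);
  }
  return callbacks.map(fn => fn()); // [3, 3, 3]
}

// let is block-scoped: each iteration gets a fresh binding.
function collectWithLet() {
  const callbacks = [];
  for (let i = 0; i < 3; i++) {
    callbacks.push(() => i);
  }
  return callbacks.map(fn => fn()); // [0, 1, 2]
}
```

This is the behavior `no-var` style rules guard against; `const`/`let` make the loop behave the way most readers expect.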
+### Object and Array Patterns
+
+```javascript
+// Use object destructuring
+const { name, email, role = 'user' } = user;
+
+// Use spread for immutable updates
+const updatedUser = { ...user, lastLogin: new Date() };
+const updatedList = [...items, newItem];
+
+// Use array methods over loops
+const activeUsers = users.filter(u => u.isActive);
+const emails = users.map(u => u.email);
+const total = orders.reduce((sum, o) => sum + o.amount, 0);
+```
+
+### Module Patterns
+
+```javascript
+// Use named exports for utilities
+export function formatDate(date) { ... }
+export function parseDate(str) { ... }
+
+// Use default export for main component/class
+export default class UserService { ... }
+
+// Group related exports
+export { formatDate, parseDate, isValidDate } from './dateUtils';
+```
+
+---
+
+## Python Standards
+
+### Type Hints (PEP 484)
+
+```python
+from typing import Optional, List, Dict, Union
+
+def get_user(user_id: int) -> Optional[User]:
+    """Fetch user by ID, returns None if not found."""
+    return db.query(User).filter(User.id == user_id).first()
+
+def process_items(items: List[str]) -> Dict[str, int]:
+    """Count occurrences of each item."""
+    return {item: items.count(item) for item in set(items)}
+
+def send_notification(
+    user: User,
+    message: str,
+    *,
+    priority: str = "normal",
+    channels: Optional[List[str]] = None
+) -> bool:
+    """Send notification to user via specified channels."""
+    channels = channels or ["email"]
+    # Implementation
+```
+
+### Exception Handling
+
+```python
+# Catch specific exceptions
+try:
+    result = api_client.fetch_data(endpoint)
+except ConnectionError as e:
+    logger.warning(f"Connection failed: {e}")
+    return cached_data
+except TimeoutError as e:
+    logger.error(f"Request timed out: {e}")
+    raise ServiceUnavailableError() from e
+
+# Use context managers for resources
+with open(filepath, 'r') as f:
+    data = json.load(f)
+
+# Custom exceptions should be informative
+class ValidationError(Exception):
+    def
__init__(self, field: str, message: str): + self.field = field + self.message = message + super().__init__(f"{field}: {message}") +``` + +### Class Design + +```python +from dataclasses import dataclass +from abc import ABC, abstractmethod + +# Use dataclasses for data containers +@dataclass +class UserDTO: + id: int + email: str + name: str + is_active: bool = True + +# Use ABC for interfaces +class Repository(ABC): + @abstractmethod + def find_by_id(self, id: int) -> Optional[Entity]: + pass + + @abstractmethod + def save(self, entity: Entity) -> Entity: + pass + +# Use properties for computed attributes +class Order: + def __init__(self, items: List[OrderItem]): + self._items = items + + @property + def total(self) -> Decimal: + return sum(item.price * item.quantity for item in self._items) +``` + +--- + +## Go Standards + +### Error Handling + +```go +// Always check errors +file, err := os.Open(filename) +if err != nil { + return fmt.Errorf("failed to open %s: %w", filename, err) +} +defer file.Close() + +// Use custom error types for specific cases +type ValidationError struct { + Field string + Message string +} + +func (e *ValidationError) Error() string { + return fmt.Sprintf("%s: %s", e.Field, e.Message) +} + +// Wrap errors with context +if err := db.Query(query); err != nil { + return fmt.Errorf("query failed for user %d: %w", userID, err) +} +``` + +### Struct Design + +```go +// Use unexported fields with exported methods +type UserService struct { + repo UserRepository + cache Cache + logger Logger +} + +// Constructor functions for initialization +func NewUserService(repo UserRepository, cache Cache, logger Logger) *UserService { + return &UserService{ + repo: repo, + cache: cache, + logger: logger, + } +} + +// Keep interfaces small +type Reader interface { + Read(p []byte) (n int, err error) +} + +type Writer interface { + Write(p []byte) (n int, err error) +} +``` + +### Concurrency + +```go +// Use context for cancellation +func fetchData(ctx 
context.Context, url string) ([]byte, error) { + req, err := http.NewRequestWithContext(ctx, "GET", url, nil) + if err != nil { + return nil, err + } + // ... +} + +// Use channels for communication +func worker(jobs <-chan Job, results chan<- Result) { + for job := range jobs { + result := process(job) + results <- result + } +} + +// Use sync.WaitGroup for coordination +var wg sync.WaitGroup +for _, item := range items { + wg.Add(1) + go func(i Item) { + defer wg.Done() + processItem(i) + }(item) +} +wg.Wait() +``` + +--- + +## Swift Standards + +### Optionals + +```swift +// Use optional binding +if let user = fetchUser(id: userId) { + displayProfile(user) +} + +// Use guard for early exit +guard let data = response.data else { + throw NetworkError.noData +} + +// Use nil coalescing for defaults +let displayName = user.nickname ?? user.email + +// Avoid force unwrapping except in tests +// BAD: let name = user.name! +// GOOD: guard let name = user.name else { return } +``` + +### Protocol-Oriented Design + +```swift +// Define protocols with minimal requirements +protocol Identifiable { + var id: String { get } +} + +protocol Persistable: Identifiable { + func save() throws + static func find(by id: String) -> Self? 
+}
+
+// Use protocol extensions for default implementations
+extension Persistable {
+    func save() throws {
+        try Storage.shared.save(self)
+    }
+}
+
+// Prefer composition over inheritance
+struct User: Identifiable, Codable {
+    let id: String
+    var name: String
+    var email: String
+}
+```
+
+### Error Handling
+
+```swift
+// Define domain-specific errors
+enum AuthError: Error {
+    case invalidCredentials
+    case tokenExpired
+    case networkFailure(underlying: Error)
+}
+
+// Use Result type for async operations
+func authenticate(
+    email: String,
+    password: String,
+    completion: @escaping (Result<AuthToken, AuthError>) -> Void
+)
+
+// Use throws for synchronous operations
+func validate(_ input: String) throws -> ValidatedInput {
+    guard !input.isEmpty else {
+        throw ValidationError.emptyInput
+    }
+    return ValidatedInput(value: input)
+}
+```
+
+---
+
+## Kotlin Standards
+
+### Null Safety
+
+```kotlin
+// Use nullable types explicitly
+fun findUser(id: Int): User? {
+    return userRepository.find(id)
+}
+
+// Use safe calls and elvis operator
+val name = user?.profile?.name ?: "Unknown"
+
+// Use let for null checks with side effects
+user?.let { activeUser ->
+    sendWelcomeEmail(activeUser.email)
+    logActivity(activeUser.id)
+}
+
+// Use require/check for validation
+fun processPayment(amount: Double) {
+    require(amount > 0) { "Amount must be positive: $amount" }
+    // Process
+}
+```
+
+### Data Classes and Sealed Classes
+
+```kotlin
+// Use data classes for DTOs
+data class UserDTO(
+    val id: Int,
+    val email: String,
+    val name: String,
+    val isActive: Boolean = true
+)
+
+// Use sealed classes for state
+sealed class Result<out T> {
+    data class Success<T>(val data: T) : Result<T>()
+    data class Error(val message: String, val cause: Throwable? = null) : Result<Nothing>()
+    object Loading : Result<Nothing>()
+}
+
+// Pattern matching with when
+fun handleResult(result: Result<User>) = when (result) {
+    is Result.Success -> showUser(result.data)
+    is Result.Error -> showError(result.message)
+    Result.Loading -> showLoading()
+}
+```
+
+### Coroutines
+
+```kotlin
+// Use structured concurrency
+suspend fun loadDashboard(): Dashboard = coroutineScope {
+    val profile = async { fetchProfile() }
+    val stats = async { fetchStats() }
+    val notifications = async { fetchNotifications() }
+
+    Dashboard(
+        profile = profile.await(),
+        stats = stats.await(),
+        notifications = notifications.await()
+    )
+}
+
+// Handle cancellation
+suspend fun fetchWithRetry(url: String): Response {
+    repeat(3) { attempt ->
+        try {
+            return httpClient.get(url)
+        } catch (e: IOException) {
+            if (attempt == 2) throw e
+            delay(1000L * (attempt + 1))
+        }
+    }
+    throw IllegalStateException("Unreachable")
+}
+```
diff --git a/.roo/skills/code-reviewer/references/common_antipatterns.md b/.roo/skills/code-reviewer/references/common_antipatterns.md
new file mode 100644
index 00000000..26045452
--- /dev/null
+++ b/.roo/skills/code-reviewer/references/common_antipatterns.md
@@ -0,0 +1,739 @@
+# Common Antipatterns
+
+Code antipatterns to identify during review, with examples and fixes.
+
+---
+
+## Table of Contents
+
+- [Structural Antipatterns](#structural-antipatterns)
+- [Logic Antipatterns](#logic-antipatterns)
+- [Security Antipatterns](#security-antipatterns)
+- [Performance Antipatterns](#performance-antipatterns)
+- [Testing Antipatterns](#testing-antipatterns)
+- [Async Antipatterns](#async-antipatterns)
+
+---
+
+## Structural Antipatterns
+
+### God Class
+
+A class that does too much and knows too much.
+
+```typescript
+// BAD: God class handling everything
+class UserManager {
+  createUser(data: UserData) { ... }
+  updateUser(id: string, data: UserData) { ... }
+  deleteUser(id: string) { ... }
+  sendEmail(userId: string, content: string) { ...
}
+  generateReport(userId: string) { ... }
+  validatePassword(password: string) { ... }
+  hashPassword(password: string) { ... }
+  uploadAvatar(userId: string, file: File) { ... }
+  resizeImage(file: File) { ... }
+  logActivity(userId: string, action: string) { ... }
+  // 50 more methods...
+}
+
+// GOOD: Single responsibility classes
+class UserRepository {
+  create(data: UserData): User { ... }
+  update(id: string, data: Partial<UserData>): User { ... }
+  delete(id: string): void { ... }
+}
+
+class EmailService {
+  send(to: string, content: string): void { ... }
+}
+
+class PasswordService {
+  validate(password: string): ValidationResult { ... }
+  hash(password: string): string { ... }
+}
+```
+
+**Detection:** Class has >20 methods, >500 lines, or handles unrelated concerns.
+
+---
+
+### Long Method
+
+Functions that do too much and are hard to understand.
+
+```python
+# BAD: Long method doing everything
+def process_order(order_data):
+    # Validate order (20 lines)
+    if not order_data.get('items'):
+        raise ValueError('No items')
+    if not order_data.get('customer_id'):
+        raise ValueError('No customer')
+    # ... more validation
+
+    # Calculate totals (30 lines)
+    subtotal = 0
+    for item in order_data['items']:
+        price = get_product_price(item['product_id'])
+        subtotal += price * item['quantity']
+    # ... tax calculation, discounts
+
+    # Process payment (40 lines)
+    payment_result = payment_gateway.charge(...)
+    # ... handle payment errors
+
+    # Create order record (20 lines)
+    order = Order.create(...)
+
+    # Send notifications (20 lines)
+    send_order_confirmation(...)
+    notify_warehouse(...)
+ + return order + +# GOOD: Composed of focused functions +def process_order(order_data): + validate_order(order_data) + totals = calculate_order_totals(order_data) + payment = process_payment(order_data['customer_id'], totals) + order = create_order_record(order_data, totals, payment) + send_order_notifications(order) + return order +``` + +**Detection:** Function >50 lines or requires scrolling to read. + +--- + +### Deep Nesting + +Excessive indentation making code hard to follow. + +```javascript +// BAD: Deep nesting +function processData(data) { + if (data) { + if (data.items) { + if (data.items.length > 0) { + for (const item of data.items) { + if (item.isValid) { + if (item.type === 'premium') { + if (item.price > 100) { + // Finally do something + processItem(item); + } + } + } + } + } + } + } +} + +// GOOD: Early returns and guard clauses +function processData(data) { + if (!data?.items?.length) { + return; + } + + const premiumItems = data.items.filter( + item => item.isValid && item.type === 'premium' && item.price > 100 + ); + + premiumItems.forEach(processItem); +} +``` + +**Detection:** Indentation >4 levels deep. + +--- + +### Magic Numbers and Strings + +Hard-coded values without explanation. + +```go +// BAD: Magic numbers +func calculateDiscount(total float64, userType int) float64 { + if userType == 1 { + return total * 0.15 + } else if userType == 2 { + return total * 0.25 + } + return total * 0.05 +} + +// GOOD: Named constants +const ( + UserTypeRegular = 1 + UserTypePremium = 2 + + DiscountRegular = 0.05 + DiscountStandard = 0.15 + DiscountPremium = 0.25 +) + +func calculateDiscount(total float64, userType int) float64 { + switch userType { + case UserTypePremium: + return total * DiscountPremium + case UserTypeRegular: + return total * DiscountStandard + default: + return total * DiscountRegular + } +} +``` + +**Detection:** Literal numbers (except 0, 1) or repeated string literals. 
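Heuristics like the detection note above can be automated. A minimal, illustrative scanner (not the bundled checker's actual implementation) that flags numeric literals other than the allowed 0 and 1:

```python
import re

# Matches standalone numeric literals, excluding identifiers like v2
# and attribute access like obj.5 (lookarounds reject word chars and dots).
NUMBER = re.compile(r"(?<![\w.])(\d+(?:\.\d+)?)(?![\w.])")

def find_magic_numbers(line: str) -> list:
    """Return numeric literals in a source line, ignoring the allowed 0 and 1."""
    return [n for n in NUMBER.findall(line) if n not in ("0", "1")]

print(find_magic_numbers("if userType == 2: return total * 0.25"))  # ['2', '0.25']
print(find_magic_numbers("count = 1"))  # []
```

A real checker would also skip string literals and comments; this sketch only shows the core heuristic.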
+
+---
+
+### Primitive Obsession
+
+Using primitives instead of small objects.
+
+```typescript
+// BAD: Primitives everywhere
+function createUser(
+  name: string,
+  email: string,
+  phone: string,
+  street: string,
+  city: string,
+  zipCode: string,
+  country: string
+): User { ... }
+
+// GOOD: Value objects
+interface Address {
+  street: string;
+  city: string;
+  zipCode: string;
+  country: string;
+}
+
+interface ContactInfo {
+  email: string;
+  phone: string;
+}
+
+function createUser(
+  name: string,
+  contact: ContactInfo,
+  address: Address
+): User { ... }
+```
+
+**Detection:** Functions with >4 parameters of same type, or related primitives always passed together.
+
+---
+
+## Logic Antipatterns
+
+### Boolean Blindness
+
+Passing booleans that make code unreadable at call sites.
+
+```swift
+// BAD: What do these booleans mean?
+user.configure(true, false, true, false)
+
+// GOOD: Named parameters or option objects
+user.configure(
+  sendWelcomeEmail: true,
+  requireVerification: false,
+  enableNotifications: true,
+  isAdmin: false
+)
+
+// Or use an options struct
+struct UserConfiguration {
+  var sendWelcomeEmail: Bool = true
+  var requireVerification: Bool = false
+  var enableNotifications: Bool = true
+  var isAdmin: Bool = false
+}
+
+user.configure(UserConfiguration())
+```
+
+**Detection:** Function calls with multiple boolean literals.
+
+---
+
+### Null Returns for Collections
+
+Returning null instead of empty collections.
+
+```kotlin
+// BAD: Returning null
+fun findUsersByRole(role: String): List<User>? {
+    val users = repository.findByRole(role)
+    return if (users.isEmpty()) null else users
+}
+
+// Caller must handle null
+val users = findUsersByRole("admin")
+if (users != null) {
+    users.forEach { ... }
+}
+
+// GOOD: Return empty collection
+fun findUsersByRole(role: String): List<User> {
+    return repository.findByRole(role)
+}
+
+// Caller can iterate directly
+findUsersByRole("admin").forEach { ...
} +``` + +**Detection:** Functions returning nullable collections. + +--- + +### Stringly Typed Code + +Using strings where enums or types should be used. + +```python +# BAD: String-based logic +def handle_event(event_type: str, data: dict): + if event_type == "user_created": + handle_user_created(data) + elif event_type == "user_updated": + handle_user_updated(data) + elif event_type == "user_dleted": # Typo won't be caught + handle_user_deleted(data) + +# GOOD: Enum-based +from enum import Enum + +class EventType(Enum): + USER_CREATED = "user_created" + USER_UPDATED = "user_updated" + USER_DELETED = "user_deleted" + +def handle_event(event_type: EventType, data: dict): + handlers = { + EventType.USER_CREATED: handle_user_created, + EventType.USER_UPDATED: handle_user_updated, + EventType.USER_DELETED: handle_user_deleted, + } + handlers[event_type](data) +``` + +**Detection:** String comparisons for type/status/category values. + +--- + +## Security Antipatterns + +### SQL Injection + +String concatenation in SQL queries. + +```javascript +// BAD: String concatenation +const query = `SELECT * FROM users WHERE id = ${userId}`; +db.query(query); + +// BAD: String templates still vulnerable +const query = `SELECT * FROM users WHERE name = '${userName}'`; + +// GOOD: Parameterized queries +const query = 'SELECT * FROM users WHERE id = $1'; +db.query(query, [userId]); + +// GOOD: Using ORM safely +User.findOne({ where: { id: userId } }); +``` + +**Detection:** String concatenation or template literals with SQL keywords. + +--- + +### Hardcoded Credentials + +Secrets in source code. 
+ +```python +# BAD: Hardcoded secrets +API_KEY = "sk-abc123xyz789" +DATABASE_URL = "postgresql://admin:password123@prod-db.internal:5432/app" + +# GOOD: Environment variables +import os + +API_KEY = os.environ["API_KEY"] +DATABASE_URL = os.environ["DATABASE_URL"] + +# GOOD: Secrets manager +from aws_secretsmanager import get_secret + +API_KEY = get_secret("api-key") +``` + +**Detection:** Variables named `password`, `secret`, `key`, `token` with string literals. + +--- + +### Unsafe Deserialization + +Deserializing untrusted data without validation. + +```python +# BAD: Binary serialization from untrusted source can execute arbitrary code +# Examples: Python's binary serialization, yaml.load without SafeLoader + +# GOOD: Use safe alternatives +import json + +def load_data(file_path): + with open(file_path, 'r') as f: + return json.load(f) + +# GOOD: Use SafeLoader for YAML +import yaml + +with open('config.yaml') as f: + config = yaml.safe_load(f) +``` + +**Detection:** Binary deserialization functions, yaml.load without safe loader, dynamic code execution on external data. + +--- + +### Missing Input Validation + +Trusting user input without validation. + +```typescript +// BAD: No validation +app.post('/user', (req, res) => { + const user = db.create({ + name: req.body.name, + email: req.body.email, + role: req.body.role // User can set themselves as admin! + }); + res.json(user); +}); + +// GOOD: Validate and sanitize +import { z } from 'zod'; + +const CreateUserSchema = z.object({ + name: z.string().min(1).max(100), + email: z.string().email(), + // role is NOT accepted from input +}); + +app.post('/user', (req, res) => { + const validated = CreateUserSchema.parse(req.body); + const user = db.create({ + ...validated, + role: 'user' // Default role, not from input + }); + res.json(user); +}); +``` + +**Detection:** Request body/params used directly without validation schema. 
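The same rule — validate shape, and never accept privileged fields from the client — holds even without a schema library. A dependency-free sketch (handler shape and type names are illustrative, not a real framework API):

```typescript
interface CreateUserInput {
  name: string;
  email: string;
}

// Hypothetical validator: returns validated data or a list of field errors.
function validateCreateUser(body: unknown): { data?: CreateUserInput; errors: string[] } {
  if (typeof body !== 'object' || body === null) {
    return { errors: ['body must be an object'] };
  }
  const b = body as Record<string, unknown>;
  const errors: string[] = [];
  if (typeof b.name !== 'string' || b.name.length < 1 || b.name.length > 100) {
    errors.push('name must be 1-100 characters');
  }
  if (typeof b.email !== 'string' || !b.email.includes('@')) {
    errors.push('email must be a valid address');
  }
  if (errors.length > 0) {
    return { errors };
  }
  return { data: { name: b.name as string, email: b.email as string }, errors: [] };
}

// role is assigned server-side; anything the client sent for it is discarded.
function createUser(body: unknown): { status: number; user?: CreateUserInput & { role: string } } {
  const result = validateCreateUser(body);
  if (!result.data) {
    return { status: 400 };
  }
  return { status: 201, user: { ...result.data, role: 'user' } };
}
```

A schema library replaces the hand-written checks, but the privileged-field rule stays the same either way.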
+
+---
+
+## Performance Antipatterns
+
+### N+1 Query Problem
+
+Loading related data one record at a time.
+
+```python
+# BAD: N+1 queries
+def get_orders_with_items():
+    orders = Order.query.all()  # 1 query
+    for order in orders:
+        items = OrderItem.query.filter_by(order_id=order.id).all()  # N queries
+        order.items = items
+    return orders
+
+# GOOD: Eager loading
+def get_orders_with_items():
+    return Order.query.options(
+        joinedload(Order.items)
+    ).all()  # 1 query with JOIN
+
+# GOOD: Batch loading
+def get_orders_with_items():
+    orders = Order.query.all()
+    order_ids = [o.id for o in orders]
+    items = OrderItem.query.filter(
+        OrderItem.order_id.in_(order_ids)
+    ).all()  # 2 queries total
+    # Group items by order_id...
+```
+
+**Detection:** Database queries inside loops.
+
+---
+
+### Unbounded Collections
+
+Loading unlimited data into memory.
+
+```go
+// BAD: Load all records
+func GetAllUsers() ([]User, error) {
+    var users []User
+    err := db.Find(&users).Error // Could be millions
+    return users, err
+}
+
+// GOOD: Pagination
+func GetUsers(page, pageSize int) ([]User, error) {
+    var users []User
+    offset := (page - 1) * pageSize
+    err := db.Limit(pageSize).Offset(offset).Find(&users).Error
+    return users, err
+}
+
+// GOOD: Streaming for large datasets
+func ProcessAllUsers(handler func(User) error) error {
+    rows, err := db.Model(&User{}).Rows()
+    if err != nil {
+        return err
+    }
+    defer rows.Close()
+
+    for rows.Next() {
+        var user User
+        db.ScanRows(rows, &user)
+        if err := handler(user); err != nil {
+            return err
+        }
+    }
+    return nil
+}
+```
+
+**Detection:** `findAll()`, `find({})`, or queries without `LIMIT`.
+
+---
+
+### Synchronous I/O in Hot Paths
+
+Blocking operations in request handlers.
+ +```javascript +// BAD: Sync file read on every request +app.get('/config', (req, res) => { + const config = fs.readFileSync('./config.json'); // Blocks event loop + res.json(JSON.parse(config)); +}); + +// GOOD: Load once at startup +const config = JSON.parse(fs.readFileSync('./config.json')); + +app.get('/config', (req, res) => { + res.json(config); +}); + +// GOOD: Async with caching +let configCache = null; + +app.get('/config', async (req, res) => { + if (!configCache) { + configCache = JSON.parse(await fs.promises.readFile('./config.json')); + } + res.json(configCache); +}); +``` + +**Detection:** `readFileSync`, `execSync`, or blocking calls in request handlers. + +--- + +## Testing Antipatterns + +### Test Code Duplication + +Repeating setup in every test. + +```typescript +// BAD: Duplicate setup +describe('UserService', () => { + it('should create user', async () => { + const db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + const service = new UserService(userRepo, emailService); + + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); + + it('should update user', async () => { + const db = await createTestDatabase(); // Duplicated + const userRepo = new UserRepository(db); // Duplicated + const emailService = new MockEmailService(); // Duplicated + const service = new UserService(userRepo, emailService); // Duplicated + + // ... 
+ }); +}); + +// GOOD: Shared setup +describe('UserService', () => { + let service: UserService; + let db: TestDatabase; + + beforeEach(async () => { + db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + service = new UserService(userRepo, emailService); + }); + + afterEach(async () => { + await db.cleanup(); + }); + + it('should create user', async () => { + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); +}); +``` + +--- + +### Testing Implementation Instead of Behavior + +Tests coupled to internal implementation. + +```python +# BAD: Testing implementation details +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing internal structure + assert cart._items[0].name == "Apple" + assert cart._total == 1.00 + +# GOOD: Testing behavior +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing public behavior + assert cart.item_count == 1 + assert cart.total == 1.00 + assert cart.contains("Apple") +``` + +--- + +## Async Antipatterns + +### Floating Promises + +Promises without await or catch. + +```typescript +// BAD: Floating promise +async function saveUser(user: User) { + db.save(user); // Not awaited, errors lost + logger.info('User saved'); // Logs before save completes +} + +// BAD: Fire and forget in loop +for (const item of items) { + processItem(item); // All run in parallel, no error handling +} + +// GOOD: Await the promise +async function saveUser(user: User) { + await db.save(user); + logger.info('User saved'); +} + +// GOOD: Process with proper handling +await Promise.all(items.map(item => processItem(item))); + +// Or sequentially +for (const item of items) { + await processItem(item); +} +``` + +**Detection:** Async function calls without `await` or `.then()`. + +--- + +### Callback Hell + +Deeply nested callbacks. 
+
+```javascript
+// BAD: Callback hell
+getUser(userId, (err, user) => {
+  if (err) return handleError(err);
+  getOrders(user.id, (err, orders) => {
+    if (err) return handleError(err);
+    getProducts(orders[0].productIds, (err, products) => {
+      if (err) return handleError(err);
+      renderPage(user, orders, products, (err) => {
+        if (err) return handleError(err);
+        console.log('Done');
+      });
+    });
+  });
+});
+
+// GOOD: Async/await
+async function loadPage(userId) {
+  try {
+    const user = await getUser(userId);
+    const orders = await getOrders(user.id);
+    const products = await getProducts(orders[0].productIds);
+    await renderPage(user, orders, products);
+    console.log('Done');
+  } catch (err) {
+    handleError(err);
+  }
+}
+```
+
+**Detection:** >2 levels of callback nesting.
+
+---
+
+### Async in Constructor
+
+Async operations in constructors.
+
+```typescript
+// BAD: Async in constructor
+class DatabaseConnection {
+  constructor(url: string) {
+    this.connect(url); // Fire-and-forget async
+  }
+
+  private async connect(url: string) {
+    this.client = await createClient(url);
+  }
+}
+
+// GOOD: Factory method
+class DatabaseConnection {
+  private constructor(private client: Client) {}
+
+  static async create(url: string): Promise<DatabaseConnection> {
+    const client = await createClient(url);
+    return new DatabaseConnection(client);
+  }
+}
+
+// Usage
+const db = await DatabaseConnection.create(url);
+```
+
+**Detection:** `async` calls or `.then()` in constructor.
diff --git a/.roo/skills/code-reviewer/scripts/code_quality_checker.py b/.roo/skills/code-reviewer/scripts/code_quality_checker.py
new file mode 100644
index 00000000..5aaa3d9c
--- /dev/null
+++ b/.roo/skills/code-reviewer/scripts/code_quality_checker.py
@@ -0,0 +1,560 @@
+#!/usr/bin/env python3
+"""
+Code Quality Checker
+
+Analyzes source code for quality issues, code smells, complexity metrics,
+and SOLID principle violations.
+ +Usage: + python .roo/skills/code-reviewer/scripts/code_quality_checker.py /path/to/file.py + python .roo/skills/code-reviewer/scripts/code_quality_checker.py /path/to/directory --recursive + python .roo/skills/code-reviewer/scripts/code_quality_checker.py . --language typescript --json +""" + +import argparse +import json +import re +import sys +from pathlib import Path +from typing import Dict, List, Optional + + +# Language-specific file extensions +LANGUAGE_EXTENSIONS = { + "python": [".py"], + "typescript": [".ts", ".tsx"], + "javascript": [".js", ".jsx", ".mjs"], + "go": [".go"], + "swift": [".swift"], + "kotlin": [".kt", ".kts"] +} + +# Code smell thresholds +THRESHOLDS = { + "long_function_lines": 50, + "too_many_parameters": 5, + "high_complexity": 10, + "god_class_methods": 20, + "max_imports": 15 +} + + +def get_file_extension(filepath: Path) -> str: + """Get file extension.""" + return filepath.suffix.lower() + + +def detect_language(filepath: Path) -> Optional[str]: + """Detect programming language from file extension.""" + ext = get_file_extension(filepath) + for lang, extensions in LANGUAGE_EXTENSIONS.items(): + if ext in extensions: + return lang + return None + + +def read_file_content(filepath: Path) -> str: + """Read file content safely.""" + try: + with open(filepath, "r", encoding="utf-8", errors="ignore") as f: + return f.read() + except Exception: + return "" + + +def calculate_cyclomatic_complexity(content: str) -> int: + """ + Estimate cyclomatic complexity based on control flow keywords. 
+ """ + complexity = 1 # Base complexity + + # Control flow patterns that increase complexity + patterns = [ + r"\bif\b", + r"\belif\b", + r"\belse\b", + r"\bfor\b", + r"\bwhile\b", + r"\bcase\b", + r"\bcatch\b", + r"\bexcept\b", + r"\band\b", + r"\bor\b", + r"\|\|", + r"&&" + ] + + for pattern in patterns: + matches = re.findall(pattern, content, re.IGNORECASE) + complexity += len(matches) + + return complexity + + +def count_lines(content: str) -> Dict[str, int]: + """Count different types of lines in code.""" + lines = content.split("\n") + total = len(lines) + blank = sum(1 for line in lines if not line.strip()) + comment = 0 + + for line in lines: + stripped = line.strip() + if stripped.startswith("#") or stripped.startswith("//"): + comment += 1 + elif stripped.startswith("/*") or stripped.startswith("'''") or stripped.startswith('"""'): + comment += 1 + + code = total - blank - comment + + return { + "total": total, + "code": code, + "blank": blank, + "comment": comment + } + + +def find_functions(content: str, language: str) -> List[Dict]: + """Find function definitions and their metrics.""" + functions = [] + + # Language-specific function patterns + patterns = { + "python": r"def\s+(\w+)\s*\(([^)]*)\)", + "typescript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "javascript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "go": r"func\s+(?:\([^)]+\)\s+)?(\w+)\s*\(([^)]*)\)", + "swift": r"func\s+(\w+)\s*\(([^)]*)\)", + "kotlin": r"fun\s+(\w+)\s*\(([^)]*)\)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content, re.MULTILINE) + + for match in matches: + name = next((g for g in match.groups() if g), "anonymous") + params_str = match.group(2) if len(match.groups()) > 1 and match.group(2) else "" + + # Count parameters + params = [p.strip() for p in params_str.split(",") if p.strip()] + param_count = len(params) + + # Estimate 
function length + start_pos = match.end() + remaining = content[start_pos:] + + next_func = re.search(pattern, remaining) + if next_func: + func_body = remaining[:next_func.start()] + else: + func_body = remaining[:min(2000, len(remaining))] + + line_count = len(func_body.split("\n")) + complexity = calculate_cyclomatic_complexity(func_body) + + functions.append({ + "name": name, + "parameters": param_count, + "lines": line_count, + "complexity": complexity + }) + + return functions + + +def find_classes(content: str, language: str) -> List[Dict]: + """Find class definitions and their metrics.""" + classes = [] + + patterns = { + "python": r"class\s+(\w+)", + "typescript": r"class\s+(\w+)", + "javascript": r"class\s+(\w+)", + "go": r"type\s+(\w+)\s+struct", + "swift": r"class\s+(\w+)", + "kotlin": r"class\s+(\w+)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content) + + for match in matches: + name = match.group(1) + + start_pos = match.end() + remaining = content[start_pos:] + + next_class = re.search(pattern, remaining) + if next_class: + class_body = remaining[:next_class.start()] + else: + class_body = remaining + + # Count methods + method_patterns = { + "python": r"def\s+\w+\s*\(", + "typescript": r"(?:public|private|protected)?\s*\w+\s*\([^)]*\)\s*[:{]", + "javascript": r"\w+\s*\([^)]*\)\s*\{", + "go": r"func\s+\(", + "swift": r"func\s+\w+", + "kotlin": r"fun\s+\w+" + } + method_pattern = method_patterns.get(language, method_patterns["python"]) + methods = len(re.findall(method_pattern, class_body)) + + classes.append({ + "name": name, + "methods": methods, + "lines": len(class_body.split("\n")) + }) + + return classes + + +def check_code_smells(content: str, functions: List[Dict], classes: List[Dict]) -> List[Dict]: + """Check for code smells in the content.""" + smells = [] + + # Long functions + for func in functions: + if func["lines"] > THRESHOLDS["long_function_lines"]: + smells.append({ + "type": 
"long_function", + "severity": "medium", + "message": f"Function '{func['name']}' has {func['lines']} lines (max: {THRESHOLDS['long_function_lines']})", + "location": func["name"] + }) + + # Too many parameters + for func in functions: + if func["parameters"] > THRESHOLDS["too_many_parameters"]: + smells.append({ + "type": "too_many_parameters", + "severity": "low", + "message": f"Function '{func['name']}' has {func['parameters']} parameters (max: {THRESHOLDS['too_many_parameters']})", + "location": func["name"] + }) + + # High complexity + for func in functions: + if func["complexity"] > THRESHOLDS["high_complexity"]: + severity = "high" if func["complexity"] > 20 else "medium" + smells.append({ + "type": "high_complexity", + "severity": severity, + "message": f"Function '{func['name']}' has complexity {func['complexity']} (max: {THRESHOLDS['high_complexity']})", + "location": func["name"] + }) + + # God classes + for cls in classes: + if cls["methods"] > THRESHOLDS["god_class_methods"]: + smells.append({ + "type": "god_class", + "severity": "high", + "message": f"Class '{cls['name']}' has {cls['methods']} methods (max: {THRESHOLDS['god_class_methods']})", + "location": cls["name"] + }) + + # Magic numbers + magic_pattern = r"\b(? 
List[Dict]: + """Check for potential SOLID principle violations.""" + violations = [] + + # OCP: Type checking instead of polymorphism + type_checks = len(re.findall(r"isinstance\(|type\(.*\)\s*==|typeof\s+\w+\s*===", content)) + if type_checks > 2: + violations.append({ + "principle": "OCP", + "name": "Open/Closed Principle", + "severity": "medium", + "message": f"Found {type_checks} type checks - consider using polymorphism" + }) + + # LSP/ISP: NotImplementedError + not_impl = len(re.findall(r"raise\s+NotImplementedError|not\s+implemented", content, re.IGNORECASE)) + if not_impl: + violations.append({ + "principle": "LSP/ISP", + "name": "Liskov/Interface Segregation", + "severity": "low", + "message": f"Found {not_impl} unimplemented methods - may indicate oversized interface" + }) + + # DIP: Too many direct imports + imports = len(re.findall(r"^(?:import|from)\s+", content, re.MULTILINE)) + if imports > THRESHOLDS["max_imports"]: + violations.append({ + "principle": "DIP", + "name": "Dependency Inversion Principle", + "severity": "low", + "message": f"File has {imports} imports - consider dependency injection" + }) + + return violations + + +def calculate_quality_score( + line_metrics: Dict, + functions: List[Dict], + classes: List[Dict], + smells: List[Dict], + violations: List[Dict] +) -> int: + """Calculate overall quality score (0-100).""" + score = 100 + + # Deduct for code smells + for smell in smells: + if smell["severity"] == "high": + score -= 10 + elif smell["severity"] == "medium": + score -= 5 + elif smell["severity"] == "low": + score -= 2 + + # Deduct for SOLID violations + for violation in violations: + if violation["severity"] == "high": + score -= 8 + elif violation["severity"] == "medium": + score -= 4 + elif violation["severity"] == "low": + score -= 2 + + # Bonus for good comment ratio (10-30%) + if line_metrics["total"] > 0: + comment_ratio = line_metrics["comment"] / line_metrics["total"] + if 0.1 <= comment_ratio <= 0.3: + score += 5 + + # 
Bonus for reasonable function sizes + if functions: + avg_lines = sum(f["lines"] for f in functions) / len(functions) + if avg_lines < 30: + score += 5 + + return max(0, min(100, score)) + + +def get_grade(score: int) -> str: + """Convert score to letter grade.""" + if score >= 90: + return "A" + elif score >= 80: + return "B" + elif score >= 70: + return "C" + elif score >= 60: + return "D" + else: + return "F" + + +def analyze_file(filepath: Path) -> Dict: + """Analyze a single file for code quality.""" + language = detect_language(filepath) + if not language: + return {"error": f"Unsupported file type: {filepath.suffix}"} + + content = read_file_content(filepath) + if not content: + return {"error": f"Could not read file: {filepath}"} + + line_metrics = count_lines(content) + functions = find_functions(content, language) + classes = find_classes(content, language) + smells = check_code_smells(content, functions, classes) + violations = check_solid_violations(content) + score = calculate_quality_score(line_metrics, functions, classes, smells, violations) + + return { + "file": str(filepath), + "language": language, + "metrics": { + "lines": line_metrics, + "functions": len(functions), + "classes": len(classes), + "avg_complexity": round(sum(f["complexity"] for f in functions) / max(1, len(functions)), 1) + }, + "quality_score": score, + "grade": get_grade(score), + "smells": smells, + "solid_violations": violations, + "function_details": functions[:10], + "class_details": classes[:10] + } + + +def analyze_directory( + dir_path: Path, + recursive: bool = True, + language: Optional[str] = None +) -> Dict: + """Analyze all files in a directory.""" + results = [] + extensions = [] + + if language: + extensions = LANGUAGE_EXTENSIONS.get(language, []) + else: + for exts in LANGUAGE_EXTENSIONS.values(): + extensions.extend(exts) + + pattern = "**/*" if recursive else "*" + + for ext in extensions: + for filepath in dir_path.glob(f"{pattern}{ext}"): + if "node_modules" 
in str(filepath) or ".git" in str(filepath): + continue + result = analyze_file(filepath) + if "error" not in result: + results.append(result) + + if not results: + return {"error": "No supported files found"} + + total_score = sum(r["quality_score"] for r in results) + avg_score = total_score / len(results) + total_smells = sum(len(r["smells"]) for r in results) + total_violations = sum(len(r["solid_violations"]) for r in results) + + return { + "directory": str(dir_path), + "files_analyzed": len(results), + "average_score": round(avg_score, 1), + "overall_grade": get_grade(int(avg_score)), + "total_code_smells": total_smells, + "total_solid_violations": total_violations, + "files": sorted(results, key=lambda x: x["quality_score"]) + } + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if "error" in analysis: + print(f"Error: {analysis['error']}") + return + + print("=" * 60) + print("CODE QUALITY REPORT") + print("=" * 60) + + if "file" in analysis: + print(f"\nFile: {analysis['file']}") + print(f"Language: {analysis['language']}") + print(f"Quality Score: {analysis['quality_score']}/100 ({analysis['grade']})") + + metrics = analysis["metrics"] + print(f"\nLines: {metrics['lines']['total']} ({metrics['lines']['code']} code, {metrics['lines']['comment']} comments)") + print(f"Functions: {metrics['functions']}") + print(f"Classes: {metrics['classes']}") + print(f"Avg Complexity: {metrics['avg_complexity']}") + + if analysis["smells"]: + print("\n--- CODE SMELLS ---") + for smell in analysis["smells"][:10]: + print(f" [{smell['severity'].upper()}] {smell['message']} ({smell['location']})") + + if analysis["solid_violations"]: + print("\n--- SOLID VIOLATIONS ---") + for v in analysis["solid_violations"]: + print(f" [{v['principle']}] {v['message']}") + else: + print(f"\nDirectory: {analysis['directory']}") + print(f"Files Analyzed: {analysis['files_analyzed']}") + print(f"Average Score: {analysis['average_score']}/100 
({analysis['overall_grade']})") + print(f"Total Code Smells: {analysis['total_code_smells']}") + print(f"Total SOLID Violations: {analysis['total_solid_violations']}") + + print("\n--- FILES BY QUALITY ---") + for f in analysis["files"][:10]: + print(f" {f['quality_score']:3d}/100 [{f['grade']}] {f['file']}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze code quality, smells, and SOLID violations" + ) + parser.add_argument( + "path", + help="File or directory to analyze" + ) + parser.add_argument( + "--recursive", "-r", + action="store_true", + default=True, + help="Recursively analyze directories (default: true)" + ) + parser.add_argument( + "--language", "-l", + choices=list(LANGUAGE_EXTENSIONS.keys()), + help="Filter by programming language" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + target = Path(args.path).resolve() + + if not target.exists(): + print(f"Error: Path does not exist: {target}", file=sys.stderr) + sys.exit(1) + + if target.is_file(): + analysis = analyze_file(target) + else: + analysis = analyze_directory(target, args.recursive, args.language) + + if args.json: + output = json.dumps(analysis, indent=2, default=str) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.roo/skills/code-reviewer/scripts/pr_analyzer.py b/.roo/skills/code-reviewer/scripts/pr_analyzer.py new file mode 100644 index 00000000..15a5de7c --- /dev/null +++ b/.roo/skills/code-reviewer/scripts/pr_analyzer.py @@ -0,0 +1,495 @@ +#!/usr/bin/env python3 +""" +PR Analyzer + +Analyzes pull request changes for review complexity, risk assessment, +and generates review priorities. 
+ +Usage: + python .roo/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo + python .roo/skills/code-reviewer/scripts/pr_analyzer.py . --base main --head feature-branch + python .roo/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo --json +""" + +import argparse +import json +import os +import re +import subprocess +import sys +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# File categories for review prioritization +FILE_CATEGORIES = { + "critical": { + "patterns": [ + r"auth", r"security", r"password", r"token", r"secret", + r"payment", r"billing", r"crypto", r"encrypt" + ], + "weight": 5, + "description": "Security-sensitive files requiring careful review" + }, + "high": { + "patterns": [ + r"api", r"database", r"migration", r"schema", r"model", + r"config", r"env", r"middleware" + ], + "weight": 4, + "description": "Core infrastructure files" + }, + "medium": { + "patterns": [ + r"service", r"controller", r"handler", r"util", r"helper" + ], + "weight": 3, + "description": "Business logic files" + }, + "low": { + "patterns": [ + r"test", r"spec", r"mock", r"fixture", r"story", + r"readme", r"docs", r"\.md$" + ], + "weight": 1, + "description": "Tests and documentation" + } +} + +# Risky patterns to flag +RISK_PATTERNS = [ + { + "name": "hardcoded_secrets", + "pattern": r"(password|secret|api_key|token)\s*[=:]\s*['\"][^'\"]+['\"]", + "severity": "critical", + "message": "Potential hardcoded secret detected" + }, + { + "name": "todo_fixme", + "pattern": r"(TODO|FIXME|HACK|XXX):", + "severity": "low", + "message": "TODO/FIXME comment found" + }, + { + "name": "console_log", + "pattern": r"console\.(log|debug|info|warn|error)\(", + "severity": "medium", + "message": "Console statement found (remove for production)" + }, + { + "name": "debugger", + "pattern": r"\bdebugger\b", + "severity": "high", + "message": "Debugger statement found" + }, + { + "name": "disable_eslint", + "pattern": r"eslint-disable", + "severity": 
"medium", + "message": "ESLint rule disabled" + }, + { + "name": "any_type", + "pattern": r":\s*any\b", + "severity": "medium", + "message": "TypeScript 'any' type used" + }, + { + "name": "sql_concatenation", + "pattern": r"(SELECT|INSERT|UPDATE|DELETE).*\+.*['\"]", + "severity": "critical", + "message": "Potential SQL injection (string concatenation in query)" + } +] + + +def run_git_command(cmd: List[str], cwd: Path) -> Tuple[bool, str]: + """Run a git command and return success status and output.""" + try: + result = subprocess.run( + cmd, + cwd=cwd, + capture_output=True, + text=True, + timeout=30 + ) + return result.returncode == 0, result.stdout.strip() + except subprocess.TimeoutExpired: + return False, "Command timed out" + except Exception as e: + return False, str(e) + + +def get_changed_files(repo_path: Path, base: str, head: str) -> List[Dict]: + """Get list of changed files between two refs.""" + success, output = run_git_command( + ["git", "diff", "--name-status", f"{base}...{head}"], + repo_path + ) + + if not success: + # Try without the triple dot (for uncommitted changes) + success, output = run_git_command( + ["git", "diff", "--name-status", base, head], + repo_path + ) + + if not success or not output: + # Fall back to staged changes + success, output = run_git_command( + ["git", "diff", "--name-status", "--cached"], + repo_path + ) + + files = [] + for line in output.split("\n"): + if not line.strip(): + continue + parts = line.split("\t") + if len(parts) >= 2: + status = parts[0][0] # First character of status + filepath = parts[-1] # Handle renames (R100\told\tnew) + status_map = { + "A": "added", + "M": "modified", + "D": "deleted", + "R": "renamed", + "C": "copied" + } + files.append({ + "path": filepath, + "status": status_map.get(status, "modified") + }) + + return files + + +def get_file_diff(repo_path: Path, filepath: str, base: str, head: str) -> str: + """Get diff content for a specific file.""" + success, output = run_git_command( + 
["git", "diff", f"{base}...{head}", "--", filepath], + repo_path + ) + if not success: + success, output = run_git_command( + ["git", "diff", "--cached", "--", filepath], + repo_path + ) + return output if success else "" + + +def categorize_file(filepath: str) -> Tuple[str, int]: + """Categorize a file based on its path and name.""" + filepath_lower = filepath.lower() + + for category, info in FILE_CATEGORIES.items(): + for pattern in info["patterns"]: + if re.search(pattern, filepath_lower): + return category, info["weight"] + + return "medium", 2 # Default category + + +def analyze_diff_for_risks(diff_content: str, filepath: str) -> List[Dict]: + """Analyze diff content for risky patterns.""" + risks = [] + + # Only analyze added lines (starting with +) + added_lines = [ + line[1:] for line in diff_content.split("\n") + if line.startswith("+") and not line.startswith("+++") + ] + + content = "\n".join(added_lines) + + for risk in RISK_PATTERNS: + matches = re.findall(risk["pattern"], content, re.IGNORECASE) + if matches: + risks.append({ + "name": risk["name"], + "severity": risk["severity"], + "message": risk["message"], + "file": filepath, + "count": len(matches) + }) + + return risks + + +def count_changes(diff_content: str) -> Dict[str, int]: + """Count additions and deletions in diff.""" + additions = 0 + deletions = 0 + + for line in diff_content.split("\n"): + if line.startswith("+") and not line.startswith("+++"): + additions += 1 + elif line.startswith("-") and not line.startswith("---"): + deletions += 1 + + return {"additions": additions, "deletions": deletions} + + +def calculate_complexity_score(files: List[Dict], all_risks: List[Dict]) -> int: + """Calculate overall PR complexity score (1-10).""" + score = 0 + + # File count contribution (max 3 points) + file_count = len(files) + if file_count > 20: + score += 3 + elif file_count > 10: + score += 2 + elif file_count > 5: + score += 1 + + # Total changes contribution (max 3 points) + total_changes = 
sum(f.get("additions", 0) + f.get("deletions", 0) for f in files) + if total_changes > 500: + score += 3 + elif total_changes > 200: + score += 2 + elif total_changes > 50: + score += 1 + + # Risk severity contribution (max 4 points) + critical_risks = sum(1 for r in all_risks if r["severity"] == "critical") + high_risks = sum(1 for r in all_risks if r["severity"] == "high") + + score += min(2, critical_risks) + score += min(2, high_risks) + + return min(10, max(1, score)) + + +def analyze_commit_messages(repo_path: Path, base: str, head: str) -> Dict: + """Analyze commit messages in the PR.""" + success, output = run_git_command( + ["git", "log", "--oneline", f"{base}...{head}"], + repo_path + ) + + if not success or not output: + return {"commits": 0, "issues": []} + + commits = output.strip().split("\n") + issues = [] + + for commit in commits: + if len(commit) < 10: + continue + + # Check for conventional commit format + message = commit[8:] if len(commit) > 8 else commit # Skip hash + + if not re.match(r"^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\(.+\))?:", message): + issues.append({ + "commit": commit[:7], + "issue": "Does not follow conventional commit format" + }) + + if len(message) > 72: + issues.append({ + "commit": commit[:7], + "issue": "Commit message exceeds 72 characters" + }) + + return { + "commits": len(commits), + "issues": issues + } + + +def analyze_pr( + repo_path: Path, + base: str = "main", + head: str = "HEAD" +) -> Dict: + """Perform complete PR analysis.""" + # Get changed files + changed_files = get_changed_files(repo_path, base, head) + + if not changed_files: + return { + "status": "no_changes", + "message": "No changes detected between branches" + } + + # Analyze each file + all_risks = [] + file_analyses = [] + + for file_info in changed_files: + filepath = file_info["path"] + category, weight = categorize_file(filepath) + + # Get diff for the file + diff = get_file_diff(repo_path, filepath, base, head) + 
changes = count_changes(diff) + risks = analyze_diff_for_risks(diff, filepath) + + all_risks.extend(risks) + + file_analyses.append({ + "path": filepath, + "status": file_info["status"], + "category": category, + "priority_weight": weight, + "additions": changes["additions"], + "deletions": changes["deletions"], + "risks": risks + }) + + # Sort by priority (highest first) + file_analyses.sort(key=lambda x: (-x["priority_weight"], x["path"])) + + # Analyze commits + commit_analysis = analyze_commit_messages(repo_path, base, head) + + # Calculate metrics + complexity = calculate_complexity_score(file_analyses, all_risks) + + total_additions = sum(f["additions"] for f in file_analyses) + total_deletions = sum(f["deletions"] for f in file_analyses) + + return { + "status": "analyzed", + "summary": { + "files_changed": len(file_analyses), + "total_additions": total_additions, + "total_deletions": total_deletions, + "complexity_score": complexity, + "complexity_label": get_complexity_label(complexity), + "commits": commit_analysis["commits"] + }, + "risks": { + "critical": [r for r in all_risks if r["severity"] == "critical"], + "high": [r for r in all_risks if r["severity"] == "high"], + "medium": [r for r in all_risks if r["severity"] == "medium"], + "low": [r for r in all_risks if r["severity"] == "low"] + }, + "files": file_analyses, + "commit_issues": commit_analysis["issues"], + "review_order": [f["path"] for f in file_analyses[:10]] # Top 10 priority files + } + + +def get_complexity_label(score: int) -> str: + """Get human-readable complexity label.""" + if score <= 2: + return "Simple" + elif score <= 4: + return "Moderate" + elif score <= 6: + return "Complex" + elif score <= 8: + return "Very Complex" + else: + return "Critical" + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if analysis["status"] == "no_changes": + print("No changes detected.") + return + + summary = analysis["summary"] + risks = 
analysis["risks"] + + print("=" * 60) + print("PR ANALYSIS REPORT") + print("=" * 60) + + print(f"\nComplexity: {summary['complexity_score']}/10 ({summary['complexity_label']})") + print(f"Files Changed: {summary['files_changed']}") + print(f"Lines: +{summary['total_additions']} / -{summary['total_deletions']}") + print(f"Commits: {summary['commits']}") + + # Risk summary + print("\n--- RISK SUMMARY ---") + print(f"Critical: {len(risks['critical'])}") + print(f"High: {len(risks['high'])}") + print(f"Medium: {len(risks['medium'])}") + print(f"Low: {len(risks['low'])}") + + # Critical and high risks details + if risks["critical"]: + print("\n--- CRITICAL RISKS ---") + for risk in risks["critical"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + if risks["high"]: + print("\n--- HIGH RISKS ---") + for risk in risks["high"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + # Commit message issues + if analysis["commit_issues"]: + print("\n--- COMMIT MESSAGE ISSUES ---") + for issue in analysis["commit_issues"][:5]: + print(f" {issue['commit']}: {issue['issue']}") + + # Review order + print("\n--- SUGGESTED REVIEW ORDER ---") + for i, filepath in enumerate(analysis["review_order"], 1): + file_info = next(f for f in analysis["files"] if f["path"] == filepath) + print(f" {i}. 
[{file_info['category'].upper()}] {filepath}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze pull request for review complexity and risks" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to git repository (default: current directory)" + ) + parser.add_argument( + "--base", "-b", + default="main", + help="Base branch for comparison (default: main)" + ) + parser.add_argument( + "--head", + default="HEAD", + help="Head branch/commit for comparison (default: HEAD)" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + + if not (repo_path / ".git").exists(): + print(f"Error: {repo_path} is not a git repository", file=sys.stderr) + sys.exit(1) + + analysis = analyze_pr(repo_path, args.base, args.head) + + if args.json: + output = json.dumps(analysis, indent=2) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.roo/skills/code-reviewer/scripts/review_report_generator.py b/.roo/skills/code-reviewer/scripts/review_report_generator.py new file mode 100644 index 00000000..3625aee4 --- /dev/null +++ b/.roo/skills/code-reviewer/scripts/review_report_generator.py @@ -0,0 +1,505 @@ +#!/usr/bin/env python3 +""" +Review Report Generator + +Generates comprehensive code review reports by combining PR analysis +and code quality findings into structured, actionable reports. + +Usage: + python .roo/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo + python .roo/skills/code-reviewer/scripts/review_report_generator.py . 
--pr-analysis pr_results.json --quality-analysis quality_results.json
+    python .roo/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo --format markdown --output review.md
+"""
+
+import argparse
+import json
+import os
+import subprocess
+import sys
+from datetime import datetime
+from pathlib import Path
+from typing import Dict, List, Optional, Tuple
+
+
+# Severity weights for prioritization
+SEVERITY_WEIGHTS = {
+    "critical": 100,
+    "high": 75,
+    "medium": 50,
+    "low": 25,
+    "info": 10
+}
+
+# Review verdict thresholds
+VERDICT_THRESHOLDS = {
+    "approve": {"max_critical": 0, "max_high": 0, "max_score": 100},
+    "approve_with_suggestions": {"max_critical": 0, "max_high": 2, "max_score": 85},
+    "request_changes": {"max_critical": 0, "max_high": 5, "max_score": 70},
+    "block": {"max_critical": float("inf"), "max_high": float("inf"), "max_score": 0}
+}
+
+
+def load_json_file(filepath: str) -> Optional[Dict]:
+    """Load JSON file if it exists."""
+    try:
+        with open(filepath, "r") as f:
+            return json.load(f)
+    except (FileNotFoundError, json.JSONDecodeError):
+        return None
+
+
+def run_pr_analyzer(repo_path: Path) -> Dict:
+    """Run the sibling pr_analyzer.py script and return its results."""
+    script_path = Path(__file__).parent / "pr_analyzer.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "pr_analyzer.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=120
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def run_quality_checker(repo_path: Path) -> Dict:
+    """Run the sibling code_quality_checker.py script and return its results."""
+    script_path = Path(__file__).parent / "code_quality_checker.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "code_quality_checker.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=300
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def calculate_review_score(pr_analysis: Dict, quality_analysis: Dict) -> int:
+    """Calculate overall review score (0-100)."""
+    score = 100
+
+    # Deduct for PR risks
+    if "risks" in pr_analysis:
+        risks = pr_analysis["risks"]
+        score -= len(risks.get("critical", [])) * 15
+        score -= len(risks.get("high", [])) * 10
+        score -= len(risks.get("medium", [])) * 5
+        score -= len(risks.get("low", [])) * 2
+
+    # Deduct for code quality issues
+    if "issues" in quality_analysis:
+        issues = quality_analysis["issues"]
+        score -= len([i for i in issues if i.get("severity") == "critical"]) * 12
+        score -= len([i for i in issues if i.get("severity") == "high"]) * 8
+        score -= len([i for i in issues if i.get("severity") == "medium"]) * 4
+        score -= len([i for i in issues if i.get("severity") == "low"]) * 1
+
+    # Deduct for complexity
+    if "summary" in pr_analysis:
+        complexity = pr_analysis["summary"].get("complexity_score", 0)
+        if complexity > 7:
+            score -= 10
+        elif complexity > 5:
+            score -= 5
+
+    return max(0, min(100, score))
+
+
+def determine_verdict(score: int, critical_count: int, high_count: int) -> Tuple[str, str]:
+    """Determine review verdict based on score and issue counts."""
+    if critical_count > 0:
+        return "block", "Critical issues must be resolved before merge"
+
+    if score >= 90 and high_count == 0:
+        return "approve", "Code meets quality standards"
+
+    if score >= 75 and high_count <= 2:
+        return 
"approve_with_suggestions", "Minor improvements recommended" + + if score >= 50: + return "request_changes", "Several issues need to be addressed" + + return "block", "Significant issues prevent approval" + + +def generate_findings_list(pr_analysis: Dict, quality_analysis: Dict) -> List[Dict]: + """Combine and prioritize all findings.""" + findings = [] + + # Add PR risk findings + if "risks" in pr_analysis: + for severity, items in pr_analysis["risks"].items(): + for item in items: + findings.append({ + "source": "pr_analysis", + "severity": severity, + "category": item.get("name", "unknown"), + "message": item.get("message", ""), + "file": item.get("file", ""), + "count": item.get("count", 1) + }) + + # Add code quality findings + if "issues" in quality_analysis: + for issue in quality_analysis["issues"]: + findings.append({ + "source": "quality_analysis", + "severity": issue.get("severity", "medium"), + "category": issue.get("type", "unknown"), + "message": issue.get("message", ""), + "file": issue.get("file", ""), + "line": issue.get("line", 0) + }) + + # Sort by severity weight + findings.sort( + key=lambda x: -SEVERITY_WEIGHTS.get(x["severity"], 0) + ) + + return findings + + +def generate_action_items(findings: List[Dict]) -> List[Dict]: + """Generate prioritized action items from findings.""" + action_items = [] + seen_categories = set() + + for finding in findings: + category = finding["category"] + severity = finding["severity"] + + # Group similar issues + if category in seen_categories and severity not in ["critical", "high"]: + continue + + action = { + "priority": "P0" if severity == "critical" else "P1" if severity == "high" else "P2", + "action": get_action_for_category(category, finding), + "severity": severity, + "files_affected": [finding["file"]] if finding.get("file") else [] + } + action_items.append(action) + seen_categories.add(category) + + return action_items[:15] # Top 15 actions + + +def get_action_for_category(category: str, finding: 
Dict) -> str: + """Get actionable recommendation for issue category.""" + actions = { + "hardcoded_secrets": "Remove hardcoded credentials and use environment variables or a secrets manager", + "sql_concatenation": "Use parameterized queries to prevent SQL injection", + "debugger": "Remove debugger statements before merging", + "console_log": "Remove or replace console statements with proper logging", + "todo_fixme": "Address TODO/FIXME comments or create tracking issues", + "disable_eslint": "Address the underlying issue instead of disabling lint rules", + "any_type": "Replace 'any' types with proper type definitions", + "long_function": "Break down function into smaller, focused units", + "god_class": "Split class into smaller, single-responsibility classes", + "too_many_params": "Use parameter objects or builder pattern", + "deep_nesting": "Refactor using early returns, guard clauses, or extraction", + "high_complexity": "Reduce cyclomatic complexity through refactoring", + "missing_error_handling": "Add proper error handling and recovery logic", + "duplicate_code": "Extract duplicate code into shared functions", + "magic_numbers": "Replace magic numbers with named constants", + "large_file": "Consider splitting into multiple smaller modules" + } + return actions.get(category, f"Review and address: {finding.get('message', category)}") + + +def format_markdown_report(report: Dict) -> str: + """Generate markdown-formatted report.""" + lines = [] + + # Header + lines.append("# Code Review Report") + lines.append("") + lines.append(f"**Generated:** {report['metadata']['generated_at']}") + lines.append(f"**Repository:** {report['metadata']['repository']}") + lines.append("") + + # Executive Summary + lines.append("## Executive Summary") + lines.append("") + summary = report["summary"] + verdict = summary["verdict"] + verdict_emoji = { + "approve": "✅", + "approve_with_suggestions": "✅", + "request_changes": "⚠️", + "block": "❌" + }.get(verdict, "❓") + + 
lines.append(f"**Verdict:** {verdict_emoji} {verdict.upper().replace('_', ' ')}") + lines.append(f"**Score:** {summary['score']}/100") + lines.append(f"**Rationale:** {summary['rationale']}") + lines.append("") + + # Issue Counts + lines.append("### Issue Summary") + lines.append("") + lines.append("| Severity | Count |") + lines.append("|----------|-------|") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f"| {severity.capitalize()} | {count} |") + lines.append("") + + # PR Statistics (if available) + if "pr_summary" in report: + pr = report["pr_summary"] + lines.append("### Change Statistics") + lines.append("") + lines.append(f"- **Files Changed:** {pr.get('files_changed', 'N/A')}") + lines.append(f"- **Lines Added:** +{pr.get('total_additions', 0)}") + lines.append(f"- **Lines Removed:** -{pr.get('total_deletions', 0)}") + lines.append(f"- **Complexity:** {pr.get('complexity_label', 'N/A')}") + lines.append("") + + # Action Items + if report.get("action_items"): + lines.append("## Action Items") + lines.append("") + for i, item in enumerate(report["action_items"], 1): + priority = item["priority"] + emoji = "🔴" if priority == "P0" else "🟠" if priority == "P1" else "🟡" + lines.append(f"{i}. 
{emoji} **[{priority}]** {item['action']}") + if item.get("files_affected"): + lines.append(f" - Files: {', '.join(item['files_affected'][:3])}") + lines.append("") + + # Critical Findings + critical_findings = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical_findings: + lines.append("## Critical Issues (Must Fix)") + lines.append("") + for finding in critical_findings: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # High Priority Findings + high_findings = [f for f in report.get("findings", []) if f["severity"] == "high"] + if high_findings: + lines.append("## High Priority Issues") + lines.append("") + for finding in high_findings[:10]: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # Review Order (if available) + if "review_order" in report: + lines.append("## Suggested Review Order") + lines.append("") + for i, filepath in enumerate(report["review_order"][:10], 1): + lines.append(f"{i}. 
`{filepath}`") + lines.append("") + + # Footer + lines.append("---") + lines.append("*Generated by Code Reviewer*") + + return "\n".join(lines) + + +def format_text_report(report: Dict) -> str: + """Generate plain text report.""" + lines = [] + + lines.append("=" * 60) + lines.append("CODE REVIEW REPORT") + lines.append("=" * 60) + lines.append("") + lines.append(f"Generated: {report['metadata']['generated_at']}") + lines.append(f"Repository: {report['metadata']['repository']}") + lines.append("") + + summary = report["summary"] + verdict = summary["verdict"].upper().replace("_", " ") + lines.append(f"VERDICT: {verdict}") + lines.append(f"SCORE: {summary['score']}/100") + lines.append(f"RATIONALE: {summary['rationale']}") + lines.append("") + + lines.append("--- ISSUE SUMMARY ---") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f" {severity.capitalize()}: {count}") + lines.append("") + + if report.get("action_items"): + lines.append("--- ACTION ITEMS ---") + for i, item in enumerate(report["action_items"][:10], 1): + lines.append(f" {i}. 
[{item['priority']}] {item['action']}") + lines.append("") + + critical = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical: + lines.append("--- CRITICAL ISSUES ---") + for f in critical: + lines.append(f" [{f.get('file', 'unknown')}] {f['message']}") + lines.append("") + + lines.append("=" * 60) + + return "\n".join(lines) + + +def generate_report( + repo_path: Path, + pr_analysis: Optional[Dict] = None, + quality_analysis: Optional[Dict] = None +) -> Dict: + """Generate comprehensive review report.""" + # Run analyses if not provided + if pr_analysis is None: + pr_analysis = run_pr_analyzer(repo_path) + + if quality_analysis is None: + quality_analysis = run_quality_checker(repo_path) + + # Generate findings + findings = generate_findings_list(pr_analysis, quality_analysis) + + # Count issues by severity + issue_counts = { + "critical": len([f for f in findings if f["severity"] == "critical"]), + "high": len([f for f in findings if f["severity"] == "high"]), + "medium": len([f for f in findings if f["severity"] == "medium"]), + "low": len([f for f in findings if f["severity"] == "low"]) + } + + # Calculate score and verdict + score = calculate_review_score(pr_analysis, quality_analysis) + verdict, rationale = determine_verdict( + score, + issue_counts["critical"], + issue_counts["high"] + ) + + # Generate action items + action_items = generate_action_items(findings) + + # Build report + report = { + "metadata": { + "generated_at": datetime.now().isoformat(), + "repository": str(repo_path), + "version": "1.0.0" + }, + "summary": { + "score": score, + "verdict": verdict, + "rationale": rationale, + "issue_counts": issue_counts + }, + "findings": findings, + "action_items": action_items + } + + # Add PR summary if available + if pr_analysis.get("status") == "analyzed": + report["pr_summary"] = pr_analysis.get("summary", {}) + report["review_order"] = pr_analysis.get("review_order", []) + + # Add quality summary if available + if 
quality_analysis.get("status") == "analyzed":
+        report["quality_summary"] = quality_analysis.get("summary", {})
+
+    return report
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Generate comprehensive code review reports"
+    )
+    parser.add_argument(
+        "repo_path",
+        nargs="?",
+        default=".",
+        help="Path to repository (default: current directory)"
+    )
+    parser.add_argument(
+        "--pr-analysis",
+        help="Path to pre-computed PR analysis JSON"
+    )
+    parser.add_argument(
+        "--quality-analysis",
+        help="Path to pre-computed quality analysis JSON"
+    )
+    parser.add_argument(
+        "--format", "-f",
+        choices=["text", "markdown", "json"],
+        default="text",
+        help="Output format (default: text)"
+    )
+    parser.add_argument(
+        "--output", "-o",
+        help="Write output to file"
+    )
+    parser.add_argument(
+        "--json",
+        action="store_true",
+        help="Output as JSON (shortcut for --format json)"
+    )
+
+    args = parser.parse_args()
+
+    repo_path = Path(args.repo_path).resolve()
+    if not repo_path.exists():
+        print(f"Error: Path does not exist: {repo_path}", file=sys.stderr)
+        sys.exit(1)
+
+    # Load pre-computed analyses if provided
+    pr_analysis = None
+    quality_analysis = None
+
+    # Warnings go to stderr so they cannot corrupt machine-readable stdout output
+    if args.pr_analysis:
+        pr_analysis = load_json_file(args.pr_analysis)
+        if not pr_analysis:
+            print(f"Warning: Could not load PR analysis from {args.pr_analysis}", file=sys.stderr)
+
+    if args.quality_analysis:
+        quality_analysis = load_json_file(args.quality_analysis)
+        if not quality_analysis:
+            print(f"Warning: Could not load quality analysis from {args.quality_analysis}", file=sys.stderr)
+
+    # Generate report
+    report = generate_report(repo_path, pr_analysis, quality_analysis)
+
+    # Format output
+    output_format = "json" if args.json else args.format
+
+    if output_format == "json":
+        output = json.dumps(report, indent=2)
+    elif output_format == "markdown":
+        output = format_markdown_report(report)
+    else:
+        output = format_text_report(report)
+
+    # Write or print output
+    if args.output:
+        with open(args.output, "w") as f:
+            
f.write(output) + print(f"Report written to {args.output}") + else: + print(output) + + +if __name__ == "__main__": + main() diff --git a/.windsurf/skills/code-reviewer/SKILL.md b/.windsurf/skills/code-reviewer/SKILL.md new file mode 100644 index 00000000..5ef36178 --- /dev/null +++ b/.windsurf/skills/code-reviewer/SKILL.md @@ -0,0 +1,177 @@ +--- +name: code-reviewer +description: Code review automation for TypeScript, JavaScript, Python, Go, Swift, Kotlin. Analyzes PRs for complexity and risk, checks code quality for SOLID violations and code smells, generates review reports. Use when reviewing pull requests, analyzing code quality, identifying issues, generating review checklists. +--- + +# Code Reviewer + +Automated code review tools for analyzing pull requests, detecting code quality issues, and generating review reports. + +--- + +## Table of Contents + +- [Tools](#tools) + - [PR Analyzer](#pr-analyzer) + - [Code Quality Checker](#code-quality-checker) + - [Review Report Generator](#review-report-generator) +- [Reference Guides](#reference-guides) +- [Languages Supported](#languages-supported) + +--- + +## Tools + +### PR Analyzer + +Analyzes git diff between branches to assess review complexity and identify risks. + +```bash +# Analyze current branch against main +python scripts/pr_analyzer.py /path/to/repo + +# Compare specific branches +python scripts/pr_analyzer.py . 
--base main --head feature-branch + +# JSON output for integration +python scripts/pr_analyzer.py /path/to/repo --json +``` + +**What it detects:** +- Hardcoded secrets (passwords, API keys, tokens) +- SQL injection patterns (string concatenation in queries) +- Debug statements (debugger, console.log) +- ESLint rule disabling +- TypeScript `any` types +- TODO/FIXME comments + +**Output includes:** +- Complexity score (1-10) +- Risk categorization (critical, high, medium, low) +- File prioritization for review order +- Commit message validation + +--- + +### Code Quality Checker + +Analyzes source code for structural issues, code smells, and SOLID violations. + +```bash +# Analyze a directory +python scripts/code_quality_checker.py /path/to/code + +# Analyze specific language +python scripts/code_quality_checker.py . --language python + +# JSON output +python scripts/code_quality_checker.py /path/to/code --json +``` + +**What it detects:** +- Long functions (>50 lines) +- Large files (>500 lines) +- God classes (>20 methods) +- Deep nesting (>4 levels) +- Too many parameters (>5) +- High cyclomatic complexity +- Missing error handling +- Unused imports +- Magic numbers + +**Thresholds:** + +| Issue | Threshold | +|-------|-----------| +| Long function | >50 lines | +| Large file | >500 lines | +| God class | >20 methods | +| Too many params | >5 | +| Deep nesting | >4 levels | +| High complexity | >10 branches | + +--- + +### Review Report Generator + +Combines PR analysis and code quality findings into structured review reports. + +```bash +# Generate report for current repo +python scripts/review_report_generator.py /path/to/repo + +# Markdown output +python scripts/review_report_generator.py . --format markdown --output review.md + +# Use pre-computed analyses +python scripts/review_report_generator.py . 
\ + --pr-analysis pr_results.json \ + --quality-analysis quality_results.json +``` + +**Report includes:** +- Review verdict (approve, request changes, block) +- Score (0-100) +- Prioritized action items +- Issue summary by severity +- Suggested review order + +**Verdicts:** + +| Score | Verdict | +|-------|---------| +| 90+ with no high issues | Approve | +| 75+ with ≤2 high issues | Approve with suggestions | +| 50-74 | Request changes | +| <50 or critical issues | Block | + +--- + +## Reference Guides + +### Code Review Checklist +`.windsurf/skills/code-reviewer/references/code_review_checklist.md` + +Systematic checklists covering: +- Pre-review checks (build, tests, PR hygiene) +- Correctness (logic, data handling, error handling) +- Security (input validation, injection prevention) +- Performance (efficiency, caching, scalability) +- Maintainability (code quality, naming, structure) +- Testing (coverage, quality, mocking) +- Language-specific checks + +### Coding Standards +`.windsurf/skills/code-reviewer/references/coding_standards.md` + +Language-specific standards for: +- TypeScript (type annotations, null safety, async/await) +- JavaScript (declarations, patterns, modules) +- Python (type hints, exceptions, class design) +- Go (error handling, structs, concurrency) +- Swift (optionals, protocols, errors) +- Kotlin (null safety, data classes, coroutines) + +### Common Antipatterns +`.windsurf/skills/code-reviewer/references/common_antipatterns.md` + +Antipattern catalog with examples and fixes: +- Structural (god class, long method, deep nesting) +- Logic (boolean blindness, stringly typed code) +- Security (SQL injection, hardcoded credentials) +- Performance (N+1 queries, unbounded collections) +- Testing (duplication, testing implementation) +- Async (floating promises, callback hell) + +--- + +## Languages Supported + +| Language | Extensions | +|----------|------------| +| Python | `.py` | +| TypeScript | `.ts`, `.tsx` | +| JavaScript | `.js`, `.jsx`, 
`.mjs` | +| Go | `.go` | +| Swift | `.swift` | +| Kotlin | `.kt`, `.kts` | \ No newline at end of file diff --git a/.windsurf/skills/code-reviewer/references/code_review_checklist.md b/.windsurf/skills/code-reviewer/references/code_review_checklist.md new file mode 100644 index 00000000..b7bd0867 --- /dev/null +++ b/.windsurf/skills/code-reviewer/references/code_review_checklist.md @@ -0,0 +1,270 @@ +# Code Review Checklist + +Structured checklists for systematic code review across different aspects. + +--- + +## Table of Contents + +- [Pre-Review Checks](#pre-review-checks) +- [Correctness](#correctness) +- [Security](#security) +- [Performance](#performance) +- [Maintainability](#maintainability) +- [Testing](#testing) +- [Documentation](#documentation) +- [Language-Specific Checks](#language-specific-checks) + +--- + +## Pre-Review Checks + +Before diving into code, verify these basics: + +### Build and Tests +- [ ] Code compiles without errors +- [ ] All existing tests pass +- [ ] New tests are included for new functionality +- [ ] No unintended files included (build artifacts, IDE configs) + +### PR Hygiene +- [ ] PR has clear title and description +- [ ] Changes are scoped appropriately (not too large) +- [ ] Commits follow conventional commit format +- [ ] Branch is up to date with base branch + +### Scope Verification +- [ ] Changes match the stated purpose +- [ ] No unrelated changes bundled in +- [ ] Breaking changes are documented +- [ ] Migration path provided if needed + +--- + +## Correctness + +### Logic +- [ ] Algorithm implements requirements correctly +- [ ] Edge cases handled (null, empty, boundary values) +- [ ] Off-by-one errors checked +- [ ] Correct operators used (== vs ===, & vs &&) +- [ ] Loop termination conditions correct +- [ ] Recursion has proper base cases + +### Data Handling +- [ ] Data types appropriate for the use case +- [ ] Numeric overflow/underflow considered +- [ ] Date/time handling accounts for timezones +- [ ] Unicode and 
internationalization handled +- [ ] Data validation at entry points + +### State Management +- [ ] State transitions are valid +- [ ] Race conditions addressed +- [ ] Concurrent access handled correctly +- [ ] State cleanup on errors/exit + +### Error Handling +- [ ] Errors caught at appropriate levels +- [ ] Error messages are actionable +- [ ] Errors don't expose sensitive information +- [ ] Recovery or graceful degradation implemented +- [ ] Resources cleaned up in error paths + +--- + +## Security + +### Input Validation +- [ ] All user input validated and sanitized +- [ ] Input length limits enforced +- [ ] File uploads validated (type, size, content) +- [ ] URL parameters validated + +### Injection Prevention +- [ ] SQL queries parameterized +- [ ] Command execution uses safe APIs +- [ ] HTML output escaped to prevent XSS +- [ ] LDAP queries properly escaped +- [ ] XML parsing disables external entities + +### Authentication & Authorization +- [ ] Authentication required for protected resources +- [ ] Authorization checked before operations +- [ ] Session management secure +- [ ] Password handling follows best practices +- [ ] Token expiration implemented + +### Data Protection +- [ ] Sensitive data encrypted at rest +- [ ] Sensitive data encrypted in transit +- [ ] PII handled according to policy +- [ ] Secrets not hardcoded +- [ ] Logs don't contain sensitive data + +### API Security +- [ ] Rate limiting implemented +- [ ] CORS configured correctly +- [ ] CSRF protection in place +- [ ] API keys/tokens secured +- [ ] Endpoints use HTTPS + +--- + +## Performance + +### Efficiency +- [ ] Appropriate data structures used +- [ ] Algorithms have acceptable complexity +- [ ] Database queries are optimized +- [ ] N+1 query problems avoided +- [ ] Indexes used where beneficial + +### Resource Usage +- [ ] Memory usage bounded +- [ ] No memory leaks +- [ ] File handles properly closed +- [ ] Database connections pooled +- [ ] Network calls minimized + +### Caching 
+- [ ] Appropriate caching strategy +- [ ] Cache invalidation handled +- [ ] Cache keys are unique and predictable +- [ ] TTL values appropriate + +### Scalability +- [ ] Horizontal scaling considered +- [ ] Bottlenecks identified +- [ ] Async processing for long operations +- [ ] Batch operations where appropriate + +--- + +## Maintainability + +### Code Quality +- [ ] Functions/methods have single responsibility +- [ ] Classes follow SOLID principles +- [ ] Code is DRY (Don't Repeat Yourself) +- [ ] No dead code or commented-out code +- [ ] Magic numbers replaced with constants + +### Naming +- [ ] Names are descriptive and consistent +- [ ] Naming follows project conventions +- [ ] No abbreviations that obscure meaning +- [ ] Boolean variables/functions have is/has/can prefix + +### Structure +- [ ] Functions are appropriately sized (<50 lines preferred) +- [ ] Nesting depth is reasonable (<4 levels) +- [ ] Related code is grouped together +- [ ] Dependencies are minimal and explicit + +### Readability +- [ ] Code is self-documenting where possible +- [ ] Complex logic has explanatory comments +- [ ] Formatting is consistent +- [ ] No overly clever or obscure code + +--- + +## Testing + +### Coverage +- [ ] New code has unit tests +- [ ] Critical paths have integration tests +- [ ] Edge cases are tested +- [ ] Error conditions are tested + +### Quality +- [ ] Tests are independent +- [ ] Tests have clear assertions +- [ ] Test names describe what is tested +- [ ] Tests don't depend on external state + +### Mocking +- [ ] External dependencies are mocked +- [ ] Mocks are realistic +- [ ] Mock setup is not excessive + +--- + +## Documentation + +### Code Documentation +- [ ] Public APIs are documented +- [ ] Complex algorithms explained +- [ ] Non-obvious decisions documented +- [ ] TODO/FIXME comments have context + +### External Documentation +- [ ] README updated if needed +- [ ] API documentation updated +- [ ] Changelog updated +- [ ] Migration guides 
provided + +--- + +## Language-Specific Checks + +### TypeScript/JavaScript +- [ ] Types are explicit (avoid `any`) +- [ ] Null checks present (`?.`, `??`) +- [ ] Async/await errors handled +- [ ] No floating promises +- [ ] Memory leaks from closures checked + +### Python +- [ ] Type hints used for public APIs +- [ ] Context managers for resources (`with` statements) +- [ ] Exception handling is specific (not bare `except`) +- [ ] No mutable default arguments +- [ ] List comprehensions used appropriately + +### Go +- [ ] Errors checked and handled +- [ ] Goroutine leaks prevented +- [ ] Context propagation correct +- [ ] Defer statements in right order +- [ ] Interfaces minimal + +### Swift +- [ ] Optionals handled safely +- [ ] Memory management correct (weak/unowned) +- [ ] Error handling uses Result or throws +- [ ] Access control appropriate +- [ ] Codable implementation correct + +### Kotlin +- [ ] Null safety leveraged +- [ ] Coroutine cancellation handled +- [ ] Data classes used appropriately +- [ ] Extension functions don't obscure behavior +- [ ] Sealed classes for state + +--- + +## Review Process Tips + +### Before Approving +1. Verify all critical checks passed +2. Confirm tests are adequate +3. Consider deployment impact +4. Check for any security concerns +5. 
Ensure documentation is updated + +### Providing Feedback +- Be specific about issues +- Explain why something is problematic +- Suggest alternatives when possible +- Distinguish blockers from suggestions +- Acknowledge good patterns + +### When to Block +- Security vulnerabilities present +- Critical logic errors +- No tests for risky changes +- Breaking changes without migration +- Significant performance regressions diff --git a/.windsurf/skills/code-reviewer/references/coding_standards.md b/.windsurf/skills/code-reviewer/references/coding_standards.md new file mode 100644 index 00000000..9fbc6a06 --- /dev/null +++ b/.windsurf/skills/code-reviewer/references/coding_standards.md @@ -0,0 +1,555 @@ +# Coding Standards + +Language-specific coding standards and conventions for code review. + +--- + +## Table of Contents + +- [Universal Principles](#universal-principles) +- [TypeScript Standards](#typescript-standards) +- [JavaScript Standards](#javascript-standards) +- [Python Standards](#python-standards) +- [Go Standards](#go-standards) +- [Swift Standards](#swift-standards) +- [Kotlin Standards](#kotlin-standards) + +--- + +## Universal Principles + +These apply across all languages. 
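Several of these universal principles (descriptive naming, constants instead of magic numbers, early returns, small single-purpose functions) show up together in one short sketch. JavaScript is used here, and every name is illustrative rather than part of the skill's scripts:

```javascript
// A constant instead of a magic number, named in SCREAMING_SNAKE_CASE.
const MAX_RETRY_COUNT = 3;

// A boolean-returning helper with an "is" prefix; it does one thing.
function isValidOrder(order) {
  return Boolean(order) && Array.isArray(order.items) && order.items.length > 0;
}

// Verb + noun name, one responsibility, early return for the error case
// so the happy path stays unnested.
function calculateOrderTotal(order) {
  if (!isValidOrder(order)) {
    throw new Error('calculateOrderTotal: order must contain at least one item');
  }
  return order.items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

const order = { items: [{ price: 10, quantity: 2 }, { price: 5, quantity: 1 }] };
console.log(calculateOrderTotal(order)); // 25
```

The same shape carries directly into the language-specific sections that follow.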
+ +### Naming Conventions + +| Element | Convention | Example | +|---------|------------|---------| +| Variables | camelCase (JS/TS), snake_case (Python/Go) | `userName`, `user_name` | +| Constants | SCREAMING_SNAKE_CASE | `MAX_RETRY_COUNT` | +| Functions | camelCase (JS/TS), snake_case (Python) | `getUserById`, `get_user_by_id` | +| Classes | PascalCase | `UserRepository` | +| Interfaces | PascalCase, optionally prefixed | `IUserService` or `UserService` | +| Private members | Prefix with underscore or use access modifiers | `_internalState` | + +### Function Design + +``` +Good functions: +- Do one thing well +- Have descriptive names (verb + noun) +- Take 3 or fewer parameters +- Return early for error cases +- Stay under 50 lines +``` + +### Error Handling + +``` +Good error handling: +- Catch specific errors, not generic exceptions +- Log with context (what, where, why) +- Clean up resources in error paths +- Don't swallow errors silently +- Provide actionable error messages +``` + +--- + +## TypeScript Standards + +### Type Annotations + +```typescript +// Avoid 'any' - use unknown for truly unknown types +function processData(data: unknown): ProcessedResult { + if (isValidData(data)) { + return transform(data); + } + throw new Error('Invalid data format'); +} + +// Use explicit return types for public APIs +export function calculateTotal(items: CartItem[]): number { + return items.reduce((sum, item) => sum + item.price, 0); +} + +// Use type guards for runtime checks +function isUser(obj: unknown): obj is User { + return ( + typeof obj === 'object' && + obj !== null && + 'id' in obj && + 'email' in obj + ); +} +``` + +### Null Safety + +```typescript +// Use optional chaining and nullish coalescing +const userName = user?.profile?.name ?? 
'Anonymous';
+
+// Be explicit about nullable types
+interface Config {
+  timeout: number;
+  retries?: number;         // Optional
+  fallbackUrl: string | null;  // Explicitly nullable
+}
+
+// Use assertion functions for validation
+function assertDefined<T>(value: T | null | undefined): asserts value is T {
+  if (value === null || value === undefined) {
+    throw new Error('Value is not defined');
+  }
+}
+```
+
+### Async/Await
+
+```typescript
+// Always handle errors in async functions
+async function fetchUser(id: string): Promise<User> {
+  try {
+    const response = await api.get(`/users/${id}`);
+    return response.data;
+  } catch (error) {
+    logger.error('Failed to fetch user', { id, error });
+    throw new UserFetchError(id, error);
+  }
+}
+
+// Use Promise.all for parallel operations
+async function loadDashboard(userId: string): Promise<DashboardData> {
+  const [profile, stats, notifications] = await Promise.all([
+    fetchProfile(userId),
+    fetchStats(userId),
+    fetchNotifications(userId)
+  ]);
+  return { profile, stats, notifications };
+}
+```
+
+### React/Component Standards
+
+```typescript
+// Use explicit prop types
+interface ButtonProps {
+  label: string;
+  onClick: () => void;
+  variant?: 'primary' | 'secondary';
+  disabled?: boolean;
+}
+
+// Prefer functional components with hooks
+function Button({ label, onClick, variant = 'primary', disabled = false }: ButtonProps) {
+  return (
+    <button className={variant} onClick={onClick} disabled={disabled}>
+      {label}
+    </button>
+  );
+}
+
+// Use custom hooks for reusable logic
+function useDebounce<T>(value: T, delay: number): T {
+  const [debouncedValue, setDebouncedValue] = useState<T>(value);
+
+  useEffect(() => {
+    const timer = setTimeout(() => setDebouncedValue(value), delay);
+    return () => clearTimeout(timer);
+  }, [value, delay]);
+
+  return debouncedValue;
+}
+```
+
+---
+
+## JavaScript Standards
+
+### Variable Declarations
+
+```javascript
+// Use const by default, let when reassignment needed
+const MAX_ITEMS = 100;
+let currentCount = 0;
+
+// Never use var
+// var is function-scoped and hoisted, leading to bugs
+```
+
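The `var` warning above is easiest to see with closures created in a loop. A minimal sketch (function names are illustrative):

```javascript
// `var` is function-scoped: every callback closes over the same `i`,
// so by the time the callbacks run, all of them see the final value.
function collectWithVar() {
  const callbacks = [];
  for (var i = 0; i < 3; i++) {
    callbacks.push(() => i);
  }
  return callbacks.map(cb => cb());
}

// `let` is block-scoped: each loop iteration gets its own binding.
function collectWithLet() {
  const callbacks = [];
  for (let i = 0; i < 3; i++) {
    callbacks.push(() => i);
  }
  return callbacks.map(cb => cb());
}

console.log(collectWithVar()); // [ 3, 3, 3 ]
console.log(collectWithLet()); // [ 0, 1, 2 ]
```

A fresh binding per iteration is exactly what the guideline buys you, which is why reviews should flag any remaining `var`.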
+### Object and Array Patterns
+
+```javascript
+// Use object destructuring
+const { name, email, role = 'user' } = user;
+
+// Use spread for immutable updates
+const updatedUser = { ...user, lastLogin: new Date() };
+const updatedList = [...items, newItem];
+
+// Use array methods over loops
+const activeUsers = users.filter(u => u.isActive);
+const emails = users.map(u => u.email);
+const total = orders.reduce((sum, o) => sum + o.amount, 0);
+```
+
+### Module Patterns
+
+```javascript
+// Use named exports for utilities
+export function formatDate(date) { ... }
+export function parseDate(str) { ... }
+
+// Use default export for main component/class
+export default class UserService { ... }
+
+// Group related exports
+export { formatDate, parseDate, isValidDate } from './dateUtils';
+```
+
+---
+
+## Python Standards
+
+### Type Hints (PEP 484)
+
+```python
+from typing import Dict, List, Optional
+
+def get_user(user_id: int) -> Optional[User]:
+    """Fetch user by ID, returns None if not found."""
+    return db.query(User).filter(User.id == user_id).first()
+
+def process_items(items: List[str]) -> Dict[str, int]:
+    """Count occurrences of each item."""
+    return {item: items.count(item) for item in set(items)}
+
+def send_notification(
+    user: User,
+    message: str,
+    *,
+    priority: str = "normal",
+    channels: Optional[List[str]] = None
+) -> bool:
+    """Send notification to user via specified channels."""
+    channels = channels or ["email"]
+    # Implementation
+```
+
+### Exception Handling
+
+```python
+# Catch specific exceptions
+try:
+    result = api_client.fetch_data(endpoint)
+except ConnectionError as e:
+    logger.warning(f"Connection failed: {e}")
+    return cached_data
+except TimeoutError as e:
+    logger.error(f"Request timed out: {e}")
+    raise ServiceUnavailableError() from e
+
+# Use context managers for resources
+with open(filepath, 'r') as f:
+    data = json.load(f)
+
+# Custom exceptions should be informative
+class ValidationError(Exception):
+    def 
__init__(self, field: str, message: str): + self.field = field + self.message = message + super().__init__(f"{field}: {message}") +``` + +### Class Design + +```python +from dataclasses import dataclass +from abc import ABC, abstractmethod + +# Use dataclasses for data containers +@dataclass +class UserDTO: + id: int + email: str + name: str + is_active: bool = True + +# Use ABC for interfaces +class Repository(ABC): + @abstractmethod + def find_by_id(self, id: int) -> Optional[Entity]: + pass + + @abstractmethod + def save(self, entity: Entity) -> Entity: + pass + +# Use properties for computed attributes +class Order: + def __init__(self, items: List[OrderItem]): + self._items = items + + @property + def total(self) -> Decimal: + return sum(item.price * item.quantity for item in self._items) +``` + +--- + +## Go Standards + +### Error Handling + +```go +// Always check errors +file, err := os.Open(filename) +if err != nil { + return fmt.Errorf("failed to open %s: %w", filename, err) +} +defer file.Close() + +// Use custom error types for specific cases +type ValidationError struct { + Field string + Message string +} + +func (e *ValidationError) Error() string { + return fmt.Sprintf("%s: %s", e.Field, e.Message) +} + +// Wrap errors with context +if err := db.Query(query); err != nil { + return fmt.Errorf("query failed for user %d: %w", userID, err) +} +``` + +### Struct Design + +```go +// Use unexported fields with exported methods +type UserService struct { + repo UserRepository + cache Cache + logger Logger +} + +// Constructor functions for initialization +func NewUserService(repo UserRepository, cache Cache, logger Logger) *UserService { + return &UserService{ + repo: repo, + cache: cache, + logger: logger, + } +} + +// Keep interfaces small +type Reader interface { + Read(p []byte) (n int, err error) +} + +type Writer interface { + Write(p []byte) (n int, err error) +} +``` + +### Concurrency + +```go +// Use context for cancellation +func fetchData(ctx 
context.Context, url string) ([]byte, error) { + req, err := http.NewRequestWithContext(ctx, "GET", url, nil) + if err != nil { + return nil, err + } + // ... +} + +// Use channels for communication +func worker(jobs <-chan Job, results chan<- Result) { + for job := range jobs { + result := process(job) + results <- result + } +} + +// Use sync.WaitGroup for coordination +var wg sync.WaitGroup +for _, item := range items { + wg.Add(1) + go func(i Item) { + defer wg.Done() + processItem(i) + }(item) +} +wg.Wait() +``` + +--- + +## Swift Standards + +### Optionals + +```swift +// Use optional binding +if let user = fetchUser(id: userId) { + displayProfile(user) +} + +// Use guard for early exit +guard let data = response.data else { + throw NetworkError.noData +} + +// Use nil coalescing for defaults +let displayName = user.nickname ?? user.email + +// Avoid force unwrapping except in tests +// BAD: let name = user.name! +// GOOD: guard let name = user.name else { return } +``` + +### Protocol-Oriented Design + +```swift +// Define protocols with minimal requirements +protocol Identifiable { + var id: String { get } +} + +protocol Persistable: Identifiable { + func save() throws + static func find(by id: String) -> Self? 
}

// Use protocol extensions for default implementations
extension Persistable {
    func save() throws {
        try Storage.shared.save(self)
    }
}

// Prefer composition over inheritance
struct User: Identifiable, Codable {
    let id: String
    var name: String
    var email: String
}
```

### Error Handling

```swift
// Define domain-specific errors
enum AuthError: Error {
    case invalidCredentials
    case tokenExpired
    case networkFailure(underlying: Error)
}

// Use Result type for async operations
func authenticate(
    email: String,
    password: String,
    completion: @escaping (Result<User, AuthError>) -> Void
)

// Use throws for synchronous operations
func validate(_ input: String) throws -> ValidatedInput {
    guard !input.isEmpty else {
        throw ValidationError.emptyInput
    }
    return ValidatedInput(value: input)
}
```

---

## Kotlin Standards

### Null Safety

```kotlin
// Use nullable types explicitly
fun findUser(id: Int): User? {
    return userRepository.find(id)
}

// Use safe calls and elvis operator
val name = user?.profile?.name ?: "Unknown"

// Use let for null checks with side effects
user?.let { activeUser ->
    sendWelcomeEmail(activeUser.email)
    logActivity(activeUser.id)
}

// Use require/check for validation
fun processPayment(amount: Double) {
    require(amount > 0) { "Amount must be positive: $amount" }
    // Process
}
```

### Data Classes and Sealed Classes

```kotlin
// Use data classes for DTOs
data class UserDTO(
    val id: Int,
    val email: String,
    val name: String,
    val isActive: Boolean = true
)

// Use sealed classes for state
sealed class Result<out T> {
    data class Success<T>(val data: T) : Result<T>()
    data class Error(val message: String, val cause: Throwable? = null) : Result<Nothing>()
    object Loading : Result<Nothing>()
}

// Pattern matching with when
fun handleResult(result: Result<User>) = when (result) {
    is Result.Success -> showUser(result.data)
    is Result.Error -> showError(result.message)
    Result.Loading -> showLoading()
}
```

### Coroutines

```kotlin
// Use structured concurrency
suspend fun loadDashboard(): Dashboard = coroutineScope {
    val profile = async { fetchProfile() }
    val stats = async { fetchStats() }
    val notifications = async { fetchNotifications() }

    Dashboard(
        profile = profile.await(),
        stats = stats.await(),
        notifications = notifications.await()
    )
}

// Handle cancellation
suspend fun fetchWithRetry(url: String): Response {
    repeat(3) { attempt ->
        try {
            return httpClient.get(url)
        } catch (e: IOException) {
            if (attempt == 2) throw e
            delay(1000L * (attempt + 1))
        }
    }
    throw IllegalStateException("Unreachable")
}
```
diff --git a/.windsurf/skills/code-reviewer/references/common_antipatterns.md b/.windsurf/skills/code-reviewer/references/common_antipatterns.md
new file mode 100644
index 00000000..26045452
--- /dev/null
+++ b/.windsurf/skills/code-reviewer/references/common_antipatterns.md
@@ -0,0 +1,739 @@
# Common Antipatterns

Code antipatterns to identify during review, with examples and fixes.

---

## Table of Contents

- [Structural Antipatterns](#structural-antipatterns)
- [Logic Antipatterns](#logic-antipatterns)
- [Security Antipatterns](#security-antipatterns)
- [Performance Antipatterns](#performance-antipatterns)
- [Testing Antipatterns](#testing-antipatterns)
- [Async Antipatterns](#async-antipatterns)

---

## Structural Antipatterns

### God Class

A class that does too much and knows too much.

```typescript
// BAD: God class handling everything
class UserManager {
  createUser(data: UserData) { ... }
  updateUser(id: string, data: UserData) { ... }
  deleteUser(id: string) { ...
  }
  sendEmail(userId: string, content: string) { ... }
  generateReport(userId: string) { ... }
  validatePassword(password: string) { ... }
  hashPassword(password: string) { ... }
  uploadAvatar(userId: string, file: File) { ... }
  resizeImage(file: File) { ... }
  logActivity(userId: string, action: string) { ... }
  // 50 more methods...
}

// GOOD: Single responsibility classes
class UserRepository {
  create(data: UserData): User { ... }
  update(id: string, data: Partial<UserData>): User { ... }
  delete(id: string): void { ... }
}

class EmailService {
  send(to: string, content: string): void { ... }
}

class PasswordService {
  validate(password: string): ValidationResult { ... }
  hash(password: string): string { ... }
}
```

**Detection:** Class has >20 methods, >500 lines, or handles unrelated concerns.

---

### Long Method

Functions that do too much and are hard to understand.

```python
# BAD: Long method doing everything
def process_order(order_data):
    # Validate order (20 lines)
    if not order_data.get('items'):
        raise ValueError('No items')
    if not order_data.get('customer_id'):
        raise ValueError('No customer')
    # ... more validation

    # Calculate totals (30 lines)
    subtotal = 0
    for item in order_data['items']:
        price = get_product_price(item['product_id'])
        subtotal += price * item['quantity']
    # ... tax calculation, discounts

    # Process payment (40 lines)
    payment_result = payment_gateway.charge(...)
    # ... handle payment errors

    # Create order record (20 lines)
    order = Order.create(...)

    # Send notifications (20 lines)
    send_order_confirmation(...)
    notify_warehouse(...)

    return order

# GOOD: Composed of focused functions
def process_order(order_data):
    validate_order(order_data)
    totals = calculate_order_totals(order_data)
    payment = process_payment(order_data['customer_id'], totals)
    order = create_order_record(order_data, totals, payment)
    send_order_notifications(order)
    return order
```

**Detection:** Function >50 lines or requires scrolling to read.

---

### Deep Nesting

Excessive indentation making code hard to follow.

```javascript
// BAD: Deep nesting
function processData(data) {
  if (data) {
    if (data.items) {
      if (data.items.length > 0) {
        for (const item of data.items) {
          if (item.isValid) {
            if (item.type === 'premium') {
              if (item.price > 100) {
                // Finally do something
                processItem(item);
              }
            }
          }
        }
      }
    }
  }
}

// GOOD: Early returns and guard clauses
function processData(data) {
  if (!data?.items?.length) {
    return;
  }

  const premiumItems = data.items.filter(
    item => item.isValid && item.type === 'premium' && item.price > 100
  );

  premiumItems.forEach(processItem);
}
```

**Detection:** Indentation >4 levels deep.

---

### Magic Numbers and Strings

Hard-coded values without explanation.

```go
// BAD: Magic numbers
func calculateDiscount(total float64, userType int) float64 {
    if userType == 1 {
        return total * 0.15
    } else if userType == 2 {
        return total * 0.25
    }
    return total * 0.05
}

// GOOD: Named constants
const (
    UserTypeRegular = 1
    UserTypePremium = 2

    DiscountDefault = 0.05
    DiscountRegular = 0.15
    DiscountPremium = 0.25
)

func calculateDiscount(total float64, userType int) float64 {
    switch userType {
    case UserTypePremium:
        return total * DiscountPremium
    case UserTypeRegular:
        return total * DiscountRegular
    default:
        return total * DiscountDefault
    }
}
```

**Detection:** Literal numbers (except 0, 1) or repeated string literals.
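A heuristic like this can be automated with a light regex pass. The sketch below is illustrative only — it is not the skill's bundled checker, and the pattern and the 0/1 allow-list are assumptions — but it shows how to flag numeric literals outside comment lines:

```python
import re

# Hypothetical detector sketch: the regex and the 0/1 allow-list are
# illustrative assumptions, not the skill's actual implementation.
_NUMBER = re.compile(r"(?<![\w.])\d+(?:\.\d+)?(?![\w.])")

def find_magic_numbers(source: str) -> list:
    """Return numeric literals other than 0 and 1, skipping comment-only lines."""
    hits = []
    for line in source.splitlines():
        stripped = line.strip()
        if stripped.startswith(("#", "//")):
            continue  # ignore Python/JS style comment lines
        hits.extend(m for m in _NUMBER.findall(line) if m not in ("0", "1"))
    return hits

print(find_magic_numbers("fee = total * 0.15  # rate"))  # -> ['0.15']
```

Exempting 0 and 1 keeps the signal useful: loop bounds and sentinel values are rarely worth naming, while rates and limits almost always are.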
+ +--- + +### Primitive Obsession + +Using primitives instead of small objects. + +```typescript +// BAD: Primitives everywhere +function createUser( + name: string, + email: string, + phone: string, + street: string, + city: string, + zipCode: string, + country: string +): User { ... } + +// GOOD: Value objects +interface Address { + street: string; + city: string; + zipCode: string; + country: string; +} + +interface ContactInfo { + email: string; + phone: string; +} + +function createUser( + name: string, + contact: ContactInfo, + address: Address +): User { ... } +``` + +**Detection:** Functions with >4 parameters of same type, or related primitives always passed together. + +--- + +## Logic Antipatterns + +### Boolean Blindness + +Passing booleans that make code unreadable at call sites. + +```swift +// BAD: What do these booleans mean? +user.configure(true, false, true, false) + +// GOOD: Named parameters or option objects +user.configure( + sendWelcomeEmail: true, + requireVerification: false, + enableNotifications: true, + isAdmin: false +) + +// Or use an options struct +struct UserConfiguration { + var sendWelcomeEmail: Bool = true + var requireVerification: Bool = false + var enableNotifications: Bool = true + var isAdmin: Bool = false +} + +user.configure(UserConfiguration()) +``` + +**Detection:** Function calls with multiple boolean literals. + +--- + +### Null Returns for Collections + +Returning null instead of empty collections. + +```kotlin +// BAD: Returning null +fun findUsersByRole(role: String): List? { + val users = repository.findByRole(role) + return if (users.isEmpty()) null else users +} + +// Caller must handle null +val users = findUsersByRole("admin") +if (users != null) { + users.forEach { ... } +} + +// GOOD: Return empty collection +fun findUsersByRole(role: String): List { + return repository.findByRole(role) +} + +// Caller can iterate directly +findUsersByRole("admin").forEach { ... 
} +``` + +**Detection:** Functions returning nullable collections. + +--- + +### Stringly Typed Code + +Using strings where enums or types should be used. + +```python +# BAD: String-based logic +def handle_event(event_type: str, data: dict): + if event_type == "user_created": + handle_user_created(data) + elif event_type == "user_updated": + handle_user_updated(data) + elif event_type == "user_dleted": # Typo won't be caught + handle_user_deleted(data) + +# GOOD: Enum-based +from enum import Enum + +class EventType(Enum): + USER_CREATED = "user_created" + USER_UPDATED = "user_updated" + USER_DELETED = "user_deleted" + +def handle_event(event_type: EventType, data: dict): + handlers = { + EventType.USER_CREATED: handle_user_created, + EventType.USER_UPDATED: handle_user_updated, + EventType.USER_DELETED: handle_user_deleted, + } + handlers[event_type](data) +``` + +**Detection:** String comparisons for type/status/category values. + +--- + +## Security Antipatterns + +### SQL Injection + +String concatenation in SQL queries. + +```javascript +// BAD: String concatenation +const query = `SELECT * FROM users WHERE id = ${userId}`; +db.query(query); + +// BAD: String templates still vulnerable +const query = `SELECT * FROM users WHERE name = '${userName}'`; + +// GOOD: Parameterized queries +const query = 'SELECT * FROM users WHERE id = $1'; +db.query(query, [userId]); + +// GOOD: Using ORM safely +User.findOne({ where: { id: userId } }); +``` + +**Detection:** String concatenation or template literals with SQL keywords. + +--- + +### Hardcoded Credentials + +Secrets in source code. 
+ +```python +# BAD: Hardcoded secrets +API_KEY = "sk-abc123xyz789" +DATABASE_URL = "postgresql://admin:password123@prod-db.internal:5432/app" + +# GOOD: Environment variables +import os + +API_KEY = os.environ["API_KEY"] +DATABASE_URL = os.environ["DATABASE_URL"] + +# GOOD: Secrets manager +from aws_secretsmanager import get_secret + +API_KEY = get_secret("api-key") +``` + +**Detection:** Variables named `password`, `secret`, `key`, `token` with string literals. + +--- + +### Unsafe Deserialization + +Deserializing untrusted data without validation. + +```python +# BAD: Binary serialization from untrusted source can execute arbitrary code +# Examples: Python's binary serialization, yaml.load without SafeLoader + +# GOOD: Use safe alternatives +import json + +def load_data(file_path): + with open(file_path, 'r') as f: + return json.load(f) + +# GOOD: Use SafeLoader for YAML +import yaml + +with open('config.yaml') as f: + config = yaml.safe_load(f) +``` + +**Detection:** Binary deserialization functions, yaml.load without safe loader, dynamic code execution on external data. + +--- + +### Missing Input Validation + +Trusting user input without validation. + +```typescript +// BAD: No validation +app.post('/user', (req, res) => { + const user = db.create({ + name: req.body.name, + email: req.body.email, + role: req.body.role // User can set themselves as admin! + }); + res.json(user); +}); + +// GOOD: Validate and sanitize +import { z } from 'zod'; + +const CreateUserSchema = z.object({ + name: z.string().min(1).max(100), + email: z.string().email(), + // role is NOT accepted from input +}); + +app.post('/user', (req, res) => { + const validated = CreateUserSchema.parse(req.body); + const user = db.create({ + ...validated, + role: 'user' // Default role, not from input + }); + res.json(user); +}); +``` + +**Detection:** Request body/params used directly without validation schema. 
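The same allow-list idea works without a schema library. This dependency-free sketch — the field names, length limits, and email regex are illustrative assumptions, not a production validator — accepts only known fields and always assigns the role server-side:

```python
import re

ALLOWED_FIELDS = {"name", "email"}  # 'role' is deliberately never accepted from input
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # crude format check, illustrative only

def validate_create_user(payload: dict) -> dict:
    """Drop non-allow-listed fields, validate the rest, and force a safe role."""
    data = {k: v for k, v in payload.items() if k in ALLOWED_FIELDS}
    name = data.get("name")
    if not isinstance(name, str) or not 1 <= len(name) <= 100:
        raise ValueError("invalid name")
    email = data.get("email")
    if not isinstance(email, str) or not EMAIL_RE.match(email):
        raise ValueError("invalid email")
    return {**data, "role": "user"}  # role is always server-assigned

user = validate_create_user({"name": "Ada", "email": "ada@example.com", "role": "admin"})
print(user["role"])  # -> user
```

Filtering to an allow-list before validating means an attacker-supplied `role` (or any unexpected field) is silently discarded rather than written to the database.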
+ +--- + +## Performance Antipatterns + +### N+1 Query Problem + +Loading related data one record at a time. + +```python +# BAD: N+1 queries +def get_orders_with_items(): + orders = Order.query.all() # 1 query + for order in orders: + items = OrderItem.query.filter_by(order_id=order.id).all() # N queries + order.items = items + return orders + +# GOOD: Eager loading +def get_orders_with_items(): + return Order.query.options( + joinedload(Order.items) + ).all() # 1 query with JOIN + +# GOOD: Batch loading +def get_orders_with_items(): + orders = Order.query.all() + order_ids = [o.id for o in orders] + items = OrderItem.query.filter( + OrderItem.order_id.in_(order_ids) + ).all() # 2 queries total + # Group items by order_id... +``` + +**Detection:** Database queries inside loops. + +--- + +### Unbounded Collections + +Loading unlimited data into memory. + +```go +// BAD: Load all records +func GetAllUsers() ([]User, error) { + return db.Find(&[]User{}) // Could be millions +} + +// GOOD: Pagination +func GetUsers(page, pageSize int) ([]User, error) { + offset := (page - 1) * pageSize + return db.Limit(pageSize).Offset(offset).Find(&[]User{}) +} + +// GOOD: Streaming for large datasets +func ProcessAllUsers(handler func(User) error) error { + rows, err := db.Model(&User{}).Rows() + if err != nil { + return err + } + defer rows.Close() + + for rows.Next() { + var user User + db.ScanRows(rows, &user) + if err := handler(user); err != nil { + return err + } + } + return nil +} +``` + +**Detection:** `findAll()`, `find({})`, or queries without `LIMIT`. + +--- + +### Synchronous I/O in Hot Paths + +Blocking operations in request handlers. 
+ +```javascript +// BAD: Sync file read on every request +app.get('/config', (req, res) => { + const config = fs.readFileSync('./config.json'); // Blocks event loop + res.json(JSON.parse(config)); +}); + +// GOOD: Load once at startup +const config = JSON.parse(fs.readFileSync('./config.json')); + +app.get('/config', (req, res) => { + res.json(config); +}); + +// GOOD: Async with caching +let configCache = null; + +app.get('/config', async (req, res) => { + if (!configCache) { + configCache = JSON.parse(await fs.promises.readFile('./config.json')); + } + res.json(configCache); +}); +``` + +**Detection:** `readFileSync`, `execSync`, or blocking calls in request handlers. + +--- + +## Testing Antipatterns + +### Test Code Duplication + +Repeating setup in every test. + +```typescript +// BAD: Duplicate setup +describe('UserService', () => { + it('should create user', async () => { + const db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + const service = new UserService(userRepo, emailService); + + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); + + it('should update user', async () => { + const db = await createTestDatabase(); // Duplicated + const userRepo = new UserRepository(db); // Duplicated + const emailService = new MockEmailService(); // Duplicated + const service = new UserService(userRepo, emailService); // Duplicated + + // ... 
+ }); +}); + +// GOOD: Shared setup +describe('UserService', () => { + let service: UserService; + let db: TestDatabase; + + beforeEach(async () => { + db = await createTestDatabase(); + const userRepo = new UserRepository(db); + const emailService = new MockEmailService(); + service = new UserService(userRepo, emailService); + }); + + afterEach(async () => { + await db.cleanup(); + }); + + it('should create user', async () => { + const user = await service.create({ name: 'Test' }); + expect(user.name).toBe('Test'); + }); +}); +``` + +--- + +### Testing Implementation Instead of Behavior + +Tests coupled to internal implementation. + +```python +# BAD: Testing implementation details +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing internal structure + assert cart._items[0].name == "Apple" + assert cart._total == 1.00 + +# GOOD: Testing behavior +def test_add_item_to_cart(): + cart = ShoppingCart() + cart.add_item(Product("Apple", 1.00)) + + # Testing public behavior + assert cart.item_count == 1 + assert cart.total == 1.00 + assert cart.contains("Apple") +``` + +--- + +## Async Antipatterns + +### Floating Promises + +Promises without await or catch. + +```typescript +// BAD: Floating promise +async function saveUser(user: User) { + db.save(user); // Not awaited, errors lost + logger.info('User saved'); // Logs before save completes +} + +// BAD: Fire and forget in loop +for (const item of items) { + processItem(item); // All run in parallel, no error handling +} + +// GOOD: Await the promise +async function saveUser(user: User) { + await db.save(user); + logger.info('User saved'); +} + +// GOOD: Process with proper handling +await Promise.all(items.map(item => processItem(item))); + +// Or sequentially +for (const item of items) { + await processItem(item); +} +``` + +**Detection:** Async function calls without `await` or `.then()`. + +--- + +### Callback Hell + +Deeply nested callbacks. 
```javascript
// BAD: Callback hell
getUser(userId, (err, user) => {
  if (err) return handleError(err);
  getOrders(user.id, (err, orders) => {
    if (err) return handleError(err);
    getProducts(orders[0].productIds, (err, products) => {
      if (err) return handleError(err);
      renderPage(user, orders, products, (err) => {
        if (err) return handleError(err);
        console.log('Done');
      });
    });
  });
});

// GOOD: Async/await
async function loadPage(userId) {
  try {
    const user = await getUser(userId);
    const orders = await getOrders(user.id);
    const products = await getProducts(orders[0].productIds);
    await renderPage(user, orders, products);
    console.log('Done');
  } catch (err) {
    handleError(err);
  }
}
```

**Detection:** >2 levels of callback nesting.

---

### Async in Constructor

Async operations in constructors.

```typescript
// BAD: Async in constructor
class DatabaseConnection {
  constructor(url: string) {
    this.connect(url); // Fire-and-forget async
  }

  private async connect(url: string) {
    this.client = await createClient(url);
  }
}

// GOOD: Factory method
class DatabaseConnection {
  private constructor(private client: Client) {}

  static async create(url: string): Promise<DatabaseConnection> {
    const client = await createClient(url);
    return new DatabaseConnection(client);
  }
}

// Usage
const db = await DatabaseConnection.create(url);
```

**Detection:** `async` calls or `.then()` in constructor.
diff --git a/.windsurf/skills/code-reviewer/scripts/code_quality_checker.py b/.windsurf/skills/code-reviewer/scripts/code_quality_checker.py
new file mode 100644
index 00000000..c0df791b
--- /dev/null
+++ b/.windsurf/skills/code-reviewer/scripts/code_quality_checker.py
@@ -0,0 +1,560 @@
#!/usr/bin/env python3
"""
Code Quality Checker

Analyzes source code for quality issues, code smells, complexity metrics,
and SOLID principle violations.
+ +Usage: + python .windsurf/skills/code-reviewer/scripts/code_quality_checker.py /path/to/file.py + python .windsurf/skills/code-reviewer/scripts/code_quality_checker.py /path/to/directory --recursive + python .windsurf/skills/code-reviewer/scripts/code_quality_checker.py . --language typescript --json +""" + +import argparse +import json +import re +import sys +from pathlib import Path +from typing import Dict, List, Optional + + +# Language-specific file extensions +LANGUAGE_EXTENSIONS = { + "python": [".py"], + "typescript": [".ts", ".tsx"], + "javascript": [".js", ".jsx", ".mjs"], + "go": [".go"], + "swift": [".swift"], + "kotlin": [".kt", ".kts"] +} + +# Code smell thresholds +THRESHOLDS = { + "long_function_lines": 50, + "too_many_parameters": 5, + "high_complexity": 10, + "god_class_methods": 20, + "max_imports": 15 +} + + +def get_file_extension(filepath: Path) -> str: + """Get file extension.""" + return filepath.suffix.lower() + + +def detect_language(filepath: Path) -> Optional[str]: + """Detect programming language from file extension.""" + ext = get_file_extension(filepath) + for lang, extensions in LANGUAGE_EXTENSIONS.items(): + if ext in extensions: + return lang + return None + + +def read_file_content(filepath: Path) -> str: + """Read file content safely.""" + try: + with open(filepath, "r", encoding="utf-8", errors="ignore") as f: + return f.read() + except Exception: + return "" + + +def calculate_cyclomatic_complexity(content: str) -> int: + """ + Estimate cyclomatic complexity based on control flow keywords. 
+ """ + complexity = 1 # Base complexity + + # Control flow patterns that increase complexity + patterns = [ + r"\bif\b", + r"\belif\b", + r"\belse\b", + r"\bfor\b", + r"\bwhile\b", + r"\bcase\b", + r"\bcatch\b", + r"\bexcept\b", + r"\band\b", + r"\bor\b", + r"\|\|", + r"&&" + ] + + for pattern in patterns: + matches = re.findall(pattern, content, re.IGNORECASE) + complexity += len(matches) + + return complexity + + +def count_lines(content: str) -> Dict[str, int]: + """Count different types of lines in code.""" + lines = content.split("\n") + total = len(lines) + blank = sum(1 for line in lines if not line.strip()) + comment = 0 + + for line in lines: + stripped = line.strip() + if stripped.startswith("#") or stripped.startswith("//"): + comment += 1 + elif stripped.startswith("/*") or stripped.startswith("'''") or stripped.startswith('"""'): + comment += 1 + + code = total - blank - comment + + return { + "total": total, + "code": code, + "blank": blank, + "comment": comment + } + + +def find_functions(content: str, language: str) -> List[Dict]: + """Find function definitions and their metrics.""" + functions = [] + + # Language-specific function patterns + patterns = { + "python": r"def\s+(\w+)\s*\(([^)]*)\)", + "typescript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "javascript": r"(?:function\s+(\w+)|(?:const|let|var)\s+(\w+)\s*=\s*(?:async\s+)?\([^)]*\)\s*=>)", + "go": r"func\s+(?:\([^)]+\)\s+)?(\w+)\s*\(([^)]*)\)", + "swift": r"func\s+(\w+)\s*\(([^)]*)\)", + "kotlin": r"fun\s+(\w+)\s*\(([^)]*)\)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content, re.MULTILINE) + + for match in matches: + name = next((g for g in match.groups() if g), "anonymous") + params_str = match.group(2) if len(match.groups()) > 1 and match.group(2) else "" + + # Count parameters + params = [p.strip() for p in params_str.split(",") if p.strip()] + param_count = len(params) + + # Estimate 
function length + start_pos = match.end() + remaining = content[start_pos:] + + next_func = re.search(pattern, remaining) + if next_func: + func_body = remaining[:next_func.start()] + else: + func_body = remaining[:min(2000, len(remaining))] + + line_count = len(func_body.split("\n")) + complexity = calculate_cyclomatic_complexity(func_body) + + functions.append({ + "name": name, + "parameters": param_count, + "lines": line_count, + "complexity": complexity + }) + + return functions + + +def find_classes(content: str, language: str) -> List[Dict]: + """Find class definitions and their metrics.""" + classes = [] + + patterns = { + "python": r"class\s+(\w+)", + "typescript": r"class\s+(\w+)", + "javascript": r"class\s+(\w+)", + "go": r"type\s+(\w+)\s+struct", + "swift": r"class\s+(\w+)", + "kotlin": r"class\s+(\w+)" + } + + pattern = patterns.get(language, patterns["python"]) + matches = re.finditer(pattern, content) + + for match in matches: + name = match.group(1) + + start_pos = match.end() + remaining = content[start_pos:] + + next_class = re.search(pattern, remaining) + if next_class: + class_body = remaining[:next_class.start()] + else: + class_body = remaining + + # Count methods + method_patterns = { + "python": r"def\s+\w+\s*\(", + "typescript": r"(?:public|private|protected)?\s*\w+\s*\([^)]*\)\s*[:{]", + "javascript": r"\w+\s*\([^)]*\)\s*\{", + "go": r"func\s+\(", + "swift": r"func\s+\w+", + "kotlin": r"fun\s+\w+" + } + method_pattern = method_patterns.get(language, method_patterns["python"]) + methods = len(re.findall(method_pattern, class_body)) + + classes.append({ + "name": name, + "methods": methods, + "lines": len(class_body.split("\n")) + }) + + return classes + + +def check_code_smells(content: str, functions: List[Dict], classes: List[Dict]) -> List[Dict]: + """Check for code smells in the content.""" + smells = [] + + # Long functions + for func in functions: + if func["lines"] > THRESHOLDS["long_function_lines"]: + smells.append({ + "type": 
"long_function", + "severity": "medium", + "message": f"Function '{func['name']}' has {func['lines']} lines (max: {THRESHOLDS['long_function_lines']})", + "location": func["name"] + }) + + # Too many parameters + for func in functions: + if func["parameters"] > THRESHOLDS["too_many_parameters"]: + smells.append({ + "type": "too_many_parameters", + "severity": "low", + "message": f"Function '{func['name']}' has {func['parameters']} parameters (max: {THRESHOLDS['too_many_parameters']})", + "location": func["name"] + }) + + # High complexity + for func in functions: + if func["complexity"] > THRESHOLDS["high_complexity"]: + severity = "high" if func["complexity"] > 20 else "medium" + smells.append({ + "type": "high_complexity", + "severity": severity, + "message": f"Function '{func['name']}' has complexity {func['complexity']} (max: {THRESHOLDS['high_complexity']})", + "location": func["name"] + }) + + # God classes + for cls in classes: + if cls["methods"] > THRESHOLDS["god_class_methods"]: + smells.append({ + "type": "god_class", + "severity": "high", + "message": f"Class '{cls['name']}' has {cls['methods']} methods (max: {THRESHOLDS['god_class_methods']})", + "location": cls["name"] + }) + + # Magic numbers + magic_pattern = r"\b(? 
List[Dict]: + """Check for potential SOLID principle violations.""" + violations = [] + + # OCP: Type checking instead of polymorphism + type_checks = len(re.findall(r"isinstance\(|type\(.*\)\s*==|typeof\s+\w+\s*===", content)) + if type_checks > 2: + violations.append({ + "principle": "OCP", + "name": "Open/Closed Principle", + "severity": "medium", + "message": f"Found {type_checks} type checks - consider using polymorphism" + }) + + # LSP/ISP: NotImplementedError + not_impl = len(re.findall(r"raise\s+NotImplementedError|not\s+implemented", content, re.IGNORECASE)) + if not_impl: + violations.append({ + "principle": "LSP/ISP", + "name": "Liskov/Interface Segregation", + "severity": "low", + "message": f"Found {not_impl} unimplemented methods - may indicate oversized interface" + }) + + # DIP: Too many direct imports + imports = len(re.findall(r"^(?:import|from)\s+", content, re.MULTILINE)) + if imports > THRESHOLDS["max_imports"]: + violations.append({ + "principle": "DIP", + "name": "Dependency Inversion Principle", + "severity": "low", + "message": f"File has {imports} imports - consider dependency injection" + }) + + return violations + + +def calculate_quality_score( + line_metrics: Dict, + functions: List[Dict], + classes: List[Dict], + smells: List[Dict], + violations: List[Dict] +) -> int: + """Calculate overall quality score (0-100).""" + score = 100 + + # Deduct for code smells + for smell in smells: + if smell["severity"] == "high": + score -= 10 + elif smell["severity"] == "medium": + score -= 5 + elif smell["severity"] == "low": + score -= 2 + + # Deduct for SOLID violations + for violation in violations: + if violation["severity"] == "high": + score -= 8 + elif violation["severity"] == "medium": + score -= 4 + elif violation["severity"] == "low": + score -= 2 + + # Bonus for good comment ratio (10-30%) + if line_metrics["total"] > 0: + comment_ratio = line_metrics["comment"] / line_metrics["total"] + if 0.1 <= comment_ratio <= 0.3: + score += 5 + + # 
Bonus for reasonable function sizes + if functions: + avg_lines = sum(f["lines"] for f in functions) / len(functions) + if avg_lines < 30: + score += 5 + + return max(0, min(100, score)) + + +def get_grade(score: int) -> str: + """Convert score to letter grade.""" + if score >= 90: + return "A" + elif score >= 80: + return "B" + elif score >= 70: + return "C" + elif score >= 60: + return "D" + else: + return "F" + + +def analyze_file(filepath: Path) -> Dict: + """Analyze a single file for code quality.""" + language = detect_language(filepath) + if not language: + return {"error": f"Unsupported file type: {filepath.suffix}"} + + content = read_file_content(filepath) + if not content: + return {"error": f"Could not read file: {filepath}"} + + line_metrics = count_lines(content) + functions = find_functions(content, language) + classes = find_classes(content, language) + smells = check_code_smells(content, functions, classes) + violations = check_solid_violations(content) + score = calculate_quality_score(line_metrics, functions, classes, smells, violations) + + return { + "file": str(filepath), + "language": language, + "metrics": { + "lines": line_metrics, + "functions": len(functions), + "classes": len(classes), + "avg_complexity": round(sum(f["complexity"] for f in functions) / max(1, len(functions)), 1) + }, + "quality_score": score, + "grade": get_grade(score), + "smells": smells, + "solid_violations": violations, + "function_details": functions[:10], + "class_details": classes[:10] + } + + +def analyze_directory( + dir_path: Path, + recursive: bool = True, + language: Optional[str] = None +) -> Dict: + """Analyze all files in a directory.""" + results = [] + extensions = [] + + if language: + extensions = LANGUAGE_EXTENSIONS.get(language, []) + else: + for exts in LANGUAGE_EXTENSIONS.values(): + extensions.extend(exts) + + pattern = "**/*" if recursive else "*" + + for ext in extensions: + for filepath in dir_path.glob(f"{pattern}{ext}"): + if "node_modules" 
in str(filepath) or ".git" in str(filepath): + continue + result = analyze_file(filepath) + if "error" not in result: + results.append(result) + + if not results: + return {"error": "No supported files found"} + + total_score = sum(r["quality_score"] for r in results) + avg_score = total_score / len(results) + total_smells = sum(len(r["smells"]) for r in results) + total_violations = sum(len(r["solid_violations"]) for r in results) + + return { + "directory": str(dir_path), + "files_analyzed": len(results), + "average_score": round(avg_score, 1), + "overall_grade": get_grade(int(avg_score)), + "total_code_smells": total_smells, + "total_solid_violations": total_violations, + "files": sorted(results, key=lambda x: x["quality_score"]) + } + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if "error" in analysis: + print(f"Error: {analysis['error']}") + return + + print("=" * 60) + print("CODE QUALITY REPORT") + print("=" * 60) + + if "file" in analysis: + print(f"\nFile: {analysis['file']}") + print(f"Language: {analysis['language']}") + print(f"Quality Score: {analysis['quality_score']}/100 ({analysis['grade']})") + + metrics = analysis["metrics"] + print(f"\nLines: {metrics['lines']['total']} ({metrics['lines']['code']} code, {metrics['lines']['comment']} comments)") + print(f"Functions: {metrics['functions']}") + print(f"Classes: {metrics['classes']}") + print(f"Avg Complexity: {metrics['avg_complexity']}") + + if analysis["smells"]: + print("\n--- CODE SMELLS ---") + for smell in analysis["smells"][:10]: + print(f" [{smell['severity'].upper()}] {smell['message']} ({smell['location']})") + + if analysis["solid_violations"]: + print("\n--- SOLID VIOLATIONS ---") + for v in analysis["solid_violations"]: + print(f" [{v['principle']}] {v['message']}") + else: + print(f"\nDirectory: {analysis['directory']}") + print(f"Files Analyzed: {analysis['files_analyzed']}") + print(f"Average Score: {analysis['average_score']}/100 
({analysis['overall_grade']})") + print(f"Total Code Smells: {analysis['total_code_smells']}") + print(f"Total SOLID Violations: {analysis['total_solid_violations']}") + + print("\n--- FILES BY QUALITY ---") + for f in analysis["files"][:10]: + print(f" {f['quality_score']:3d}/100 [{f['grade']}] {f['file']}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze code quality, smells, and SOLID violations" + ) + parser.add_argument( + "path", + help="File or directory to analyze" + ) + parser.add_argument( + "--recursive", "-r", + action="store_true", + default=True, + help="Recursively analyze directories (default: true)" + ) + parser.add_argument( + "--language", "-l", + choices=list(LANGUAGE_EXTENSIONS.keys()), + help="Filter by programming language" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + target = Path(args.path).resolve() + + if not target.exists(): + print(f"Error: Path does not exist: {target}", file=sys.stderr) + sys.exit(1) + + if target.is_file(): + analysis = analyze_file(target) + else: + analysis = analyze_directory(target, args.recursive, args.language) + + if args.json: + output = json.dumps(analysis, indent=2, default=str) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.windsurf/skills/code-reviewer/scripts/pr_analyzer.py b/.windsurf/skills/code-reviewer/scripts/pr_analyzer.py new file mode 100644 index 00000000..caedfe3f --- /dev/null +++ b/.windsurf/skills/code-reviewer/scripts/pr_analyzer.py @@ -0,0 +1,495 @@ +#!/usr/bin/env python3 +""" +PR Analyzer + +Analyzes pull request changes for review complexity, risk assessment, +and generates review priorities. 
+ +Usage: + python .windsurf/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo + python .windsurf/skills/code-reviewer/scripts/pr_analyzer.py . --base main --head feature-branch + python .windsurf/skills/code-reviewer/scripts/pr_analyzer.py /path/to/repo --json +""" + +import argparse +import json +import os +import re +import subprocess +import sys +from pathlib import Path +from typing import Dict, List, Optional, Tuple + + +# File categories for review prioritization +FILE_CATEGORIES = { + "critical": { + "patterns": [ + r"auth", r"security", r"password", r"token", r"secret", + r"payment", r"billing", r"crypto", r"encrypt" + ], + "weight": 5, + "description": "Security-sensitive files requiring careful review" + }, + "high": { + "patterns": [ + r"api", r"database", r"migration", r"schema", r"model", + r"config", r"env", r"middleware" + ], + "weight": 4, + "description": "Core infrastructure files" + }, + "medium": { + "patterns": [ + r"service", r"controller", r"handler", r"util", r"helper" + ], + "weight": 3, + "description": "Business logic files" + }, + "low": { + "patterns": [ + r"test", r"spec", r"mock", r"fixture", r"story", + r"readme", r"docs", r"\.md$" + ], + "weight": 1, + "description": "Tests and documentation" + } +} + +# Risky patterns to flag +RISK_PATTERNS = [ + { + "name": "hardcoded_secrets", + "pattern": r"(password|secret|api_key|token)\s*[=:]\s*['\"][^'\"]+['\"]", + "severity": "critical", + "message": "Potential hardcoded secret detected" + }, + { + "name": "todo_fixme", + "pattern": r"(TODO|FIXME|HACK|XXX):", + "severity": "low", + "message": "TODO/FIXME comment found" + }, + { + "name": "console_log", + "pattern": r"console\.(log|debug|info|warn|error)\(", + "severity": "medium", + "message": "Console statement found (remove for production)" + }, + { + "name": "debugger", + "pattern": r"\bdebugger\b", + "severity": "high", + "message": "Debugger statement found" + }, + { + "name": "disable_eslint", + "pattern": r"eslint-disable", 
+ "severity": "medium", + "message": "ESLint rule disabled" + }, + { + "name": "any_type", + "pattern": r":\s*any\b", + "severity": "medium", + "message": "TypeScript 'any' type used" + }, + { + "name": "sql_concatenation", + "pattern": r"(SELECT|INSERT|UPDATE|DELETE).*\+.*['\"]", + "severity": "critical", + "message": "Potential SQL injection (string concatenation in query)" + } +] + + +def run_git_command(cmd: List[str], cwd: Path) -> Tuple[bool, str]: + """Run a git command and return success status and output.""" + try: + result = subprocess.run( + cmd, + cwd=cwd, + capture_output=True, + text=True, + timeout=30 + ) + return result.returncode == 0, result.stdout.strip() + except subprocess.TimeoutExpired: + return False, "Command timed out" + except Exception as e: + return False, str(e) + + +def get_changed_files(repo_path: Path, base: str, head: str) -> List[Dict]: + """Get list of changed files between two refs.""" + success, output = run_git_command( + ["git", "diff", "--name-status", f"{base}...{head}"], + repo_path + ) + + if not success: + # Try without the triple dot (for uncommitted changes) + success, output = run_git_command( + ["git", "diff", "--name-status", base, head], + repo_path + ) + + if not success or not output: + # Fall back to staged changes + success, output = run_git_command( + ["git", "diff", "--name-status", "--cached"], + repo_path + ) + + files = [] + for line in output.split("\n"): + if not line.strip(): + continue + parts = line.split("\t") + if len(parts) >= 2: + status = parts[0][0] # First character of status + filepath = parts[-1] # Handle renames (R100\told\tnew) + status_map = { + "A": "added", + "M": "modified", + "D": "deleted", + "R": "renamed", + "C": "copied" + } + files.append({ + "path": filepath, + "status": status_map.get(status, "modified") + }) + + return files + + +def get_file_diff(repo_path: Path, filepath: str, base: str, head: str) -> str: + """Get diff content for a specific file.""" + success, output = 
run_git_command( + ["git", "diff", f"{base}...{head}", "--", filepath], + repo_path + ) + if not success: + success, output = run_git_command( + ["git", "diff", "--cached", "--", filepath], + repo_path + ) + return output if success else "" + + +def categorize_file(filepath: str) -> Tuple[str, int]: + """Categorize a file based on its path and name.""" + filepath_lower = filepath.lower() + + for category, info in FILE_CATEGORIES.items(): + for pattern in info["patterns"]: + if re.search(pattern, filepath_lower): + return category, info["weight"] + + return "medium", 2 # Default category + + +def analyze_diff_for_risks(diff_content: str, filepath: str) -> List[Dict]: + """Analyze diff content for risky patterns.""" + risks = [] + + # Only analyze added lines (starting with +) + added_lines = [ + line[1:] for line in diff_content.split("\n") + if line.startswith("+") and not line.startswith("+++") + ] + + content = "\n".join(added_lines) + + for risk in RISK_PATTERNS: + matches = re.findall(risk["pattern"], content, re.IGNORECASE) + if matches: + risks.append({ + "name": risk["name"], + "severity": risk["severity"], + "message": risk["message"], + "file": filepath, + "count": len(matches) + }) + + return risks + + +def count_changes(diff_content: str) -> Dict[str, int]: + """Count additions and deletions in diff.""" + additions = 0 + deletions = 0 + + for line in diff_content.split("\n"): + if line.startswith("+") and not line.startswith("+++"): + additions += 1 + elif line.startswith("-") and not line.startswith("---"): + deletions += 1 + + return {"additions": additions, "deletions": deletions} + + +def calculate_complexity_score(files: List[Dict], all_risks: List[Dict]) -> int: + """Calculate overall PR complexity score (1-10).""" + score = 0 + + # File count contribution (max 3 points) + file_count = len(files) + if file_count > 20: + score += 3 + elif file_count > 10: + score += 2 + elif file_count > 5: + score += 1 + + # Total changes contribution (max 3 
points) + total_changes = sum(f.get("additions", 0) + f.get("deletions", 0) for f in files) + if total_changes > 500: + score += 3 + elif total_changes > 200: + score += 2 + elif total_changes > 50: + score += 1 + + # Risk severity contribution (max 4 points) + critical_risks = sum(1 for r in all_risks if r["severity"] == "critical") + high_risks = sum(1 for r in all_risks if r["severity"] == "high") + + score += min(2, critical_risks) + score += min(2, high_risks) + + return min(10, max(1, score)) + + +def analyze_commit_messages(repo_path: Path, base: str, head: str) -> Dict: + """Analyze commit messages in the PR.""" + success, output = run_git_command( + ["git", "log", "--oneline", f"{base}...{head}"], + repo_path + ) + + if not success or not output: + return {"commits": 0, "issues": []} + + commits = output.strip().split("\n") + issues = [] + + for commit in commits: + if len(commit) < 10: + continue + + # Check for conventional commit format + message = commit[8:] if len(commit) > 8 else commit # Skip hash + + if not re.match(r"^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\(.+\))?:", message): + issues.append({ + "commit": commit[:7], + "issue": "Does not follow conventional commit format" + }) + + if len(message) > 72: + issues.append({ + "commit": commit[:7], + "issue": "Commit message exceeds 72 characters" + }) + + return { + "commits": len(commits), + "issues": issues + } + + +def analyze_pr( + repo_path: Path, + base: str = "main", + head: str = "HEAD" +) -> Dict: + """Perform complete PR analysis.""" + # Get changed files + changed_files = get_changed_files(repo_path, base, head) + + if not changed_files: + return { + "status": "no_changes", + "message": "No changes detected between branches" + } + + # Analyze each file + all_risks = [] + file_analyses = [] + + for file_info in changed_files: + filepath = file_info["path"] + category, weight = categorize_file(filepath) + + # Get diff for the file + diff = get_file_diff(repo_path, 
filepath, base, head) + changes = count_changes(diff) + risks = analyze_diff_for_risks(diff, filepath) + + all_risks.extend(risks) + + file_analyses.append({ + "path": filepath, + "status": file_info["status"], + "category": category, + "priority_weight": weight, + "additions": changes["additions"], + "deletions": changes["deletions"], + "risks": risks + }) + + # Sort by priority (highest first) + file_analyses.sort(key=lambda x: (-x["priority_weight"], x["path"])) + + # Analyze commits + commit_analysis = analyze_commit_messages(repo_path, base, head) + + # Calculate metrics + complexity = calculate_complexity_score(file_analyses, all_risks) + + total_additions = sum(f["additions"] for f in file_analyses) + total_deletions = sum(f["deletions"] for f in file_analyses) + + return { + "status": "analyzed", + "summary": { + "files_changed": len(file_analyses), + "total_additions": total_additions, + "total_deletions": total_deletions, + "complexity_score": complexity, + "complexity_label": get_complexity_label(complexity), + "commits": commit_analysis["commits"] + }, + "risks": { + "critical": [r for r in all_risks if r["severity"] == "critical"], + "high": [r for r in all_risks if r["severity"] == "high"], + "medium": [r for r in all_risks if r["severity"] == "medium"], + "low": [r for r in all_risks if r["severity"] == "low"] + }, + "files": file_analyses, + "commit_issues": commit_analysis["issues"], + "review_order": [f["path"] for f in file_analyses[:10]] # Top 10 priority files + } + + +def get_complexity_label(score: int) -> str: + """Get human-readable complexity label.""" + if score <= 2: + return "Simple" + elif score <= 4: + return "Moderate" + elif score <= 6: + return "Complex" + elif score <= 8: + return "Very Complex" + else: + return "Critical" + + +def print_report(analysis: Dict) -> None: + """Print human-readable analysis report.""" + if analysis["status"] == "no_changes": + print("No changes detected.") + return + + summary = analysis["summary"] + 
risks = analysis["risks"] + + print("=" * 60) + print("PR ANALYSIS REPORT") + print("=" * 60) + + print(f"\nComplexity: {summary['complexity_score']}/10 ({summary['complexity_label']})") + print(f"Files Changed: {summary['files_changed']}") + print(f"Lines: +{summary['total_additions']} / -{summary['total_deletions']}") + print(f"Commits: {summary['commits']}") + + # Risk summary + print("\n--- RISK SUMMARY ---") + print(f"Critical: {len(risks['critical'])}") + print(f"High: {len(risks['high'])}") + print(f"Medium: {len(risks['medium'])}") + print(f"Low: {len(risks['low'])}") + + # Critical and high risks details + if risks["critical"]: + print("\n--- CRITICAL RISKS ---") + for risk in risks["critical"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + if risks["high"]: + print("\n--- HIGH RISKS ---") + for risk in risks["high"]: + print(f" [{risk['file']}] {risk['message']} (x{risk['count']})") + + # Commit message issues + if analysis["commit_issues"]: + print("\n--- COMMIT MESSAGE ISSUES ---") + for issue in analysis["commit_issues"][:5]: + print(f" {issue['commit']}: {issue['issue']}") + + # Review order + print("\n--- SUGGESTED REVIEW ORDER ---") + for i, filepath in enumerate(analysis["review_order"], 1): + file_info = next(f for f in analysis["files"] if f["path"] == filepath) + print(f" {i}. 
[{file_info['category'].upper()}] {filepath}") + + print("\n" + "=" * 60) + + +def main(): + parser = argparse.ArgumentParser( + description="Analyze pull request for review complexity and risks" + ) + parser.add_argument( + "repo_path", + nargs="?", + default=".", + help="Path to git repository (default: current directory)" + ) + parser.add_argument( + "--base", "-b", + default="main", + help="Base branch for comparison (default: main)" + ) + parser.add_argument( + "--head", + default="HEAD", + help="Head branch/commit for comparison (default: HEAD)" + ) + parser.add_argument( + "--json", + action="store_true", + help="Output in JSON format" + ) + parser.add_argument( + "--output", "-o", + help="Write output to file" + ) + + args = parser.parse_args() + + repo_path = Path(args.repo_path).resolve() + + if not (repo_path / ".git").exists(): + print(f"Error: {repo_path} is not a git repository", file=sys.stderr) + sys.exit(1) + + analysis = analyze_pr(repo_path, args.base, args.head) + + if args.json: + output = json.dumps(analysis, indent=2) + if args.output: + with open(args.output, "w") as f: + f.write(output) + print(f"Results written to {args.output}") + else: + print(output) + else: + print_report(analysis) + + +if __name__ == "__main__": + main() diff --git a/.windsurf/skills/code-reviewer/scripts/review_report_generator.py b/.windsurf/skills/code-reviewer/scripts/review_report_generator.py new file mode 100644 index 00000000..13f8baaa --- /dev/null +++ b/.windsurf/skills/code-reviewer/scripts/review_report_generator.py @@ -0,0 +1,505 @@ +#!/usr/bin/env python3 +""" +Review Report Generator + +Generates comprehensive code review reports by combining PR analysis +and code quality findings into structured, actionable reports. + +Usage: + python .windsurf/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo + python .windsurf/skills/code-reviewer/scripts/review_report_generator.py . 
--pr-analysis pr_results.json --quality-analysis quality_results.json
+    python .windsurf/skills/code-reviewer/scripts/review_report_generator.py /path/to/repo --format markdown --output review.md
+"""
+
+import argparse
+import json
+import subprocess
+import sys
+from datetime import datetime
+from pathlib import Path
+from typing import Dict, List, Optional, Tuple
+
+
+# Severity weights for prioritization
+SEVERITY_WEIGHTS = {
+    "critical": 100,
+    "high": 75,
+    "medium": 50,
+    "low": 25,
+    "info": 10
+}
+
+# Review verdict thresholds
+VERDICT_THRESHOLDS = {
+    "approve": {"max_critical": 0, "max_high": 0, "max_score": 100},
+    "approve_with_suggestions": {"max_critical": 0, "max_high": 2, "max_score": 85},
+    "request_changes": {"max_critical": 0, "max_high": 5, "max_score": 70},
+    "block": {"max_critical": float("inf"), "max_high": float("inf"), "max_score": 0}
+}
+
+
+def load_json_file(filepath: str) -> Optional[Dict]:
+    """Load JSON file if it exists."""
+    try:
+        with open(filepath, "r") as f:
+            return json.load(f)
+    except (FileNotFoundError, json.JSONDecodeError):
+        return None
+
+
+def run_pr_analyzer(repo_path: Path) -> Dict:
+    """Run pr_analyzer.py (which lives next to this script) and return results."""
+    script_path = Path(__file__).parent / "pr_analyzer.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "pr_analyzer.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=120
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def run_quality_checker(repo_path: Path) -> Dict:
+    """Run code_quality_checker.py (which lives next to this script) and return results."""
+    script_path = Path(__file__).parent / "code_quality_checker.py"
+    if not script_path.exists():
+        return {"status": "error", "message": "code_quality_checker.py not found"}
+
+    try:
+        result = subprocess.run(
+            [sys.executable, str(script_path), str(repo_path), "--json"],
+            capture_output=True,
+            text=True,
+            timeout=300
+        )
+        if result.returncode == 0:
+            return json.loads(result.stdout)
+        return {"status": "error", "message": result.stderr}
+    except Exception as e:
+        return {"status": "error", "message": str(e)}
+
+
+def calculate_review_score(pr_analysis: Dict, quality_analysis: Dict) -> int:
+    """Calculate overall review score (0-100)."""
+    score = 100
+
+    # Deduct for PR risks
+    if "risks" in pr_analysis:
+        risks = pr_analysis["risks"]
+        score -= len(risks.get("critical", [])) * 15
+        score -= len(risks.get("high", [])) * 10
+        score -= len(risks.get("medium", [])) * 5
+        score -= len(risks.get("low", [])) * 2
+
+    # Deduct for code quality issues. The quality checker reports per-file
+    # "smells" lists under "files" (see analyze_directory) rather than a
+    # flat "issues" key, so walk the file results.
+    deductions = {"critical": 12, "high": 8, "medium": 4, "low": 1}
+    for file_result in quality_analysis.get("files", []):
+        for smell in file_result.get("smells", []):
+            score -= deductions.get(smell.get("severity", "medium"), 4)
+
+    # Deduct for complexity
+    if "summary" in pr_analysis:
+        complexity = pr_analysis["summary"].get("complexity_score", 0)
+        if complexity > 7:
+            score -= 10
+        elif complexity > 5:
+            score -= 5
+
+    return max(0, min(100, score))
+
+
+def determine_verdict(score: int, critical_count: int, high_count: int) -> Tuple[str, str]:
+    """Determine review verdict based on score and issue counts."""
+    if critical_count > 0:
+        return "block", "Critical issues must be resolved before merge"
+
+    if score >= 90 and high_count == 0:
+        return "approve", "Code meets quality standards"
+
+    if score >= 75 and high_count <= 
2:
+        return "approve_with_suggestions", "Minor improvements recommended"
+
+    if score >= 50:
+        return "request_changes", "Several issues need to be addressed"
+
+    return "block", "Significant issues prevent approval"
+
+
+def generate_findings_list(pr_analysis: Dict, quality_analysis: Dict) -> List[Dict]:
+    """Combine and prioritize all findings."""
+    findings = []
+
+    # Add PR risk findings
+    if "risks" in pr_analysis:
+        for severity, items in pr_analysis["risks"].items():
+            for item in items:
+                findings.append({
+                    "source": "pr_analysis",
+                    "severity": severity,
+                    "category": item.get("name", "unknown"),
+                    "message": item.get("message", ""),
+                    "file": item.get("file", ""),
+                    "count": item.get("count", 1)
+                })
+
+    # Add code quality findings. The quality checker nests its findings
+    # under each entry of "files" as "smells" and "solid_violations";
+    # there is no flat "issues" key in its output.
+    for file_result in quality_analysis.get("files", []):
+        for smell in file_result.get("smells", []):
+            findings.append({
+                "source": "quality_analysis",
+                "severity": smell.get("severity", "medium"),
+                "category": smell.get("type", "unknown"),
+                "message": smell.get("message", ""),
+                "file": file_result.get("file", ""),
+                "location": smell.get("location", "")
+            })
+        for violation in file_result.get("solid_violations", []):
+            findings.append({
+                "source": "quality_analysis",
+                "severity": "medium",
+                "category": violation.get("principle", "solid"),
+                "message": violation.get("message", ""),
+                "file": file_result.get("file", "")
+            })
+
+    # Sort by severity weight
+    findings.sort(
+        key=lambda x: -SEVERITY_WEIGHTS.get(x["severity"], 0)
+    )
+
+    return findings
+
+
+def generate_action_items(findings: List[Dict]) -> List[Dict]:
+    """Generate prioritized action items from findings."""
+    action_items = []
+    seen_categories = set()
+
+    for finding in findings:
+        category = finding["category"]
+        severity = finding["severity"]
+
+        # Group similar issues
+        if category in seen_categories and severity not in ["critical", "high"]:
+            continue
+
+        action = {
+            "priority": "P0" if severity == "critical" else "P1" if severity == "high" else "P2",
+            "action": get_action_for_category(category, finding),
+            "severity": severity,
+            "files_affected": [finding["file"]] if finding.get("file") else []
+        }
+        action_items.append(action)
+        seen_categories.add(category)
+
+    return action_items[:15]  # Top 15 actions
+
+
+def get_action_for_category(category: str, 
finding: Dict) -> str: + """Get actionable recommendation for issue category.""" + actions = { + "hardcoded_secrets": "Remove hardcoded credentials and use environment variables or a secrets manager", + "sql_concatenation": "Use parameterized queries to prevent SQL injection", + "debugger": "Remove debugger statements before merging", + "console_log": "Remove or replace console statements with proper logging", + "todo_fixme": "Address TODO/FIXME comments or create tracking issues", + "disable_eslint": "Address the underlying issue instead of disabling lint rules", + "any_type": "Replace 'any' types with proper type definitions", + "long_function": "Break down function into smaller, focused units", + "god_class": "Split class into smaller, single-responsibility classes", + "too_many_params": "Use parameter objects or builder pattern", + "deep_nesting": "Refactor using early returns, guard clauses, or extraction", + "high_complexity": "Reduce cyclomatic complexity through refactoring", + "missing_error_handling": "Add proper error handling and recovery logic", + "duplicate_code": "Extract duplicate code into shared functions", + "magic_numbers": "Replace magic numbers with named constants", + "large_file": "Consider splitting into multiple smaller modules" + } + return actions.get(category, f"Review and address: {finding.get('message', category)}") + + +def format_markdown_report(report: Dict) -> str: + """Generate markdown-formatted report.""" + lines = [] + + # Header + lines.append("# Code Review Report") + lines.append("") + lines.append(f"**Generated:** {report['metadata']['generated_at']}") + lines.append(f"**Repository:** {report['metadata']['repository']}") + lines.append("") + + # Executive Summary + lines.append("## Executive Summary") + lines.append("") + summary = report["summary"] + verdict = summary["verdict"] + verdict_emoji = { + "approve": "✅", + "approve_with_suggestions": "✅", + "request_changes": "⚠️", + "block": "❌" + }.get(verdict, "❓") + + 
lines.append(f"**Verdict:** {verdict_emoji} {verdict.upper().replace('_', ' ')}") + lines.append(f"**Score:** {summary['score']}/100") + lines.append(f"**Rationale:** {summary['rationale']}") + lines.append("") + + # Issue Counts + lines.append("### Issue Summary") + lines.append("") + lines.append("| Severity | Count |") + lines.append("|----------|-------|") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f"| {severity.capitalize()} | {count} |") + lines.append("") + + # PR Statistics (if available) + if "pr_summary" in report: + pr = report["pr_summary"] + lines.append("### Change Statistics") + lines.append("") + lines.append(f"- **Files Changed:** {pr.get('files_changed', 'N/A')}") + lines.append(f"- **Lines Added:** +{pr.get('total_additions', 0)}") + lines.append(f"- **Lines Removed:** -{pr.get('total_deletions', 0)}") + lines.append(f"- **Complexity:** {pr.get('complexity_label', 'N/A')}") + lines.append("") + + # Action Items + if report.get("action_items"): + lines.append("## Action Items") + lines.append("") + for i, item in enumerate(report["action_items"], 1): + priority = item["priority"] + emoji = "🔴" if priority == "P0" else "🟠" if priority == "P1" else "🟡" + lines.append(f"{i}. 
{emoji} **[{priority}]** {item['action']}") + if item.get("files_affected"): + lines.append(f" - Files: {', '.join(item['files_affected'][:3])}") + lines.append("") + + # Critical Findings + critical_findings = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical_findings: + lines.append("## Critical Issues (Must Fix)") + lines.append("") + for finding in critical_findings: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # High Priority Findings + high_findings = [f for f in report.get("findings", []) if f["severity"] == "high"] + if high_findings: + lines.append("## High Priority Issues") + lines.append("") + for finding in high_findings[:10]: + lines.append(f"- **{finding['category']}** in `{finding.get('file', 'unknown')}`") + lines.append(f" - {finding['message']}") + lines.append("") + + # Review Order (if available) + if "review_order" in report: + lines.append("## Suggested Review Order") + lines.append("") + for i, filepath in enumerate(report["review_order"][:10], 1): + lines.append(f"{i}. 
`{filepath}`") + lines.append("") + + # Footer + lines.append("---") + lines.append("*Generated by Code Reviewer*") + + return "\n".join(lines) + + +def format_text_report(report: Dict) -> str: + """Generate plain text report.""" + lines = [] + + lines.append("=" * 60) + lines.append("CODE REVIEW REPORT") + lines.append("=" * 60) + lines.append("") + lines.append(f"Generated: {report['metadata']['generated_at']}") + lines.append(f"Repository: {report['metadata']['repository']}") + lines.append("") + + summary = report["summary"] + verdict = summary["verdict"].upper().replace("_", " ") + lines.append(f"VERDICT: {verdict}") + lines.append(f"SCORE: {summary['score']}/100") + lines.append(f"RATIONALE: {summary['rationale']}") + lines.append("") + + lines.append("--- ISSUE SUMMARY ---") + for severity in ["critical", "high", "medium", "low"]: + count = summary["issue_counts"].get(severity, 0) + lines.append(f" {severity.capitalize()}: {count}") + lines.append("") + + if report.get("action_items"): + lines.append("--- ACTION ITEMS ---") + for i, item in enumerate(report["action_items"][:10], 1): + lines.append(f" {i}. 
[{item['priority']}] {item['action']}") + lines.append("") + + critical = [f for f in report.get("findings", []) if f["severity"] == "critical"] + if critical: + lines.append("--- CRITICAL ISSUES ---") + for f in critical: + lines.append(f" [{f.get('file', 'unknown')}] {f['message']}") + lines.append("") + + lines.append("=" * 60) + + return "\n".join(lines) + + +def generate_report( + repo_path: Path, + pr_analysis: Optional[Dict] = None, + quality_analysis: Optional[Dict] = None +) -> Dict: + """Generate comprehensive review report.""" + # Run analyses if not provided + if pr_analysis is None: + pr_analysis = run_pr_analyzer(repo_path) + + if quality_analysis is None: + quality_analysis = run_quality_checker(repo_path) + + # Generate findings + findings = generate_findings_list(pr_analysis, quality_analysis) + + # Count issues by severity + issue_counts = { + "critical": len([f for f in findings if f["severity"] == "critical"]), + "high": len([f for f in findings if f["severity"] == "high"]), + "medium": len([f for f in findings if f["severity"] == "medium"]), + "low": len([f for f in findings if f["severity"] == "low"]) + } + + # Calculate score and verdict + score = calculate_review_score(pr_analysis, quality_analysis) + verdict, rationale = determine_verdict( + score, + issue_counts["critical"], + issue_counts["high"] + ) + + # Generate action items + action_items = generate_action_items(findings) + + # Build report + report = { + "metadata": { + "generated_at": datetime.now().isoformat(), + "repository": str(repo_path), + "version": "1.0.0" + }, + "summary": { + "score": score, + "verdict": verdict, + "rationale": rationale, + "issue_counts": issue_counts + }, + "findings": findings, + "action_items": action_items + } + + # Add PR summary if available + if pr_analysis.get("status") == "analyzed": + report["pr_summary"] = pr_analysis.get("summary", {}) + report["review_order"] = pr_analysis.get("review_order", []) + + # Add quality summary if available + if 
"error" not in quality_analysis and "files" in quality_analysis:
+        # The quality checker has no "status" or "summary" keys; build the
+        # summary from its top-level directory-report fields.
+        report["quality_summary"] = {
+            "files_analyzed": quality_analysis.get("files_analyzed"),
+            "average_score": quality_analysis.get("average_score"),
+            "overall_grade": quality_analysis.get("overall_grade"),
+            "total_code_smells": quality_analysis.get("total_code_smells"),
+            "total_solid_violations": quality_analysis.get("total_solid_violations")
+        }
+
+    return report
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Generate comprehensive code review reports"
+    )
+    parser.add_argument(
+        "repo_path",
+        nargs="?",
+        default=".",
+        help="Path to repository (default: current directory)"
+    )
+    parser.add_argument(
+        "--pr-analysis",
+        help="Path to pre-computed PR analysis JSON"
+    )
+    parser.add_argument(
+        "--quality-analysis",
+        help="Path to pre-computed quality analysis JSON"
+    )
+    parser.add_argument(
+        "--format", "-f",
+        choices=["text", "markdown", "json"],
+        default="text",
+        help="Output format (default: text)"
+    )
+    parser.add_argument(
+        "--output", "-o",
+        help="Write output to file"
+    )
+    parser.add_argument(
+        "--json",
+        action="store_true",
+        help="Output as JSON (shortcut for --format json)"
+    )
+
+    args = parser.parse_args()
+
+    repo_path = Path(args.repo_path).resolve()
+    if not repo_path.exists():
+        print(f"Error: Path does not exist: {repo_path}", file=sys.stderr)
+        sys.exit(1)
+
+    # Load pre-computed analyses if provided
+    pr_analysis = None
+    quality_analysis = None
+
+    if args.pr_analysis:
+        pr_analysis = load_json_file(args.pr_analysis)
+        if not pr_analysis:
+            print(f"Warning: Could not load PR analysis from {args.pr_analysis}")
+
+    if args.quality_analysis:
+        quality_analysis = load_json_file(args.quality_analysis)
+        if not quality_analysis:
+            print(f"Warning: Could not load quality analysis from {args.quality_analysis}")
+
+    # Generate report
+    report = generate_report(repo_path, pr_analysis, quality_analysis)
+
+    # Format output
+    output_format = "json" if args.json else args.format
+
+    if output_format == "json":
+        output = json.dumps(report, indent=2)
+    elif output_format == "markdown":
+        output = format_markdown_report(report)
+    else:
+        output = format_text_report(report)
+
+    # Write or print output
+    if args.output:
+        with open(args.output, "w") as f:
+
f.write(output) + print(f"Report written to {args.output}") + else: + print(output) + + +if __name__ == "__main__": + main() diff --git a/website/astro.config.mjs b/website/astro.config.mjs index 5d848470..8503d8ea 100644 --- a/website/astro.config.mjs +++ b/website/astro.config.mjs @@ -22,7 +22,11 @@ export default defineConfig({ editLink: { baseUrl: 'https://github.com/sampleXbro/agentsmesh/edit/master/website/', }, - customCss: ['./src/styles/custom.css'], + customCss: [ + './src/styles/custom.css', + './src/styles/catalog-explorer.css', + './src/styles/catalog-explorer-table.css', + ], head: [ { tag: 'meta', diff --git a/website/src/components/catalog-explorer/CatalogExplorer.astro b/website/src/components/catalog-explorer/CatalogExplorer.astro new file mode 100644 index 00000000..e7203008 --- /dev/null +++ b/website/src/components/catalog-explorer/CatalogExplorer.astro @@ -0,0 +1,112 @@ +--- +import { skills } from '../../content/data/skills'; +import { agents } from '../../content/data/agents'; +import { commands } from '../../content/data/commands'; +import { toCatalogRows } from '../../lib/catalog-rows'; + +const payload = { + skills: toCatalogRows(skills), + agents: toCatalogRows(agents), + commands: toCatalogRows(commands), +}; +const catalogJson = JSON.stringify(payload); +--- + +
+<!-- [Template markup lost in extraction. It defines the install-command copy
+     field ([data-am-copy-input], [data-am-copy-btn], [data-am-copy-label]),
+     the tab buttons ([data-am-tab]), the search input ([data-am-search]),
+     the table container ([data-am-table-wrap]), and an aria-live status
+     region ([data-am-live]).] -->
+
+
diff --git a/website/src/components/catalog-explorer/catalog-dom.ts b/website/src/components/catalog-explorer/catalog-dom.ts
new file mode 100644
index 00000000..a9302ff1
--- /dev/null
+++ b/website/src/components/catalog-explorer/catalog-dom.ts
@@ -0,0 +1,5 @@
+export function escHtml(s: string): string {
+  const d = document.createElement('div');
+  d.textContent = s;
+  return d.innerHTML;
+}
diff --git a/website/src/components/catalog-explorer/catalog-explorer-client.ts b/website/src/components/catalog-explorer/catalog-explorer-client.ts
new file mode 100644
index 00000000..c2175dc3
--- /dev/null
+++ b/website/src/components/catalog-explorer/catalog-explorer-client.ts
@@ -0,0 +1,155 @@
+import type { CatalogRow } from '../../lib/catalog-rows';
+import { mountVirtualBrowse, type VirtualBrowseHandle } from './catalog-virtual-browse';
+
+export type CatalogPayload = {
+  skills: CatalogRow[];
+  agents: CatalogRow[];
+  commands: CatalogRow[];
+};
+
+const DEFAULT_CMD = 'pnpm install agentsmesh';
+const TARGET = 'claude-code';
+const COPY_UI_MS = 2200;
+const LIVE_ACK_MS = 2500;
+const SEARCH_DEBOUNCE_MS = 100;
+type TabId = 'skills' | 'agents' | 'commands';
+
+const COPY_LABEL_DEFAULT = 'Copy';
+const COPY_LABEL_DONE = 'Copied ✓';
+
+const EMPTY_FILTER_MSG = 'No matches — try different words or another tab.';
+
+function shellSingleQuoted(url: string): string {
+  return url.replace(/'/g, `'\\''`);
+}
+
+function installCommand(link: string, asKind: TabId): string {
+  return `agentsmesh install '${shellSingleQuoted(link)}' --target ${TARGET} --as ${asKind}`;
+}
+
+function tabRows(data: CatalogPayload, tab: TabId): CatalogRow[] {
+  return data[tab];
+}
+
+function matchesQuery(row: CatalogRow, q: string): boolean {
+  const b = (s: string): boolean => s.toLowerCase().includes(q);
+  return b(row.t) || b(row.d) || b(row.l) || b(row.k) || b(row.i);
+}
+
+export function mountCatalogExplorer(root: HTMLElement, data: CatalogPayload): void {
+  const copyInput =
+    root.querySelector<HTMLInputElement>('[data-am-copy-input]');
+  const copyBtn = root.querySelector<HTMLButtonElement>('[data-am-copy-btn]');
+  const copyLabel = root.querySelector<HTMLElement>('[data-am-copy-label]');
+  const search = root.querySelector<HTMLInputElement>('[data-am-search]');
+  const tableWrap = root.querySelector<HTMLElement>('[data-am-table-wrap]');
+  const live = root.querySelector<HTMLElement>('[data-am-live]');
+  const tabButtons = root.querySelectorAll<HTMLButtonElement>('[data-am-tab]');
+
+  if (!copyInput || !copyBtn || !copyLabel || !search || !tableWrap || !live) return;
+
+  let tab: TabId = 'skills';
+  let liveTimer: ReturnType<typeof setTimeout> | undefined;
+  let copyUiTimer: ReturnType<typeof setTimeout> | undefined;
+  let searchTimer: ReturnType<typeof setTimeout> | undefined;
+  let virtualBrowse: VirtualBrowseHandle | null = null;
+
+  function setLiveMessage(msg: string): void {
+    if (liveTimer !== undefined) clearTimeout(liveTimer);
+    live.textContent = msg;
+    if (msg) {
+      liveTimer = setTimeout(() => {
+        live.textContent = '';
+        liveTimer = undefined;
+      }, LIVE_ACK_MS);
+    }
+  }
+
+  function showCopyButtonDone(): void {
+    if (copyUiTimer !== undefined) clearTimeout(copyUiTimer);
+    copyBtn.classList.add('am-catalog-copy-btn--copied');
+    copyLabel.textContent = COPY_LABEL_DONE;
+    copyUiTimer = setTimeout(() => {
+      copyBtn.classList.remove('am-catalog-copy-btn--copied');
+      copyLabel.textContent = COPY_LABEL_DEFAULT;
+      copyUiTimer = undefined;
+    }, COPY_UI_MS);
+  }
+
+  function applyPick(row: CatalogRow): void {
+    copyInput.value = installCommand(row.l, tab);
+  }
+
+  function clearSearchDebounce(): void {
+    if (searchTimer !== undefined) {
+      clearTimeout(searchTimer);
+      searchTimer = undefined;
+    }
+  }
+
+  function applyTable(): void {
+    const q = search.value.trim().toLowerCase();
+    const all = tabRows(data, tab);
+    const filtered = q.length > 0 ? all.filter((x) => matchesQuery(x, q)) : all;
+    const browseOpts =
+      q.length > 0 && filtered.length === 0 ?
+      { emptyMessage: EMPTY_FILTER_MSG } : undefined;
+
+    tableWrap.hidden = false;
+    if (!virtualBrowse) {
+      virtualBrowse = mountVirtualBrowse(tableWrap, filtered, applyPick, browseOpts);
+    } else {
+      virtualBrowse.setRows(filtered, browseOpts);
+    }
+  }
+
+  function scheduleSearchTable(): void {
+    clearSearchDebounce();
+    searchTimer = setTimeout(() => {
+      searchTimer = undefined;
+      applyTable();
+    }, SEARCH_DEBOUNCE_MS);
+  }
+
+  function setTab(next: TabId): void {
+    tab = next;
+    copyInput.value = DEFAULT_CMD;
+    search.value = '';
+    clearSearchDebounce();
+    setLiveMessage('');
+    if (copyUiTimer !== undefined) {
+      clearTimeout(copyUiTimer);
+      copyUiTimer = undefined;
+    }
+    copyBtn.classList.remove('am-catalog-copy-btn--copied');
+    copyLabel.textContent = COPY_LABEL_DEFAULT;
+    tabButtons.forEach((btn) => {
+      const id = btn.getAttribute('data-am-tab') as TabId | null;
+      btn.setAttribute('aria-selected', String(id === next));
+    });
+    applyTable();
+  }
+
+  copyBtn.addEventListener('click', () => {
+    void navigator.clipboard.writeText(copyInput.value).then(
+      () => {
+        showCopyButtonDone();
+        setLiveMessage('Copied to clipboard');
+      },
+      () => {
+        copyInput.select();
+        document.execCommand('copy');
+        showCopyButtonDone();
+        setLiveMessage('Copied to clipboard');
+      },
+    );
+  });
+
+  search.addEventListener('input', scheduleSearchTable);
+  tabButtons.forEach((btn) => {
+    btn.addEventListener('click', () => {
+      const id = btn.getAttribute('data-am-tab') as TabId | null;
+      if (id === 'skills' || id === 'agents' || id === 'commands') setTab(id);
+    });
+  });
+
+  applyTable();
+}
diff --git a/website/src/components/catalog-explorer/catalog-virtual-browse-rows.ts b/website/src/components/catalog-explorer/catalog-virtual-browse-rows.ts
new file mode 100644
index 00000000..5bbfbf74
--- /dev/null
+++ b/website/src/components/catalog-explorer/catalog-virtual-browse-rows.ts
@@ -0,0 +1,51 @@
+import type { CatalogRow } from '../../lib/catalog-rows';
+import { escHtml } from
+  './catalog-dom';
+
+export function colgroupHtml(): string {
+  // (original <col> markup lost in extraction; three columns: Title, Description, Source)
+  return `<colgroup><col /><col /><col /></colgroup>`;
+}
+
+export function spacerRow(heightPx: number): HTMLTableRowElement {
+  const tr = document.createElement('tr');
+  tr.className = 'am-catalog-vspacer';
+  tr.setAttribute('aria-hidden', 'true');
+  const td = document.createElement('td');
+  td.colSpan = 3;
+  td.className = 'am-catalog-vspacer-cell';
+  td.style.height = `${heightPx}px`;
+  tr.appendChild(td);
+  return tr;
+}
+
+export function buildDataRow(
+  r: CatalogRow,
+  index: number,
+  selectedId: string,
+): HTMLTableRowElement {
+  const tr = document.createElement('tr');
+  tr.className = 'am-catalog-tr';
+  tr.tabIndex = 0;
+  tr.setAttribute('role', 'row');
+  tr.dataset.amRow = String(index);
+  if (r.i === selectedId) tr.classList.add('am-catalog-tr--selected');
+  // (cell markup reconstructed; original class/data attributes lost in extraction)
+  tr.innerHTML = `<td data-label="Title">${escHtml(r.t)}</td>
+    <td data-label="Description">${escHtml(r.d)}</td>`;
+  const tdLink = document.createElement('td');
+  tdLink.dataset.label = 'Source';
+  tdLink.className = 'am-catalog-td-link';
+  const a = document.createElement('a');
+  a.href = r.l;
+  a.target = '_blank';
+  a.rel = 'noopener noreferrer';
+  a.className = 'am-catalog-src-link';
+  a.textContent = 'Link';
+  a.title = r.l;
+  tdLink.appendChild(a);
+  tr.appendChild(tdLink);
+  a.addEventListener('click', (e) => e.stopPropagation());
+  return tr;
+}
diff --git a/website/src/components/catalog-explorer/catalog-virtual-browse.ts b/website/src/components/catalog-explorer/catalog-virtual-browse.ts
new file mode 100644
index 00000000..3a887a52
--- /dev/null
+++ b/website/src/components/catalog-explorer/catalog-virtual-browse.ts
@@ -0,0 +1,168 @@
+import type { CatalogRow } from '../../lib/catalog-rows';
+import { buildDataRow, colgroupHtml, spacerRow } from './catalog-virtual-browse-rows';
+
+/** Must match `.am-catalog-tr` fixed row height in CSS */
+export const VIRTUAL_ROW_HEIGHT_PX = 52;
+const OVERSCAN = 5;
+const VISIBLE_BODY_ROWS = 3;
+const DEFAULT_EMPTY = 'No items in this tab.';
+
+export type VirtualBrowseHandle = {
+  teardown: () => void;
+  setRows: (rows: readonly CatalogRow[], options?: VirtualBrowseOptions) => void;
+};
+
+export type VirtualBrowseOptions = {
+  emptyMessage?: string;
+};
+
+export function mountVirtualBrowse(
+  wrap: HTMLElement,
+  rows: readonly CatalogRow[],
+  onPick: (row: CatalogRow) => void,
+  options?: VirtualBrowseOptions,
+): VirtualBrowseHandle {
+  let rowRef: readonly CatalogRow[] = rows;
+  let emptyMsgRef = options?.emptyMessage ?? DEFAULT_EMPTY;
+
+  wrap.innerHTML = '';
+  const scroll = document.createElement('div');
+  scroll.className = 'am-catalog-table-scroll am-catalog-table-scroll--virtual';
+  scroll.setAttribute('data-am-table-scroll', '');
+  const table = document.createElement('table');
+  table.className = 'am-catalog-table am-catalog-table--body';
+  // (header markup reconstructed; original attributes lost in extraction)
+  table.innerHTML = `${colgroupHtml()}
+    <thead><tr>
+      <th scope="col">Title</th>
+      <th scope="col">Description</th>
+      <th scope="col">Source</th>
+    </tr></thead>`;
+  const thead = table.querySelector('thead') as HTMLTableSectionElement;
+  const tbody = document.createElement('tbody');
+  table.appendChild(tbody);
+  scroll.appendChild(table);
+  wrap.appendChild(scroll);
+
+  function syncScrollMaxHeight(): void {
+    const h = thead.offsetHeight;
+    scroll.style.maxHeight = `${h + VISIBLE_BODY_ROWS * VIRTUAL_ROW_HEIGHT_PX}px`;
+  }
+
+  let selectedId = '';
+  let raf = 0;
+  let ro: ResizeObserver | undefined;
+
+  function paint(): void {
+    const rowsNow = rowRef;
+    const theadH = thead.offsetHeight;
+    if (rowsNow.length === 0) {
+      tbody.replaceChildren();
+      const tr = document.createElement('tr');
+      const td = document.createElement('td');
+      td.colSpan = 3;
+      td.className = 'am-catalog-table-empty';
+      td.textContent = emptyMsgRef;
+      tr.appendChild(td);
+      tbody.appendChild(tr);
+      return;
+    }
+
+    const rh = VIRTUAL_ROW_HEIGHT_PX;
+    const st = scroll.scrollTop;
+    const ch = scroll.clientHeight;
+    const bodyTop = theadH;
+    const bodyEnd = bodyTop + rowsNow.length * rh;
+    const viewBot = st + ch;
+    const iTop = Math.max(st, bodyTop);
+    const iBot = Math.min(viewBot, bodyEnd);
+
+    let start = 0;
+    let end = rowsNow.length;
+    if (iBot > iTop) {
+      start = Math.max(0, Math.floor((iTop - bodyTop) / rh));
+      end = Math.min(rowsNow.length, Math.ceil((iBot - bodyTop) / rh));
+    } else {
+      start = 0;
+      end = 0;
+    }
+    start = Math.max(0, start - OVERSCAN);
+    end = Math.min(rowsNow.length, end + OVERSCAN);
+
+    const topPad = start * rh;
+    const botPad = (rowsNow.length - end) * rh;
+
+    const frag = document.createDocumentFragment();
+    if (topPad > 0) frag.appendChild(spacerRow(topPad));
+    for (let i = start; i < end; i++) {
+      frag.appendChild(buildDataRow(rowsNow[i], i, selectedId));
+    }
+    if (botPad > 0) frag.appendChild(spacerRow(botPad));
+    tbody.replaceChildren(frag);
+  }
+
+  function schedulePaint(): void {
+    if (raf) cancelAnimationFrame(raf);
+    raf = requestAnimationFrame(() => {
+      raf = 0;
+      paint();
+    });
+  }
+
+  function onTbodyClick(e: MouseEvent): void {
+    const tr = (e.target as HTMLElement).closest<HTMLTableRowElement>('tr.am-catalog-tr');
+    if (!tr?.dataset.amRow) return;
+    if ((e.target as HTMLElement).closest('a')) return;
+    const idx = Number(tr.dataset.amRow);
+    const row = rowRef[idx];
+    if (!row) return;
+    selectedId = row.i;
+    onPick(row);
+    schedulePaint();
+  }
+
+  function onTbodyKeydown(e: KeyboardEvent): void {
+    const tr = (e.target as HTMLElement).closest<HTMLTableRowElement>('tr.am-catalog-tr');
+    if (!tr?.dataset.amRow) return;
+    if (e.key !== 'Enter' && e.key !== ' ') return;
+    e.preventDefault();
+    const idx = Number(tr.dataset.amRow);
+    const row = rowRef[idx];
+    if (!row) return;
+    selectedId = row.i;
+    onPick(row);
+    schedulePaint();
+  }
+
+  tbody.addEventListener('click', onTbodyClick);
+  tbody.addEventListener('keydown', onTbodyKeydown);
+  scroll.addEventListener('scroll', schedulePaint, { passive: true });
+
+  ro = new ResizeObserver(() => {
+    syncScrollMaxHeight();
+    schedulePaint();
+  });
+  ro.observe(scroll);
+  ro.observe(thead);
+
+  requestAnimationFrame(() => {
+    syncScrollMaxHeight();
+    schedulePaint();
+  });
+
+  return {
+    teardown: (): void => {
+      if (raf)
+        cancelAnimationFrame(raf);
+      ro?.disconnect();
+      scroll.removeEventListener('scroll', schedulePaint);
+      tbody.removeEventListener('click', onTbodyClick);
+      tbody.removeEventListener('keydown', onTbodyKeydown);
+    },
+    setRows: (next: readonly CatalogRow[], opts?: VirtualBrowseOptions) => {
+      rowRef = next;
+      emptyMsgRef = opts?.emptyMessage ?? DEFAULT_EMPTY;
+      if (selectedId !== '' && !rowRef.some((r) => r.i === selectedId)) selectedId = '';
+      scroll.scrollTop = 0;
+      syncScrollMaxHeight();
+      schedulePaint();
+    },
+  };
+}
diff --git a/website/src/content/data/agents.ts b/website/src/content/data/agents.ts
new file mode 100644
index 00000000..36dd5143
--- /dev/null
+++ b/website/src/content/data/agents.ts
@@ -0,0 +1,1382 @@
+export type CatalogKind = 'agent';
+export interface CatalogItem {
+  id: string;
+  title: string;
+  description: string;
+  kind: CatalogKind;
+  link: string;
+}
+
+import { CATALOG_PAGE_SIZE } from './constants';
+export const agentsCatalogPages: CatalogItem[][] = [
+  // Page 1
+  [
+    {
+      id: 'agent_api-designer_526db71bd2',
+      title: 'api-designer',
+      description:
+        'REST and GraphQL API architect Agent category: 01-core-development. Source: VoltAgent awesome claude code subagents.',
+      kind: 'agent',
+      link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/01-core-development/api-designer.md',
+    },
+    {
+      id: 'agent_backend-developer_59a9f05e52',
+      title: 'backend-developer',
+      description:
+        'Server-side expert for scalable APIs Agent category: 01-core-development. Source: VoltAgent awesome claude code subagents.',
+      kind: 'agent',
+      link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/01-core-development/backend-developer.md',
+    },
+    {
+      id: 'agent_electron-pro_4679210594',
+      title: 'electron-pro',
+      description:
+        'Desktop application expert Agent category: 01-core-development. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/01-core-development/electron-pro.md', + }, + { + id: 'agent_frontend-developer_72ff52e3ec', + title: 'frontend-developer', + description: + 'UI/UX specialist for React, Vue, and Angular Agent category: 01-core-development. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/01-core-development/frontend-developer.md', + }, + { + id: 'agent_fullstack-developer_120eca79f7', + title: 'fullstack-developer', + description: + 'End-to-end feature development Agent category: 01-core-development. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/01-core-development/fullstack-developer.md', + }, + { + id: 'agent_typescript-pro_2488e03cb7', + title: 'typescript-pro', + description: + 'TypeScript specialist Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/typescript-pro.md', + }, + { + id: 'agent_react-specialist_97e834b8ec', + title: 'react-specialist', + description: + 'React 18+ modern patterns expert Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/react-specialist.md', + }, + { + id: 'agent_nextjs-developer_21c76f2b6b', + title: 'nextjs-developer', + description: + 'Next.js 14+ full-stack specialist Agent category: 02-language-specialists. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/nextjs-developer.md', + }, + { + id: 'agent_python-pro_ae00a0ba3a', + title: 'python-pro', + description: + 'Python ecosystem master Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/python-pro.md', + }, + { + id: 'agent_golang-pro_1f60bd41fe', + title: 'golang-pro', + description: + 'Go concurrency specialist Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/golang-pro.md', + }, + { + id: 'agent_rust-engineer_50077c3305', + title: 'rust-engineer', + description: + 'Systems programming expert Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/rust-engineer.md', + }, + { + id: 'agent_javascript-pro_86917e4885', + title: 'javascript-pro', + description: + 'JavaScript development expert Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/javascript-pro.md', + }, + { + id: 'agent_docker-expert_52e2c445e3', + title: 'docker-expert', + description: + 'Docker containerization and optimization expert Agent category: 03-infrastructure. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/docker-expert.md', + }, + { + id: 'agent_kubernetes-specialist_c2189c9df4', + title: 'kubernetes-specialist', + description: + 'Container orchestration master Agent category: 03-infrastructure. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/kubernetes-specialist.md', + }, + { + id: 'agent_devops-engineer_aaaee48bdf', + title: 'devops-engineer', + description: + 'CI/CD and automation expert Agent category: 03-infrastructure. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/devops-engineer.md', + }, + { + id: 'agent_sql-pro_e62f3fc64d', + title: 'sql-pro', + description: + 'Database query expert Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/sql-pro.md', + }, + { + id: 'agent_postgres-pro_43467205f7', + title: 'postgres-pro', + description: + 'PostgreSQL database expert Agent category: 05-data-ai. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/05-data-ai/postgres-pro.md', + }, + { + id: 'agent_graphql-architect_a9e1506687', + title: 'graphql-architect', + description: + 'GraphQL schema and federation expert Agent category: 01-core-development. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/01-core-development/graphql-architect.md', + }, + { + id: 'agent_microservices-architect_109c61d313', + title: 'microservices-architect', + description: + 'Distributed systems designer Agent category: 01-core-development. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/01-core-development/microservices-architect.md', + }, + { + id: 'agent_mobile-developer_ee9d32b7a4', + title: 'mobile-developer', + description: + 'Cross-platform mobile specialist Agent category: 01-core-development. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/01-core-development/mobile-developer.md', + }, + { + id: 'agent_ui-designer_c1f7b30158', + title: 'ui-designer', + description: + 'Visual design and interaction specialist Agent category: 01-core-development. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/01-core-development/ui-designer.md', + }, + { + id: 'agent_websocket-engineer_7fa54d89b6', + title: 'websocket-engineer', + description: + 'Real-time communication specialist Agent category: 01-core-development. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/01-core-development/websocket-engineer.md', + }, + { + id: 'agent_angular-architect_f075c5b317', + title: 'angular-architect', + description: + 'Angular 15+ enterprise patterns expert Agent category: 02-language-specialists. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/angular-architect.md', + }, + { + id: 'agent_cpp-pro_ba5184d249', + title: 'cpp-pro', + description: + 'C++ performance expert Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/cpp-pro.md', + }, + { + id: 'agent_csharp-developer_388f02777a', + title: 'csharp-developer', + description: + '.NET ecosystem specialist Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/csharp-developer.md', + }, + { + id: 'agent_django-developer_1c3a9ecc69', + title: 'django-developer', + description: + 'Django 4+ web development expert Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/django-developer.md', + }, + { + id: 'agent_dotnet-core-expert_01408d87a1', + title: 'dotnet-core-expert', + description: + '.NET 8 cross-platform specialist Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/dotnet-core-expert.md', + }, + { + id: 'agent_dotnet-framework-4-8-expert_252f7c201e', + title: 'dotnet-framework-4.8-expert', + description: + '.NET Framework legacy enterprise specialist Agent category: 02-language-specialists. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/dotnet-framework-4.8-expert.md', + }, + { + id: 'agent_elixir-expert_e2347d9c4e', + title: 'elixir-expert', + description: + 'Elixir and OTP fault-tolerant systems expert Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/elixir-expert.md', + }, + { + id: 'agent_flutter-expert_83317cc806', + title: 'flutter-expert', + description: + 'Flutter 3+ cross-platform mobile expert Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/flutter-expert.md', + }, + { + id: 'agent_java-architect_578d229210', + title: 'java-architect', + description: + 'Enterprise Java expert Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/java-architect.md', + }, + { + id: 'agent_kotlin-specialist_f806234e3f', + title: 'kotlin-specialist', + description: + 'Modern JVM language expert Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/kotlin-specialist.md', + }, + { + id: 'agent_laravel-specialist_aec9f16ad1', + title: 'laravel-specialist', + description: + 'Laravel 10+ PHP framework expert Agent category: 02-language-specialists. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/laravel-specialist.md', + }, + { + id: 'agent_php-pro_afa5bf3e83', + title: 'php-pro', + description: + 'PHP web development expert Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/php-pro.md', + }, + { + id: 'agent_powershell-5-1-expert_a14f9ee387', + title: 'powershell-5.1-expert', + description: + 'Windows PowerShell 5.1 and full .NET Framework automation specialist Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/powershell-5.1-expert.md', + }, + { + id: 'agent_powershell-7-expert_875222d4ea', + title: 'powershell-7-expert', + description: + 'Cross-platform PowerShell 7+ automation and modern .NET specialist Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/powershell-7-expert.md', + }, + { + id: 'agent_rails-expert_f08962cbf1', + title: 'rails-expert', + description: + 'Rails 8.1 rapid development expert Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/rails-expert.md', + }, + { + id: 'agent_spring-boot-engineer_87ba439a4a', + title: 'spring-boot-engineer', + description: + 'Spring Boot 3+ microservices expert Agent category: 02-language-specialists. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/spring-boot-engineer.md', + }, + { + id: 'agent_swift-expert_8febc581d9', + title: 'swift-expert', + description: + 'iOS and macOS specialist Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/swift-expert.md', + }, + { + id: 'agent_vue-expert_f63241e9c6', + title: 'vue-expert', + description: + 'Vue 3 Composition API expert Agent category: 02-language-specialists. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/02-language-specialists/vue-expert.md', + }, + { + id: 'agent_azure-infra-engineer_65eef8d389', + title: 'azure-infra-engineer', + description: + 'Azure infrastructure and Az PowerShell automation expert Agent category: 03-infrastructure. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/azure-infra-engineer.md', + }, + { + id: 'agent_cloud-architect_af08c4c34f', + title: 'cloud-architect', + description: + 'AWS/GCP/Azure specialist Agent category: 03-infrastructure. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/cloud-architect.md', + }, + { + id: 'agent_database-administrator_28c54a5b91', + title: 'database-administrator', + description: + 'Database management expert Agent category: 03-infrastructure. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/database-administrator.md', + }, + { + id: 'agent_deployment-engineer_11821ff218', + title: 'deployment-engineer', + description: + 'Deployment automation specialist Agent category: 03-infrastructure. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/deployment-engineer.md', + }, + { + id: 'agent_devops-incident-responder_4c4380976f', + title: 'devops-incident-responder', + description: + 'DevOps incident management Agent category: 03-infrastructure. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/devops-incident-responder.md', + }, + { + id: 'agent_incident-responder_813cf4af1b', + title: 'incident-responder', + description: + 'System incident response expert Agent category: 03-infrastructure. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/incident-responder.md', + }, + { + id: 'agent_network-engineer_8eb8b24b26', + title: 'network-engineer', + description: + 'Network infrastructure specialist Agent category: 03-infrastructure. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/network-engineer.md', + }, + { + id: 'agent_platform-engineer_ce27eee4cd', + title: 'platform-engineer', + description: + 'Platform architecture expert Agent category: 03-infrastructure. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/platform-engineer.md', + }, + { + id: 'agent_security-engineer_2c226da14d', + title: 'security-engineer', + description: + 'Infrastructure security specialist Agent category: 03-infrastructure. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/security-engineer.md', + }, + { + id: 'agent_sre-engineer_2cc506d39c', + title: 'sre-engineer', + description: + 'Site reliability engineering expert Agent category: 03-infrastructure. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/sre-engineer.md', + }, + { + id: 'agent_terraform-engineer_a1c6bcfd6c', + title: 'terraform-engineer', + description: + 'Infrastructure as Code expert Agent category: 03-infrastructure. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/terraform-engineer.md', + }, + { + id: 'agent_terragrunt-expert_bf66137c2b', + title: 'terragrunt-expert', + description: + 'Terragrunt orchestration and DRY IaC specialist Agent category: 03-infrastructure. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/terragrunt-expert.md', + }, + { + id: 'agent_windows-infra-admin_63016f7fa5', + title: 'windows-infra-admin', + description: + 'Active Directory, DNS, DHCP, and GPO automation specialist Agent category: 03-infrastructure. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/03-infrastructure/windows-infra-admin.md', + }, + { + id: 'agent_accessibility-tester_0cb1814966', + title: 'accessibility-tester', + description: + 'A11y compliance expert Agent category: 04-quality-security. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/accessibility-tester.md', + }, + { + id: 'agent_ad-security-reviewer_a6e5d65896', + title: 'ad-security-reviewer', + description: + 'Active Directory security and GPO audit specialist Agent category: 04-quality-security. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/ad-security-reviewer.md', + }, + { + id: 'agent_architect-reviewer_2b317cf540', + title: 'architect-reviewer', + description: + 'Architecture review specialist Agent category: 04-quality-security. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/architect-reviewer.md', + }, + { + id: 'agent_chaos-engineer_ef9f4b4cd9', + title: 'chaos-engineer', + description: + 'System resilience testing expert Agent category: 04-quality-security. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/chaos-engineer.md', + }, + { + id: 'agent_code-reviewer_e0912e49c9', + title: 'code-reviewer', + description: + 'Code quality guardian Agent category: 04-quality-security. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/code-reviewer.md', + }, + { + id: 'agent_compliance-auditor_589f185cc7', + title: 'compliance-auditor', + description: + 'Regulatory compliance expert Agent category: 04-quality-security. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/compliance-auditor.md', + }, + { + id: 'agent_debugger_81d45aecb3', + title: 'debugger', + description: + 'Advanced debugging specialist Agent category: 04-quality-security. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/debugger.md', + }, + { + id: 'agent_error-detective_36b4b8504b', + title: 'error-detective', + description: + 'Error analysis and resolution expert Agent category: 04-quality-security. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/error-detective.md', + }, + { + id: 'agent_penetration-tester_49a96d068f', + title: 'penetration-tester', + description: + 'Ethical hacking specialist Agent category: 04-quality-security. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/penetration-tester.md', + }, + { + id: 'agent_performance-engineer_49c25b9d64', + title: 'performance-engineer', + description: + 'Performance optimization expert Agent category: 04-quality-security. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/performance-engineer.md', + }, + { + id: 'agent_powershell-security-hardening_42d986bd9f', + title: 'powershell-security-hardening', + description: + 'PowerShell security hardening and compliance specialist Agent category: 04-quality-security. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/powershell-security-hardening.md', + }, + { + id: 'agent_qa-expert_9b5e1a5fb2', + title: 'qa-expert', + description: + 'Test automation specialist Agent category: 04-quality-security. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/qa-expert.md', + }, + { + id: 'agent_security-auditor_89aee6651d', + title: 'security-auditor', + description: + 'Security vulnerability expert Agent category: 04-quality-security. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/security-auditor.md', + }, + { + id: 'agent_test-automator_551e6c82b6', + title: 'test-automator', + description: + 'Test automation framework expert Agent category: 04-quality-security. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/04-quality-security/test-automator.md', + }, + { + id: 'agent_ai-engineer_ad721644cb', + title: 'ai-engineer', + description: + 'AI system design and deployment expert Agent category: 05-data-ai. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/05-data-ai/ai-engineer.md', + }, + { + id: 'agent_data-analyst_2ba9118613', + title: 'data-analyst', + description: + 'Data insights and visualization specialist Agent category: 05-data-ai. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/05-data-ai/data-analyst.md', + }, + { + id: 'agent_data-engineer_82d97e84e3', + title: 'data-engineer', + description: + 'Data pipeline architect Agent category: 05-data-ai. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/05-data-ai/data-engineer.md', + }, + { + id: 'agent_data-scientist_4847215d52', + title: 'data-scientist', + description: + 'Analytics and insights expert Agent category: 05-data-ai. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/05-data-ai/data-scientist.md', + }, + { + id: 'agent_database-optimizer_87ff04a6bd', + title: 'database-optimizer', + description: + 'Database performance specialist Agent category: 05-data-ai. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/05-data-ai/database-optimizer.md', + }, + { + id: 'agent_llm-architect_9ad22f0f78', + title: 'llm-architect', + description: + 'Large language model architect Agent category: 05-data-ai. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/05-data-ai/llm-architect.md', + }, + { + id: 'agent_machine-learning-engineer_3be958bbc5', + title: 'machine-learning-engineer', + description: + 'Machine learning systems expert Agent category: 05-data-ai. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/05-data-ai/machine-learning-engineer.md', + }, + { + id: 'agent_ml-engineer_231bd3a8a4', + title: 'ml-engineer', + description: + 'Machine learning specialist Agent category: 05-data-ai. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/05-data-ai/ml-engineer.md', + }, + { + id: 'agent_mlops-engineer_014fd98725', + title: 'mlops-engineer', + description: + 'MLOps and model deployment expert Agent category: 05-data-ai. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/05-data-ai/mlops-engineer.md', + }, + { + id: 'agent_nlp-engineer_76a2afffdd', + title: 'nlp-engineer', + description: + 'Natural language processing expert Agent category: 05-data-ai. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/05-data-ai/nlp-engineer.md', + }, + { + id: 'agent_prompt-engineer_9bbfb656e4', + title: 'prompt-engineer', + description: + 'Prompt optimization specialist Agent category: 05-data-ai. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/05-data-ai/prompt-engineer.md', + }, + { + id: 'agent_reinforcement-learning-engineer_ffda8b9d47', + title: 'reinforcement-learning-engineer', + description: + 'Reinforcement learning and agent training expert Agent category: 05-data-ai. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/05-data-ai/reinforcement-learning-engineer.md', + }, + { + id: 'agent_build-engineer_6db0cdf1ea', + title: 'build-engineer', + description: + 'Build system specialist Agent category: 06-developer-experience. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/06-developer-experience/build-engineer.md', + }, + { + id: 'agent_cli-developer_7fc968b353', + title: 'cli-developer', + description: + 'Command-line tool creator Agent category: 06-developer-experience. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/06-developer-experience/cli-developer.md', + }, + { + id: 'agent_dependency-manager_c919722181', + title: 'dependency-manager', + description: + 'Package and dependency specialist Agent category: 06-developer-experience. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/06-developer-experience/dependency-manager.md', + }, + { + id: 'agent_documentation-engineer_03dad04e2b', + title: 'documentation-engineer', + description: + 'Technical documentation expert Agent category: 06-developer-experience. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/06-developer-experience/documentation-engineer.md', + }, + { + id: 'agent_dx-optimizer_9da896bd49', + title: 'dx-optimizer', + description: + 'Developer experience optimization specialist Agent category: 06-developer-experience. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/06-developer-experience/dx-optimizer.md', + }, + { + id: 'agent_git-workflow-manager_5c8aa0e99c', + title: 'git-workflow-manager', + description: + 'Git workflow and branching expert Agent category: 06-developer-experience. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/06-developer-experience/git-workflow-manager.md', + }, + { + id: 'agent_legacy-modernizer_6f10a1ee2f', + title: 'legacy-modernizer', + description: + 'Legacy code modernization specialist Agent category: 06-developer-experience. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/06-developer-experience/legacy-modernizer.md', + }, + { + id: 'agent_mcp-developer_1b2aa45061', + title: 'mcp-developer', + description: + 'Model Context Protocol specialist Agent category: 06-developer-experience. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/06-developer-experience/mcp-developer.md', + }, + { + id: 'agent_powershell-module-architect_f319cf8e2f', + title: 'powershell-module-architect', + description: + 'PowerShell module and profile architecture specialist Agent category: 06-developer-experience. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/06-developer-experience/powershell-module-architect.md', + }, + { + id: 'agent_powershell-ui-architect_4df8ea7555', + title: 'powershell-ui-architect', + description: + 'PowerShell UI/UX specialist for WinForms, WPF, Metro frameworks, and TUIs Agent category: 06-developer-experience. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/06-developer-experience/powershell-ui-architect.md', + }, + { + id: 'agent_refactoring-specialist_8d5c17b094', + title: 'refactoring-specialist', + description: + 'Code refactoring expert Agent category: 06-developer-experience. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/06-developer-experience/refactoring-specialist.md', + }, + { + id: 'agent_slack-expert_b518b7b501', + title: 'slack-expert', + description: + 'Slack platform and @slack/bolt specialist Agent category: 06-developer-experience. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/06-developer-experience/slack-expert.md', + }, + { + id: 'agent_tooling-engineer_00971b8541', + title: 'tooling-engineer', + description: + 'Developer tooling specialist Agent category: 06-developer-experience. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/06-developer-experience/tooling-engineer.md', + }, + { + id: 'agent_api-documenter_ddbb59eb2e', + title: 'api-documenter', + description: + 'API documentation specialist Agent category: 07-specialized-domains. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/07-specialized-domains/api-documenter.md', + }, + { + id: 'agent_blockchain-developer_3aeb9e6db6', + title: 'blockchain-developer', + description: + 'Web3 and crypto specialist Agent category: 07-specialized-domains. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/07-specialized-domains/blockchain-developer.md', + }, + { + id: 'agent_embedded-systems_90f8213e31', + title: 'embedded-systems', + description: + 'Embedded and real-time systems expert Agent category: 07-specialized-domains. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/07-specialized-domains/embedded-systems.md', + }, + { + id: 'agent_fintech-engineer_52c6758e75', + title: 'fintech-engineer', + description: + 'Financial technology specialist Agent category: 07-specialized-domains. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/07-specialized-domains/fintech-engineer.md', + }, + { + id: 'agent_game-developer_5f9631e9e8', + title: 'game-developer', + description: + 'Game development expert Agent category: 07-specialized-domains. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/07-specialized-domains/game-developer.md', + }, + { + id: 'agent_iot-engineer_40224d54e2', + title: 'iot-engineer', + description: + 'IoT systems developer Agent category: 07-specialized-domains. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/07-specialized-domains/iot-engineer.md', + }, + { + id: 'agent_m365-admin_b835cac9fd', + title: 'm365-admin', + description: + 'Microsoft 365, Exchange Online, Teams, and SharePoint administration specialist Agent category: 07-specialized-domains. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/07-specialized-domains/m365-admin.md', + }, + { + id: 'agent_mobile-app-developer_b228d4de5d', + title: 'mobile-app-developer', + description: + 'Mobile application specialist Agent category: 07-specialized-domains. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/07-specialized-domains/mobile-app-developer.md', + }, + ], + // Page 2 + [ + { + id: 'agent_payment-integration_452ce0859f', + title: 'payment-integration', + description: + 'Payment systems expert Agent category: 07-specialized-domains. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/07-specialized-domains/payment-integration.md', + }, + { + id: 'agent_quant-analyst_3d15160672', + title: 'quant-analyst', + description: + 'Quantitative analysis specialist Agent category: 07-specialized-domains. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/07-specialized-domains/quant-analyst.md', + }, + { + id: 'agent_risk-manager_5ccf5c15d6', + title: 'risk-manager', + description: + 'Risk assessment and management expert Agent category: 07-specialized-domains. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/07-specialized-domains/risk-manager.md', + }, + { + id: 'agent_seo-specialist_b8998fb0af', + title: 'seo-specialist', + description: + 'Search engine optimization expert Agent category: 07-specialized-domains. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/07-specialized-domains/seo-specialist.md', + }, + { + id: 'agent_business-analyst_e64dd9b882', + title: 'business-analyst', + description: + 'Requirements specialist Agent category: 08-business-product. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/08-business-product/business-analyst.md', + }, + { + id: 'agent_content-marketer_4ae8c88596', + title: 'content-marketer', + description: + 'Content marketing specialist Agent category: 08-business-product. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/08-business-product/content-marketer.md', + }, + { + id: 'agent_customer-success-manager_ff48ab0467', + title: 'customer-success-manager', + description: + 'Customer success expert Agent category: 08-business-product. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/08-business-product/customer-success-manager.md', + }, + { + id: 'agent_legal-advisor_1678c8b42f', + title: 'legal-advisor', + description: + 'Legal and compliance specialist Agent category: 08-business-product. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/08-business-product/legal-advisor.md', + }, + { + id: 'agent_product-manager_246b0ad757', + title: 'product-manager', + description: + 'Product strategy expert Agent category: 08-business-product. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/08-business-product/product-manager.md', + }, + { + id: 'agent_project-manager_5787e8174a', + title: 'project-manager', + description: + 'Project management specialist Agent category: 08-business-product. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/08-business-product/project-manager.md', + }, + { + id: 'agent_sales-engineer_1a8334a1dd', + title: 'sales-engineer', + description: + 'Technical sales expert Agent category: 08-business-product. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/08-business-product/sales-engineer.md', + }, + { + id: 'agent_scrum-master_165cab3f81', + title: 'scrum-master', + description: + 'Agile methodology expert Agent category: 08-business-product. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/08-business-product/scrum-master.md', + }, + { + id: 'agent_technical-writer_3abfa67b27', + title: 'technical-writer', + description: + 'Technical documentation specialist Agent category: 08-business-product. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/08-business-product/technical-writer.md', + }, + { + id: 'agent_ux-researcher_4dc9553506', + title: 'ux-researcher', + description: + 'User research expert Agent category: 08-business-product. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/08-business-product/ux-researcher.md', + }, + { + id: 'agent_wordpress-master_578bd5d128', + title: 'wordpress-master', + description: + 'WordPress development and optimization expert Agent category: 08-business-product. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/08-business-product/wordpress-master.md', + }, + { + id: 'agent_agent-installer_744d1480b4', + title: 'agent-installer', + description: + 'Browse and install agents from this repository via GitHub Agent category: 09-meta-orchestration. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/09-meta-orchestration/agent-installer.md', + }, + { + id: 'agent_agent-organizer_8394c4aeb4', + title: 'agent-organizer', + description: + 'Multi-agent coordinator Agent category: 09-meta-orchestration. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/09-meta-orchestration/agent-organizer.md', + }, + { + id: 'agent_context-manager_f1f5abc655', + title: 'context-manager', + description: + 'Context optimization expert Agent category: 09-meta-orchestration. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/09-meta-orchestration/context-manager.md', + }, + { + id: 'agent_error-coordinator_ce228be4db', + title: 'error-coordinator', + description: + 'Error handling and recovery specialist Agent category: 09-meta-orchestration. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/09-meta-orchestration/error-coordinator.md', + }, + { + id: 'agent_it-ops-orchestrator_ee5e4f390b', + title: 'it-ops-orchestrator', + description: + 'IT operations workflow orchestration specialist Agent category: 09-meta-orchestration. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/09-meta-orchestration/it-ops-orchestrator.md', + }, + { + id: 'agent_knowledge-synthesizer_db3780f17f', + title: 'knowledge-synthesizer', + description: + 'Knowledge aggregation expert Agent category: 09-meta-orchestration. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/09-meta-orchestration/knowledge-synthesizer.md', + }, + { + id: 'agent_multi-agent-coordinator_381471234d', + title: 'multi-agent-coordinator', + description: + 'Advanced multi-agent orchestration Agent category: 09-meta-orchestration. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/09-meta-orchestration/multi-agent-coordinator.md', + }, + { + id: 'agent_performance-monitor_b06ccf32d9', + title: 'performance-monitor', + description: + 'Agent performance optimization Agent category: 09-meta-orchestration. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/09-meta-orchestration/performance-monitor.md', + }, + { + id: 'agent_task-distributor_a2f629676f', + title: 'task-distributor', + description: + 'Task allocation specialist Agent category: 09-meta-orchestration. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/09-meta-orchestration/task-distributor.md', + }, + { + id: 'agent_workflow-orchestrator_b062b3b575', + title: 'workflow-orchestrator', + description: + 'Complex workflow automation Agent category: 09-meta-orchestration. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/09-meta-orchestration/workflow-orchestrator.md', + }, + { + id: 'agent_pied-piper_c2edf76c07', + title: 'pied-piper', + description: + 'Orchestrate Team of AI Subagents for repetitive SDLC workflows Agent category: 09-meta-orchestration (external). Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/sathish316/pied-piper', + }, + { + id: 'agent_taskade_6430f82b58', + title: 'taskade', + description: + 'AI-powered workspace with autonomous agents, real-time collaboration, and workflow automation with MCP integration Agent category: 09-meta-orchestration (external). Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/taskade/mcp', + }, + { + id: 'agent_competitive-analyst_169ac87572', + title: 'competitive-analyst', + description: + 'Competitive intelligence specialist Agent category: 10-research-analysis. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/10-research-analysis/competitive-analyst.md', + }, + { + id: 'agent_data-researcher_004af58455', + title: 'data-researcher', + description: + 'Data discovery and analysis expert Agent category: 10-research-analysis. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/10-research-analysis/data-researcher.md', + }, + { + id: 'agent_market-researcher_5e8e1dd938', + title: 'market-researcher', + description: + 'Market analysis and consumer insights Agent category: 10-research-analysis. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/10-research-analysis/market-researcher.md', + }, + { + id: 'agent_research-analyst_2f2ab5f93d', + title: 'research-analyst', + description: + 'Comprehensive research specialist Agent category: 10-research-analysis. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/10-research-analysis/research-analyst.md', + }, + { + id: 'agent_scientific-literature-researcher_1bcc5cb239', + title: 'scientific-literature-researcher', + description: + 'Scientific paper search and evidence synthesis via [BGPT MCP](https://github.com/connerlambden/bgpt-mcp) Agent category: 10-research-analysis. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/10-research-analysis/scientific-literature-researcher.md', + }, + { + id: 'agent_search-specialist_770d39ef5d', + title: 'search-specialist', + description: + 'Advanced information retrieval expert Agent category: 10-research-analysis. 
Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/10-research-analysis/search-specialist.md', + }, + { + id: 'agent_trend-analyst_e2cf047fb0', + title: 'trend-analyst', + description: + 'Emerging trends and forecasting expert Agent category: 10-research-analysis. Source: VoltAgent awesome claude code subagents.', + kind: 'agent', + link: 'https://github.com/VoltAgent/awesome-claude-code-subagents/blob/main/categories/10-research-analysis/trend-analyst.md', + }, + { + id: 'agent_frontend-developer_2a8bd2486f', + title: 'Frontend Developer', + description: 'Frontend Developer subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/frontend-developer.md', + }, + { + id: 'agent_backend-developer_cf34e462ac', + title: 'Backend Developer', + description: 'Backend Developer subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/backend-developer.md', + }, + { + id: 'agent_api-developer_7781de729a', + title: 'API Developer', + description: 'API Developer subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/api-developer.md', + }, + { + id: 'agent_mobile-developer_e3694688fb', + title: 'Mobile Developer', + description: 'Mobile Developer subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/mobile-developer.md', + }, + { + id: 'agent_python-developer_c64569c53a', + title: 'Python Developer', + description: 'Python Developer subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/python-developer.md', + }, + { + id: 
'agent_javascript-developer_ef0fb38ac5', + title: 'JavaScript Developer', + description: 'JavaScript Developer subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/javascript-developer.md', + }, + { + id: 'agent_typescript-developer_d600238cc7', + title: 'TypeScript Developer', + description: 'TypeScript Developer subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/typescript-developer.md', + }, + { + id: 'agent_php-developer_1acf102f64', + title: 'PHP Developer', + description: 'PHP Developer subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/php-developer.md', + }, + { + id: 'agent_wordpress-developer_3d1f82d873', + title: 'WordPress Developer', + description: 'WordPress Developer subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/wordpress-developer.md', + }, + { + id: 'agent_ios-developer_01d104f3bd', + title: 'iOS Developer', + description: 'iOS Developer subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/ios-developer.md', + }, + { + id: 'agent_database-designer_e6b9d17395', + title: 'Database Designer', + description: 'Database Designer subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/database-designer.md', + }, + { + id: 'agent_code-reviewer_88a6ee082c', + title: 'Code Reviewer', + description: 'Code Reviewer subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/code-reviewer.md', + }, + { + id: 
'agent_code-debugger_0bf7e6ed22', + title: 'Code Debugger', + description: 'Code Debugger subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/code-debugger.md', + }, + { + id: 'agent_code-documenter_3a77658876', + title: 'Code Documenter', + description: 'Code Documenter subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/code-documenter.md', + }, + { + id: 'agent_code-refactor_93928975cb', + title: 'Code Refactor', + description: 'Code Refactor subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/code-refactor.md', + }, + { + id: 'agent_code-security-auditor_5e7eb7c5e2', + title: 'Code Security Auditor', + description: 'Code Security Auditor subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/code-security-auditor.md', + }, + { + id: 'agent_code-standards-enforcer_52cf5fcafe', + title: 'Code Standards Enforcer', + description: 'Code Standards Enforcer subagent from Njengah claude-code-cheat-sheet.', + kind: 'agent', + link: 'https://github.com/Njengah/claude-code-cheat-sheet/blob/main/subagents/code-standards-enforcer.md', + }, + { + id: 'agent_cs-growth-strategist_0c56b39bea', + title: 'cs-growth-strategist', + description: 'cs-growth-strategist agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/business-growth/cs-growth-strategist.md', + }, + { + id: 'agent_cs-ceo-advisor_8751dabcce', + title: 'cs-ceo-advisor', + description: 'cs-ceo-advisor agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 
'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/c-level/cs-ceo-advisor.md', + }, + { + id: 'agent_cs-cto-advisor_8c173b8fd8', + title: 'cs-cto-advisor', + description: 'cs-cto-advisor agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/c-level/cs-cto-advisor.md', + }, + { + id: 'agent_cs-engineering-lead_2310f557dd', + title: 'cs-engineering-lead', + description: 'cs-engineering-lead agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/engineering-team/cs-engineering-lead.md', + }, + { + id: 'agent_cs-workspace-admin_90f99ec603', + title: 'cs-workspace-admin', + description: 'cs-workspace-admin agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/engineering-team/cs-workspace-admin.md', + }, + { + id: 'agent_cs-senior-engineer_500ce480c9', + title: 'cs-senior-engineer', + description: 'cs-senior-engineer agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/engineering/cs-senior-engineer.md', + }, + { + id: 'agent_cs-financial-analyst_5d5b9a31b3', + title: 'cs-financial-analyst', + description: 'cs-financial-analyst agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/finance/cs-financial-analyst.md', + }, + { + id: 'agent_cs-content-creator_e578a92c2e', + title: 'cs-content-creator', + description: 'cs-content-creator agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/marketing/cs-content-creator.md', + }, + { + id: 'agent_cs-demand-gen-specialist_0be5984904', + title: 
'cs-demand-gen-specialist', + description: 'cs-demand-gen-specialist agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/marketing/cs-demand-gen-specialist.md', + }, + { + id: 'agent_content-strategist_18dba5517d', + title: 'content-strategist', + description: 'content-strategist agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/personas/content-strategist.md', + }, + { + id: 'agent_devops-engineer_edb71e4082', + title: 'devops-engineer', + description: 'devops-engineer agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/personas/devops-engineer.md', + }, + { + id: 'agent_finance-lead_677f4c0b88', + title: 'finance-lead', + description: 'finance-lead agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/personas/finance-lead.md', + }, + { + id: 'agent_growth-marketer_937b5f2aee', + title: 'growth-marketer', + description: 'growth-marketer agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/personas/growth-marketer.md', + }, + { + id: 'agent_product-manager_663e3ba97f', + title: 'product-manager', + description: 'product-manager agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/personas/product-manager.md', + }, + { + id: 'agent_solo-founder_4992597e0e', + title: 'solo-founder', + description: 'solo-founder agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/personas/solo-founder.md', + }, + { + 
id: 'agent_startup-cto_e095bf586c', + title: 'startup-cto', + description: 'startup-cto agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/personas/startup-cto.md', + }, + { + id: 'agent_cs-agile-product-owner_f824ca0c8f', + title: 'cs-agile-product-owner', + description: 'cs-agile-product-owner agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/product/cs-agile-product-owner.md', + }, + { + id: 'agent_cs-product-analyst_238a736627', + title: 'cs-product-analyst', + description: 'cs-product-analyst agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/product/cs-product-analyst.md', + }, + { + id: 'agent_cs-product-manager_f4a514e12b', + title: 'cs-product-manager', + description: 'cs-product-manager agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/product/cs-product-manager.md', + }, + { + id: 'agent_cs-product-strategist_f2d47eb114', + title: 'cs-product-strategist', + description: 'cs-product-strategist agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/product/cs-product-strategist.md', + }, + { + id: 'agent_cs-ux-researcher_d169d466be', + title: 'cs-ux-researcher', + description: 'cs-ux-researcher agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/product/cs-ux-researcher.md', + }, + { + id: 'agent_cs-project-manager_7161e396ed', + title: 'cs-project-manager', + description: 'cs-project-manager agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 
'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/project-management/cs-project-manager.md', + }, + { + id: 'agent_cs-quality-regulatory_5c8d369654', + title: 'cs-quality-regulatory', + description: 'cs-quality-regulatory agent from joeking-ly claude-skills-arsenal.', + kind: 'agent', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/agents/ra-qm-team/cs-quality-regulatory.md', + }, + { + id: 'agent_obra-testing-skills-with-subagents_38b8770b49', + title: 'obra/testing-skills-with-subagents', + description: + 'obra/testing-skills-with-subagents agent for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'agent', + link: 'https://github.com/obra/superpowers/blob/main/skills/testing-skills-with-subagents/SKILL.md', + }, + ], +]; + +export const agents: CatalogItem[] = agentsCatalogPages.flat(); +export const agentsTotalItems = agents.length; +export const agentsTotalPages = agentsCatalogPages.length; diff --git a/website/src/content/data/commands.ts b/website/src/content/data/commands.ts new file mode 100644 index 00000000..6fe60046 --- /dev/null +++ b/website/src/content/data/commands.ts @@ -0,0 +1,1793 @@ +/** + * Claude Code command catalog. + * Merged from qdhenry/Claude-Command-Suite and hesreallyhim/awesome-claude-code slash command sources. 
+ */ + +import type { CatalogItem } from './types'; + +export const commandsCatalogPages: CatalogItem[][] = [ + [ + { + id: 'qdhenry-dev-code-review', + title: '/dev:code-review', + description: 'Comprehensive code quality review', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/code-review.md', + }, + { + id: 'qdhenry-dev-debug-error', + title: '/dev:debug-error', + description: 'Systematically debug and fix errors', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/debug-error.md', + }, + { + id: 'qdhenry-dev-explain-code', + title: '/dev:explain-code', + description: 'Analyze and explain code functionality', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/explain-code.md', + }, + { + id: 'qdhenry-dev-refactor-code', + title: '/dev:refactor-code', + description: 'Intelligently refactor and improve code quality', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/refactor-code.md', + }, + { + id: 'qdhenry-dev-fix-issue', + title: '/dev:fix-issue', + description: 'Identify and resolve code issues', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/fix-issue.md', + }, + { + id: 'acc-pr-review-e127aa3f', + title: '/pr-review', + description: + 'Version Control & Git: Reviews pull request changes to provide feedback, check for issues, and suggest improvements before merging into the main codebase.', + kind: 'command', + link: 'https://github.com/hesreallyhim/awesome-claude-code/blob/923ddf1c3dba0413ecae1c6c2921a1607dc5911d/resources/slash-commands/pr-review/pr-review.md', + }, + { + id: 'acc-commit-926943e9', + title: '/commit', + description: + 'Version Control & Git: Creates git commits using conventional commit format with appropriate emojis, following project standards and creating 
descriptive messages that explain t...', + kind: 'command', + link: 'https://github.com/evmts/tevm-monorepo/blob/main/.claude/commands/commit.md', + }, + { + id: 'acc-commit-fast-8ff24e19', + title: '/commit-fast', + description: + 'Version Control & Git: Automates git commit process by selecting the first suggested message, generating structured commits with consistent formatting while skipping manual conf...', + kind: 'command', + link: 'https://github.com/steadycursor/steadystart/blob/main/.claude/commands/2-commit-fast.md', + }, + { + id: 'acc-create-pr-8822c211', + title: '/create-pr', + description: + 'Version Control & Git: Streamlines pull request creation by handling the entire workflow: creating a new branch, committing changes, formatting modified files with Biome, and su...', + kind: 'command', + link: 'https://github.com/toyamarinyon/giselle/blob/main/.claude/commands/create-pr.md', + }, + { + id: 'acc-create-pull-request-f28e609a', + title: '/create-pull-request', + description: + 'Version Control & Git: Provides comprehensive PR creation guidance with GitHub CLI, enforcing title conventions, following template structure, and offering concrete command exam...', + kind: 'command', + link: 'https://github.com/liam-hq/liam/blob/main/.claude/commands/create-pull-request.md', + }, + { + id: 'acc-create-worktrees-974c9438', + title: '/create-worktrees', + description: + 'Version Control & Git: Creates git worktrees for all open PRs or specific branches, handling branches with slashes, cleaning up stale worktrees, and supporting custom branch cre...', + kind: 'command', + link: 'https://github.com/evmts/tevm-monorepo/blob/main/.claude/commands/create-worktrees.md', + }, + { + id: 'acc-fix-pr-e7f20d70', + title: '/fix-pr', + description: + 'Version Control & Git: Fetches and fixes unresolved PR comments by automatically retrieving feedback, addressing reviewer concerns, making targeted code improvements, and stream...', + kind: 'command', + link: 
'https://github.com/metabase/metabase/blob/master/.claude/commands/fix-pr.md', + }, + { + id: 'acc-fix-issue-6bc127e2', + title: '/fix-issue', + description: + 'Version Control & Git: Addresses GitHub issues by taking issue number as parameter, analyzing context, implementing solution, and testing/validating the fix for proper integration.', + kind: 'command', + link: 'https://github.com/metabase/metabase/blob/master/.claude/commands/fix-issue.md', + }, + { + id: 'acc-fix-github-issue-a966c6d9', + title: '/fix-github-issue', + description: + 'Version Control & Git: Analyzes and fixes GitHub issues using a structured approach with GitHub CLI for issue details, implementing necessary code changes, running tests, and cr...', + kind: 'command', + link: 'https://github.com/jeremymailen/kotlinter-gradle/blob/master/.claude/commands/fix-github-issue.md', + }, + { + id: 'qdhenry-dev-ultra-think', + title: '/dev:ultra-think', + description: 'Deep analysis and problem solving mode', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/ultra-think.md', + }, + { + id: 'qdhenry-dev-prime', + title: '/dev:prime', + description: 'Enhanced AI mode for complex tasks', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/prime.md', + }, + { + id: 'qdhenry-dev-all-tools', + title: '/dev:all-tools', + description: 'Display all available development tools', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/all-tools.md', + }, + { + id: 'qdhenry-dev-git-status', + title: '/dev:git-status', + description: 'Show detailed git repository status', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/git-status.md', + }, + { + id: 'qdhenry-dev-clean-branches', + title: '/dev:clean-branches', + description: 'Clean up merged and stale git branches', + kind: 'command', + link: 
'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/clean-branches.md', + }, + { + id: 'qdhenry-dev-directory-deep-dive', + title: '/dev:directory-deep-dive', + description: 'Analyze directory structure and purpose', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/directory-deep-dive.md', + }, + { + id: 'qdhenry-dev-code-to-task', + title: '/dev:code-to-task', + description: 'Convert code analysis to Linear tasks', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/code-to-task.md', + }, + { + id: 'qdhenry-dev-code-permutation-tester', + title: '/dev:code-permutation-tester', + description: 'Test multiple code variations through simulation', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/code-permutation-tester.md', + }, + { + id: 'qdhenry-dev-architecture-scenario-explorer', + title: '/dev:architecture-scenario-explorer', + description: 'Explore architectural decisions through scenario analysis', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/architecture-scenario-explorer.md', + }, + { + id: 'qdhenry-dev-incremental-feature-build', + title: '/dev:incremental-feature-build', + description: 'Build features incrementally with validation gates', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/incremental-feature-build.md', + }, + { + id: 'qdhenry-dev-parallel-feature-build', + title: '/dev:parallel-feature-build', + description: 'Build features using parallel agent execution', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/parallel-feature-build.md', + }, + { + id: 'qdhenry-dev-cloudflare-worker', + title: '/dev:cloudflare-worker', + description: 'Generate and deploy Cloudflare Workers', + 
kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/cloudflare-worker.md', + }, + { + id: 'qdhenry-dev-generate-linear-worklog', + title: '/dev:generate-linear-worklog', + description: 'Generate work logs from Linear task history', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/generate-linear-worklog.md', + }, + { + id: 'qdhenry-dev-rule2hook', + title: '/dev:rule2hook', + description: 'Convert CLAUDE.md rules to Claude Code hooks', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/rule2hook.md', + }, + { + id: 'qdhenry-dev-cleanup-vibes', + title: '/dev:cleanup-vibes', + description: 'Transform vibecoded projects into structured deployment-ready codebases', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/cleanup-vibes.md', + }, + { + id: 'qdhenry-dev-remove-dead-code', + title: '/dev:remove-dead-code', + description: 'Scan, remove, and validate dead code with backup branches', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/remove-dead-code.md', + }, + { + id: 'qdhenry-dev-create-ui-component', + title: '/dev:create-ui-component', + description: 'Create UI components with design system compliance and Storybook stories', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/create-ui-component.md', + }, + { + id: 'qdhenry-dev-watch', + title: '/dev:watch', + description: 'Trigger Claude on file changes with filtering and debounce', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/watch.md', + }, + { + id: 'qdhenry-dev-xml-prompt-formatter', + title: '/dev:xml-prompt-formatter', + description: 'Reformat prompts with structured XML tags for semantic clarity', + kind: 
'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/dev/xml-prompt-formatter.md', + }, + { + id: 'qdhenry-test-generate-test-cases', + title: '/test:generate-test-cases', + description: 'Generate comprehensive test cases automatically', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/test/generate-test-cases.md', + }, + { + id: 'qdhenry-test-write-tests', + title: '/test:write-tests', + description: 'Write unit and integration tests', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/test/write-tests.md', + }, + { + id: 'qdhenry-test-test-coverage', + title: '/test:test-coverage', + description: 'Analyze and report test coverage', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/test/test-coverage.md', + }, + { + id: 'qdhenry-test-setup-comprehensive-testing', + title: '/test:setup-comprehensive-testing', + description: 'Set up complete testing infrastructure', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/test/setup-comprehensive-testing.md', + }, + { + id: 'qdhenry-test-e2e-setup', + title: '/test:e2e-setup', + description: 'Configure end-to-end testing suite', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/test/e2e-setup.md', + }, + { + id: 'qdhenry-test-setup-visual-testing', + title: '/test:setup-visual-testing', + description: 'Set up visual regression testing', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/test/setup-visual-testing.md', + }, + { + id: 'qdhenry-test-setup-load-testing', + title: '/test:setup-load-testing', + description: 'Configure load and performance testing', + kind: 'command', + link: 
'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/test/setup-load-testing.md', + }, + { + id: 'qdhenry-test-add-mutation-testing', + title: '/test:add-mutation-testing', + description: 'Set up mutation testing for code quality', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/test/add-mutation-testing.md', + }, + { + id: 'qdhenry-test-add-property-based-testing', + title: '/test:add-property-based-testing', + description: 'Implement property-based testing framework', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/test/add-property-based-testing.md', + }, + { + id: 'qdhenry-test-test-changelog-automation', + title: '/test:test-changelog-automation', + description: 'Automate changelog testing workflow', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/test/test-changelog-automation.md', + }, + { + id: 'qdhenry-deploy-ci-setup', + title: '/deploy:ci-setup', + description: 'Set up continuous integration pipeline', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/deploy/ci-setup.md', + }, + { + id: 'qdhenry-deploy-containerize-application', + title: '/deploy:containerize-application', + description: 'Containerize application for deployment', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/deploy/containerize-application.md', + }, + { + id: 'qdhenry-deploy-setup-kubernetes-deployment', + title: '/deploy:setup-kubernetes-deployment', + description: 'Configure Kubernetes deployment manifests', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/deploy/setup-kubernetes-deployment.md', + }, + { + id: 'qdhenry-deploy-prepare-release', + title: '/deploy:prepare-release', + description: 'Prepare and validate release packages', + 
kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/deploy/prepare-release.md', + }, + { + id: 'qdhenry-project-init-project', + title: '/project:init-project', + description: 'Initialize new project with essential structure', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/init-project.md', + }, + { + id: 'qdhenry-project-add-package', + title: '/project:add-package', + description: 'Add and configure new project dependencies', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/add-package.md', + }, + { + id: 'qdhenry-project-create-feature', + title: '/project:create-feature', + description: 'Scaffold new feature with boilerplate code', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/create-feature.md', + }, + { + id: 'qdhenry-project-milestone-tracker', + title: '/project:milestone-tracker', + description: 'Track and monitor project milestone progress', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/milestone-tracker.md', + }, + { + id: 'qdhenry-project-project-health-check', + title: '/project:project-health-check', + description: 'Analyze overall project health and metrics', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/project-health-check.md', + }, + { + id: 'qdhenry-project-project-to-linear', + title: '/project:project-to-linear', + description: 'Sync project structure to Linear workspace', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/project-to-linear.md', + }, + { + id: 'qdhenry-project-project-timeline-simulator', + title: '/project:project-timeline-simulator', + description: 'Simulate project outcomes with variable modeling', + 
kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/project-timeline-simulator.md', + }, + { + id: 'qdhenry-project-pac-configure', + title: '/project:pac-configure', + description: 'Configure Product as Code project structure', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/pac-configure.md', + }, + { + id: 'qdhenry-project-pac-create-epic', + title: '/project:pac-create-epic', + description: 'Create new PAC epic with guided workflow', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/pac-create-epic.md', + }, + { + id: 'qdhenry-project-pac-create-ticket', + title: '/project:pac-create-ticket', + description: 'Create new PAC ticket within an epic', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/pac-create-ticket.md', + }, + { + id: 'qdhenry-project-pac-validate', + title: '/project:pac-validate', + description: 'Validate PAC structure for specification compliance', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/pac-validate.md', + }, + { + id: 'qdhenry-project-pac-update-status', + title: '/project:pac-update-status', + description: 'Update PAC ticket status and track progress', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/pac-update-status.md', + }, + { + id: 'qdhenry-project-todo-branch', + title: '/project:todo-branch', + description: 'Create feature branches from todo items', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/todo-branch.md', + }, + { + id: 'qdhenry-project-todo-worktree', + title: '/project:todo-worktree', + description: 'Create git worktrees from todo items', + kind: 'command', + link: 
'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/project/todo-worktree.md', + }, + { + id: 'qdhenry-security-security-audit', + title: '/security:security-audit', + description: 'Perform comprehensive security assessment', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/security/security-audit.md', + }, + { + id: 'qdhenry-security-dependency-audit', + title: '/security:dependency-audit', + description: 'Audit dependencies for security vulnerabilities', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/security/dependency-audit.md', + }, + { + id: 'qdhenry-security-security-hardening', + title: '/security:security-hardening', + description: 'Harden application security configuration', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/security/security-hardening.md', + }, + { + id: 'qdhenry-security-add-authentication-system', + title: '/security:add-authentication-system', + description: 'Implement secure user authentication system', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/security/add-authentication-system.md', + }, + { + id: 'qdhenry-performance-performance-audit', + title: '/performance:performance-audit', + description: 'Audit application performance metrics', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/performance/performance-audit.md', + }, + { + id: 'qdhenry-performance-optimize-build', + title: '/performance:optimize-build', + description: 'Optimize build processes and speed', + kind: 'command', + link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/performance/optimize-build.md', + }, + { + id: 'qdhenry-performance-optimize-bundle-size', + title: '/performance:optimize-bundle-size', + description: 'Reduce and optimize bundle 
sizes',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/performance/optimize-bundle-size.md',
+    },
+    {
+      id: 'qdhenry-performance-optimize-database-performance',
+      title: '/performance:optimize-database-performance',
+      description: 'Optimize database queries and performance',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/performance/optimize-database-performance.md',
+    },
+    {
+      id: 'qdhenry-performance-implement-caching-strategy',
+      title: '/performance:implement-caching-strategy',
+      description: 'Design and implement caching solutions',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/performance/implement-caching-strategy.md',
+    },
+    {
+      id: 'qdhenry-performance-add-performance-monitoring',
+      title: '/performance:add-performance-monitoring',
+      description: 'Set up application performance monitoring',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/performance/add-performance-monitoring.md',
+    },
+    {
+      id: 'qdhenry-performance-setup-cdn-optimization',
+      title: '/performance:setup-cdn-optimization',
+      description: 'Configure CDN for optimal delivery',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/performance/setup-cdn-optimization.md',
+    },
+    {
+      id: 'qdhenry-performance-system-behavior-simulator',
+      title: '/performance:system-behavior-simulator',
+      description: 'Simulate system performance under various loads',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/performance/system-behavior-simulator.md',
+    },
+    {
+      id: 'qdhenry-sync-sync-issues-to-linear',
+      title: '/sync:sync-issues-to-linear',
+      description: 'Sync GitHub issues to Linear workspace',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/sync/sync-issues-to-linear.md',
+    },
+    {
+      id: 'qdhenry-sync-sync-linear-to-issues',
+      title: '/sync:sync-linear-to-issues',
+      description: 'Sync Linear tasks to GitHub issues',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/sync/sync-linear-to-issues.md',
+    },
+    {
+      id: 'qdhenry-sync-bidirectional-sync',
+      title: '/sync:bidirectional-sync',
+      description: 'Enable bidirectional GitHub-Linear synchronization',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/sync/bidirectional-sync.md',
+    },
+    {
+      id: 'qdhenry-sync-issue-to-linear-task',
+      title: '/sync:issue-to-linear-task',
+      description: 'Convert GitHub issues to Linear tasks',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/sync/issue-to-linear-task.md',
+    },
+    {
+      id: 'qdhenry-sync-linear-task-to-issue',
+      title: '/sync:linear-task-to-issue',
+      description: 'Convert Linear tasks to GitHub issues',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/sync/linear-task-to-issue.md',
+    },
+    {
+      id: 'qdhenry-sync-sync-pr-to-task',
+      title: '/sync:sync-pr-to-task',
+      description: 'Link pull requests to Linear tasks',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/sync/sync-pr-to-task.md',
+    },
+    {
+      id: 'qdhenry-sync-sync-status',
+      title: '/sync:sync-status',
+      description: 'Monitor GitHub-Linear sync health status',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/sync/sync-status.md',
+    },
+    {
+      id: 'qdhenry-sync-bulk-import-issues',
+      title: '/sync:bulk-import-issues',
+      description: 'Bulk import GitHub issues to Linear',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/sync/bulk-import-issues.md',
+    },
+    {
+      id: 'qdhenry-sync-cross-reference-manager',
+      title: '/sync:cross-reference-manager',
+      description: 'Manage cross-platform reference links',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/sync/cross-reference-manager.md',
+    },
+    {
+      id: 'qdhenry-sync-sync-automation-setup',
+      title: '/sync:sync-automation-setup',
+      description: 'Set up automated synchronization workflows',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/sync/sync-automation-setup.md',
+    },
+    {
+      id: 'qdhenry-sync-sync-conflict-resolver',
+      title: '/sync:sync-conflict-resolver',
+      description: 'Resolve synchronization conflicts automatically',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/sync/sync-conflict-resolver.md',
+    },
+    {
+      id: 'qdhenry-sync-task-from-pr',
+      title: '/sync:task-from-pr',
+      description: 'Create Linear tasks from pull requests',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/sync/task-from-pr.md',
+    },
+    {
+      id: 'qdhenry-deploy-hotfix-deploy',
+      title: '/deploy:hotfix-deploy',
+      description: 'Deploy critical hotfixes quickly',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/deploy/hotfix-deploy.md',
+    },
+    {
+      id: 'qdhenry-deploy-rollback-deploy',
+      title: '/deploy:rollback-deploy',
+      description: 'Rollback deployment to previous version',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/deploy/rollback-deploy.md',
+    },
+    {
+      id: 'qdhenry-deploy-setup-automated-releases',
+      title: '/deploy:setup-automated-releases',
+      description: 'Set up automated release workflows',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/deploy/setup-automated-releases.md',
+    },
+    {
+      id: 'qdhenry-deploy-add-changelog',
+      title: '/deploy:add-changelog',
+      description: 'Generate and maintain project changelog',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/deploy/add-changelog.md',
+    },
+    {
+      id: 'qdhenry-deploy-changelog-demo-command',
+      title: '/deploy:changelog-demo-command',
+      description: 'Demo changelog automation features',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/deploy/changelog-demo-command.md',
+    },
+    {
+      id: 'qdhenry-docs-generate-api-documentation',
+      title: '/docs:generate-api-documentation',
+      description: 'Auto-generate API reference documentation',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/docs/generate-api-documentation.md',
+    },
+    {
+      id: 'qdhenry-docs-doc-api',
+      title: '/docs:doc-api',
+      description: 'Generate API documentation from code',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/docs/doc-api.md',
+    },
+    {
+      id: 'qdhenry-docs-create-architecture-documentation',
+      title: '/docs:create-architecture-documentation',
+      description: 'Generate comprehensive architecture documentation',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/docs/create-architecture-documentation.md',
+    },
+    {
+      id: 'qdhenry-docs-create-onboarding-guide',
+      title: '/docs:create-onboarding-guide',
+      description: 'Create developer onboarding guide',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/docs/create-onboarding-guide.md',
+    },
+    {
+      id: 'qdhenry-docs-migration-guide',
+      title: '/docs:migration-guide',
+      description: 'Create migration guides for updates',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/docs/migration-guide.md',
+    },
+    {
+      id: 'qdhenry-docs-troubleshooting-guide',
+      title: '/docs:troubleshooting-guide',
+      description: 'Generate troubleshooting documentation',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/docs/troubleshooting-guide.md',
+    },
+    {
+      id: 'qdhenry-setup-setup-development-environment',
+      title: '/setup:setup-development-environment',
+      description: 'Set up a complete development environment',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/setup-development-environment.md',
+    },
+    {
+      id: 'qdhenry-setup-setup-linting',
+      title: '/setup:setup-linting',
+      description: 'Set up code linting and quality tools',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/setup-linting.md',
+    },
+    {
+      id: 'qdhenry-setup-setup-formatting',
+      title: '/setup:setup-formatting',
+      description: 'Configure code formatting tools',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/setup-formatting.md',
+    },
+    {
+      id: 'qdhenry-setup-setup-monitoring-observability',
+      title: '/setup:setup-monitoring-observability',
+      description: 'Set up monitoring and observability tools',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/setup-monitoring-observability.md',
+    },
+    {
+      id: 'qdhenry-setup-setup-monorepo',
+      title: '/setup:setup-monorepo',
+      description: 'Configure monorepo project structure',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/setup-monorepo.md',
+    },
+    {
+      id: 'qdhenry-setup-migrate-to-typescript',
+      title: '/setup:migrate-to-typescript',
+      description: 'Migrate JavaScript project to TypeScript',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/migrate-to-typescript.md',
+    },
+    {
+      id: 'qdhenry-setup-modernize-deps',
+      title: '/setup:modernize-deps',
+      description: 'Update and modernize project dependencies',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/modernize-deps.md',
+    },
+    {
+      id: 'qdhenry-setup-design-database-schema',
+      title: '/setup:design-database-schema',
+      description: 'Design optimized database schemas',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/design-database-schema.md',
+    },
+    {
+      id: 'qdhenry-setup-create-database-migrations',
+      title: '/setup:create-database-migrations',
+      description: 'Create and manage database migrations',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/create-database-migrations.md',
+    },
+    {
+      id: 'qdhenry-setup-design-rest-api',
+      title: '/setup:design-rest-api',
+      description: 'Design RESTful API architecture',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/design-rest-api.md',
+    },
+    {
+      id: 'qdhenry-setup-implement-graphql-api',
+      title: '/setup:implement-graphql-api',
+      description: 'Implement GraphQL API endpoints',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/implement-graphql-api.md',
+    },
+    {
+      id: 'qdhenry-setup-setup-rate-limiting',
+      title: '/setup:setup-rate-limiting',
+      description: 'Implement API rate limiting',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/setup-rate-limiting.md',
+    },
+    {
+      id: 'qdhenry-setup-agent-tail',
+      title: '/setup:agent-tail',
+      description: 'Configure agent-tail log aggregation with framework auto-detection',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/agent-tail.md',
+    },
+  ],
+  [
+    {
+      id: 'qdhenry-setup-portless',
+      title: '/setup:portless',
+      description: 'Set up Portless for named localhost URLs replacing port numbers',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/setup/portless.md',
+    },
+    {
+      id: 'qdhenry-team-standup-report',
+      title: '/team:standup-report',
+      description: 'Generate daily standup reports',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/team/standup-report.md',
+    },
+    {
+      id: 'qdhenry-team-sprint-planning',
+      title: '/team:sprint-planning',
+      description: 'Plan and organize sprint workflows',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/team/sprint-planning.md',
+    },
+    {
+      id: 'qdhenry-team-retrospective-analyzer',
+      title: '/team:retrospective-analyzer',
+      description: 'Analyze team retrospectives for insights',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/team/retrospective-analyzer.md',
+    },
+    {
+      id: 'qdhenry-team-team-workload-balancer',
+      title: '/team:team-workload-balancer',
+      description: 'Balance team workload distribution',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/team/team-workload-balancer.md',
+    },
+    {
+      id: 'qdhenry-team-issue-triage',
+      title: '/team:issue-triage',
+      description: 'Triage and prioritize issues effectively',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/team/issue-triage.md',
+    },
+    {
+      id: 'qdhenry-team-estimate-assistant',
+      title: '/team:estimate-assistant',
+      description: 'Generate accurate project time estimates',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/team/estimate-assistant.md',
+    },
+    {
+      id: 'qdhenry-team-session-learning-capture',
+      title: '/team:session-learning-capture',
+      description: 'Capture and document session learnings',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/team/session-learning-capture.md',
+    },
+    {
+      id: 'qdhenry-team-memory-spring-cleaning',
+      title: '/team:memory-spring-cleaning',
+      description: 'Clean and organize project memory',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/team/memory-spring-cleaning.md',
+    },
+    {
+      id: 'qdhenry-team-architecture-review',
+      title: '/team:architecture-review',
+      description: 'Review and improve system architecture',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/team/architecture-review.md',
+    },
+    {
+      id: 'qdhenry-team-dependency-mapper',
+      title: '/team:dependency-mapper',
+      description: 'Map and analyze project dependencies',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/team/dependency-mapper.md',
+    },
+    {
+      id: 'qdhenry-team-migration-assistant',
+      title: '/team:migration-assistant',
+      description: 'Assist with system migration planning',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/team/migration-assistant.md',
+    },
+    {
+      id: 'qdhenry-team-decision-quality-analyzer',
+      title: '/team:decision-quality-analyzer',
+      description: 'Analyze decision quality with scenario testing',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/team/decision-quality-analyzer.md',
+    },
+    {
+      id: 'qdhenry-simulation-business-scenario-explorer',
+      title: '/simulation:business-scenario-explorer',
+      description: 'Explore business scenarios with constraint validation',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/simulation/business-scenario-explorer.md',
+    },
+    {
+      id: 'qdhenry-simulation-digital-twin-creator',
+      title: '/simulation:digital-twin-creator',
+      description: 'Create digital twins with data quality checks',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/simulation/digital-twin-creator.md',
+    },
+    {
+      id: 'qdhenry-simulation-decision-tree-explorer',
+      title: '/simulation:decision-tree-explorer',
+      description: 'Analyze decision branches with probability weighting',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/simulation/decision-tree-explorer.md',
+    },
+    {
+      id: 'qdhenry-simulation-market-response-modeler',
+      title: '/simulation:market-response-modeler',
+      description: 'Simulate customer and market response by segment',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/simulation/market-response-modeler.md',
+    },
+    {
+      id: 'qdhenry-simulation-timeline-compressor',
+      title: '/simulation:timeline-compressor',
+      description: 'Run accelerated scenario testing with confidence intervals',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/simulation/timeline-compressor.md',
+    },
+    {
+      id: 'qdhenry-simulation-constraint-modeler',
+      title: '/simulation:constraint-modeler',
+      description: 'Model world constraints with assumption validation',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/simulation/constraint-modeler.md',
+    },
+    {
+      id: 'qdhenry-simulation-future-scenario-generator',
+      title: '/simulation:future-scenario-generator',
+      description: 'Generate future scenarios with plausibility scoring',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/simulation/future-scenario-generator.md',
+    },
+    {
+      id: 'qdhenry-simulation-simulation-calibrator',
+      title: '/simulation:simulation-calibrator',
+      description: 'Test and refine simulation accuracy',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/simulation/simulation-calibrator.md',
+    },
+    {
+      id: 'qdhenry-rust-audit-clean-arch',
+      title: '/rust:audit-clean-arch',
+      description: 'Audit Rust codebase against Clean Architecture principles',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/audit-clean-arch.md',
+    },
+    {
+      id: 'qdhenry-rust-audit-dependencies',
+      title: '/rust:audit-dependencies',
+      description: 'Audit dependency direction violations',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/audit-dependencies.md',
+    },
+    {
+      id: 'qdhenry-rust-audit-layer-boundaries',
+      title: '/rust:audit-layer-boundaries',
+      description: 'Verify architectural layer boundaries',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/audit-layer-boundaries.md',
+    },
+    {
+      id: 'qdhenry-rust-audit-ports-adapters',
+      title: '/rust:audit-ports-adapters',
+      description: 'Audit Ports and Adapters pattern compliance',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/audit-ports-adapters.md',
+    },
+    {
+      id: 'qdhenry-rust-suggest-refactor',
+      title: '/rust:suggest-refactor',
+      description: 'Generate refactoring suggestions for Rust code',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/suggest-refactor.md',
+    },
+    {
+      id: 'qdhenry-rust-setup-tauri-mcp',
+      title: '/rust:setup-tauri-mcp',
+      description: 'Set up Tauri MCP integration',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/setup-tauri-mcp.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-launch',
+      title: '/rust:tauri:launch',
+      description: 'Launch Tauri application',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/launch.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-health',
+      title: '/rust:tauri:health',
+      description: 'Check Tauri app health',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/health.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-inspect',
+      title: '/rust:tauri:inspect',
+      description: 'Inspect Tauri app state',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/inspect.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-screenshot',
+      title: '/rust:tauri:screenshot',
+      description: 'Capture Tauri app screenshots',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/screenshot.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-call-ipc',
+      title: '/rust:tauri:call-ipc',
+      description: 'Call Tauri IPC commands',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/call-ipc.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-list-commands',
+      title: '/rust:tauri:list-commands',
+      description: 'List available IPC commands',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/list-commands.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-exec-js',
+      title: '/rust:tauri:exec-js',
+      description: 'Execute JavaScript in Tauri webview',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/exec-js.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-click',
+      title: '/rust:tauri:click',
+      description: 'Click elements in Tauri UI',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/click.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-type',
+      title: '/rust:tauri:type',
+      description: 'Type text into Tauri UI elements',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/type.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-window',
+      title: '/rust:tauri:window',
+      description: 'Manage Tauri windows',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/window.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-devtools',
+      title: '/rust:tauri:devtools',
+      description: 'Open Tauri DevTools',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/devtools.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-logs',
+      title: '/rust:tauri:logs',
+      description: 'View Tauri application logs',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/logs.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-resources',
+      title: '/rust:tauri:resources',
+      description: 'Manage Tauri app resources',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/resources.md',
+    },
+    {
+      id: 'qdhenry-rust-tauri-stop',
+      title: '/rust:tauri:stop',
+      description: 'Stop running Tauri application',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/rust/tauri/stop.md',
+    },
+    {
+      id: 'qdhenry-webmcp-webmcp',
+      title: '/webmcp:webmcp',
+      description: 'Implement WebMCP in web projects',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/webmcp/webmcp.md',
+    },
+    {
+      id: 'qdhenry-webmcp-setup',
+      title: '/webmcp:setup',
+      description: 'Set up WebMCP in a project from scratch',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/webmcp/setup.md',
+    },
+    {
+      id: 'qdhenry-webmcp-add-tool',
+      title: '/webmcp:add-tool',
+      description: 'Add a new WebMCP tool to a project',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/webmcp/add-tool.md',
+    },
+    {
+      id: 'qdhenry-webmcp-debug',
+      title: '/webmcp:debug',
+      description: 'Debug WebMCP tools that are not working correctly',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/webmcp/debug.md',
+    },
+    {
+      id: 'qdhenry-webmcp-audit',
+      title: '/webmcp:audit',
+      description: 'Audit existing WebMCP implementation for best practices',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/webmcp/audit.md',
+    },
+    {
+      id: 'qdhenry-media-extract-video-frames',
+      title: '/media:extract-video-frames',
+      description: 'Extract PNG frames and audio segments from video files',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/media/extract-video-frames.md',
+    },
+    {
+      id: 'qdhenry-media-elevenlabs-transcribe',
+      title: '/media:elevenlabs-transcribe',
+      description: 'Transcribe audio or video files with speaker diarization',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/media/elevenlabs-transcribe.md',
+    },
+    {
+      id: 'qdhenry-session-handoff',
+      title: '/session:handoff',
+      description: 'Create comprehensive handoff documents for context transfer',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/session/handoff.md',
+    },
+    {
+      id: 'qdhenry-session-handoff-continue',
+      title: '/session:handoff-continue',
+      description: 'Create handoff and spawn a new Claude session',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/session/handoff-continue.md',
+    },
+    {
+      id: 'qdhenry-orchestration-start',
+      title: '/orchestration:start',
+      description: 'Begin a new project with intelligent task decomposition',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/orchestration/start.md',
+    },
+    {
+      id: 'qdhenry-orchestration-status',
+      title: '/orchestration:status',
+      description: 'Check progress across all projects',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/orchestration/status.md',
+    },
+    {
+      id: 'qdhenry-orchestration-resume',
+      title: '/orchestration:resume',
+      description: 'Continue work with full context restoration',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/orchestration/resume.md',
+    },
+    {
+      id: 'qdhenry-orchestration-move',
+      title: '/orchestration:move',
+      description: 'Update task status as work progresses',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/orchestration/move.md',
+    },
+    {
+      id: 'qdhenry-orchestration-commit',
+      title: '/orchestration:commit',
+      description: 'Create professional Git commits linked to tasks',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/orchestration/commit.md',
+    },
+    {
+      id: 'qdhenry-orchestration-log',
+      title: '/orchestration:log',
+      description: 'View task activity and change history',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/orchestration/log.md',
+    },
+    {
+      id: 'qdhenry-orchestration-find',
+      title: '/orchestration:find',
+      description: 'Search and discover tasks across projects',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/orchestration/find.md',
+    },
+    {
+      id: 'qdhenry-orchestration-report',
+      title: '/orchestration:report',
+      description: 'Generate standup reports and executive summaries',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/orchestration/report.md',
+    },
+    {
+      id: 'qdhenry-orchestration-sync',
+      title: '/orchestration:sync',
+      description: 'Synchronize task status with Git commits',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/orchestration/sync.md',
+    },
+    {
+      id: 'qdhenry-orchestration-remove',
+      title: '/orchestration:remove',
+      description: 'Remove or archive completed tasks',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/orchestration/remove.md',
+    },
+    {
+      id: 'qdhenry-wfgy-init',
+      title: '/wfgy:init',
+      description: 'Initialize the WFGY semantic reasoning system',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/wfgy/init.md',
+    },
+    {
+      id: 'qdhenry-wfgy-bbmc',
+      title: '/wfgy:bbmc',
+      description: 'Apply semantic residue minimization',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/wfgy/bbmc.md',
+    },
+    {
+      id: 'qdhenry-wfgy-bbpf',
+      title: '/wfgy:bbpf',
+      description: 'Execute multi-path progression',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/wfgy/bbpf.md',
+    },
+    {
+      id: 'qdhenry-wfgy-bbcr',
+      title: '/wfgy:bbcr',
+      description: 'Trigger collapse-rebirth correction',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/wfgy/bbcr.md',
+    },
+    {
+      id: 'qdhenry-wfgy-bbam',
+      title: '/wfgy:bbam',
+      description: 'Apply attention modulation',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/wfgy/bbam.md',
+    },
+    {
+      id: 'qdhenry-wfgy-formula-all',
+      title: '/wfgy:formula-all',
+      description: 'Apply all WFGY formulas in sequence',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/wfgy/formula-all.md',
+    },
+    {
+      id: 'qdhenry-semantic-tree-init',
+      title: '/semantic:tree-init',
+      description: 'Create a new semantic memory tree',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/semantic/tree-init.md',
+    },
+    {
+      id: 'qdhenry-semantic-node-build',
+      title: '/semantic:node-build',
+      description: 'Record semantic nodes',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/semantic/node-build.md',
+    },
+    {
+      id: 'qdhenry-semantic-tree-view',
+      title: '/semantic:tree-view',
+      description: 'Display tree structure',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/semantic/tree-view.md',
+    },
+    {
+      id: 'qdhenry-semantic-tree-export',
+      title: '/semantic:tree-export',
+      description: 'Export memory to file',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/semantic/tree-export.md',
+    },
+    {
+      id: 'qdhenry-semantic-tree-import',
+      title: '/semantic:tree-import',
+      description: 'Import an existing tree',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/semantic/tree-import.md',
+    },
+    {
+      id: 'qdhenry-semantic-tree-switch',
+      title: '/semantic:tree-switch',
+      description: 'Switch between semantic trees',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/semantic/tree-switch.md',
+    },
+    {
+      id: 'qdhenry-boundary-detect',
+      title: '/boundary:detect',
+      description: 'Check knowledge limits',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/boundary/detect.md',
+    },
+    {
+      id: 'qdhenry-boundary-heatmap',
+      title: '/boundary:heatmap',
+      description: 'Visualize risk zones',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/boundary/heatmap.md',
+    },
+    {
+      id: 'qdhenry-boundary-risk-assess',
+      title: '/boundary:risk-assess',
+      description: 'Evaluate current risk',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/boundary/risk-assess.md',
+    },
+    {
+      id: 'qdhenry-boundary-bbcr-fallback',
+      title: '/boundary:bbcr-fallback',
+      description: 'Execute recovery fallback',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/boundary/bbcr-fallback.md',
+    },
+    {
+      id: 'qdhenry-boundary-safe-bridge',
+      title: '/boundary:safe-bridge',
+      description: 'Find safe semantic connections',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/boundary/safe-bridge.md',
+    },
+    {
+      id: 'qdhenry-reasoning-multi-path',
+      title: '/reasoning:multi-path',
+      description: 'Run parallel reasoning exploration',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/reasoning/multi-path.md',
+    },
+    {
+      id: 'qdhenry-reasoning-tension-calc',
+      title: '/reasoning:tension-calc',
+      description: 'Calculate semantic tension',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/reasoning/tension-calc.md',
+    },
+    {
+      id: 'qdhenry-reasoning-logic-vector',
+      title: '/reasoning:logic-vector',
+      description: 'Analyze logic flow',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/reasoning/logic-vector.md',
+    },
+    {
+      id: 'qdhenry-reasoning-resonance',
+      title: '/reasoning:resonance',
+      description: 'Measure reasoning stability',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/reasoning/resonance.md',
+    },
+    {
+      id: 'qdhenry-reasoning-chain-validate',
+      title: '/reasoning:chain-validate',
+      description: 'Verify logic chains',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/reasoning/chain-validate.md',
+    },
+    {
+      id: 'qdhenry-memory-checkpoint',
+      title: '/memory:checkpoint',
+      description: 'Create recovery points',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/memory/checkpoint.md',
+    },
+    {
+      id: 'qdhenry-memory-recall',
+      title: '/memory:recall',
+      description: 'Search and retrieve memories',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/memory/recall.md',
+    },
+    {
+      id: 'qdhenry-memory-compress',
+      title: '/memory:compress',
+      description: 'Optimize semantic tree size',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/memory/compress.md',
+    },
+    {
+      id: 'qdhenry-memory-merge',
+      title: '/memory:merge',
+      description: 'Combine related memory nodes',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/memory/merge.md',
+    },
+    {
+      id: 'qdhenry-memory-prune',
+      title: '/memory:prune',
+      description: 'Remove stale or irrelevant memories',
+      kind: 'command',
+      link: 'https://github.com/qdhenry/Claude-Command-Suite/blob/main/.claude/commands/memory/prune.md',
+    },
+    {
+      id: 'acc-build-react-app-25a2b59b',
+      title: '/build-react-app',
+      description:
+        'CI / Deployment: Builds React applications locally with intelligent error handling, creating specific tasks for build failures and providing appropriate server commands based on...',
+      kind: 'command',
+      link: 'https://github.com/wmjones/wyatt-personal-aws/blob/main/.claude/commands/build-react-app.md',
+    },
+    {
+      id: 'acc-release-9af905e2',
+      title: '/release',
+      description:
+        'CI / Deployment: Manages software releases by updating changelogs, reviewing README changes, evaluating version increments, and documenting release changes for better version tr...',
+      kind: 'command',
+      link: 'https://github.com/kelp/webdown/blob/main/.claude/commands/release.md',
+    },
+    {
+      id: 'acc-run-ci-16f86c68',
+      title: '/run-ci',
+      description:
+        'CI / Deployment: Activates virtual environments, runs CI-compatible check scripts, iteratively fixes errors, and ensures all tests pass before completion.',
+      kind: 'command',
+      link: 'https://github.com/hackdays-io/toban-contribution-viewer/blob/main/.claude/commands/run-ci.md',
+    },
+    {
+      id: 'acc-run-pre-commit-12037211',
+      title: '/run-pre-commit',
+      description:
+        'CI / Deployment: Runs pre-commit checks with intelligent results handling, analyzing outputs, creating tasks for issue fixing, and integrating with task management systems.',
+      kind: 'command',
+      link: 'https://github.com/wmjones/wyatt-personal-aws/blob/main/.claude/commands/run-pre-commit.md',
+    },
+    {
+      id: 'acc-analyze-code-ab6368ad',
+      title: '/analyze-code',
+      description:
+        'Code Analysis & Testing: Reviews code structure and identifies key components, mapping relationships between elements and suggesting targeted improvements for better architectur...',
+      kind: 'command',
+      link: 'https://github.com/Hkgstax/VALUGATOR/blob/main/.claude/commands/analyze-code.md',
+    },
+    {
+      id: 'acc-check-c441b922',
+      title: '/check',
+      description:
+        'Code Analysis & Testing: Performs comprehensive code quality and security checks, featuring static analysis integration, security vulnerability scanning, code style enforcement,...',
+      kind: 'command',
+      link: 'https://github.com/rygwdn/slack-tools/blob/main/.claude/commands/check.md',
+    },
+    {
+      id: 'acc-clean-9c176694',
+      title: '/clean',
+      description:
+        'Code Analysis & Testing: Addresses code formatting and quality issues by fixing black formatting problems, organizing imports with isort, resolving flake8 linting issues, and co...',
+      kind: 'command',
+      link: 'https://github.com/Graphlet-AI/eridu/blob/main/.claude/commands/clean.md',
+    },
+    {
+      id: 'acc-code-analysis-4d392173',
+      title: '/code_analysis',
+      description:
+        'Code Analysis & Testing: Provides a menu of advanced code analysis commands for deep inspection, including knowledge graph generation, optimization suggestions, and quality eval...',
+      kind: 'command',
+      link: 'https://github.com/kingler/n8n_agent/blob/main/.claude/commands/code_analysis.md',
+    },
+    {
+      id: 'acc-implement-issue-3fbd3d12',
+      title: '/implement-issue',
+      description:
+        'Code Analysis & Testing: Implements GitHub issues following strict project guidelines, complete implementation checklists, variable naming conventions, testing procedures, and d...',
+      kind: 'command',
+      link: 'https://github.com/cmxela/thinkube/blob/main/.claude/commands/implement-issue.md',
+    },
+    {
+      id: 'acc-implement-task-8926a1ac',
+      title: '/implement-task',
+      description:
+        'Code Analysis & Testing: Approaches task implementation methodically by thinking through strategy step-by-step, evaluating different approaches, considering tradeoffs, and imple...',
+      kind: 'command',
+      link: 'https://github.com/Hkgstax/VALUGATOR/blob/main/.claude/commands/implement-task.md',
+    },
+    {
+      id: 'acc-optimize-06598050',
+      title: '/optimize',
+      description:
+        'Code Analysis & Testing: Analyzes code performance to identify bottlenecks, proposing concrete optimizations with implementation guidance for improved application performance.',
+      kind: 'command',
+      link: 'https://github.com/to4iki/ai-project-rules/blob/main/.claude/commands/optimize.md',
+    },
+    {
+      id: 'acc-repro-issue-bf8d6e2e',
+      title: '/repro-issue',
+      description:
+        'Code Analysis & Testing: Creates reproducible test cases for GitHub issues, ensuring tests fail reliably and documenting clear reproduction steps for developers.',
+      kind: 'command',
+      link: 'https://github.com/rzykov/metabase/blob/master/.claude/commands/repro-issue.md',
+    },
+    {
+      id: 'acc-task-breakdown-9d82cee9',
+      title: '/task-breakdown',
+      description:
+        'Code Analysis & Testing: Analyzes feature requirements, identifies components and dependencies, creates manageable tasks, and sets priorities for efficient feature implementation.',
+      kind: 'command',
+      link: 'https://github.com/Hkgstax/VALUGATOR/blob/main/.claude/commands/task-breakdown.md',
+    },
+  ],
+  [
+    {
+      id: 'acc-tdd-56528972',
+      title: '/tdd',
+      description:
+        'Code Analysis & Testing: Guides development using Test-Driven Development principles, enforcing Red-Green-Refactor discipline, integrating with git workflow, and managing PR cre...',
+      kind: 'command',
+      link: 'https://github.com/zscott/pane/blob/main/.claude/commands/tdd.md',
+    },
+    {
+      id: 'acc-tdd-implement-38c1543e',
+      title: '/tdd-implement',
+      description:
+        'Code Analysis & Testing: Implements Test-Driven Development by analyzing feature requirements, creating tests first (red), implementing minimal passing code (green), and refacto...',
+      kind: 'command',
+      link: 'https://github.com/jerseycheese/Narraitor/blob/feature/issue-227-ai-suggestions/.claude/commands/tdd-implement.md',
+    },
+    {
+      id: 'acc-testing-plan-integration-98ff955c',
+      title: '/testing_plan_integration',
+      description:
+        'Code Analysis & Testing: Creates inline Rust-style tests, suggests refactoring for testability, analyzes code challenges, and creates comprehensive test coverage for robust code.',
+      kind: 'command',
+      link: 'https://github.com/buster-so/buster/blob/main/api/.claude/commands/testing_plan_integration.md',
+    },
+    {
+      id: 'acc-context-prime-bd83ec21',
+      title: '/context-prime',
+      description:
+        'Context Loading & Priming: Primes Claude with comprehensive project understanding by loading repository structure, setting development context, establishing project goals, and d...',
+      kind: 'command',
+      link: 'https://github.com/elizaOS/elizaos.github.io/blob/main/.claude/commands/context-prime.md',
+    },
+    {
+      id: 'acc-initref-6144dd2f',
+
title: '/initref', + description: + 'Context Loading & Priming: Initializes reference documentation structure with standard doc templates, API reference setup, documentation conventions, and placeholder content gen...', + kind: 'command', + link: 'https://github.com/okuvshynov/cubestat/blob/main/.claude/commands/initref.md', + }, + { + id: 'acc-load-llms-txt-2d9997c5', + title: '/load-llms-txt', + description: + 'Context Loading & Priming: Loads LLM configuration files to context, importing specific terminology, model configurations, and establishing baseline terminology for AI discussions.', + kind: 'command', + link: 'https://github.com/ethpandaops/xatu-data/blob/master/.claude/commands/load-llms-txt.md', + }, + { + id: 'acc-load-coo-context-85280977', + title: '/load_coo_context', + description: + 'Context Loading & Priming: References specific files for sparse matrix operations, explains transform usage, compares with previous approaches, and sets data formatting context...', + kind: 'command', + link: 'https://github.com/Mjvolk3/torchcell/blob/main/.claude/commands/load_coo_context.md', + }, + { + id: 'acc-load-dango-pipeline-dd3b96c0', + title: '/load_dango_pipeline', + description: + 'Context Loading & Priming: Sets context for model training by referencing pipeline files, establishing working context, and preparing for pipeline work with relevant documentation.', + kind: 'command', + link: 'https://github.com/Mjvolk3/torchcell/blob/main/.claude/commands/load_dango_pipeline.md', + }, + { + id: 'acc-prime-03eb1e00', + title: '/prime', + description: + 'Context Loading & Priming: Sets up initial project context by viewing directory structure and reading key files, creating standardized context with directory visualization and k...', + kind: 'command', + link: 'https://github.com/yzyydev/AI-Engineering-Structure/blob/main/.claude/commands/prime.md', + }, + { + id: 'acc-reminder-4a58dd73', + title: '/reminder', + description: + 'Context Loading & Priming: 
Re-establishes project context after conversation breaks or compaction, restoring context and fixing guideline inconsistencies for complex implementat...', + kind: 'command', + link: 'https://github.com/cmxela/thinkube/blob/main/.claude/commands/reminder.md', + }, + { + id: 'acc-rsi-4a6d731c', + title: '/rsi', + description: + 'Context Loading & Priming: Reads all commands and key project files to optimize AI-assisted development by streamlining the process, loading command context, and setting up for...', + kind: 'command', + link: 'https://github.com/ddisisto/si/blob/main/.claude/commands/rsi.md', + }, + { + id: 'acc-add-to-changelog-a8b378b6', + title: '/add-to-changelog', + description: + 'Documentation & Changelogs: Adds new entries to changelog files while maintaining format consistency, properly documenting changes, and following established project standards f...', + kind: 'command', + link: 'https://github.com/berrydev-ai/blockdoc-python/blob/main/.claude/commands/add-to-changelog.md', + }, + { + id: 'acc-create-docs-96c8c0dd', + title: '/create-docs', + description: + 'Documentation & Changelogs: Analyzes code structure and purpose to create comprehensive documentation detailing inputs/outputs, behavior, user interaction flows, and edge cases...', + kind: 'command', + link: 'https://github.com/jerseycheese/Narraitor/blob/feature/issue-227-ai-suggestions/.claude/commands/create-docs.md', + }, + { + id: 'acc-docs-c4020195', + title: '/docs', + description: + 'Documentation & Changelogs: Generates comprehensive documentation that follows project structure, documenting APIs and usage patterns with consistent formatting for better user...', + kind: 'command', + link: 'https://github.com/slunsford/coffee-analytics/blob/main/.claude/commands/docs.md', + }, + { + id: 'acc-explain-issue-fix-3cccd51b', + title: '/explain-issue-fix', + description: + 'Documentation & Changelogs: Documents solution approaches for GitHub issues, explaining technical decisions, 
detailing challenges overcome, and providing implementation context...', + kind: 'command', + link: 'https://github.com/hackdays-io/toban-contribution-viewer/blob/main/.claude/commands/explain-issue-fix.md', + }, + { + id: 'acc-update-docs-0fbc2e1b', + title: '/update-docs', + description: + 'Documentation & Changelogs: Reviews current documentation status, updates implementation progress, reviews phase documents, and maintains documentation consistency across the pr...', + kind: 'command', + link: 'https://github.com/Consiliency/Flutter-Structurizr/blob/main/.claude/commands/update-docs.md', + }, + { + id: 'acc-create-hook-f511a35a', + title: '/create-hook', + description: + 'General: Slash command for hook creation - intelligently prompts you through the creation process with smart suggestions based on your project setup (TS, Prettier, ESLint...).', + kind: 'command', + link: 'https://github.com/omril321/automated-notebooklm/blob/main/.claude/commands/create-hook.md', + }, + { + id: 'acc-linux-desktop-slash-commands-4b7f134a', + title: '/linux-desktop-slash-commands', + description: + 'General: A library of slash commands intended specifically to facilitate common and advanced operations on Linux desktop environments (although many would also be useful on Linu...', + kind: 'command', + link: 'https://github.com/danielrosehill/Claude-Code-Linux-Desktop-Slash-Commands', + }, + { + id: 'acc-act-8a9e1338', + title: '/act', + description: + 'Miscellaneous: Generates React components with proper accessibility, creating ARIA-compliant components with keyboard navigation that follow React best practices and include com...', + kind: 'command', + link: 'https://github.com/sotayamashita/dotfiles/blob/main/.claude/commands/act.md', + }, + { + id: 'acc-dump-98539def', + title: '/dump', + description: + 'Miscellaneous: Dumps the current Claude Code conversation to a markdown file in `.claude/logs/` with timestamped files that include session details and preserve full 
conversatio...', + kind: 'command', + link: 'https://gist.github.com/fumito-ito/77c308e0382e06a9c16b22619f8a2f83#file-dump-md', + }, + { + id: 'acc-fixing-go-in-graph-2a241784', + title: '/fixing_go_in_graph', + description: + 'Miscellaneous: Focuses on Gene Ontology annotation integration in graph databases, handling multiple data sources, addressing graph representation issues, and ensuring correct d...', + kind: 'command', + link: 'https://github.com/Mjvolk3/torchcell/blob/main/.claude/commands/fixing_go_in_graph.md', + }, + { + id: 'acc-mermaid-c13d5b06', + title: '/mermaid', + description: + 'Miscellaneous: Generates Mermaid diagrams from SQL schema files, creating entity relationship diagrams with table properties, validating diagram compilation, and ensuring comple...', + kind: 'command', + link: 'https://github.com/GaloyMoney/lana-bank/blob/main/.claude/commands/mermaid.md', + }, + { + id: 'acc-review-dcell-model-12ba588f', + title: '/review_dcell_model', + description: + 'Miscellaneous: Reviews old Dcell implementation files, comparing with newer Dango model, noting changes over time, and analyzing refactoring approaches for better code organizat...', + kind: 'command', + link: 'https://github.com/Mjvolk3/torchcell/blob/main/.claude/commands/review_dcell_model.md', + }, + { + id: 'acc-use-stepper-18e57101', + title: '/use-stepper', + description: + 'Miscellaneous: Reformats documentation to use React Stepper component, transforming heading formats, applying proper indentation, and maintaining markdown compatibility with adm...', + kind: 'command', + link: 'https://github.com/zuplo/docs/blob/main/.claude/commands/use-stepper.md', + }, + { + id: 'acc-create-command-ca5eeb3c', + title: '/create-command', + description: + 'Project & Task Management: Guides Claude through creating new custom commands with proper structure by analyzing requirements, templating commands by category, enforcing command...', + kind: 'command', + link: 
'https://github.com/scopecraft/command/blob/main/.claude/commands/create-command.md', + }, + { + id: 'acc-create-plan-1786c511', + title: '/create-plan', + description: + 'Project & Task Management: Generates comprehensive product requirement documents outlining detailed specifications, requirements, and features following standardized document st...', + kind: 'command', + link: 'https://github.com/hesreallyhim/inkverse-fork/blob/preserve-claude-resources/.claude/commands/create-plan.md', + }, + { + id: 'acc-create-prp-00d0ce46', + title: '/create-prp', + description: + 'Project & Task Management: Creates product requirement plans by reading PRP methodology, following template structure, creating comprehensive requirements, and structuring produ...', + kind: 'command', + link: 'https://github.com/Wirasm/claudecode-utils/blob/main/.claude/commands/create-prp.md', + }, + { + id: 'acc-do-issue-c3c7e41a', + title: '/do-issue', + description: + 'Project & Task Management: Implements GitHub issues with manual review points, following a structured approach with issue number parameter and offering alternative automated mod...', + kind: 'command', + link: 'https://github.com/jerseycheese/Narraitor/blob/feature/issue-227-ai-suggestions/.claude/commands/do-issue.md', + }, + { + id: 'acc-next-task-49104658', + title: '/next-task', + description: + 'Project & Task Management: Gets the next task from TaskMaster and creates a branch for it, integrating with task management systems, automating branch creation, and enforcing na...', + kind: 'command', + link: 'https://github.com/wmjones/wyatt-personal-aws/blob/main/.claude/commands/next-task.md', + }, + { + id: 'acc-prd-generator-577f2d2f', + title: '/prd-generator', + description: + 'Project & Task Management: A Claude Code plugin that generates comprehensive Product Requirements Documents (PRDs) from conversation context. 
Invoke `/create-prd` after discussi...', + kind: 'command', + link: 'https://github.com/dredozubov/prd-generator', + }, + { + id: 'acc-project-hello-w-name-2b82a27e', + title: '/project_hello_w_name', + description: + 'Project & Task Management: Creates customizable greeting components with name input, demonstrating argument passing, component reusability, state management, and user input hand...', + kind: 'command', + link: 'https://github.com/disler/just-prompt/blob/main/.claude/commands/project_hello_w_name.md', + }, + { + id: 'acc-todo-374c2e16', + title: '/todo', + description: + 'Project & Task Management: A convenient command to quickly manage project todo items without leaving the Claude Code interface, featuring due dates, sorting, task prioritization...', + kind: 'command', + link: 'https://github.com/chrisleyva/todo-slash-command/blob/main/todo.md', + }, + { + id: 'acc-analyze-issue-ebd9c2ca', + title: '/analyze-issue', + description: + 'Version Control & Git: Fetches GitHub issue details to create comprehensive implementation specifications, analyzing requirements and planning structured approach with clear imp...', + kind: 'command', + link: 'https://github.com/jerseycheese/Narraitor/blob/feature/issue-227-ai-suggestions/.claude/commands/analyze-issue.md', + }, + { + id: 'acc-bug-fix-cd91d393', + title: '/bug-fix', + description: + 'Version Control & Git: Streamlines bug fixing by creating a GitHub issue first, then a feature branch for implementing and thoroughly testing the solution before merging.', + kind: 'command', + link: 'https://github.com/danielscholl/mvn-mcp-server/blob/main/.claude/commands/bug-fix.md', + }, + { + id: 'acc-husky-e35f96ed', + title: '/husky', + description: + 'Version Control & Git: Sets up and manages Husky Git hooks by configuring pre-commit hooks, establishing commit message standards, integrating with linting tools, and ensuring c...', + kind: 'command', + link: 
'https://github.com/evmts/tevm-monorepo/blob/main/.claude/commands/husky.md', + }, + { + id: 'acc-update-branch-name-05278b5b', + title: '/update-branch-name', + description: + 'Version Control & Git: Updates branch names with proper prefixes and formats, enforcing naming conventions, supporting semantic prefixes, and managing remote branch updates.', + kind: 'command', + link: 'https://github.com/giselles-ai/giselle/blob/main/.claude/commands/update-branch-name.md', + }, + ], +]; + +export const commands: CatalogItem[] = commandsCatalogPages.flat(); + +export const commandsTotalItems = commands.length; +export const commandsTotalPages = commandsCatalogPages.length; diff --git a/website/src/content/data/constants.ts b/website/src/content/data/constants.ts new file mode 100644 index 00000000..59727ea2 --- /dev/null +++ b/website/src/content/data/constants.ts @@ -0,0 +1 @@ +export const CATALOG_PAGE_SIZE = 100; diff --git a/website/src/content/data/skills.ts b/website/src/content/data/skills.ts new file mode 100644 index 00000000..77bfc4ae --- /dev/null +++ b/website/src/content/data/skills.ts @@ -0,0 +1,8670 @@ +import type { CatalogItem } from './types'; + +export const SKILLS_CATALOG_PAGE_SIZE = 100; + +export const skillsCatalogPages: CatalogItem[][] = [ + // Page 1 + [ + { + id: 'skill_vercel-labs-next-best-practices_fe62bd6955', + title: 'vercel-labs/next-best-practices', + description: + 'vercel-labs/next-best-practices skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/vercel-labs/next-skills/blob/main/skills/next-best-practices/SKILL.md', + }, + { + id: 'skill_vercel-labs-next-cache-components_9ef8b65da2', + title: 'vercel-labs/next-cache-components', + description: + 'vercel-labs/next-cache-components skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/vercel-labs/next-skills/blob/main/skills/next-cache-components/SKILL.md', + }, + { + id:
'skill_vercel-labs-next-upgrade_40b03f5a6e', + title: 'vercel-labs/next-upgrade', + description: + 'vercel-labs/next-upgrade skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/vercel-labs/next-skills/blob/main/skills/next-upgrade/SKILL.md', + }, + { + id: 'skill_vercel-labs-react-best-practices_be75307138', + title: 'vercel-labs/react-best-practices', + description: + 'vercel-labs/react-best-practices skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/vercel-labs/agent-skills/blob/main/skills/react-best-practices/SKILL.md', + }, + { + id: 'skill_vercel-labs-composition-patterns_482ee00345', + title: 'vercel-labs/composition-patterns', + description: + 'vercel-labs/composition-patterns skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/vercel-labs/agent-skills/blob/main/skills/composition-patterns/SKILL.md', + }, + { + id: 'skill_vercel-labs-web-design-guidelines_fe3bc7ad67', + title: 'vercel-labs/web-design-guidelines', + description: + 'vercel-labs/web-design-guidelines skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/vercel-labs/agent-skills/blob/main/skills/web-design-guidelines/SKILL.md', + }, + { + id: 'skill_vitest_b22b11c5a4', + title: 'vitest', + description: 'vitest skill for Claude workflows from onmax/nuxt-skills.', + kind: 'skill', + link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/vitest/SKILL.md', + }, + { + id: 'skill_vite_abaef2f738', + title: 'vite', + description: 'vite skill for Claude workflows from onmax/nuxt-skills.', + kind: 'skill', + link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/vite/SKILL.md', + }, + { + id: 'skill_pnpm_b4fd4c7ec3', + title: 'pnpm', + description: 'pnpm skill for Claude workflows from onmax/nuxt-skills.', + kind: 'skill', + link: 
'https://github.com/onmax/nuxt-skills/blob/main/skills/pnpm/SKILL.md', + }, + { + id: 'skill_postgres_fa61aa4ce4', + title: 'postgres', + description: 'postgres skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/sanjay3290/ai-skills/blob/main/skills/postgres/SKILL.md', + }, + { + id: 'skill_openai-openai-docs_2b7d099127', + title: 'openai/openai-docs', + description: + 'openai/openai-docs skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/openai-docs/SKILL.md', + }, + { + id: 'skill_openai-playwright_c27d8e9d59', + title: 'openai/playwright', + description: + 'openai/playwright skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/playwright/SKILL.md', + }, + { + id: 'skill_senior-frontend_5170623f57', + title: 'senior-frontend', + description: 'senior-frontend skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/senior-frontend/SKILL.md', + }, + { + id: 'skill_senior-backend_e26c791dcd', + title: 'senior-backend', + description: 'senior-backend skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/senior-backend/SKILL.md', + }, + { + id: 'skill_senior-fullstack_f805988f39', + title: 'senior-fullstack', + description: 'senior-fullstack skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/senior-fullstack/SKILL.md', + }, + { + id: 'skill_senior-devops_7c2691ce88', + title: 'senior-devops', + description: 'senior-devops skill for Claude workflows from alirezarezvani/claude-skills.', + 
kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/senior-devops/SKILL.md', + }, + { + id: 'skill_senior-security_155667bfb2', + title: 'senior-security', + description: 'senior-security skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/senior-security/SKILL.md', + }, + { + id: 'skill_senior-secops_030172daed', + title: 'senior-secops', + description: 'senior-secops skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/senior-secops/SKILL.md', + }, + { + id: 'skill_senior-qa_db88ca09de', + title: 'senior-qa', + description: 'senior-qa skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/senior-qa/SKILL.md', + }, + { + id: 'skill_senior-architect_ff50ec94c6', + title: 'senior-architect', + description: 'senior-architect skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/senior-architect/SKILL.md', + }, + { + id: 'skill_code-reviewer_25a4bc8c76', + title: 'code-reviewer', + description: 'code-reviewer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/code-reviewer/SKILL.md', + }, + { + id: 'skill_playwright-pro_6c1eebb903', + title: 'playwright-pro', + description: 'playwright-pro skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/playwright-pro/SKILL.md', + }, + { + id: 'skill_tdd-guide_b6e6800141', + title: 'tdd-guide', + description: 'tdd-guide 
skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/tdd-guide/SKILL.md', + }, + { + id: 'skill_tech-stack-evaluator_404b23d8e5', + title: 'tech-stack-evaluator', + description: + 'tech-stack-evaluator skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/tech-stack-evaluator/SKILL.md', + }, + { + id: 'skill_a11y-audit_227a345852', + title: 'a11y-audit', + description: 'a11y-audit skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/a11y-audit/SKILL.md', + }, + { + id: 'skill_incident-commander_5b81954826', + title: 'incident-commander', + description: + 'incident-commander skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/incident-commander/SKILL.md', + }, + { + id: 'skill_security-pen-testing_cc6afad657', + title: 'security-pen-testing', + description: + 'security-pen-testing skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/security-pen-testing/SKILL.md', + }, + { + id: 'skill_aws-solution-architect_7b7f8f2b5f', + title: 'aws-solution-architect', + description: + 'aws-solution-architect skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/aws-solution-architect/SKILL.md', + }, + { + id: 'skill_azure-cloud-architect_ea45a43ae5', + title: 'azure-cloud-architect', + description: + 'azure-cloud-architect skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 
'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/azure-cloud-architect/SKILL.md', + }, + { + id: 'skill_gcp-cloud-architect_40c0fe0cdc', + title: 'gcp-cloud-architect', + description: + 'gcp-cloud-architect skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/gcp-cloud-architect/SKILL.md', + }, + { + id: 'skill_stripe-integration-expert_36449db863', + title: 'stripe-integration-expert', + description: + 'stripe-integration-expert skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/stripe-integration-expert/SKILL.md', + }, + { + id: 'skill_google-workspace-cli_7c6bd187c5', + title: 'google-workspace-cli', + description: + 'google-workspace-cli skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/google-workspace-cli/SKILL.md', + }, + { + id: 'skill_ms365-tenant-manager_5c9ef0adbb', + title: 'ms365-tenant-manager', + description: + 'ms365-tenant-manager skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/ms365-tenant-manager/SKILL.md', + }, + { + id: 'skill_snowflake-development_f4f33f3034', + title: 'snowflake-development', + description: + 'snowflake-development skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/snowflake-development/SKILL.md', + }, + { + id: 'skill_senior-data-engineer_e8da596af6', + title: 'senior-data-engineer', + description: + 'senior-data-engineer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 
'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/senior-data-engineer/SKILL.md', + }, + { + id: 'skill_senior-data-scientist_4ab6487755', + title: 'senior-data-scientist', + description: + 'senior-data-scientist skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/senior-data-scientist/SKILL.md', + }, + { + id: 'skill_senior-ml-engineer_b799fed340', + title: 'senior-ml-engineer', + description: + 'senior-ml-engineer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/senior-ml-engineer/SKILL.md', + }, + { + id: 'skill_senior-computer-vision_4ad5582ca2', + title: 'senior-computer-vision', + description: + 'senior-computer-vision skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering-team/senior-computer-vision/SKILL.md', + }, + { + id: 'skill_agent-designer_4c21719593', + title: 'agent-designer', + description: 'agent-designer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/agent-designer/SKILL.md', + }, + { + id: 'skill_agent-workflow-designer_2062b091cd', + title: 'agent-workflow-designer', + description: + 'agent-workflow-designer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/agent-workflow-designer/SKILL.md', + }, + { + id: 'skill_agenthub_e827b734e0', + title: 'agenthub', + description: 'agenthub skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/agenthub/SKILL.md', + }, + { + id: 
'skill_api-design-reviewer_b0704df499', + title: 'api-design-reviewer', + description: + 'api-design-reviewer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/api-design-reviewer/SKILL.md', + }, + { + id: 'skill_api-test-suite-builder_6ae9681c84', + title: 'api-test-suite-builder', + description: + 'api-test-suite-builder skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/api-test-suite-builder/SKILL.md', + }, + { + id: 'skill_autoresearch-agent_9f48e4653b', + title: 'autoresearch-agent', + description: + 'autoresearch-agent skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/autoresearch-agent/SKILL.md', + }, + { + id: 'skill_browser-automation_5051b6a430', + title: 'browser-automation', + description: + 'browser-automation skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/browser-automation/SKILL.md', + }, + { + id: 'skill_changelog-generator_7e7ee3f49d', + title: 'changelog-generator', + description: + 'changelog-generator skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/changelog-generator/SKILL.md', + }, + { + id: 'skill_ci-cd-pipeline-builder_5bc103a4df', + title: 'ci-cd-pipeline-builder', + description: + 'ci-cd-pipeline-builder skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/ci-cd-pipeline-builder/SKILL.md', + }, + { + id: 'skill_codebase-onboarding_797d413440', + title: 'codebase-onboarding', + description: 
+ 'codebase-onboarding skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/codebase-onboarding/SKILL.md', + }, + { + id: 'skill_database-designer_2c578b6187', + title: 'database-designer', + description: + 'database-designer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/database-designer/SKILL.md', + }, + { + id: 'skill_database-schema-designer_3bb2c10d93', + title: 'database-schema-designer', + description: + 'database-schema-designer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/database-schema-designer/SKILL.md', + }, + { + id: 'skill_dependency-auditor_b4def3b36e', + title: 'dependency-auditor', + description: + 'dependency-auditor skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/dependency-auditor/SKILL.md', + }, + { + id: 'skill_docker-development_45ebe1d43c', + title: 'docker-development', + description: + 'docker-development skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/docker-development/SKILL.md', + }, + { + id: 'skill_env-secrets-manager_e3d67d0db7', + title: 'env-secrets-manager', + description: + 'env-secrets-manager skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/env-secrets-manager/SKILL.md', + }, + { + id: 'skill_focused-fix_95b237d015', + title: 'focused-fix', + description: 'focused-fix skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 
'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/focused-fix/SKILL.md', + }, + { + id: 'skill_git-worktree-manager_7a6129d9cf', + title: 'git-worktree-manager', + description: + 'git-worktree-manager skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/git-worktree-manager/SKILL.md', + }, + { + id: 'skill_helm-chart-builder_7a7ed8d508', + title: 'helm-chart-builder', + description: + 'helm-chart-builder skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/helm-chart-builder/SKILL.md', + }, + { + id: 'skill_interview-system-designer_890db04ad5', + title: 'interview-system-designer', + description: + 'interview-system-designer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/interview-system-designer/SKILL.md', + }, + { + id: 'skill_mcp-server-builder_4610eba94a', + title: 'mcp-server-builder', + description: + 'mcp-server-builder skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/mcp-server-builder/SKILL.md', + }, + { + id: 'skill_migration-architect_6803d970ba', + title: 'migration-architect', + description: + 'migration-architect skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/migration-architect/SKILL.md', + }, + { + id: 'skill_monorepo-navigator_7014d89a0f', + title: 'monorepo-navigator', + description: + 'monorepo-navigator skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 
'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/monorepo-navigator/SKILL.md', + }, + { + id: 'skill_observability-designer_a5d6131515', + title: 'observability-designer', + description: + 'observability-designer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/observability-designer/SKILL.md', + }, + { + id: 'skill_performance-profiler_2647318647', + title: 'performance-profiler', + description: + 'performance-profiler skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/performance-profiler/SKILL.md', + }, + { + id: 'skill_pr-review-expert_e164553ea8', + title: 'pr-review-expert', + description: 'pr-review-expert skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/pr-review-expert/SKILL.md', + }, + { + id: 'skill_rag-architect_086bb9e4ad', + title: 'rag-architect', + description: 'rag-architect skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/rag-architect/SKILL.md', + }, + { + id: 'skill_release-manager_b25ad3a679', + title: 'release-manager', + description: 'release-manager skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/release-manager/SKILL.md', + }, + { + id: 'skill_runbook-generator_2f9dbfa166', + title: 'runbook-generator', + description: + 'runbook-generator skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/runbook-generator/SKILL.md', + }, + { + id: 
'skill_secrets-vault-manager_207fa1bab8', + title: 'secrets-vault-manager', + description: + 'secrets-vault-manager skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/secrets-vault-manager/SKILL.md', + }, + { + id: 'skill_skill-security-auditor_f17493cdd4', + title: 'skill-security-auditor', + description: + 'skill-security-auditor skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/skill-security-auditor/SKILL.md', + }, + { + id: 'skill_skill-tester_54d8aa00d5', + title: 'skill-tester', + description: 'skill-tester skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/skill-tester/SKILL.md', + }, + { + id: 'skill_spec-driven-workflow_3038cdc178', + title: 'spec-driven-workflow', + description: + 'spec-driven-workflow skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/spec-driven-workflow/SKILL.md', + }, + { + id: 'skill_sql-database-assistant_2e5ab3d037', + title: 'sql-database-assistant', + description: + 'sql-database-assistant skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/sql-database-assistant/SKILL.md', + }, + { + id: 'skill_tech-debt-tracker_2e03d5c122', + title: 'tech-debt-tracker', + description: + 'tech-debt-tracker skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/tech-debt-tracker/SKILL.md', + }, + { + id: 'skill_terraform-patterns_f944d27605', + title: 'terraform-patterns', + description: + 
'terraform-patterns skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/engineering/terraform-patterns/SKILL.md', + }, + { + id: 'skill_product-manager-toolkit_5bd089778a', + title: 'product-manager-toolkit', + description: + 'product-manager-toolkit skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/product-manager-toolkit/SKILL.md', + }, + { + id: 'skill_product-strategist_d5a4ee6479', + title: 'product-strategist', + description: + 'product-strategist skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/product-strategist/SKILL.md', + }, + { + id: 'skill_product-discovery_4866b3e3f8', + title: 'product-discovery', + description: + 'product-discovery skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/product-discovery/SKILL.md', + }, + { + id: 'skill_product-analytics_e617d45416', + title: 'product-analytics', + description: + 'product-analytics skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/product-analytics/SKILL.md', + }, + { + id: 'skill_agile-product-owner_c2e3b57d25', + title: 'agile-product-owner', + description: + 'agile-product-owner skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/agile-product-owner/SKILL.md', + }, + { + id: 'skill_experiment-designer_c2841110ad', + title: 'experiment-designer', + description: + 'experiment-designer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + 
link: 'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/experiment-designer/SKILL.md', + }, + { + id: 'skill_roadmap-communicator_46047e0785', + title: 'roadmap-communicator', + description: + 'roadmap-communicator skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/roadmap-communicator/SKILL.md', + }, + { + id: 'skill_competitive-teardown_992c3ef043', + title: 'competitive-teardown', + description: + 'competitive-teardown skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/competitive-teardown/SKILL.md', + }, + { + id: 'skill_ux-researcher-designer_6701e31617', + title: 'ux-researcher-designer', + description: + 'ux-researcher-designer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/ux-researcher-designer/SKILL.md', + }, + { + id: 'skill_ui-design-system_ea0f2f744a', + title: 'ui-design-system', + description: 'ui-design-system skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/ui-design-system/SKILL.md', + }, + { + id: 'skill_landing-page-generator_64746383e0', + title: 'landing-page-generator', + description: + 'landing-page-generator skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/landing-page-generator/SKILL.md', + }, + { + id: 'skill_saas-scaffolder_3754cdfc98', + title: 'saas-scaffolder', + description: 'saas-scaffolder skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 
'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/saas-scaffolder/SKILL.md', + }, + { + id: 'skill_code-to-prd_dec346778c', + title: 'code-to-prd', + description: 'code-to-prd skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/code-to-prd/SKILL.md', + }, + { + id: 'skill_research-summarizer_c99113f2bd', + title: 'research-summarizer', + description: + 'research-summarizer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/product-team/research-summarizer/SKILL.md', + }, + { + id: 'skill_senior-pm_b290ae6615', + title: 'senior-pm', + description: 'senior-pm skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/project-management/senior-pm/SKILL.md', + }, + { + id: 'skill_scrum-master_b911c51868', + title: 'scrum-master', + description: 'scrum-master skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/project-management/scrum-master/SKILL.md', + }, + { + id: 'skill_jira-expert_6a5ba79150', + title: 'jira-expert', + description: 'jira-expert skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/project-management/jira-expert/SKILL.md', + }, + { + id: 'skill_confluence-expert_3995088245', + title: 'confluence-expert', + description: + 'confluence-expert skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/project-management/confluence-expert/SKILL.md', + }, + { + id: 'skill_atlassian-admin_84b72569e5', + title: 'atlassian-admin', + description: 'atlassian-admin skill for 
Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/project-management/atlassian-admin/SKILL.md', + }, + { + id: 'skill_atlassian-templates_47278c68f6', + title: 'atlassian-templates', + description: + 'atlassian-templates skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/project-management/atlassian-templates/SKILL.md', + }, + { + id: 'skill_saas-metrics-coach_3f7f253585', + title: 'saas-metrics-coach', + description: + 'saas-metrics-coach skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/finance/saas-metrics-coach/SKILL.md', + }, + { + id: 'skill_financial-analyst_1a730faa4f', + title: 'financial-analyst', + description: + 'financial-analyst skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/finance/financial-analyst/SKILL.md', + }, + { + id: 'skill_cto-advisor_864fac28bc', + title: 'cto-advisor', + description: 'cto-advisor skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/cto-advisor/SKILL.md', + }, + { + id: 'skill_ceo-advisor_c6f3a4d8fc', + title: 'ceo-advisor', + description: 'ceo-advisor skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/ceo-advisor/SKILL.md', + }, + { + id: 'skill_cfo-advisor_7b05033ee7', + title: 'cfo-advisor', + description: 'cfo-advisor skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/cfo-advisor/SKILL.md', + }, + { + id: 
'skill_ciso-advisor_3849386b31', + title: 'ciso-advisor', + description: 'ciso-advisor skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/ciso-advisor/SKILL.md', + }, + { + id: 'skill_cpo-advisor_7ae94fef56', + title: 'cpo-advisor', + description: 'cpo-advisor skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/cpo-advisor/SKILL.md', + }, + ], + // Page 2 + [ + { + id: 'skill_coo-advisor_89cae773e0', + title: 'coo-advisor', + description: 'coo-advisor skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/coo-advisor/SKILL.md', + }, + { + id: 'skill_cro-advisor_a5e89c0034', + title: 'cro-advisor', + description: 'cro-advisor skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/cro-advisor/SKILL.md', + }, + { + id: 'skill_cmo-advisor_469fcebf2b', + title: 'cmo-advisor', + description: 'cmo-advisor skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/cmo-advisor/SKILL.md', + }, + { + id: 'skill_chro-advisor_87657bac30', + title: 'chro-advisor', + description: 'chro-advisor skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/chro-advisor/SKILL.md', + }, + { + id: 'skill_chief-of-staff_a19d2d2344', + title: 'chief-of-staff', + description: 'chief-of-staff skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 
'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/chief-of-staff/SKILL.md', + }, + { + id: 'skill_board-meeting_0fcb0c5dfc', + title: 'board-meeting', + description: 'board-meeting skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/board-meeting/SKILL.md', + }, + { + id: 'skill_board-deck-builder_7e60f66fc1', + title: 'board-deck-builder', + description: + 'board-deck-builder skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/board-deck-builder/SKILL.md', + }, + { + id: 'skill_strategic-alignment_72ab3098af', + title: 'strategic-alignment', + description: + 'strategic-alignment skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/strategic-alignment/SKILL.md', + }, + { + id: 'skill_scenario-war-room_5b37694d4a', + title: 'scenario-war-room', + description: + 'scenario-war-room skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/scenario-war-room/SKILL.md', + }, + { + id: 'skill_competitive-intel_58ee994df6', + title: 'competitive-intel', + description: + 'competitive-intel skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/competitive-intel/SKILL.md', + }, + { + id: 'skill_decision-logger_e1b449474e', + title: 'decision-logger', + description: 'decision-logger skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/decision-logger/SKILL.md', + }, + { + id: 
'skill_company-os_52454bd2b9', + title: 'company-os', + description: 'company-os skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/company-os/SKILL.md', + }, + { + id: 'skill_context-engine_ffd6de033d', + title: 'context-engine', + description: 'context-engine skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/context-engine/SKILL.md', + }, + { + id: 'skill_org-health-diagnostic_038f76574f', + title: 'org-health-diagnostic', + description: + 'org-health-diagnostic skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/org-health-diagnostic/SKILL.md', + }, + { + id: 'skill_culture-architect_ab24de6bd5', + title: 'culture-architect', + description: + 'culture-architect skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/culture-architect/SKILL.md', + }, + { + id: 'skill_change-management_ddb74bc0dd', + title: 'change-management', + description: + 'change-management skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/change-management/SKILL.md', + }, + { + id: 'skill_agent-protocol_4ee6334dda', + title: 'agent-protocol', + description: 'agent-protocol skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/agent-protocol/SKILL.md', + }, + { + id: 'skill_executive-mentor_79054e5d0e', + title: 'executive-mentor', + description: 'executive-mentor skill for Claude workflows from alirezarezvani/claude-skills.', + 
kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/executive-mentor/SKILL.md', + }, + { + id: 'skill_founder-coach_537a7096fa', + title: 'founder-coach', + description: 'founder-coach skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/founder-coach/SKILL.md', + }, + { + id: 'skill_internal-narrative_66a150cf59', + title: 'internal-narrative', + description: + 'internal-narrative skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/internal-narrative/SKILL.md', + }, + { + id: 'skill_intl-expansion_1248dda9ad', + title: 'intl-expansion', + description: 'intl-expansion skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/intl-expansion/SKILL.md', + }, + { + id: 'skill_ma-playbook_08c798df7c', + title: 'ma-playbook', + description: 'ma-playbook skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/ma-playbook/SKILL.md', + }, + { + id: 'skill_cs-onboard_8f2b226827', + title: 'cs-onboard', + description: 'cs-onboard skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/c-level-advisor/cs-onboard/SKILL.md', + }, + { + id: 'skill_sales-engineer_5dfedb389c', + title: 'sales-engineer', + description: 'sales-engineer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/business-growth/sales-engineer/SKILL.md', + }, + { + id: 'skill_customer-success-manager_414b11bf02', + title: 'customer-success-manager', + 
description: + 'customer-success-manager skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/business-growth/customer-success-manager/SKILL.md', + }, + { + id: 'skill_revenue-operations_bae77fddb8', + title: 'revenue-operations', + description: + 'revenue-operations skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/business-growth/revenue-operations/SKILL.md', + }, + { + id: 'skill_contract-and-proposal-writer_ae00f66618', + title: 'contract-and-proposal-writer', + description: + 'contract-and-proposal-writer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/business-growth/contract-and-proposal-writer/SKILL.md', + }, + { + id: 'skill_analytics-tracking_08fee5ba0e', + title: 'analytics-tracking', + description: + 'analytics-tracking skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/analytics-tracking/SKILL.md', + }, + { + id: 'skill_ab-test-setup_99bfd5b09a', + title: 'ab-test-setup', + description: 'ab-test-setup skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/ab-test-setup/SKILL.md', + }, + { + id: 'skill_app-store-optimization_29ee111c91', + title: 'app-store-optimization', + description: + 'app-store-optimization skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/app-store-optimization/SKILL.md', + }, + { + id: 'skill_campaign-analytics_9c9c64d906', + title: 'campaign-analytics', + description: + 'campaign-analytics skill for Claude 
workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/campaign-analytics/SKILL.md', + }, + { + id: 'skill_competitor-alternatives_438639dff8', + title: 'competitor-alternatives', + description: + 'competitor-alternatives skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/competitor-alternatives/SKILL.md', + }, + { + id: 'skill_content-creator_3e9c4c4b6b', + title: 'content-creator', + description: 'content-creator skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/content-creator/SKILL.md', + }, + { + id: 'skill_content-humanizer_a6dad168e1', + title: 'content-humanizer', + description: + 'content-humanizer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/content-humanizer/SKILL.md', + }, + { + id: 'skill_content-production_59c610dc0e', + title: 'content-production', + description: + 'content-production skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/content-production/SKILL.md', + }, + { + id: 'skill_content-strategy_35d31d6fa2', + title: 'content-strategy', + description: 'content-strategy skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/content-strategy/SKILL.md', + }, + { + id: 'skill_copy-editing_625cdabb67', + title: 'copy-editing', + description: 'copy-editing skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 
'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/copy-editing/SKILL.md', + }, + { + id: 'skill_copywriting_881e747bb1', + title: 'copywriting', + description: 'copywriting skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/copywriting/SKILL.md', + }, + { + id: 'skill_email-sequence_06aeb0994a', + title: 'email-sequence', + description: 'email-sequence skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/email-sequence/SKILL.md', + }, + { + id: 'skill_free-tool-strategy_d7b74c467c', + title: 'free-tool-strategy', + description: + 'free-tool-strategy skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/free-tool-strategy/SKILL.md', + }, + { + id: 'skill_launch-strategy_9a65b65ee6', + title: 'launch-strategy', + description: 'launch-strategy skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/launch-strategy/SKILL.md', + }, + { + id: 'skill_marketing-context_0b4c39dcc5', + title: 'marketing-context', + description: + 'marketing-context skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/marketing-context/SKILL.md', + }, + { + id: 'skill_marketing-demand-acquisition_a252ecab3c', + title: 'marketing-demand-acquisition', + description: + 'marketing-demand-acquisition skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/marketing-demand-acquisition/SKILL.md', + }, + { + id: 
'skill_marketing-ops_47943f2c7e', + title: 'marketing-ops', + description: 'marketing-ops skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/marketing-ops/SKILL.md', + }, + { + id: 'skill_pricing-strategy_3b4bb1a7a3', + title: 'pricing-strategy', + description: 'pricing-strategy skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/pricing-strategy/SKILL.md', + }, + { + id: 'skill_prompt-engineer-toolkit_52dea9022b', + title: 'prompt-engineer-toolkit', + description: + 'prompt-engineer-toolkit skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/prompt-engineer-toolkit/SKILL.md', + }, + { + id: 'skill_programmatic-seo_dcd3400817', + title: 'programmatic-seo', + description: 'programmatic-seo skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/programmatic-seo/SKILL.md', + }, + { + id: 'skill_schema-markup_145f380285', + title: 'schema-markup', + description: 'schema-markup skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/schema-markup/SKILL.md', + }, + { + id: 'skill_seo-audit_b9e9b43156', + title: 'seo-audit', + description: 'seo-audit skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/seo-audit/SKILL.md', + }, + { + id: 'skill_brand-guidelines_9aab64080d', + title: 'brand-guidelines', + description: 'brand-guidelines skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + 
link: 'https://github.com/alirezarezvani/claude-skills/blob/main/marketing-skill/brand-guidelines/SKILL.md', + }, + { + id: 'skill_clawfu-mcp-skills_21422d1be4', + title: '@clawfu/mcp-skills', + description: + '@clawfu/mcp-skills skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/guia-matthieu/clawfu-skills/blob/main/SKILL.md', + }, + { + id: 'skill_agent-almanac_7f3a326289', + title: 'Agent Almanac', + description: 'Agent Almanac skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/pjt222/agent-almanac/blob/main/SKILL.md', + }, + { + id: 'skill_agent-cards-skill_9a80c2f487', + title: 'agent-cards-skill', + description: + 'agent-cards-skill skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/agent-cards/skill/blob/main/SKILL.md', + }, + { + id: 'skill_agentfund-mcp_f09146d22c', + title: 'agentfund-mcp', + description: 'agentfund-mcp skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/RioTheGreat-ai/agentfund-mcp/blob/main/SKILL.md', + }, + { + id: 'skill_agnix_518eede941', + title: 'agnix', + description: 'agnix skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/avifenesh/agnix/blob/main/SKILL.md', + }, + { + id: 'skill_agricidaniel-claude-seo_818ab7d9fe', + title: 'AgriciDaniel/claude-seo', + description: + 'AgriciDaniel/claude-seo skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/AgriciDaniel/claude-seo/blob/main/SKILL.md', + }, + { + id: 'skill_algorithmic-art_af9a46818e', + title: 'algorithmic-art', + description: 'algorithmic-art skill for Claude workflows from anthropics/skills.', + kind: 'skill', + link: 'https://github.com/anthropics/skills/blob/main/skills/algorithmic-art/SKILL.md', + }, + { + id: 
+        'skill_alinaqi-claude-bootstrap_5def4cb320',
+      title: 'alinaqi/claude-bootstrap',
+      description:
+        'alinaqi/claude-bootstrap skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/alinaqi/claude-bootstrap/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-brand-guidelines_64a3f14b63',
+      title: 'anthropics/brand-guidelines',
+      description:
+        'anthropics/brand-guidelines skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/brand-guidelines/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-canvas-design_5eb020baeb',
+      title: 'anthropics/canvas-design',
+      description:
+        'anthropics/canvas-design skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/canvas-design/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-doc-coauthoring_e4aeb572cd',
+      title: 'anthropics/doc-coauthoring',
+      description:
+        'anthropics/doc-coauthoring skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/doc-coauthoring/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-docx_26567cf68a',
+      title: 'anthropics/docx',
+      description:
+        'anthropics/docx skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/docx/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-frontend-design_c44fd0d42d',
+      title: 'anthropics/frontend-design',
+      description:
+        'anthropics/frontend-design skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/frontend-design/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-internal-comms_b7452cd182',
+      title: 'anthropics/internal-comms',
+      description:
+        'anthropics/internal-comms skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/internal-comms/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-mcp-builder_71e04201dc',
+      title: 'anthropics/mcp-builder',
+      description:
+        'anthropics/mcp-builder skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/mcp-builder/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-pdf_d1f3812f2c',
+      title: 'anthropics/pdf',
+      description: 'anthropics/pdf skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/pdf/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-pptx_bf99a0aeab',
+      title: 'anthropics/pptx',
+      description:
+        'anthropics/pptx skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/pptx/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-skill-creator_268b1d038f',
+      title: 'anthropics/skill-creator',
+      description:
+        'anthropics/skill-creator skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/skill-creator/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-slack-gif-creator_2415b408bb',
+      title: 'anthropics/slack-gif-creator',
+      description:
+        'anthropics/slack-gif-creator skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/slack-gif-creator/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-template_3aaa7160bd',
+      title: 'anthropics/template',
+      description:
+        'anthropics/template skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/template/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-theme-factory_ba4399992b',
+      title:
+        'anthropics/theme-factory',
+      description:
+        'anthropics/theme-factory skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/theme-factory/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-web-artifacts-builder_dab13dba8c',
+      title: 'anthropics/web-artifacts-builder',
+      description:
+        'anthropics/web-artifacts-builder skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/web-artifacts-builder/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-webapp-testing_b1e86df6bb',
+      title: 'anthropics/webapp-testing',
+      description:
+        'anthropics/webapp-testing skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/webapp-testing/SKILL.md',
+    },
+    {
+      id: 'skill_anthropics-xlsx_33720f51c4',
+      title: 'anthropics/xlsx',
+      description:
+        'anthropics/xlsx skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/xlsx/SKILL.md',
+    },
+    {
+      id: 'skill_antonbabenko-terraform-skill_d9b920d1e9',
+      title: 'antonbabenko/terraform-skill',
+      description:
+        'antonbabenko/terraform-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/antonbabenko/terraform-skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_article-extractor_354f04f2ac',
+      title: 'article-extractor',
+      description:
+        'article-extractor skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/michalparkola/tapestry-skills-for-claude-code/blob/main/article-extractor/SKILL.md',
+    },
+    {
+      id: 'skill_avdlee-swiftui-expert-skill_3c04067d0d',
+      title: 'AvdLee/swiftui-expert-skill',
+      description:
+        'AvdLee/swiftui-expert-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/AvdLee/SwiftUI-Agent-Skill/blob/main/swiftui-expert-skill/SKILL.md',
+    },
+    {
+      id: 'skill_avoid-ai-writing_839c031bef',
+      title: 'avoid-ai-writing',
+      description:
+        'avoid-ai-writing skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/conorbronsdon/avoid-ai-writing/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_aws-skills_2c698b92c0',
+      title: 'aws-skills',
+      description: 'aws-skills skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/zxkane/aws-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_azure-devops_6dd6d8d148',
+      title: 'azure-devops',
+      description: 'azure-devops skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/sanjay3290/ai-skills/blob/main/skills/azure-devops/SKILL.md',
+    },
+    {
+      id: 'skill_behisecc-vibesec_0a02c8efc6',
+      title: 'BehiSecc/vibesec',
+      description:
+        'BehiSecc/vibesec skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/BehiSecc/VibeSec-Skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_better-auth-best-practices_1828809fdb',
+      title: 'better-auth/best-practices',
+      description:
+        'better-auth/best-practices skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/better-auth/skills/blob/main/better-auth/best-practices/SKILL.md',
+    },
+    {
+      id: 'skill_better-auth-create-auth_927ca5d5a5',
+      title: 'better-auth/create-auth',
+      description:
+        'better-auth/create-auth skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/better-auth/skills/blob/main/better-auth/create-auth/SKILL.md',
+    },
+    {
+      id: 'skill_better-auth-emailandpassword_f54f943f90',
+      title: 'better-auth/emailAndPassword',
+      description:
+        'better-auth/emailAndPassword skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/better-auth/skills/blob/main/better-auth/emailAndPassword/SKILL.md',
+    },
+    {
+      id: 'command_better-auth-explain-error_f0e83db395',
+      title: 'better-auth/explain-error',
+      description:
+        'better-auth/explain-error command for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'command',
+      link: 'https://github.com/better-auth/skills/blob/main/better-auth/commands/explain-error.md',
+    },
+    {
+      id: 'skill_better-auth-organization_fc76bf105e',
+      title: 'better-auth/organization',
+      description:
+        'better-auth/organization skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/better-auth/skills/blob/main/better-auth/organization/SKILL.md',
+    },
+    {
+      id: 'command_better-auth-providers_31fd03e944',
+      title: 'better-auth/providers',
+      description:
+        'better-auth/providers command for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'command',
+      link: 'https://github.com/better-auth/skills/blob/main/better-auth/commands/providers.md',
+    },
+    {
+      id: 'skill_better-auth-twofactor_91c373e524',
+      title: 'better-auth/twoFactor',
+      description:
+        'better-auth/twoFactor skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/better-auth/skills/blob/main/better-auth/twoFactor/SKILL.md',
+    },
+    {
+      id: 'skill_binance-crypto-market-rank_5a3844f5ae',
+      title: 'binance/crypto-market-rank',
+      description:
+        'binance/crypto-market-rank skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/binance/binance-skills-hub/blob/main/skills/binance-web3/crypto-market-rank/SKILL.md',
+    },
+    {
+      id: 'skill_binance-meme-rush_ee94ec504b',
+      title: 'binance/meme-rush',
+      description:
+        'binance/meme-rush skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/binance/binance-skills-hub/blob/main/skills/binance-web3/meme-rush/SKILL.md',
+    },
+    {
+      id: 'skill_binance-query-address-info_362ca4bd4d',
+      title: 'binance/query-address-info',
+      description:
+        'binance/query-address-info skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/binance/binance-skills-hub/blob/main/skills/binance-web3/query-address-info/SKILL.md',
+    },
+    {
+      id: 'skill_binance-query-token-audit_1dc29e3835',
+      title: 'binance/query-token-audit',
+      description:
+        'binance/query-token-audit skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/binance/binance-skills-hub/blob/main/skills/binance-web3/query-token-audit/SKILL.md',
+    },
+    {
+      id: 'skill_binance-query-token-info_9cc0935f12',
+      title: 'binance/query-token-info',
+      description:
+        'binance/query-token-info skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/binance/binance-skills-hub/blob/main/skills/binance-web3/query-token-info/SKILL.md',
+    },
+    {
+      id: 'skill_binance-spot_d6dee4e6c1',
+      title: 'binance/spot',
+      description: 'binance/spot skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/binance/binance-skills-hub/blob/main/skills/binance/spot/SKILL.md',
+    },
+    {
+      id: 'skill_binance-trading-signal_e979246d6f',
+      title: 'binance/trading-signal',
+      description:
+        'binance/trading-signal skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/binance/binance-skills-hub/blob/main/skills/binance-web3/trading-signal/SKILL.md',
+    },
+    {
+      id: 'skill_bitwize-music-studio-claude-ai-music-skills_2b22cb6f7a',
+      title: 'bitwize-music-studio/claude-ai-music-skills',
+      description:
+        'bitwize-music-studio/claude-ai-music-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/bitwize-music-studio/claude-ai-music-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_blader-humanizer_2e3b64cda9',
+      title: 'blader/humanizer',
+      description:
+        'blader/humanizer skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/blader/humanizer/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_brainstorming_1bbd33ff8f',
+      title: 'brainstorming',
+      description: 'brainstorming skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/obra/superpowers/blob/main/skills/brainstorming/SKILL.md',
+    },
+    {
+      id: 'skill_brianrwagner-ai-marketing-skills_878fb7b7f7',
+      title: 'BrianRWagner/ai-marketing-skills',
+      description:
+        'BrianRWagner/ai-marketing-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/BrianRWagner/ai-marketing-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_callstackincubator-github_6e686c9b56',
+      title: 'callstackincubator/github',
+      description:
+        'callstackincubator/github skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/callstackincubator/agent-skills/blob/main/skills/github/SKILL.md',
+    },
+  ],
+  // Page 3
+  [
+    {
+      id: 'skill_callstackincubator-react-native-best-practices_1a7c56ad15',
+      title: 'callstackincubator/react-native-best-practices',
+      description:
+        'callstackincubator/react-native-best-practices skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/callstackincubator/agent-skills/blob/main/skills/react-native-best-practices/SKILL.md',
+    },
+    {
+      id: 'skill_callstackincubator-upgrading-react-native_6b4f501676',
+      title: 'callstackincubator/upgrading-react-native',
+      description:
+        'callstackincubator/upgrading-react-native skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/callstackincubator/agent-skills/blob/main/skills/upgrading-react-native/SKILL.md',
+    },
+    {
+      id: 'skill_chainaware-behavioral-prediction_85435ad217',
+      title: 'chainaware-behavioral-prediction',
+      description:
+        'chainaware-behavioral-prediction skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/ChainAware/behavioral-prediction-mcp/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_charles-proxy-extract_468bfc4e4f',
+      title: 'charles-proxy-extract',
+      description:
+        'charles-proxy-extract skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/wannabehero/charles-proxy-extract-skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_charlie85270-dorothy_968baff019',
+      title: 'Charlie85270/Dorothy',
+      description:
+        'Charlie85270/Dorothy skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/Charlie85270/Dorothy/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_claude-code-video-toolkit_728eb027d3',
+      title: 'Claude Code Video Toolkit',
+      description:
+        'Claude Code Video Toolkit skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/digitalsamba/claude-code-video-toolkit/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_claude-ally-health_5af0f5cbe3',
+      title: 'claude-ally-health',
+      description:
+        'claude-ally-health skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/huifer/Claude-Ally-Health/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_claude-api_ca646a31eb',
+      title: 'claude-api',
+      description: 'claude-api skill for Claude workflows from anthropics/skills.',
+      kind: 'skill',
+      link: 'https://github.com/anthropics/skills/blob/main/skills/claude-api/SKILL.md',
+    },
+    {
+      id: 'skill_claude-code-notion-plugin_eb5006d5d1',
+      title:
+        'claude-code-notion-plugin',
+      description:
+        'claude-code-notion-plugin skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/makenotion/claude-code-notion-plugin/blob/main/skills/notion/SKILL.md',
+    },
+    {
+      id: 'skill_claude-code-terminal-title_c6d773cb99',
+      title: 'claude-code-terminal-title',
+      description:
+        'claude-code-terminal-title skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/bluzername/claude-code-terminal-title/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_claude-ecom_050ac33a9c',
+      title: 'claude-ecom',
+      description: 'claude-ecom skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/takechanman1228/claude-ecom/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_claude-epub-skill_b4a8ea22d7',
+      title: 'claude-epub-skill',
+      description:
+        'claude-epub-skill skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/smerchek/claude-epub-skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_claude-scientific-skills_fea2158c79',
+      title: 'claude-scientific-skills',
+      description:
+        'claude-scientific-skills skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/K-Dense-AI/claude-scientific-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_claude-skills_929e47cb81',
+      title: 'claude-skills',
+      description: 'claude-skills skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/jeffallan/claude-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_claude-starter_55ddaa2a84',
+      title: 'claude-starter',
+      description: 'claude-starter skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/raintree-technology/claude-starter/blob/main/SKILL.md',
+    },
+    {
+      id:
+        'skill_clickhouse-agent-skills_51de408a93',
+      title: 'ClickHouse/agent-skills',
+      description:
+        'ClickHouse/agent-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/ClickHouse/agent-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_cloudai-x-threejs-skills_b63115fa98',
+      title: 'CloudAI-X/threejs-skills',
+      description:
+        'CloudAI-X/threejs-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/CloudAI-X/threejs-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_cloudflare-agents-sdk_20c7cdf130',
+      title: 'cloudflare/agents-sdk',
+      description:
+        'cloudflare/agents-sdk skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/cloudflare/skills/blob/main/skills/agents-sdk/SKILL.md',
+    },
+    {
+      id: 'skill_cloudflare-building-ai-agent-on-cloudflare_40ea2c471f',
+      title: 'cloudflare/building-ai-agent-on-cloudflare',
+      description:
+        'cloudflare/building-ai-agent-on-cloudflare skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/cloudflare/skills/blob/main/skills/building-ai-agent-on-cloudflare/SKILL.md',
+    },
+    {
+      id: 'skill_cloudflare-building-mcp-server-on-cloudflare_7ed8c62f8c',
+      title: 'cloudflare/building-mcp-server-on-cloudflare',
+      description:
+        'cloudflare/building-mcp-server-on-cloudflare skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/cloudflare/skills/blob/main/skills/building-mcp-server-on-cloudflare/SKILL.md',
+    },
+    {
+      id: 'command_cloudflare-commands_14d914db13',
+      title: 'cloudflare/commands',
+      description:
+        'cloudflare/commands command for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'command',
+      link: 'https://github.com/cloudflare/skills/blob/main/commands/SKILL.md',
+    },
+    {
+      id: 'skill_cloudflare-durable-objects_24f9cb504d',
+      title: 'cloudflare/durable-objects',
+      description:
+        'cloudflare/durable-objects skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/cloudflare/skills/blob/main/skills/durable-objects/SKILL.md',
+    },
+    {
+      id: 'skill_cloudflare-web-perf_a24e8908d5',
+      title: 'cloudflare/web-perf',
+      description:
+        'cloudflare/web-perf skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/cloudflare/skills/blob/main/skills/web-perf/SKILL.md',
+    },
+    {
+      id: 'skill_cloudflare-wrangler_d47abe3286',
+      title: 'cloudflare/wrangler',
+      description:
+        'cloudflare/wrangler skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/cloudflare/skills/blob/main/skills/wrangler/SKILL.md',
+    },
+    {
+      id: 'skill_coderabbitai-skills_3ec6d130e3',
+      title: 'coderabbitai/skills',
+      description:
+        'coderabbitai/skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coderabbitai/skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_composiohq-skills_bcae102b5b',
+      title: 'ComposioHQ/skills',
+      description:
+        'ComposioHQ/skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/ComposioHQ/skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_conorluddy-ios-simulator-skill_3374c01e7e',
+      title: 'conorluddy/ios-simulator-skill',
+      description:
+        'conorluddy/ios-simulator-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/conorluddy/ios-simulator-skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_content-research-writer_85be887af9',
+      title: 'content-research-writer',
+      description:
+        'content-research-writer skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link:
+        'https://github.com/ComposioHQ/awesome-claude-skills/blob/master/content-research-writer/SKILL.md',
+    },
+    {
+      id: 'skill_corey-haines_9081bf9102',
+      title: 'Corey Haines',
+      description: 'Corey Haines skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31',
+    },
+    {
+      id: 'skill_coreyhaines31-ab-test-setup_058cf4fa11',
+      title: 'coreyhaines31/ab-test-setup',
+      description:
+        'coreyhaines31/ab-test-setup skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/ab-test-setup/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-ad-creative_77ed350bfc',
+      title: 'coreyhaines31/ad-creative',
+      description:
+        'coreyhaines31/ad-creative skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/ad-creative/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-ai-seo_fc976113e9',
+      title: 'coreyhaines31/ai-seo',
+      description:
+        'coreyhaines31/ai-seo skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/ai-seo/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-analytics-tracking_0405454c53',
+      title: 'coreyhaines31/analytics-tracking',
+      description:
+        'coreyhaines31/analytics-tracking skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/analytics-tracking/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-churn-prevention_2daf3409d1',
+      title: 'coreyhaines31/churn-prevention',
+      description:
+        'coreyhaines31/churn-prevention skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/churn-prevention/SKILL.md',
+    },
+    {
+      id:
+        'skill_coreyhaines31-cold-email_4a9ab672ec',
+      title: 'coreyhaines31/cold-email',
+      description:
+        'coreyhaines31/cold-email skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/cold-email/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-competitor-alternatives_fbca3364b0',
+      title: 'coreyhaines31/competitor-alternatives',
+      description:
+        'coreyhaines31/competitor-alternatives skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/competitor-alternatives/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-content-strategy_05d12be41f',
+      title: 'coreyhaines31/content-strategy',
+      description:
+        'coreyhaines31/content-strategy skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/content-strategy/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-copy-editing_bade3e0972',
+      title: 'coreyhaines31/copy-editing',
+      description:
+        'coreyhaines31/copy-editing skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/copy-editing/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-copywriting_757ed76aef',
+      title: 'coreyhaines31/copywriting',
+      description:
+        'coreyhaines31/copywriting skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/copywriting/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-email-sequence_244d214ddd',
+      title: 'coreyhaines31/email-sequence',
+      description:
+        'coreyhaines31/email-sequence skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link:
+        'https://github.com/coreyhaines31/marketingskills/blob/main/skills/email-sequence/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-form-cro_a96311d54d',
+      title: 'coreyhaines31/form-cro',
+      description:
+        'coreyhaines31/form-cro skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/form-cro/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-free-tool-strategy_1dce25f7a4',
+      title: 'coreyhaines31/free-tool-strategy',
+      description:
+        'coreyhaines31/free-tool-strategy skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/free-tool-strategy/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-launch-strategy_9d1873943b',
+      title: 'coreyhaines31/launch-strategy',
+      description:
+        'coreyhaines31/launch-strategy skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/launch-strategy/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-marketing-ideas_be19559bf6',
+      title: 'coreyhaines31/marketing-ideas',
+      description:
+        'coreyhaines31/marketing-ideas skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/marketing-ideas/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-marketing-psychology_9b2b1b9a82',
+      title: 'coreyhaines31/marketing-psychology',
+      description:
+        'coreyhaines31/marketing-psychology skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/marketing-psychology/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-onboarding-cro_db2323853e',
+      title: 'coreyhaines31/onboarding-cro',
+      description:
+        'coreyhaines31/onboarding-cro skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/onboarding-cro/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-page-cro_fe261635b7',
+      title: 'coreyhaines31/page-cro',
+      description:
+        'coreyhaines31/page-cro skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/page-cro/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-paid-ads_c306cede1b',
+      title: 'coreyhaines31/paid-ads',
+      description:
+        'coreyhaines31/paid-ads skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/paid-ads/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-paywall-upgrade-cro_8932b2314e',
+      title: 'coreyhaines31/paywall-upgrade-cro',
+      description:
+        'coreyhaines31/paywall-upgrade-cro skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/paywall-upgrade-cro/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-popup-cro_465f91306c',
+      title: 'coreyhaines31/popup-cro',
+      description:
+        'coreyhaines31/popup-cro skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/popup-cro/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-pricing-strategy_1c15481466',
+      title: 'coreyhaines31/pricing-strategy',
+      description:
+        'coreyhaines31/pricing-strategy skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/pricing-strategy/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-product-marketing-context_bf16ae549b',
+      title: 'coreyhaines31/product-marketing-context',
+      description:
+        'coreyhaines31/product-marketing-context skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/product-marketing-context/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-programmatic-seo_793dca2cfe',
+      title: 'coreyhaines31/programmatic-seo',
+      description:
+        'coreyhaines31/programmatic-seo skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/programmatic-seo/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-referral-program_efe0cfcc6f',
+      title: 'coreyhaines31/referral-program',
+      description:
+        'coreyhaines31/referral-program skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/referral-program/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-revops_49989a8c42',
+      title: 'coreyhaines31/revops',
+      description:
+        'coreyhaines31/revops skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/revops/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-sales-enablement_bb51271a19',
+      title: 'coreyhaines31/sales-enablement',
+      description:
+        'coreyhaines31/sales-enablement skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/sales-enablement/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-schema-markup_4ef8f14a82',
+      title: 'coreyhaines31/schema-markup',
+      description:
+        'coreyhaines31/schema-markup skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/schema-markup/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-seo-audit_342618b4de',
+      title: 'coreyhaines31/seo-audit',
+      description:
+        'coreyhaines31/seo-audit skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/seo-audit/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-signup-flow-cro_1427d36ad6',
+      title: 'coreyhaines31/signup-flow-cro',
+      description:
+        'coreyhaines31/signup-flow-cro skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/signup-flow-cro/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-site-architecture_35eaf342e9',
+      title: 'coreyhaines31/site-architecture',
+      description:
+        'coreyhaines31/site-architecture skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/site-architecture/SKILL.md',
+    },
+    {
+      id: 'skill_coreyhaines31-social-content_b2a9c1788c',
+      title: 'coreyhaines31/social-content',
+      description:
+        'coreyhaines31/social-content skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coreyhaines31/marketingskills/blob/main/skills/social-content/SKILL.md',
+    },
+    {
+      id: 'skill_cosmoblk-email-marketing-bible_76712d8a23',
+      title: 'CosmoBlk/email-marketing-bible',
+      description:
+        'CosmoBlk/email-marketing-bible skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/CosmoBlk/email-marketing-bible/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_csv-data-summarizer-claude-skill_06035f2ed4',
+      title: 'csv-data-summarizer-claude-skill',
+      description:
+        'csv-data-summarizer-claude-skill skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/coffeefuelbump/csv-data-summarizer-claude-skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_czlonkowski-n8n-code-javascript_4cee35b260',
+      title: 'czlonkowski/n8n-code-javascript',
+      description:
+        'czlonkowski/n8n-code-javascript skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/czlonkowski/n8n-skills/blob/main/skills/n8n-code-javascript/SKILL.md',
+    },
+    {
+      id: 'skill_czlonkowski-n8n-code-python_e618bf9863',
+      title: 'czlonkowski/n8n-code-python',
+      description:
+        'czlonkowski/n8n-code-python skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/czlonkowski/n8n-skills/blob/main/skills/n8n-code-python/SKILL.md',
+    },
+    {
+      id: 'skill_czlonkowski-n8n-expression-syntax_9731070995',
+      title: 'czlonkowski/n8n-expression-syntax',
+      description:
+        'czlonkowski/n8n-expression-syntax skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/czlonkowski/n8n-skills/blob/main/skills/n8n-expression-syntax/SKILL.md',
+    },
+    {
+      id: 'skill_czlonkowski-n8n-mcp-tools-expert_f28b93f163',
+      title: 'czlonkowski/n8n-mcp-tools-expert',
+      description:
+        'czlonkowski/n8n-mcp-tools-expert skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/czlonkowski/n8n-skills/blob/main/skills/n8n-mcp-tools-expert/SKILL.md',
+    },
+    {
+      id: 'skill_czlonkowski-n8n-node-configuration_85db274843',
+      title: 'czlonkowski/n8n-node-configuration',
+      description:
+        'czlonkowski/n8n-node-configuration skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/czlonkowski/n8n-skills/blob/main/skills/n8n-node-configuration/SKILL.md',
+    },
+    {
+      id: 'skill_czlonkowski-n8n-validation-expert_ba8dcbdb56',
+      title: 'czlonkowski/n8n-validation-expert',
+      description:
+        'czlonkowski/n8n-validation-expert skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/czlonkowski/n8n-skills/blob/main/skills/n8n-validation-expert/SKILL.md',
+    },
+    {
+      id:
+        'skill_czlonkowski-n8n-workflow-patterns_1df6df2d7c',
+      title: 'czlonkowski/n8n-workflow-patterns',
+      description:
+        'czlonkowski/n8n-workflow-patterns skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/czlonkowski/n8n-skills/blob/main/skills/n8n-workflow-patterns/SKILL.md',
+    },
+    {
+      id: 'skill_dean-peters_5e40a23085',
+      title: 'Dean Peters',
+      description: 'Dean Peters skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/deanpeters',
+    },
+    {
+      id: 'skill_deanpeters-acquisition-channel-advisor_659fee09e7',
+      title: 'deanpeters/acquisition-channel-advisor',
+      description:
+        'deanpeters/acquisition-channel-advisor skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/acquisition-channel-advisor/SKILL.md',
+    },
+    {
+      id: 'skill_deanpeters-ai-shaped-readiness-advisor_e5f6eb78c8',
+      title: 'deanpeters/ai-shaped-readiness-advisor',
+      description:
+        'deanpeters/ai-shaped-readiness-advisor skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/ai-shaped-readiness-advisor/SKILL.md',
+    },
+    {
+      id: 'skill_deanpeters-altitude-horizon-framework_6dca8bb9fb',
+      title: 'deanpeters/altitude-horizon-framework',
+      description:
+        'deanpeters/altitude-horizon-framework skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/altitude-horizon-framework/SKILL.md',
+    },
+    {
+      id: 'skill_deanpeters-business-health-diagnostic_53c8c49157',
+      title: 'deanpeters/business-health-diagnostic',
+      description:
+        'deanpeters/business-health-diagnostic skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link:
+        'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/business-health-diagnostic/SKILL.md',
+    },
+    {
+      id: 'skill_deanpeters-company-research_ead771de44',
+      title: 'deanpeters/company-research',
+      description:
+        'deanpeters/company-research skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/company-research/SKILL.md',
+    },
+    {
+      id: 'skill_deanpeters-context-engineering-advisor_e77c9832da',
+      title: 'deanpeters/context-engineering-advisor',
+      description:
+        'deanpeters/context-engineering-advisor skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/context-engineering-advisor/SKILL.md',
+    },
+    {
+      id: 'skill_deanpeters-customer-journey-map_0a09956c38',
+      title: 'deanpeters/customer-journey-map',
+      description:
+        'deanpeters/customer-journey-map skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/customer-journey-map/SKILL.md',
+    },
+    {
+      id: 'skill_deanpeters-customer-journey-mapping-workshop_a8b78f23cb',
+      title: 'deanpeters/customer-journey-mapping-workshop',
+      description:
+        'deanpeters/customer-journey-mapping-workshop skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/customer-journey-mapping-workshop/SKILL.md',
+    },
+    {
+      id: 'skill_deanpeters-director-readiness-advisor_2ee4a69505',
+      title: 'deanpeters/director-readiness-advisor',
+      description:
+        'deanpeters/director-readiness-advisor skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/director-readiness-advisor/SKILL.md',
+    },
+    {
+      id:
'skill_deanpeters-discovery-interview-prep_6f90ab820f', + title: 'deanpeters/discovery-interview-prep', + description: + 'deanpeters/discovery-interview-prep skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/discovery-interview-prep/SKILL.md', + }, + { + id: 'skill_deanpeters-discovery-process_afb1f1e17d', + title: 'deanpeters/discovery-process', + description: + 'deanpeters/discovery-process skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/discovery-process/SKILL.md', + }, + { + id: 'skill_deanpeters-eol-message_6d37d3ca55', + title: 'deanpeters/eol-message', + description: + 'deanpeters/eol-message skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/eol-message/SKILL.md', + }, + { + id: 'skill_deanpeters-epic-breakdown-advisor_7f2eae3e1f', + title: 'deanpeters/epic-breakdown-advisor', + description: + 'deanpeters/epic-breakdown-advisor skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/epic-breakdown-advisor/SKILL.md', + }, + { + id: 'skill_deanpeters-epic-hypothesis_810e2d2cac', + title: 'deanpeters/epic-hypothesis', + description: + 'deanpeters/epic-hypothesis skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/epic-hypothesis/SKILL.md', + }, + { + id: 'skill_deanpeters-executive-onboarding-playbook_3a4fc6e727', + title: 'deanpeters/executive-onboarding-playbook', + description: + 'deanpeters/executive-onboarding-playbook skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/executive-onboarding-playbook/SKILL.md', + }, + { + id: 'skill_deanpeters-feature-investment-advisor_2a8d91d0a7', + title: 'deanpeters/feature-investment-advisor', + description: + 'deanpeters/feature-investment-advisor skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/feature-investment-advisor/SKILL.md', + }, + { + id: 'skill_deanpeters-finance-based-pricing-advisor_5456cb8835', + title: 'deanpeters/finance-based-pricing-advisor', + description: + 'deanpeters/finance-based-pricing-advisor skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/finance-based-pricing-advisor/SKILL.md', + }, + { + id: 'skill_deanpeters-finance-metrics-quickref_b36f4b1208', + title: 'deanpeters/finance-metrics-quickref', + description: + 'deanpeters/finance-metrics-quickref skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/finance-metrics-quickref/SKILL.md', + }, + { + id: 'skill_deanpeters-jobs-to-be-done_27cc6969d3', + title: 'deanpeters/jobs-to-be-done', + description: + 'deanpeters/jobs-to-be-done skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/jobs-to-be-done/SKILL.md', + }, + { + id: 'skill_deanpeters-lean-ux-canvas_f6369325d4', + title: 'deanpeters/lean-ux-canvas', + description: + 'deanpeters/lean-ux-canvas skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/lean-ux-canvas/SKILL.md', + }, + { + id: 'skill_deanpeters-opportunity-solution-tree_2de3da64cc', + title: 
'deanpeters/opportunity-solution-tree', + description: + 'deanpeters/opportunity-solution-tree skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/opportunity-solution-tree/SKILL.md', + }, + { + id: 'skill_deanpeters-pestel-analysis_d991527315', + title: 'deanpeters/pestel-analysis', + description: + 'deanpeters/pestel-analysis skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/pestel-analysis/SKILL.md', + }, + { + id: 'skill_deanpeters-pol-probe_e5a4455a70', + title: 'deanpeters/pol-probe', + description: + 'deanpeters/pol-probe skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/pol-probe/SKILL.md', + }, + { + id: 'skill_deanpeters-pol-probe-advisor_3939e9e675', + title: 'deanpeters/pol-probe-advisor', + description: + 'deanpeters/pol-probe-advisor skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/pol-probe-advisor/SKILL.md', + }, + { + id: 'skill_deanpeters-positioning-statement_2c74bcd6e5', + title: 'deanpeters/positioning-statement', + description: + 'deanpeters/positioning-statement skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/positioning-statement/SKILL.md', + }, + { + id: 'skill_deanpeters-positioning-workshop_20d69affab', + title: 'deanpeters/positioning-workshop', + description: + 'deanpeters/positioning-workshop skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/positioning-workshop/SKILL.md', + 
}, + { + id: 'skill_deanpeters-prd-development_14251b74cb', + title: 'deanpeters/prd-development', + description: + 'deanpeters/prd-development skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/prd-development/SKILL.md', + }, + { + id: 'skill_deanpeters-press-release_2e02849a06', + title: 'deanpeters/press-release', + description: + 'deanpeters/press-release skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/press-release/SKILL.md', + }, + { + id: 'skill_deanpeters-prioritization-advisor_da51e62991', + title: 'deanpeters/prioritization-advisor', + description: + 'deanpeters/prioritization-advisor skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/prioritization-advisor/SKILL.md', + }, + ], + // Page 4 + [ + { + id: 'skill_deanpeters-problem-framing-canvas_c76ae368ef', + title: 'deanpeters/problem-framing-canvas', + description: + 'deanpeters/problem-framing-canvas skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/problem-framing-canvas/SKILL.md', + }, + { + id: 'skill_deanpeters-problem-statement_b337dd2a4a', + title: 'deanpeters/problem-statement', + description: + 'deanpeters/problem-statement skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/problem-statement/SKILL.md', + }, + { + id: 'skill_deanpeters-product-strategy-session_614eecf29c', + title: 'deanpeters/product-strategy-session', + description: + 'deanpeters/product-strategy-session skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 
'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/product-strategy-session/SKILL.md', + }, + { + id: 'skill_deanpeters-proto-persona_5ecb038bb0', + title: 'deanpeters/proto-persona', + description: + 'deanpeters/proto-persona skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/proto-persona/SKILL.md', + }, + { + id: 'skill_deanpeters-recommendation-canvas_7b032b2e41', + title: 'deanpeters/recommendation-canvas', + description: + 'deanpeters/recommendation-canvas skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/recommendation-canvas/SKILL.md', + }, + { + id: 'skill_deanpeters-roadmap-planning_b0e5d6e745', + title: 'deanpeters/roadmap-planning', + description: + 'deanpeters/roadmap-planning skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/roadmap-planning/SKILL.md', + }, + { + id: 'skill_deanpeters-saas-economics-efficiency-metrics_12d0bc6882', + title: 'deanpeters/saas-economics-efficiency-metrics', + description: + 'deanpeters/saas-economics-efficiency-metrics skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/saas-economics-efficiency-metrics/SKILL.md', + }, + { + id: 'skill_deanpeters-saas-revenue-growth-metrics_e5ea1952e8', + title: 'deanpeters/saas-revenue-growth-metrics', + description: + 'deanpeters/saas-revenue-growth-metrics skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/saas-revenue-growth-metrics/SKILL.md', + }, + { + id: 
'skill_deanpeters-skill-authoring-workflow_5a5b2b7427', + title: 'deanpeters/skill-authoring-workflow', + description: + 'deanpeters/skill-authoring-workflow skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/skill-authoring-workflow/SKILL.md', + }, + { + id: 'skill_deanpeters-storyboard_ce9a8a1f8e', + title: 'deanpeters/storyboard', + description: + 'deanpeters/storyboard skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/storyboard/SKILL.md', + }, + { + id: 'skill_deanpeters-tam-sam-som-calculator_1747443f64', + title: 'deanpeters/tam-sam-som-calculator', + description: + 'deanpeters/tam-sam-som-calculator skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/tam-sam-som-calculator/SKILL.md', + }, + { + id: 'skill_deanpeters-user-story_c720056f87', + title: 'deanpeters/user-story', + description: + 'deanpeters/user-story skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/user-story/SKILL.md', + }, + { + id: 'skill_deanpeters-user-story-mapping_2ad4abee94', + title: 'deanpeters/user-story-mapping', + description: + 'deanpeters/user-story-mapping skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/user-story-mapping/SKILL.md', + }, + { + id: 'skill_deanpeters-user-story-mapping-workshop_2b45b9273f', + title: 'deanpeters/user-story-mapping-workshop', + description: + 'deanpeters/user-story-mapping-workshop skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/user-story-mapping-workshop/SKILL.md', + }, + { + id: 'skill_deanpeters-user-story-splitting_aa05aaceff', + title: 'deanpeters/user-story-splitting', + description: + 'deanpeters/user-story-splitting skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/user-story-splitting/SKILL.md', + }, + { + id: 'skill_deanpeters-vp-cpo-readiness-advisor_a00ba6fb3a', + title: 'deanpeters/vp-cpo-readiness-advisor', + description: + 'deanpeters/vp-cpo-readiness-advisor skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/vp-cpo-readiness-advisor/SKILL.md', + }, + { + id: 'skill_deanpeters-workshop-facilitation_d610040c37', + title: 'deanpeters/workshop-facilitation', + description: + 'deanpeters/workshop-facilitation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/skills/workshop-facilitation/SKILL.md', + }, + { + id: 'skill_deapi-ai-claude-code-skills_cebf7e6e24', + title: 'deapi-ai/claude-code-skills', + description: + 'deapi-ai/claude-code-skills skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/deapi-ai/claude-code-skills/blob/main/SKILL.md', + }, + { + id: 'skill_debug-skill_ea281b7675', + title: 'debug-skill', + description: 'debug-skill skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/AlmogBaku/debug-skill/blob/main/SKILL.md', + }, + { + id: 'skill_deep-research_7c3f3036f3', + title: 'deep-research', + description: 'deep-research skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 
'https://github.com/sanjay3290/ai-skills/blob/main/skills/deep-research/SKILL.md', + }, + { + id: 'skill_defense-in-depth_518d68591b', + title: 'defense-in-depth', + description: + 'defense-in-depth skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/defense-in-depth/SKILL.md', + }, + { + id: 'skill_design-auditor_3294969c25', + title: 'Design Auditor', + description: 'Design Auditor skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/Ashutos1997/claude-design-auditor-skill/blob/main/SKILL.md', + }, + { + id: 'skill_deusyu-translate-book_9f4f1a9936', + title: 'deusyu/translate-book', + description: + 'deusyu/translate-book skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/deusyu/translate-book/blob/main/SKILL.md', + }, + { + id: 'skill_devmarketing-skills_67a3b0a135', + title: 'devmarketing-skills', + description: + 'devmarketing-skills skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/jonathimer/devmarketing-skills/blob/main/SKILL.md', + }, + { + id: 'skill_digidai-product-manager-skills_98f8504838', + title: 'Digidai/product-manager-skills', + description: + 'Digidai/product-manager-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/Digidai/product-manager-skills/blob/main/SKILL.md', + }, + { + id: 'skill_dna-claude-analysis_586fa91dbb', + title: 'dna-claude-analysis', + description: + 'dna-claude-analysis skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/shmlkv/dna-claude-analysis/blob/main/SKILL.md', + }, + { + id: 'skill_document-writer_b6b3c682ba', + title: 'document-writer', + description: 'document-writer skill for Claude workflows from onmax/nuxt-skills.', + 
kind: 'skill', + link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/document-writer/SKILL.md', + }, + { + id: 'skill_duckdb-attach-db_5cb52f99ec', + title: 'duckdb/attach-db', + description: + 'duckdb/attach-db skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/duckdb/duckdb-skills/blob/main/skills/attach-db/SKILL.md', + }, + { + id: 'skill_duckdb-duckdb-docs_c45883f580', + title: 'duckdb/duckdb-docs', + description: + 'duckdb/duckdb-docs skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/duckdb/duckdb-skills/blob/main/skills/duckdb-docs/SKILL.md', + }, + { + id: 'skill_duckdb-install-duckdb_1960bb96cb', + title: 'duckdb/install-duckdb', + description: + 'duckdb/install-duckdb skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/duckdb/duckdb-skills/blob/main/skills/install-duckdb/SKILL.md', + }, + { + id: 'skill_duckdb-query_cbc5ae86fc', + title: 'duckdb/query', + description: 'duckdb/query skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/duckdb/duckdb-skills/blob/main/skills/query/SKILL.md', + }, + { + id: 'skill_duckdb-read-file_a8ee06e912', + title: 'duckdb/read-file', + description: + 'duckdb/read-file skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/duckdb/duckdb-skills/blob/main/skills/read-file/SKILL.md', + }, + { + id: 'skill_duckdb-read-memories_35da6088bf', + title: 'duckdb/read-memories', + description: + 'duckdb/read-memories skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/duckdb/duckdb-skills/blob/main/skills/read-memories/SKILL.md', + }, + { + id: 'skill_efremidze-swift-patterns-skill_74598892ac', + title: 'efremidze/swift-patterns-skill', + description: + 
'efremidze/swift-patterns-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/efremidze/swift-patterns-skill/blob/main/swift-patterns/SKILL.md', + }, + { + id: 'skill_ehmo-platform-design-skills_9eec171fe4', + title: 'ehmo/platform-design-skills', + description: + 'ehmo/platform-design-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/ehmo/platform-design-skills/blob/main/SKILL.md', + }, + { + id: 'skill_elevenlabs_66de2613b5', + title: 'elevenlabs', + description: 'elevenlabs skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/sanjay3290/ai-skills/blob/main/skills/elevenlabs/SKILL.md', + }, + { + id: 'skill_elicitation_09f0c68d61', + title: 'elicitation', + description: 'elicitation skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/tasteray/skills/blob/main/elicitation/SKILL.md', + }, + { + id: 'skill_email-html-mjml_cef1e98578', + title: 'email-html-mjml', + description: + 'email-html-mjml skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/framix-team/skill-email-html-mjml/blob/main/SKILL.md', + }, + { + id: 'skill_emblem-ai-agent-wallet_764730e9fd', + title: 'Emblem AI Agent Wallet', + description: + 'Emblem AI Agent Wallet skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/EmblemCompany/Agent-skills/blob/main/skills/emblem-ai-agent-wallet/SKILL.md', + }, + { + id: 'skill_eronred-aso-skills_91d9bb7ce3', + title: 'Eronred/aso-skills', + description: + 'Eronred/aso-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/Eronred/aso-skills/blob/main/SKILL.md', + }, + { + id: 'skill_ethos-link-rails-conventions_4198a474b0', + title: 
'ethos-link/rails-conventions', + description: + 'ethos-link/rails-conventions skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/ethos-link/rails-conventions/blob/main/SKILL.md', + }, + { + id: 'skill_everyinc-charlie-cfo-skill_d00df17b60', + title: 'EveryInc/charlie-cfo-skill', + description: + 'EveryInc/charlie-cfo-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/EveryInc/charlie-cfo-skill/blob/main/SKILL.md', + }, + { + id: 'skill_expo-building-native-ui_d2a54b6a87', + title: 'expo/building-native-ui', + description: + 'expo/building-native-ui skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/expo/skills/blob/main/plugins/expo/skills/building-native-ui/SKILL.md', + }, + { + id: 'skill_expo-expo-api-routes_7c87e870f7', + title: 'expo/expo-api-routes', + description: + 'expo/expo-api-routes skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/expo/skills/blob/main/plugins/expo/skills/expo-api-routes/SKILL.md', + }, + { + id: 'skill_expo-expo-cicd-workflows_78d03fb3f3', + title: 'expo/expo-cicd-workflows', + description: + 'expo/expo-cicd-workflows skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/expo/skills/blob/main/plugins/expo/skills/expo-cicd-workflows/SKILL.md', + }, + { + id: 'skill_expo-expo-deployment_3c80a8a916', + title: 'expo/expo-deployment', + description: + 'expo/expo-deployment skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/expo/skills/blob/main/plugins/expo/skills/expo-deployment/SKILL.md', + }, + { + id: 'skill_expo-expo-dev-client_248b64d7f7', + title: 'expo/expo-dev-client', + description: + 'expo/expo-dev-client skill for Claude workflows from VoltAgent/awesome-agent-skills.', 
+ kind: 'skill', + link: 'https://github.com/expo/skills/blob/main/plugins/expo/skills/expo-dev-client/SKILL.md', + }, + { + id: 'skill_expo-expo-tailwind-setup_f7cd9fd3aa', + title: 'expo/expo-tailwind-setup', + description: + 'expo/expo-tailwind-setup skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/expo/skills/blob/main/plugins/expo/skills/expo-tailwind-setup/SKILL.md', + }, + { + id: 'skill_expo-expo-ui-jetpack-compose_2cff200a97', + title: 'expo/expo-ui-jetpack-compose', + description: + 'expo/expo-ui-jetpack-compose skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/expo/skills/blob/main/plugins/expo/skills/expo-ui-jetpack-compose/SKILL.md', + }, + { + id: 'skill_expo-expo-ui-swift-ui_05d0af854d', + title: 'expo/expo-ui-swift-ui', + description: + 'expo/expo-ui-swift-ui skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/expo/skills/blob/main/plugins/expo/skills/expo-ui-swift-ui/SKILL.md', + }, + { + id: 'skill_expo-native-data-fetching_c254a8ef22', + title: 'expo/native-data-fetching', + description: + 'expo/native-data-fetching skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/expo/skills/blob/main/plugins/expo/skills/native-data-fetching/SKILL.md', + }, + { + id: 'skill_expo-upgrading-expo_705ec878b4', + title: 'expo/upgrading-expo', + description: + 'expo/upgrading-expo skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/expo/skills/blob/main/plugins/expo/skills/upgrading-expo/SKILL.md', + }, + { + id: 'skill_expo-use-dom_006f3f5ff3', + title: 'expo/use-dom', + description: 'expo/use-dom skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/expo/skills/blob/main/plugins/expo/skills/use-dom/SKILL.md', + 
}, + { + id: 'skill_fal-ai-community-fal-3d_8f6fdab07c', + title: 'fal-ai-community/fal-3d', + description: + 'fal-ai-community/fal-3d skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-3d/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-audio_2af0905491', + title: 'fal-ai-community/fal-audio', + description: + 'fal-ai-community/fal-audio skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-audio/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-generate_c9ee6d5968', + title: 'fal-ai-community/fal-generate', + description: + 'fal-ai-community/fal-generate skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-generate/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-image-edit_02790b2e48', + title: 'fal-ai-community/fal-image-edit', + description: + 'fal-ai-community/fal-image-edit skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-image-edit/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-kling-o3_40a58f442c', + title: 'fal-ai-community/fal-kling-o3', + description: + 'fal-ai-community/fal-kling-o3 skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-kling-o3/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-lip-sync_78051ef1bb', + title: 'fal-ai-community/fal-lip-sync', + description: + 'fal-ai-community/fal-lip-sync skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-lip-sync/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-platform_e7011ff14b', + title: 'fal-ai-community/fal-platform', + description: + 'fal-ai-community/fal-platform skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-platform/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-realtime_04eec9edd2', + title: 'fal-ai-community/fal-realtime', + description: + 'fal-ai-community/fal-realtime skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-realtime/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-restore_212b5e977f', + title: 'fal-ai-community/fal-restore', + description: + 'fal-ai-community/fal-restore skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-restore/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-train_e44053cc99', + title: 'fal-ai-community/fal-train', + description: + 'fal-ai-community/fal-train skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-train/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-tryon_7bf45a13c0', + title: 'fal-ai-community/fal-tryon', + description: + 'fal-ai-community/fal-tryon skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-tryon/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-upscale_6623baa1ec', + title: 'fal-ai-community/fal-upscale', + description: + 'fal-ai-community/fal-upscale skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 
'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-upscale/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-video-edit_6261d23dda', + title: 'fal-ai-community/fal-video-edit', + description: + 'fal-ai-community/fal-video-edit skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-video-edit/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-vision_5b060cbfb4', + title: 'fal-ai-community/fal-vision', + description: + 'fal-ai-community/fal-vision skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-vision/SKILL.md', + }, + { + id: 'skill_fal-ai-community-fal-workflow_e4c49afcab', + title: 'fal-ai-community/fal-workflow', + description: + 'fal-ai-community/fal-workflow skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-workflow/SKILL.md', + }, + { + id: 'skill_family-history-research_dee5809afc', + title: 'family-history-research', + description: + 'family-history-research skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/emaynard/claude-family-history-research-skill/blob/main/SKILL.md', + }, + { + id: 'skill_ffuf-claude-skill_707b5b95d6', + title: 'ffuf_claude_skill', + description: + 'ffuf_claude_skill skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/jthack/ffuf_claude_skill/blob/main/SKILL.md', + }, + { + id: 'skill_figma-figma-code-connect-components_0da2e7c1e8', + title: 'figma/figma-code-connect-components', + description: + 'figma/figma-code-connect-components skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/figma/mcp-server-guide/blob/main/skills/figma-code-connect-components/SKILL.md', + }, + { + id: 'skill_figma-figma-create-design-system-rules_3994c3cac7', + title: 'figma/figma-create-design-system-rules', + description: + 'figma/figma-create-design-system-rules skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/figma/mcp-server-guide/blob/main/skills/figma-create-design-system-rules/SKILL.md', + }, + { + id: 'skill_figma-figma-create-new-file_36dd1b88a6', + title: 'figma/figma-create-new-file', + description: + 'figma/figma-create-new-file skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/figma/mcp-server-guide/blob/main/skills/figma-create-new-file/SKILL.md', + }, + { + id: 'skill_figma-figma-generate-design_0d5a1bb367', + title: 'figma/figma-generate-design', + description: + 'figma/figma-generate-design skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/figma/mcp-server-guide/blob/main/skills/figma-generate-design/SKILL.md', + }, + { + id: 'skill_figma-figma-generate-library_fe6df0a05e', + title: 'figma/figma-generate-library', + description: + 'figma/figma-generate-library skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/figma/mcp-server-guide/blob/main/skills/figma-generate-library/SKILL.md', + }, + { + id: 'skill_figma-figma-implement-design_896983758b', + title: 'figma/figma-implement-design', + description: + 'figma/figma-implement-design skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/figma/mcp-server-guide/blob/main/skills/figma-implement-design/SKILL.md', + }, + { + id: 'skill_figma-figma-use_79faec14c0', + title: 'figma/figma-use', + description: + 'figma/figma-use skill for Claude workflows from VoltAgent/awesome-agent-skills.', + 
kind: 'skill', + link: 'https://github.com/figma/mcp-server-guide/blob/main/skills/figma-use/SKILL.md', + }, + { + id: 'skill_file-organizer_33762e5f9b', + title: 'file-organizer', + description: 'file-organizer skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/ComposioHQ/awesome-claude-skills/blob/master/file-organizer/SKILL.md', + }, + { + id: 'skill_find-scene_3558d25ca2', + title: 'find-scene', + description: 'find-scene skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/uriva/find-scene-skill/blob/main/SKILL.md', + }, + { + id: 'skill_find-skills_1461c3a959', + title: 'find-skills', + description: 'find-skills skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/agentbay-ai/agentbay-skills/blob/main/SKILL.md', + }, + { + id: 'skill_finishing-a-development-branch_72ab9aecbd', + title: 'finishing-a-development-branch', + description: + 'finishing-a-development-branch skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/finishing-a-development-branch/SKILL.md', + }, + { + id: 'skill_firecrawl-firecrawl-agent_649a3cba66', + title: 'firecrawl/firecrawl-agent', + description: + 'firecrawl/firecrawl-agent skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/firecrawl/cli/blob/main/skills/firecrawl-agent/SKILL.md', + }, + { + id: 'skill_firecrawl-firecrawl-browser_8c976419e3', + title: 'firecrawl/firecrawl-browser', + description: + 'firecrawl/firecrawl-browser skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/firecrawl/cli/blob/main/skills/firecrawl-browser/SKILL.md', + }, + { + id: 'skill_firecrawl-firecrawl-cli_3c99dbf993', + title: 'firecrawl/firecrawl-cli', + description: + 
'firecrawl/firecrawl-cli skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/firecrawl/cli/blob/main/skills/firecrawl-cli/SKILL.md', + }, + { + id: 'skill_firecrawl-firecrawl-crawl_61ef63d3a8', + title: 'firecrawl/firecrawl-crawl', + description: + 'firecrawl/firecrawl-crawl skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/firecrawl/cli/blob/main/skills/firecrawl-crawl/SKILL.md', + }, + { + id: 'skill_firecrawl-firecrawl-download_c01ca3ae25', + title: 'firecrawl/firecrawl-download', + description: + 'firecrawl/firecrawl-download skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/firecrawl/cli/blob/main/skills/firecrawl-download/SKILL.md', + }, + { + id: 'skill_firecrawl-firecrawl-map_d467d738d5', + title: 'firecrawl/firecrawl-map', + description: + 'firecrawl/firecrawl-map skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/firecrawl/cli/blob/main/skills/firecrawl-map/SKILL.md', + }, + { + id: 'skill_firecrawl-firecrawl-scrape_1676c9b756', + title: 'firecrawl/firecrawl-scrape', + description: + 'firecrawl/firecrawl-scrape skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/firecrawl/cli/blob/main/skills/firecrawl-scrape/SKILL.md', + }, + { + id: 'skill_firecrawl-firecrawl-search_3bbe801757', + title: 'firecrawl/firecrawl-search', + description: + 'firecrawl/firecrawl-search skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/firecrawl/cli/blob/main/skills/firecrawl-search/SKILL.md', + }, + { + id: 'skill_frmoretto-clarity-gate_75180a56f2', + title: 'frmoretto/clarity-gate', + description: + 'frmoretto/clarity-gate skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/frmoretto/clarity-gate/blob/main/SKILL.md', + }, + { + id: 'skill_fvadicamo-dev-agent-skills_ec8fa03f23', + title: 'fvadicamo/dev-agent-skills', + description: + 'fvadicamo/dev-agent-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/fvadicamo/dev-agent-skills/blob/main/SKILL.md', + }, + { + id: 'skill_garry-tan_d1e0ddf8ef', + title: 'Garry Tan', + description: 'Garry Tan skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan', + }, + { + id: 'skill_garrytan-autoplan_1597882c56', + title: 'garrytan/autoplan', + description: + 'garrytan/autoplan skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/autoplan/SKILL.md', + }, + { + id: 'skill_garrytan-benchmark_ad6880476b', + title: 'garrytan/benchmark', + description: + 'garrytan/benchmark skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/benchmark/SKILL.md', + }, + { + id: 'skill_garrytan-browse_f96e1845ed', + title: 'garrytan/browse', + description: + 'garrytan/browse skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/browse/SKILL.md', + }, + { + id: 'skill_garrytan-canary_5e4616ccf4', + title: 'garrytan/canary', + description: + 'garrytan/canary skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/canary/SKILL.md', + }, + { + id: 'skill_garrytan-careful_38261a78e4', + title: 'garrytan/careful', + description: + 'garrytan/careful skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/careful/SKILL.md', + }, + { + id: 'skill_garrytan-codex_d0a9a0dfa1', + 
title: 'garrytan/codex', + description: 'garrytan/codex skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/codex/SKILL.md', + }, + { + id: 'skill_garrytan-cso_3d49ce4c20', + title: 'garrytan/cso', + description: 'garrytan/cso skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/cso/SKILL.md', + }, + { + id: 'skill_garrytan-design-consultation_dfcfa8feb5', + title: 'garrytan/design-consultation', + description: + 'garrytan/design-consultation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/design-consultation/SKILL.md', + }, + ], + // Page 5 + [ + { + id: 'skill_garrytan-design-review_038b935e0f', + title: 'garrytan/design-review', + description: + 'garrytan/design-review skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/design-review/SKILL.md', + }, + { + id: 'skill_garrytan-document-release_33c6362a79', + title: 'garrytan/document-release', + description: + 'garrytan/document-release skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/document-release/SKILL.md', + }, + { + id: 'skill_garrytan-freeze_7450e5ae1a', + title: 'garrytan/freeze', + description: + 'garrytan/freeze skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/freeze/SKILL.md', + }, + { + id: 'skill_garrytan-gstack-upgrade_7e15fdd762', + title: 'garrytan/gstack-upgrade', + description: + 'garrytan/gstack-upgrade skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/gstack-upgrade/SKILL.md', + }, + { + id: 
'skill_garrytan-guard_68f97cc8c1', + title: 'garrytan/guard', + description: 'garrytan/guard skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/guard/SKILL.md', + }, + { + id: 'skill_garrytan-investigate_1ba4851417', + title: 'garrytan/investigate', + description: + 'garrytan/investigate skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/investigate/SKILL.md', + }, + { + id: 'skill_garrytan-land-and-deploy_ea48422e22', + title: 'garrytan/land-and-deploy', + description: + 'garrytan/land-and-deploy skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/land-and-deploy/SKILL.md', + }, + { + id: 'skill_garrytan-office-hours_9a5d5b8e58', + title: 'garrytan/office-hours', + description: + 'garrytan/office-hours skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/office-hours/SKILL.md', + }, + { + id: 'skill_garrytan-plan-ceo-review_c4c2e267b8', + title: 'garrytan/plan-ceo-review', + description: + 'garrytan/plan-ceo-review skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/plan-ceo-review/SKILL.md', + }, + { + id: 'skill_garrytan-plan-design-review_43d69c32a4', + title: 'garrytan/plan-design-review', + description: + 'garrytan/plan-design-review skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/plan-design-review/SKILL.md', + }, + { + id: 'skill_garrytan-plan-eng-review_c6f5abf281', + title: 'garrytan/plan-eng-review', + description: + 'garrytan/plan-eng-review skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/garrytan/gstack/blob/main/plan-eng-review/SKILL.md', + }, + { + id: 'skill_garrytan-qa_b71e9c157f', + title: 'garrytan/qa', + description: 'garrytan/qa skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/qa/SKILL.md', + }, + { + id: 'skill_garrytan-qa-only_6a32b32e31', + title: 'garrytan/qa-only', + description: + 'garrytan/qa-only skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/qa-only/SKILL.md', + }, + { + id: 'skill_garrytan-retro_7c96efc61a', + title: 'garrytan/retro', + description: 'garrytan/retro skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/retro/SKILL.md', + }, + { + id: 'skill_garrytan-review_945318e11f', + title: 'garrytan/review', + description: + 'garrytan/review skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/review/SKILL.md', + }, + { + id: 'skill_garrytan-setup-browser-cookies_aef8191765', + title: 'garrytan/setup-browser-cookies', + description: + 'garrytan/setup-browser-cookies skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/setup-browser-cookies/SKILL.md', + }, + { + id: 'skill_garrytan-setup-deploy_fc8f0e6b41', + title: 'garrytan/setup-deploy', + description: + 'garrytan/setup-deploy skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/setup-deploy/SKILL.md', + }, + { + id: 'skill_garrytan-ship_5ddb25ab26', + title: 'garrytan/ship', + description: 'garrytan/ship skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/garrytan/gstack/blob/main/ship/SKILL.md', + }, + { + id: 'skill_garrytan-supabase_2307b705d6', + title: 'garrytan/supabase', + description: + 'garrytan/supabase skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/supabase/SKILL.md', + }, + { + id: 'skill_garrytan-unfreeze_8123f2a105', + title: 'garrytan/unfreeze', + description: + 'garrytan/unfreeze skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/garrytan/gstack/blob/main/unfreeze/SKILL.md', + }, + { + id: 'skill_getsentry-agents-md_cd8a7c8620', + title: 'getsentry/agents-md', + description: + 'getsentry/agents-md skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/getsentry/skills/blob/main/plugins/sentry-skills/skills/agents-md/SKILL.md', + }, + { + id: 'skill_getsentry-claude-settings-audit_04c97d5226', + title: 'getsentry/claude-settings-audit', + description: + 'getsentry/claude-settings-audit skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/getsentry/skills/blob/main/plugins/sentry-skills/skills/claude-settings-audit/SKILL.md', + }, + { + id: 'skill_getsentry-code-review_7d53b872c9', + title: 'getsentry/code-review', + description: + 'getsentry/code-review skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/getsentry/skills/blob/main/plugins/sentry-skills/skills/code-review/SKILL.md', + }, + { + id: 'skill_getsentry-commit_bf891cf414', + title: 'getsentry/commit', + description: + 'getsentry/commit skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/getsentry/skills/blob/main/plugins/sentry-skills/skills/commit/SKILL.md', + }, + { + id: 'skill_getsentry-create-pr_27f24d3b49', + title: 'getsentry/create-pr', + 
description: + 'getsentry/create-pr skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/getsentry/skills/blob/main/plugins/sentry-skills/skills/create-pr/SKILL.md', + }, + { + id: 'skill_getsentry-find-bugs_c7d172c0e7', + title: 'getsentry/find-bugs', + description: + 'getsentry/find-bugs skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/getsentry/skills/blob/main/plugins/sentry-skills/skills/find-bugs/SKILL.md', + }, + { + id: 'skill_getsentry-iterate-pr_6a6ff92f09', + title: 'getsentry/iterate-pr', + description: + 'getsentry/iterate-pr skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/getsentry/skills/blob/main/plugins/sentry-skills/skills/iterate-pr/SKILL.md', + }, + { + id: 'skill_git-pushing_714a406307', + title: 'git-pushing', + description: 'git-pushing skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/mhattingpete/claude-skills-marketplace/blob/main/engineering-workflow-plugin/skills/git-pushing/SKILL.md', + }, + { + id: 'skill_glitternetwork-pinme_2a20f66c14', + title: 'glitternetwork/pinme', + description: + 'glitternetwork/pinme skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/glitternetwork/skills/blob/main/pinme/SKILL.md', + }, + { + id: 'skill_gokapso-automate-whatsapp_da1d1c5bfc', + title: 'gokapso/automate-whatsapp', + description: + 'gokapso/automate-whatsapp skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/gokapso/agent-skills/blob/master/skills/automate-whatsapp/SKILL.md', + }, + { + id: 'skill_gokapso-integrate-whatsapp_d129fe8a45', + title: 'gokapso/integrate-whatsapp', + description: + 'gokapso/integrate-whatsapp skill for Claude workflows from VoltAgent/awesome-agent-skills.', + 
kind: 'skill', + link: 'https://github.com/gokapso/agent-skills/blob/master/skills/integrate-whatsapp/SKILL.md', + }, + { + id: 'skill_gokapso-observe-whatsapp_9bca60cae0', + title: 'gokapso/observe-whatsapp', + description: + 'gokapso/observe-whatsapp skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/gokapso/agent-skills/blob/master/skills/observe-whatsapp/SKILL.md', + }, + { + id: 'skill_google-gemini-gemini-api-dev_63ea66e9f9', + title: 'google-gemini/gemini-api-dev', + description: + 'google-gemini/gemini-api-dev skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/google-gemini/gemini-skills/blob/main/skills/gemini-api-dev/SKILL.md', + }, + { + id: 'skill_google-gemini-gemini-interactions-api_42565e69ab', + title: 'google-gemini/gemini-interactions-api', + description: + 'google-gemini/gemini-interactions-api skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/google-gemini/gemini-skills/blob/main/skills/gemini-interactions-api/SKILL.md', + }, + { + id: 'skill_google-gemini-gemini-live-api-dev_bfe3fd128e', + title: 'google-gemini/gemini-live-api-dev', + description: + 'google-gemini/gemini-live-api-dev skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/google-gemini/gemini-skills/blob/main/skills/gemini-live-api-dev/SKILL.md', + }, + { + id: 'skill_google-gemini-vertex-ai-api-dev_fdd5e84322', + title: 'google-gemini/vertex-ai-api-dev', + description: + 'google-gemini/vertex-ai-api-dev skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/google-gemini/gemini-skills/blob/main/skills/vertex-ai-api-dev/SKILL.md', + }, + { + id: 'skill_google-labs-code-design-md_bc203b8eae', + title: 'google-labs-code/design-md', + description: + 'google-labs-code/design-md skill for Claude 
workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/google-labs-code/stitch-skills/blob/main/skills/design-md/SKILL.md', + }, + { + id: 'skill_google-labs-code-enhance-prompt_b95c2a053e', + title: 'google-labs-code/enhance-prompt', + description: + 'google-labs-code/enhance-prompt skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/google-labs-code/stitch-skills/blob/main/skills/enhance-prompt/SKILL.md', + }, + { + id: 'skill_google-labs-code-react-components_2d28b2c743', + title: 'google-labs-code/react-components', + description: + 'google-labs-code/react-components skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/google-labs-code/stitch-skills/blob/main/skills/react-components/SKILL.md', + }, + { + id: 'skill_google-labs-code-remotion_d780f170fd', + title: 'google-labs-code/remotion', + description: + 'google-labs-code/remotion skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/google-labs-code/stitch-skills/blob/main/skills/remotion/SKILL.md', + }, + { + id: 'skill_google-labs-code-shadcn-ui_5f77788b8f', + title: 'google-labs-code/shadcn-ui', + description: + 'google-labs-code/shadcn-ui skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/google-labs-code/stitch-skills/blob/main/skills/shadcn-ui/SKILL.md', + }, + { + id: 'skill_google-labs-code-stitch-loop_08997f1963', + title: 'google-labs-code/stitch-loop', + description: + 'google-labs-code/stitch-loop skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/google-labs-code/stitch-skills/blob/main/skills/stitch-loop/SKILL.md', + }, + { + id: 'skill_google-tts_0ff898dccd', + title: 'google-tts', + description: 'google-tts skill for Claude workflows from 
BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/sanjay3290/ai-skills/blob/main/skills/google-tts/SKILL.md', + }, + { + id: 'skill_google-workspace-skills_2701acfe80', + title: 'google-workspace-skills', + description: + 'google-workspace-skills skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/sanjay3290/ai-skills/blob/main/skills/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-admin_32e4acd595', + title: 'googleworkspace/gws-admin', + description: + 'googleworkspace/gws-admin skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-admin/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-admin-reports_f670ec73fa', + title: 'googleworkspace/gws-admin-reports', + description: + 'googleworkspace/gws-admin-reports skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-admin-reports/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-alertcenter_b80a53faa2', + title: 'googleworkspace/gws-alertcenter', + description: + 'googleworkspace/gws-alertcenter skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-alertcenter/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-apps-script_4699349a2a', + title: 'googleworkspace/gws-apps-script', + description: + 'googleworkspace/gws-apps-script skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-apps-script/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-calendar_ec82a49f13', + title: 'googleworkspace/gws-calendar', + description: + 'googleworkspace/gws-calendar skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', 
+ link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-calendar/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-chat_90971bcb45', + title: 'googleworkspace/gws-chat', + description: + 'googleworkspace/gws-chat skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-chat/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-classroom_e2a017781e', + title: 'googleworkspace/gws-classroom', + description: + 'googleworkspace/gws-classroom skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-classroom/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-cloudidentity_7473c2d15e', + title: 'googleworkspace/gws-cloudidentity', + description: + 'googleworkspace/gws-cloudidentity skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-cloudidentity/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-docs_0584880aea', + title: 'googleworkspace/gws-docs', + description: + 'googleworkspace/gws-docs skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-docs/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-drive_f8cd3dea9c', + title: 'googleworkspace/gws-drive', + description: + 'googleworkspace/gws-drive skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-drive/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-events_90123a3621', + title: 'googleworkspace/gws-events', + description: + 'googleworkspace/gws-events skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/googleworkspace/cli/blob/main/skills/gws-events/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-forms_1563e144bc', + title: 'googleworkspace/gws-forms', + description: + 'googleworkspace/gws-forms skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-forms/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-gmail_03cc2c6e6b', + title: 'googleworkspace/gws-gmail', + description: + 'googleworkspace/gws-gmail skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-gmail/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-groupssettings_ab9e75f908', + title: 'googleworkspace/gws-groupssettings', + description: + 'googleworkspace/gws-groupssettings skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-groupssettings/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-keep_d51e6b8172', + title: 'googleworkspace/gws-keep', + description: + 'googleworkspace/gws-keep skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-keep/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-licensing_3b78251421', + title: 'googleworkspace/gws-licensing', + description: + 'googleworkspace/gws-licensing skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-licensing/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-meet_08be9ff02c', + title: 'googleworkspace/gws-meet', + description: + 'googleworkspace/gws-meet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-meet/SKILL.md', + }, 
+ { + id: 'skill_googleworkspace-gws-modelarmor_8a5d9a9fa8', + title: 'googleworkspace/gws-modelarmor', + description: + 'googleworkspace/gws-modelarmor skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-modelarmor/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-people_539be65870', + title: 'googleworkspace/gws-people', + description: + 'googleworkspace/gws-people skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-people/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-reseller_7af511b695', + title: 'googleworkspace/gws-reseller', + description: + 'googleworkspace/gws-reseller skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-reseller/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-shared_b8b83cdbe6', + title: 'googleworkspace/gws-shared', + description: + 'googleworkspace/gws-shared skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-shared/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-sheets_688ca0576d', + title: 'googleworkspace/gws-sheets', + description: + 'googleworkspace/gws-sheets skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-sheets/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-slides_1de4c40750', + title: 'googleworkspace/gws-slides', + description: + 'googleworkspace/gws-slides skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-slides/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-tasks_0c6fc1df19', + title: 
'googleworkspace/gws-tasks', + description: + 'googleworkspace/gws-tasks skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-tasks/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-vault_65ed002b39', + title: 'googleworkspace/gws-vault', + description: + 'googleworkspace/gws-vault skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-vault/SKILL.md', + }, + { + id: 'skill_googleworkspace-gws-workflow_2fb9133b3d', + title: 'googleworkspace/gws-workflow', + description: + 'googleworkspace/gws-workflow skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/googleworkspace/cli/blob/main/skills/gws-workflow/SKILL.md', + }, + { + id: 'skill_greensock-gsap-core_5bd9719a0c', + title: 'greensock/gsap-core', + description: + 'greensock/gsap-core skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/greensock/gsap-skills/blob/main/skills/gsap-core/SKILL.md', + }, + { + id: 'skill_greensock-gsap-frameworks_224df781bf', + title: 'greensock/gsap-frameworks', + description: + 'greensock/gsap-frameworks skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/greensock/gsap-skills/blob/main/skills/gsap-frameworks/SKILL.md', + }, + { + id: 'skill_greensock-gsap-performance_1e811fdbb8', + title: 'greensock/gsap-performance', + description: + 'greensock/gsap-performance skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/greensock/gsap-skills/blob/main/skills/gsap-performance/SKILL.md', + }, + { + id: 'skill_greensock-gsap-plugins_fdee29a886', + title: 'greensock/gsap-plugins', + description: + 'greensock/gsap-plugins skill for Claude workflows from 
VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/greensock/gsap-skills/blob/main/skills/gsap-plugins/SKILL.md', + }, + { + id: 'skill_greensock-gsap-react_04d4c36c0f', + title: 'greensock/gsap-react', + description: + 'greensock/gsap-react skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/greensock/gsap-skills/blob/main/skills/gsap-react/SKILL.md', + }, + { + id: 'skill_greensock-gsap-scrolltrigger_c81083f09c', + title: 'greensock/gsap-scrolltrigger', + description: + 'greensock/gsap-scrolltrigger skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/greensock/gsap-skills/blob/main/skills/gsap-scrolltrigger/SKILL.md', + }, + { + id: 'skill_greensock-gsap-timeline_881e446246', + title: 'greensock/gsap-timeline', + description: + 'greensock/gsap-timeline skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/greensock/gsap-skills/blob/main/skills/gsap-timeline/SKILL.md', + }, + { + id: 'skill_greensock-gsap-utils_51c4961fa7', + title: 'greensock/gsap-utils', + description: + 'greensock/gsap-utils skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/greensock/gsap-skills/blob/main/skills/gsap-utils/SKILL.md', + }, + { + id: 'skill_hamelsmu-build-review-interface_9728bc3349', + title: 'hamelsmu/build-review-interface', + description: + 'hamelsmu/build-review-interface skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/hamelsmu/prompts/blob/main/evals-skills/skills/build-review-interface/SKILL.md', + }, + { + id: 'skill_hamelsmu-error-analysis_00fa1ee33c', + title: 'hamelsmu/error-analysis', + description: + 'hamelsmu/error-analysis skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/hamelsmu/prompts/blob/main/evals-skills/skills/error-analysis/SKILL.md', + }, + { + id: 'skill_hamelsmu-eval-audit_36d50fe3cd', + title: 'hamelsmu/eval-audit', + description: + 'hamelsmu/eval-audit skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/hamelsmu/prompts/blob/main/evals-skills/skills/eval-audit/SKILL.md', + }, + { + id: 'skill_hamelsmu-evaluate-rag_d227ad7bff', + title: 'hamelsmu/evaluate-rag', + description: + 'hamelsmu/evaluate-rag skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/hamelsmu/prompts/blob/main/evals-skills/skills/evaluate-rag/SKILL.md', + }, + { + id: 'skill_hamelsmu-generate-synthetic-data_54b00988c3', + title: 'hamelsmu/generate-synthetic-data', + description: + 'hamelsmu/generate-synthetic-data skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/hamelsmu/prompts/blob/main/evals-skills/skills/generate-synthetic-data/SKILL.md', + }, + { + id: 'skill_hamelsmu-validate-evaluator_9ba52d5aa6', + title: 'hamelsmu/validate-evaluator', + description: + 'hamelsmu/validate-evaluator skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/hamelsmu/prompts/blob/main/evals-skills/skills/validate-evaluator/SKILL.md', + }, + { + id: 'skill_hamelsmu-write-judge-prompt_d49bfac14f', + title: 'hamelsmu/write-judge-prompt', + description: + 'hamelsmu/write-judge-prompt skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/hamelsmu/prompts/blob/main/evals-skills/skills/write-judge-prompt/SKILL.md', + }, + { + id: 'skill_hanfang-claude-memory-skill_a509e7be83', + title: 'hanfang/claude-memory-skill', + description: + 'hanfang/claude-memory-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/hanfang/claude-memory-skill/blob/main/SKILL.md', + }, + { + id: 'skill_hashicorp-agent-skills_2cf7a709a1', + title: 'hashicorp-agent-skills', + description: + 'hashicorp-agent-skills skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/hashicorp/agent-skills/blob/main/SKILL.md', + }, + { + id: 'skill_hashicorp-terraform-code-generation_ef0d871297', + title: 'hashicorp/terraform-code-generation', + description: + 'hashicorp/terraform-code-generation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/hashicorp/agent-skills/blob/main/terraform/code-generation/SKILL.md', + }, + { + id: 'skill_hashicorp-terraform-module-generation_ffb839100c', + title: 'hashicorp/terraform-module-generation', + description: + 'hashicorp/terraform-module-generation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/hashicorp/agent-skills/blob/main/terraform/module-generation/SKILL.md', + }, + { + id: 'skill_hashicorp-terraform-provider-development_93acef542c', + title: 'hashicorp/terraform-provider-development', + description: + 'hashicorp/terraform-provider-development skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/hashicorp/agent-skills/blob/main/terraform/provider-development/SKILL.md', + }, + { + id: 'skill_helius-labs-helius-skills_0fdf4d9815', + title: 'helius-labs/helius-skills', + description: + 'helius-labs/helius-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/helius-labs/core-ai/blob/main/helius-skills/SKILL.md', + }, + { + id: 'skill_huggingface-hf-cli_3cba0fb6b8', + title: 'huggingface/hf-cli', + description: + 'huggingface/hf-cli skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/huggingface/skills/blob/main/skills/hf-cli/SKILL.md', + }, + { + id: 'skill_huggingface-hugging-face-cli_ce911d0d3c', + title: 'huggingface/hugging-face-cli', + description: + 'huggingface/hugging-face-cli skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/huggingface/skills/blob/main/skills/hugging-face-cli/SKILL.md', + }, + { + id: 'skill_huggingface-hugging-face-dataset-viewer_a72d32a8c6', + title: 'huggingface/hugging-face-dataset-viewer', + description: + 'huggingface/hugging-face-dataset-viewer skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/huggingface/skills/blob/main/skills/hugging-face-dataset-viewer/SKILL.md', + }, + { + id: 'skill_huggingface-hugging-face-datasets_d1bdfdb2ea', + title: 'huggingface/hugging-face-datasets', + description: + 'huggingface/hugging-face-datasets skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/huggingface/skills/blob/main/skills/hugging-face-datasets/SKILL.md', + }, + { + id: 'skill_huggingface-hugging-face-evaluation_5f7239ee42', + title: 'huggingface/hugging-face-evaluation', + description: + 'huggingface/hugging-face-evaluation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/huggingface/skills/blob/main/skills/hugging-face-evaluation/SKILL.md', + }, + { + id: 'skill_huggingface-hugging-face-jobs_cbd8bfbb7b', + title: 'huggingface/hugging-face-jobs', + description: + 'huggingface/hugging-face-jobs skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/huggingface/skills/blob/main/skills/hugging-face-jobs/SKILL.md', + }, + { + id: 'skill_huggingface-hugging-face-model-trainer_c429ca1340', + title: 'huggingface/hugging-face-model-trainer', + description: + 'huggingface/hugging-face-model-trainer skill for 
Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/huggingface/skills/blob/main/skills/hugging-face-model-trainer/SKILL.md', + }, + { + id: 'skill_huggingface-hugging-face-paper-pages_aa877e367b', + title: 'huggingface/hugging-face-paper-pages', + description: + 'huggingface/hugging-face-paper-pages skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/huggingface/skills/blob/main/skills/hugging-face-paper-pages/SKILL.md', + }, + { + id: 'skill_huggingface-hugging-face-paper-publisher_c3acffa81e', + title: 'huggingface/hugging-face-paper-publisher', + description: + 'huggingface/hugging-face-paper-publisher skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/huggingface/skills/blob/main/skills/hugging-face-paper-publisher/SKILL.md', + }, + ], + // Page 6 + [ + { + id: 'skill_huggingface-hugging-face-tool-builder_b36d3b73bf', + title: 'huggingface/hugging-face-tool-builder', + description: + 'huggingface/hugging-face-tool-builder skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/huggingface/skills/blob/main/skills/hugging-face-tool-builder/SKILL.md', + }, + { + id: 'skill_huggingface-hugging-face-trackio_8fd96c1226', + title: 'huggingface/hugging-face-trackio', + description: + 'huggingface/hugging-face-trackio skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/huggingface/skills/blob/main/skills/hugging-face-trackio/SKILL.md', + }, + { + id: 'skill_huggingface-hugging-face-vision-trainer_20caa64d66', + title: 'huggingface/hugging-face-vision-trainer', + description: + 'huggingface/hugging-face-vision-trainer skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/huggingface/skills/blob/main/skills/hugging-face-vision-trainer/SKILL.md', + }, + { + id: 'skill_huggingface-huggingface-gradio_70b23401df', + title: 'huggingface/huggingface-gradio', + description: + 'huggingface/huggingface-gradio skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/huggingface/skills/blob/main/skills/huggingface-gradio/SKILL.md', + }, + { + id: 'skill_huggingface-transformers-js_7675c46213', + title: 'huggingface/transformers.js', + description: + 'huggingface/transformers.js skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/huggingface/skills/blob/main/skills/transformers.js/SKILL.md', + }, + { + id: 'skill_ibelick-ui-skills_634ff6a85b', + title: 'ibelick/ui-skills', + description: + 'ibelick/ui-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/ibelick/ui-skills/blob/main/SKILL.md', + }, + { + id: 'skill_image-enhancer_b0e7349485', + title: 'image-enhancer', + description: 'image-enhancer skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/ComposioHQ/awesome-claude-skills/blob/master/image-enhancer/SKILL.md', + }, + { + id: 'skill_imagen_86a1b6ce7d', + title: 'imagen', + description: 'imagen skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/sanjay3290/ai-skills/blob/main/skills/imagen/SKILL.md', + }, + { + id: 'skill_invoice-organizer_e6110a4db1', + title: 'invoice-organizer', + description: + 'invoice-organizer skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/ComposioHQ/awesome-claude-skills/blob/master/invoice-organizer/SKILL.md', + }, + { + id: 'skill_jeffersonwarrior-claudisms_ed4e76470c', + title: 'jeffersonwarrior/claudisms', + description: + 
'jeffersonwarrior/claudisms skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/jeffersonwarrior/claudisms/blob/main/SKILL.md', + }, + { + id: 'skill_joannis-claude-skills_e86d7cc505', + title: 'Joannis/claude-skills', + description: + 'Joannis/claude-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/Joannis/claude-skills/blob/main/SKILL.md', + }, + { + id: 'skill_jules_d219c134e6', + title: 'jules', + description: 'jules skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/sanjay3290/ai-skills/blob/main/skills/jules/SKILL.md', + }, + { + id: 'skill_k-kolomeitsev-data-structure-protocol_1fb9a4db5a', + title: 'k-kolomeitsev/data-structure-protocol', + description: + 'k-kolomeitsev/data-structure-protocol skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/k-kolomeitsev/data-structure-protocol/blob/main/SKILL.md', + }, + { + id: 'skill_kaggle-skill_abba68fc2c', + title: 'kaggle-skill', + description: 'kaggle-skill skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/shepsci/kaggle-skill/blob/main/SKILL.md', + }, + { + id: 'skill_kanban-skill_1da8133299', + title: 'kanban-skill', + description: 'kanban-skill skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/mattjoyce/kanban-skill/blob/main/SKILL.md', + }, + { + id: 'skill_kevin7qi-codex-collab_3bf2b2567b', + title: 'Kevin7Qi/codex-collab', + description: + 'Kevin7Qi/codex-collab skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/Kevin7Qi/codex-collab/blob/main/SKILL.md', + }, + { + id: 'skill_komal-skynet-claude-skill-homeassistant_7527a6a89d', + title: 'komal-SkyNET/claude-skill-homeassistant', + 
description: + 'komal-SkyNET/claude-skill-homeassistant skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/komal-SkyNET/claude-skill-homeassistant/blob/main/SKILL.md', + }, + { + id: 'skill_kreuzberg-dev-kreuzberg_fa7de73049', + title: 'kreuzberg-dev/kreuzberg', + description: + 'kreuzberg-dev/kreuzberg skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/kreuzberg-dev/kreuzberg/blob/main/skills/kreuzberg/SKILL.md', + }, + { + id: 'skill_lackeyjb-playwright-skill_3eaefc20b3', + title: 'lackeyjb/playwright-skill', + description: + 'lackeyjb/playwright-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/lackeyjb/playwright-skill/blob/main/SKILL.md', + }, + { + id: 'skill_lawvable-awesome-legal-skills_9bce6c32b7', + title: 'lawvable/awesome-legal-skills', + description: + 'lawvable/awesome-legal-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/lawvable/awesome-legal-skills/blob/main/SKILL.md', + }, + { + id: 'skill_leonxlnx-taste-skill_a5e9782adc', + title: 'Leonxlnx/taste-skill', + description: + 'Leonxlnx/taste-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/Leonxlnx/taste-skill/blob/main/SKILL.md', + }, + { + id: 'skill_lightning-architecture-review_c6a1aaa7d6', + title: 'lightning-architecture-review', + description: + 'lightning-architecture-review skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/8144225309/superscalar-mcp/blob/master/skills/lightning-architecture-review/SKILL.md', + }, + { + id: 'skill_lightning-channel-factories_ad3095cbbc', + title: 'lightning-channel-factories', + description: + 'lightning-channel-factories skill for Claude workflows from 
BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/8144225309/superscalar-mcp/blob/master/skills/lightning-channel-factories/SKILL.md', + }, + { + id: 'skill_lightning-factory-explainer_2454c79c25', + title: 'lightning-factory-explainer', + description: + 'lightning-factory-explainer skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/8144225309/superscalar-mcp/blob/master/skills/lightning-factory-explainer/SKILL.md', + }, + { + id: 'skill_linear-claude-skill_a97c392420', + title: 'linear-claude-skill', + description: + 'linear-claude-skill skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/wrsmith108/linear-claude-skill/blob/main/SKILL.md', + }, + { + id: 'skill_linear-cli-skill_c4d2eb3253', + title: 'linear-cli-skill', + description: + 'linear-cli-skill skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/Valian/linear-cli-skill/blob/main/SKILL.md', + }, + { + id: 'skill_linkedin_25fbc76817', + title: 'linkedin', + description: 'linkedin skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/Linked-API/linkedin-skills/blob/main/SKILL.md', + }, + { + id: 'skill_makenotion-knowledge-capture_08b1a780d2', + title: 'makenotion/knowledge-capture', + description: + 'makenotion/knowledge-capture skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/makenotion/claude-code-notion-plugin/blob/main/skills/notion/knowledge-capture/SKILL.md', + }, + { + id: 'skill_makenotion-knowledge-capture_513df03f3e', + title: 'makenotion/knowledge-capture', + description: + 'makenotion/knowledge-capture skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/makenotion/notion-cookbook/blob/main/skills/claude/knowledge-capture/SKILL.md', + }, + { + id: 'skill_makenotion-meeting-intelligence_1ccd265908', + title: 'makenotion/meeting-intelligence', + description: + 'makenotion/meeting-intelligence skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/makenotion/claude-code-notion-plugin/blob/main/skills/notion/meeting-intelligence/SKILL.md', + }, + { + id: 'skill_makenotion-meeting-intelligence_46a6ba6841', + title: 'makenotion/meeting-intelligence', + description: + 'makenotion/meeting-intelligence skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/makenotion/notion-cookbook/blob/main/skills/claude/meeting-intelligence/SKILL.md', + }, + { + id: 'skill_makenotion-research-documentation_01d13dffc9', + title: 'makenotion/research-documentation', + description: + 'makenotion/research-documentation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/makenotion/claude-code-notion-plugin/blob/main/skills/notion/research-documentation/SKILL.md', + }, + { + id: 'skill_makenotion-research-documentation_c707cdb5b5', + title: 'makenotion/research-documentation', + description: + 'makenotion/research-documentation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/makenotion/notion-cookbook/blob/main/skills/claude/research-documentation/SKILL.md', + }, + { + id: 'skill_makenotion-spec-to-implementation_fc5da31120', + title: 'makenotion/spec-to-implementation', + description: + 'makenotion/spec-to-implementation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/makenotion/claude-code-notion-plugin/blob/main/skills/notion/spec-to-implementation/SKILL.md', + }, + { + id: 'skill_makenotion-spec-to-implementation_adea2148e0', + 
title: 'makenotion/spec-to-implementation', + description: + 'makenotion/spec-to-implementation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/makenotion/notion-cookbook/blob/main/skills/claude/spec-to-implementation/SKILL.md', + }, + { + id: 'skill_manus_f5769976ae', + title: 'manus', + description: 'manus skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/sanjay3290/ai-skills/blob/main/skills/manus/SKILL.md', + }, + { + id: 'skill_massimodeluisa-recursive-decomposition-skill_5ca4913622', + title: 'massimodeluisa/recursive-decomposition-skill', + description: + 'massimodeluisa/recursive-decomposition-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/massimodeluisa/recursive-decomposition-skill/blob/main/SKILL.md', + }, + { + id: 'skill_materials-simulation-skills_ed3c026177', + title: 'materials-simulation-skills', + description: + 'materials-simulation-skills skill for Claude workflows from BehiSecc/awesome-claude-skills, VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/HeshamFS/materials-simulation-skills/blob/main/SKILL.md', + }, + { + id: 'skill_mattpocock-skills_17322062f7', + title: 'mattpocock/skills', + description: + 'mattpocock/skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/mattpocock/skills/blob/main/SKILL.md', + }, + { + id: 'skill_mcollina-skills_c42b7bdcdf', + title: 'mcollina/skills', + description: + 'mcollina/skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/mcollina/skills/blob/main/skills/SKILL.md', + }, + { + id: 'skill_meeting-insights-analyzer_1be0b0f988', + title: 'meeting-insights-analyzer', + description: + 'meeting-insights-analyzer skill for Claude workflows from 
BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/ComposioHQ/awesome-claude-skills/blob/master/meeting-insights-analyzer/SKILL.md', + }, + { + id: 'skill_meodai-skill-color-expert_b7a26b0238', + title: 'meodai/skill.color-expert', + description: + 'meodai/skill.color-expert skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/meodai/skill.color-expert/blob/main/SKILL.md', + }, + { + id: 'skill_microsoft-agent-framework-azure-ai-py_fa732611f8', + title: 'microsoft/agent-framework-azure-ai-py', + description: + 'microsoft/agent-framework-azure-ai-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/agent-framework-azure-ai-py/SKILL.md', + }, + { + id: 'skill_microsoft-agents-v2-py_1e276b4745', + title: 'microsoft/agents-v2-py', + description: + 'microsoft/agents-v2-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/agents-v2-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-anomalydetector-java_b2b6e619ec', + title: 'microsoft/azure-ai-anomalydetector-java', + description: + 'microsoft/azure-ai-anomalydetector-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-ai-anomalydetector-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-contentsafety-java_790c735a89', + title: 'microsoft/azure-ai-contentsafety-java', + description: + 'microsoft/azure-ai-contentsafety-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-ai-contentsafety-java/SKILL.md', + }, + { 
+ id: 'skill_microsoft-azure-ai-contentsafety-py_45fad43c42', + title: 'microsoft/azure-ai-contentsafety-py', + description: + 'microsoft/azure-ai-contentsafety-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-ai-contentsafety-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-contentsafety-ts_c396ce6c32', + title: 'microsoft/azure-ai-contentsafety-ts', + description: + 'microsoft/azure-ai-contentsafety-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-ai-contentsafety-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-contentunderstanding-py_0e50a2a256', + title: 'microsoft/azure-ai-contentunderstanding-py', + description: + 'microsoft/azure-ai-contentunderstanding-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-ai-contentunderstanding-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-document-intelligence-dotnet_f2be7dd2d2', + title: 'microsoft/azure-ai-document-intelligence-dotnet', + description: + 'microsoft/azure-ai-document-intelligence-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-ai-document-intelligence-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-document-intelligence-ts_7bcb653a35', + title: 'microsoft/azure-ai-document-intelligence-ts', + description: + 'microsoft/azure-ai-document-intelligence-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-ai-document-intelligence-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-formrecognizer-java_d426a93853', + title: 'microsoft/azure-ai-formrecognizer-java', + description: + 'microsoft/azure-ai-formrecognizer-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-ai-formrecognizer-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-ml-py_029cf8682b', + title: 'microsoft/azure-ai-ml-py', + description: + 'microsoft/azure-ai-ml-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-ai-ml-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-openai-dotnet_219bb539b9', + title: 'microsoft/azure-ai-openai-dotnet', + description: + 'microsoft/azure-ai-openai-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-ai-openai-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-projects-dotnet_dc5dfed03f', + title: 'microsoft/azure-ai-projects-dotnet', + description: + 'microsoft/azure-ai-projects-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-ai-projects-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-projects-java_90fa0c2ff7', + title: 'microsoft/azure-ai-projects-java', + description: + 'microsoft/azure-ai-projects-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-ai-projects-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-projects-py_3a29bcaac4', + title: 'microsoft/azure-ai-projects-py', + description: + 'microsoft/azure-ai-projects-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-ai-projects-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-projects-ts_b884383a6f', + title: 'microsoft/azure-ai-projects-ts', + description: + 'microsoft/azure-ai-projects-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-ai-projects-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-textanalytics-py_848b9a7ad1', + title: 'microsoft/azure-ai-textanalytics-py', + description: + 'microsoft/azure-ai-textanalytics-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-ai-textanalytics-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-transcription-py_7e7bff7b1e', + title: 'microsoft/azure-ai-transcription-py', + description: + 'microsoft/azure-ai-transcription-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-ai-transcription-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-translation-document-py_02dea6852f', + title: 'microsoft/azure-ai-translation-document-py', + description: + 'microsoft/azure-ai-translation-document-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-ai-translation-document-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-translation-text-py_870ded27c4', + title: 'microsoft/azure-ai-translation-text-py', + description: + 'microsoft/azure-ai-translation-text-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-ai-translation-text-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-translation-ts_50e605243f', + title: 'microsoft/azure-ai-translation-ts', + description: + 'microsoft/azure-ai-translation-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-ai-translation-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-vision-imageanalysis-java_34b597b1f1', + title: 'microsoft/azure-ai-vision-imageanalysis-java', + description: + 'microsoft/azure-ai-vision-imageanalysis-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-ai-vision-imageanalysis-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-vision-imageanalysis-py_7bc7f48dd0', + title: 'microsoft/azure-ai-vision-imageanalysis-py', + description: + 'microsoft/azure-ai-vision-imageanalysis-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-ai-vision-imageanalysis-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-voicelive-dotnet_dc77497bb9', + title: 'microsoft/azure-ai-voicelive-dotnet', + description: + 'microsoft/azure-ai-voicelive-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 
'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-ai-voicelive-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-voicelive-java_c98decef49', + title: 'microsoft/azure-ai-voicelive-java', + description: + 'microsoft/azure-ai-voicelive-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-ai-voicelive-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-voicelive-py_ff10907095', + title: 'microsoft/azure-ai-voicelive-py', + description: + 'microsoft/azure-ai-voicelive-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-ai-voicelive-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-ai-voicelive-ts_e96ec9e2d1', + title: 'microsoft/azure-ai-voicelive-ts', + description: + 'microsoft/azure-ai-voicelive-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-ai-voicelive-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-appconfiguration-java_b9cb311114', + title: 'microsoft/azure-appconfiguration-java', + description: + 'microsoft/azure-appconfiguration-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-appconfiguration-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-appconfiguration-py_285b6db2ec', + title: 'microsoft/azure-appconfiguration-py', + description: + 'microsoft/azure-appconfiguration-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-appconfiguration-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-appconfiguration-ts_babe567800', + title: 'microsoft/azure-appconfiguration-ts', + description: + 'microsoft/azure-appconfiguration-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-appconfiguration-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-communication-callautomation-jav_522cc134fa', + title: 'microsoft/azure-communication-callautomation-java', + description: + 'microsoft/azure-communication-callautomation-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-communication-callautomation-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-communication-callingserver-java_6d5219ff58', + title: 'microsoft/azure-communication-callingserver-java', + description: + 'microsoft/azure-communication-callingserver-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-communication-callingserver-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-communication-chat-java_6de4eb9a7d', + title: 'microsoft/azure-communication-chat-java', + description: + 'microsoft/azure-communication-chat-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-communication-chat-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-communication-common-java_d90203c432', + title: 'microsoft/azure-communication-common-java', + description: + 'microsoft/azure-communication-common-java skill for 
Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-communication-common-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-communication-sms-java_ebf1295f34', + title: 'microsoft/azure-communication-sms-java', + description: + 'microsoft/azure-communication-sms-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-communication-sms-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-compute-batch-java_54b8396854', + title: 'microsoft/azure-compute-batch-java', + description: + 'microsoft/azure-compute-batch-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-compute-batch-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-containerregistry-py_2093cc6df0', + title: 'microsoft/azure-containerregistry-py', + description: + 'microsoft/azure-containerregistry-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-containerregistry-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-cosmos-db-py_c46d748db2', + title: 'microsoft/azure-cosmos-db-py', + description: + 'microsoft/azure-cosmos-db-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-cosmos-db-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-cosmos-java_2da42d76ab', + title: 'microsoft/azure-cosmos-java', + description: + 'microsoft/azure-cosmos-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-cosmos-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-cosmos-py_9aec5eaef1', + title: 'microsoft/azure-cosmos-py', + description: + 'microsoft/azure-cosmos-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-cosmos-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-cosmos-rust_855a366e1c', + title: 'microsoft/azure-cosmos-rust', + description: + 'microsoft/azure-cosmos-rust skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-rust/skills/azure-cosmos-rust/SKILL.md', + }, + { + id: 'skill_microsoft-azure-cosmos-ts_0b5fab766a', + title: 'microsoft/azure-cosmos-ts', + description: + 'microsoft/azure-cosmos-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-cosmos-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-data-tables-java_4165c6637e', + title: 'microsoft/azure-data-tables-java', + description: + 'microsoft/azure-data-tables-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-data-tables-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-data-tables-py_38c6f4130a', + title: 'microsoft/azure-data-tables-py', + description: + 'microsoft/azure-data-tables-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-data-tables-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-eventgrid-dotnet_b06013952c', + title: 
'microsoft/azure-eventgrid-dotnet', + description: + 'microsoft/azure-eventgrid-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-eventgrid-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-eventgrid-java_191197c7af', + title: 'microsoft/azure-eventgrid-java', + description: + 'microsoft/azure-eventgrid-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-eventgrid-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-eventgrid-py_58b283e202', + title: 'microsoft/azure-eventgrid-py', + description: + 'microsoft/azure-eventgrid-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-eventgrid-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-eventhub-dotnet_e8b18905e5', + title: 'microsoft/azure-eventhub-dotnet', + description: + 'microsoft/azure-eventhub-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-eventhub-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-eventhub-java_4e01154e5c', + title: 'microsoft/azure-eventhub-java', + description: + 'microsoft/azure-eventhub-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-eventhub-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-eventhub-py_78edab32d1', + title: 'microsoft/azure-eventhub-py', + description: + 'microsoft/azure-eventhub-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-eventhub-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-eventhub-rust_7209dbd00c', + title: 'microsoft/azure-eventhub-rust', + description: + 'microsoft/azure-eventhub-rust skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-rust/skills/azure-eventhub-rust/SKILL.md', + }, + { + id: 'skill_microsoft-azure-eventhub-ts_4f0a351363', + title: 'microsoft/azure-eventhub-ts', + description: + 'microsoft/azure-eventhub-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-eventhub-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-identity-dotnet_e54624d544', + title: 'microsoft/azure-identity-dotnet', + description: + 'microsoft/azure-identity-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-identity-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-identity-java_b3ad2381bd', + title: 'microsoft/azure-identity-java', + description: + 'microsoft/azure-identity-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-identity-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-identity-py_a2348cb052', + title: 'microsoft/azure-identity-py', + description: + 'microsoft/azure-identity-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-identity-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-identity-rust_824a377fc1', + 
title: 'microsoft/azure-identity-rust', + description: + 'microsoft/azure-identity-rust skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-rust/skills/azure-identity-rust/SKILL.md', + }, + { + id: 'skill_microsoft-azure-identity-ts_c426491457', + title: 'microsoft/azure-identity-ts', + description: + 'microsoft/azure-identity-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-identity-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-keyvault-certificates-rust_5a20f39856', + title: 'microsoft/azure-keyvault-certificates-rust', + description: + 'microsoft/azure-keyvault-certificates-rust skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-rust/skills/azure-keyvault-certificates-rust/SKILL.md', + }, + ], + // Page 7 + [ + { + id: 'skill_microsoft-azure-keyvault-keys-rust_fbf3de5b5b', + title: 'microsoft/azure-keyvault-keys-rust', + description: + 'microsoft/azure-keyvault-keys-rust skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-rust/skills/azure-keyvault-keys-rust/SKILL.md', + }, + { + id: 'skill_microsoft-azure-keyvault-keys-ts_d8b3f04957', + title: 'microsoft/azure-keyvault-keys-ts', + description: + 'microsoft/azure-keyvault-keys-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-keyvault-keys-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-keyvault-py_1e49a323b2', + title: 'microsoft/azure-keyvault-py', + description: + 
'microsoft/azure-keyvault-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-keyvault-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-keyvault-secrets-rust_f506b21c49', + title: 'microsoft/azure-keyvault-secrets-rust', + description: + 'microsoft/azure-keyvault-secrets-rust skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-rust/skills/azure-keyvault-secrets-rust/SKILL.md', + }, + { + id: 'skill_microsoft-azure-keyvault-secrets-ts_5f1dcc312e', + title: 'microsoft/azure-keyvault-secrets-ts', + description: + 'microsoft/azure-keyvault-secrets-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-keyvault-secrets-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-maps-search-dotnet_25238654f6', + title: 'microsoft/azure-maps-search-dotnet', + description: + 'microsoft/azure-maps-search-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-maps-search-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-messaging-webpubsub-java_b47165087c', + title: 'microsoft/azure-messaging-webpubsub-java', + description: + 'microsoft/azure-messaging-webpubsub-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-messaging-webpubsub-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-messaging-webpubsubservice-py_4155198826', + title: 'microsoft/azure-messaging-webpubsubservice-py', + description: + 
'microsoft/azure-messaging-webpubsubservice-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-messaging-webpubsubservice-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-mgmt-apicenter-dotnet_7521e139e5', + title: 'microsoft/azure-mgmt-apicenter-dotnet', + description: + 'microsoft/azure-mgmt-apicenter-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-mgmt-apicenter-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-mgmt-apicenter-py_c04b8fcf02', + title: 'microsoft/azure-mgmt-apicenter-py', + description: + 'microsoft/azure-mgmt-apicenter-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-mgmt-apicenter-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-mgmt-apimanagement-dotnet_afe4494a1e', + title: 'microsoft/azure-mgmt-apimanagement-dotnet', + description: + 'microsoft/azure-mgmt-apimanagement-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-mgmt-apimanagement-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-mgmt-apimanagement-py_a5e7a742fc', + title: 'microsoft/azure-mgmt-apimanagement-py', + description: + 'microsoft/azure-mgmt-apimanagement-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-mgmt-apimanagement-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-mgmt-applicationinsights-dotnet_7fac4687f2', + title: 'microsoft/azure-mgmt-applicationinsights-dotnet', 
+ description: + 'microsoft/azure-mgmt-applicationinsights-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-mgmt-applicationinsights-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-mgmt-arizeaiobservabilityeval-do_392aad1cec', + title: 'microsoft/azure-mgmt-arizeaiobservabilityeval-dotnet', + description: + 'microsoft/azure-mgmt-arizeaiobservabilityeval-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-mgmt-arizeaiobservabilityeval-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-mgmt-botservice-dotnet_233b06deb1', + title: 'microsoft/azure-mgmt-botservice-dotnet', + description: + 'microsoft/azure-mgmt-botservice-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-mgmt-botservice-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-mgmt-botservice-py_836fa80393', + title: 'microsoft/azure-mgmt-botservice-py', + description: + 'microsoft/azure-mgmt-botservice-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-mgmt-botservice-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-mgmt-fabric-dotnet_8902b817be', + title: 'microsoft/azure-mgmt-fabric-dotnet', + description: + 'microsoft/azure-mgmt-fabric-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-mgmt-fabric-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-mgmt-fabric-py_ec43c4f040', + title: 
'microsoft/azure-mgmt-fabric-py', + description: + 'microsoft/azure-mgmt-fabric-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-mgmt-fabric-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-mgmt-mongodbatlas-dotnet_0f53bb1a22', + title: 'microsoft/azure-mgmt-mongodbatlas-dotnet', + description: + 'microsoft/azure-mgmt-mongodbatlas-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-mgmt-mongodbatlas-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-mgmt-weightsandbiases-dotnet_bd7d238ed0', + title: 'microsoft/azure-mgmt-weightsandbiases-dotnet', + description: + 'microsoft/azure-mgmt-weightsandbiases-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-mgmt-weightsandbiases-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-microsoft-playwright-testing-ts_8c1e254fb2', + title: 'microsoft/azure-microsoft-playwright-testing-ts', + description: + 'microsoft/azure-microsoft-playwright-testing-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-microsoft-playwright-testing-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-monitor-ingestion-java_9ec3b76c40', + title: 'microsoft/azure-monitor-ingestion-java', + description: + 'microsoft/azure-monitor-ingestion-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-monitor-ingestion-java/SKILL.md', + }, + { + id: 
'skill_microsoft-azure-monitor-ingestion-py_d38d2857a6', + title: 'microsoft/azure-monitor-ingestion-py', + description: + 'microsoft/azure-monitor-ingestion-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-monitor-ingestion-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-monitor-opentelemetry-exporter-j_5644e431f5', + title: 'microsoft/azure-monitor-opentelemetry-exporter-java', + description: + 'microsoft/azure-monitor-opentelemetry-exporter-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-monitor-opentelemetry-exporter-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-monitor-opentelemetry-exporter-p_4c448fa8b9', + title: 'microsoft/azure-monitor-opentelemetry-exporter-py', + description: + 'microsoft/azure-monitor-opentelemetry-exporter-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-monitor-opentelemetry-exporter-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-monitor-opentelemetry-py_b0a0fa9e8d', + title: 'microsoft/azure-monitor-opentelemetry-py', + description: + 'microsoft/azure-monitor-opentelemetry-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-monitor-opentelemetry-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-monitor-opentelemetry-ts_c0415b10ea', + title: 'microsoft/azure-monitor-opentelemetry-ts', + description: + 'microsoft/azure-monitor-opentelemetry-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-monitor-opentelemetry-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-monitor-query-java_7df593b316', + title: 'microsoft/azure-monitor-query-java', + description: + 'microsoft/azure-monitor-query-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-monitor-query-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-monitor-query-py_e2ec5175ff', + title: 'microsoft/azure-monitor-query-py', + description: + 'microsoft/azure-monitor-query-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-monitor-query-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-postgres-ts_c0090e0bf9', + title: 'microsoft/azure-postgres-ts', + description: + 'microsoft/azure-postgres-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-postgres-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-resource-manager-cosmosdb-dotnet_faec5d8938', + title: 'microsoft/azure-resource-manager-cosmosdb-dotnet', + description: + 'microsoft/azure-resource-manager-cosmosdb-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-resource-manager-cosmosdb-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-resource-manager-durabletask-dot_de025d71de', + title: 'microsoft/azure-resource-manager-durabletask-dotnet', + description: + 'microsoft/azure-resource-manager-durabletask-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-resource-manager-durabletask-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-resource-manager-mysql-dotnet_a8956cddc4', + title: 'microsoft/azure-resource-manager-mysql-dotnet', + description: + 'microsoft/azure-resource-manager-mysql-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-resource-manager-mysql-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-resource-manager-playwright-dotn_9c3f2691c8', + title: 'microsoft/azure-resource-manager-playwright-dotnet', + description: + 'microsoft/azure-resource-manager-playwright-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-resource-manager-playwright-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-resource-manager-postgresql-dotn_5e205fd0cf', + title: 'microsoft/azure-resource-manager-postgresql-dotnet', + description: + 'microsoft/azure-resource-manager-postgresql-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-resource-manager-postgresql-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-resource-manager-redis-dotnet_b84b682320', + title: 'microsoft/azure-resource-manager-redis-dotnet', + description: + 'microsoft/azure-resource-manager-redis-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-resource-manager-redis-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-resource-manager-sql-dotnet_a9b19c9e09', + title: 
'microsoft/azure-resource-manager-sql-dotnet', + description: + 'microsoft/azure-resource-manager-sql-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-resource-manager-sql-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-search-documents-dotnet_40edd700c5', + title: 'microsoft/azure-search-documents-dotnet', + description: + 'microsoft/azure-search-documents-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-search-documents-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-search-documents-py_0a5fd88c2d', + title: 'microsoft/azure-search-documents-py', + description: + 'microsoft/azure-search-documents-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-search-documents-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-search-documents-ts_aa93aa9c79', + title: 'microsoft/azure-search-documents-ts', + description: + 'microsoft/azure-search-documents-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-search-documents-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-security-keyvault-keys-dotnet_4e3d4be8b6', + title: 'microsoft/azure-security-keyvault-keys-dotnet', + description: + 'microsoft/azure-security-keyvault-keys-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-security-keyvault-keys-dotnet/SKILL.md', + }, + { + id: 
'skill_microsoft-azure-security-keyvault-keys-java_44a78e1058', + title: 'microsoft/azure-security-keyvault-keys-java', + description: + 'microsoft/azure-security-keyvault-keys-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-security-keyvault-keys-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-security-keyvault-secrets-java_ab8dbb2bde', + title: 'microsoft/azure-security-keyvault-secrets-java', + description: + 'microsoft/azure-security-keyvault-secrets-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-security-keyvault-secrets-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-servicebus-dotnet_303b046142', + title: 'microsoft/azure-servicebus-dotnet', + description: + 'microsoft/azure-servicebus-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/azure-servicebus-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-azure-servicebus-py_51604e3bd2', + title: 'microsoft/azure-servicebus-py', + description: + 'microsoft/azure-servicebus-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-servicebus-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-servicebus-ts_ae305eaebb', + title: 'microsoft/azure-servicebus-ts', + description: + 'microsoft/azure-servicebus-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-servicebus-ts/SKILL.md', + }, + { + id: 
'skill_microsoft-azure-speech-to-text-rest-py_09ced79a26', + title: 'microsoft/azure-speech-to-text-rest-py', + description: + 'microsoft/azure-speech-to-text-rest-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-speech-to-text-rest-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-storage-blob-java_a7c8429246', + title: 'microsoft/azure-storage-blob-java', + description: + 'microsoft/azure-storage-blob-java skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-java/skills/azure-storage-blob-java/SKILL.md', + }, + { + id: 'skill_microsoft-azure-storage-blob-py_7b82e4c22f', + title: 'microsoft/azure-storage-blob-py', + description: + 'microsoft/azure-storage-blob-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-storage-blob-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-storage-blob-rust_4cc6af4637', + title: 'microsoft/azure-storage-blob-rust', + description: + 'microsoft/azure-storage-blob-rust skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-rust/skills/azure-storage-blob-rust/SKILL.md', + }, + { + id: 'skill_microsoft-azure-storage-blob-ts_34ce151ff6', + title: 'microsoft/azure-storage-blob-ts', + description: + 'microsoft/azure-storage-blob-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-storage-blob-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-storage-file-datalake-py_a65fbc2ce8', + title: 
'microsoft/azure-storage-file-datalake-py', + description: + 'microsoft/azure-storage-file-datalake-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-storage-file-datalake-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-storage-file-share-py_5d618841bf', + title: 'microsoft/azure-storage-file-share-py', + description: + 'microsoft/azure-storage-file-share-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-storage-file-share-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-storage-file-share-ts_37ac343f66', + title: 'microsoft/azure-storage-file-share-ts', + description: + 'microsoft/azure-storage-file-share-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-storage-file-share-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-storage-queue-py_8be7bc34f0', + title: 'microsoft/azure-storage-queue-py', + description: + 'microsoft/azure-storage-queue-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-storage-queue-py/SKILL.md', + }, + { + id: 'skill_microsoft-azure-storage-queue-ts_70cc67ff9b', + title: 'microsoft/azure-storage-queue-ts', + description: + 'microsoft/azure-storage-queue-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-storage-queue-ts/SKILL.md', + }, + { + id: 'skill_microsoft-azure-web-pubsub-ts_ec8cf7c269', + title: 'microsoft/azure-web-pubsub-ts', + description: + 
'microsoft/azure-web-pubsub-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/azure-web-pubsub-ts/SKILL.md', + }, + { + id: 'skill_microsoft-cloud-solution-architect_236f64c97d', + title: 'microsoft/cloud-solution-architect', + description: + 'microsoft/cloud-solution-architect skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/skills/cloud-solution-architect/SKILL.md', + }, + { + id: 'skill_microsoft-continual-learning_75a9fc0f88', + title: 'microsoft/continual-learning', + description: + 'microsoft/continual-learning skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/skills/continual-learning/SKILL.md', + }, + { + id: 'skill_microsoft-copilot-sdk_519d2a69ab', + title: 'microsoft/copilot-sdk', + description: + 'microsoft/copilot-sdk skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/skills/copilot-sdk/SKILL.md', + }, + { + id: 'skill_microsoft-entra-agent-id_45f9cb0725', + title: 'microsoft/entra-agent-id', + description: + 'microsoft/entra-agent-id skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/skills/entra-agent-id/SKILL.md', + }, + { + id: 'skill_microsoft-fastapi-router-py_00c366fa70', + title: 'microsoft/fastapi-router-py', + description: + 'microsoft/fastapi-router-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/fastapi-router-py/SKILL.md', + }, + { + id: 'skill_microsoft-frontend-design-review_f05d6ea85e', + title: 
'microsoft/frontend-design-review', + description: + 'microsoft/frontend-design-review skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/skills/frontend-design-review/SKILL.md', + }, + { + id: 'skill_microsoft-frontend-ui-dark-ts_96d76c2e97', + title: 'microsoft/frontend-ui-dark-ts', + description: + 'microsoft/frontend-ui-dark-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/frontend-ui-dark-ts/SKILL.md', + }, + { + id: 'skill_microsoft-github-issue-creator_98ea5210e0', + title: 'microsoft/github-issue-creator', + description: + 'microsoft/github-issue-creator skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/skills/github-issue-creator/SKILL.md', + }, + { + id: 'skill_microsoft-m365-agents-dotnet_844a50392c', + title: 'microsoft/m365-agents-dotnet', + description: + 'microsoft/m365-agents-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/m365-agents-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-m365-agents-py_7ba7b881ca', + title: 'microsoft/m365-agents-py', + description: + 'microsoft/m365-agents-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/m365-agents-py/SKILL.md', + }, + { + id: 'skill_microsoft-m365-agents-ts_f9c0e59b9c', + title: 'microsoft/m365-agents-ts', + description: + 'microsoft/m365-agents-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/m365-agents-ts/SKILL.md', + }, + { + id: 'skill_microsoft-mcp-builder_815d90c286', + title: 'microsoft/mcp-builder', + description: + 'microsoft/mcp-builder skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/skills/mcp-builder/SKILL.md', + }, + { + id: 'skill_microsoft-microsoft-azure-webjobs-extensions-aut_03645acc75', + title: 'microsoft/microsoft-azure-webjobs-extensions-authentication-events-dotnet', + description: + 'microsoft/microsoft-azure-webjobs-extensions-authentication-events-dotnet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-dotnet/skills/microsoft-azure-webjobs-extensions-authentication-events-dotnet/SKILL.md', + }, + { + id: 'skill_microsoft-podcast-generation_add4eb4703', + title: 'microsoft/podcast-generation', + description: + 'microsoft/podcast-generation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/skills/podcast-generation/SKILL.md', + }, + { + id: 'skill_microsoft-pydantic-models-py_3379692928', + title: 'microsoft/pydantic-models-py', + description: + 'microsoft/pydantic-models-py skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/pydantic-models-py/SKILL.md', + }, + { + id: 'skill_microsoft-react-flow-node-ts_8454ff2e78', + title: 'microsoft/react-flow-node-ts', + description: + 'microsoft/react-flow-node-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/react-flow-node-ts/SKILL.md', + 
}, + { + id: 'skill_microsoft-skill-creator_6064e7f051', + title: 'microsoft/skill-creator', + description: + 'microsoft/skill-creator skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/skills/skill-creator/SKILL.md', + }, + { + id: 'skill_microsoft-zustand-store-ts_9d8b51f6eb', + title: 'microsoft/zustand-store-ts', + description: + 'microsoft/zustand-store-ts skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-typescript/skills/zustand-store-ts/SKILL.md', + }, + { + id: 'skill_minimax-ai-android-native-dev_188da7cc49', + title: 'MiniMax-AI/android-native-dev', + description: + 'MiniMax-AI/android-native-dev skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/MiniMax-AI/skills/blob/main/skills/android-native-dev/SKILL.md', + }, + { + id: 'skill_minimax-ai-frontend-dev_3d6b4821d1', + title: 'MiniMax-AI/frontend-dev', + description: + 'MiniMax-AI/frontend-dev skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/MiniMax-AI/skills/blob/main/skills/frontend-dev/SKILL.md', + }, + { + id: 'skill_minimax-ai-fullstack-dev_b6231f3895', + title: 'MiniMax-AI/fullstack-dev', + description: + 'MiniMax-AI/fullstack-dev skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/MiniMax-AI/skills/blob/main/skills/fullstack-dev/SKILL.md', + }, + { + id: 'skill_minimax-ai-gif-sticker-maker_e917bb711c', + title: 'MiniMax-AI/gif-sticker-maker', + description: + 'MiniMax-AI/gif-sticker-maker skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/MiniMax-AI/skills/blob/main/skills/gif-sticker-maker/SKILL.md', + }, + { + id: 
'skill_minimax-ai-ios-application-dev_5afc1ea6b1', + title: 'MiniMax-AI/ios-application-dev', + description: + 'MiniMax-AI/ios-application-dev skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/MiniMax-AI/skills/blob/main/skills/ios-application-dev/SKILL.md', + }, + { + id: 'skill_minimax-ai-minimax-docx_37c1423291', + title: 'MiniMax-AI/minimax-docx', + description: + 'MiniMax-AI/minimax-docx skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/MiniMax-AI/skills/blob/main/skills/minimax-docx/SKILL.md', + }, + { + id: 'skill_minimax-ai-minimax-pdf_172cd37a51', + title: 'MiniMax-AI/minimax-pdf', + description: + 'MiniMax-AI/minimax-pdf skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/MiniMax-AI/skills/blob/main/skills/minimax-pdf/SKILL.md', + }, + { + id: 'skill_minimax-ai-minimax-xlsx_cf204eeb6f', + title: 'MiniMax-AI/minimax-xlsx', + description: + 'MiniMax-AI/minimax-xlsx skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/MiniMax-AI/skills/blob/main/skills/minimax-xlsx/SKILL.md', + }, + { + id: 'skill_minimax-ai-pptx-generator_931003872c', + title: 'MiniMax-AI/pptx-generator', + description: + 'MiniMax-AI/pptx-generator skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/MiniMax-AI/skills/blob/main/skills/pptx-generator/SKILL.md', + }, + { + id: 'skill_minimax-ai-shader-dev_965dcf8236', + title: 'MiniMax-AI/shader-dev', + description: + 'MiniMax-AI/shader-dev skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/MiniMax-AI/skills/blob/main/skills/shader-dev/SKILL.md', + }, + { + id: 'skill_moltdj_c949f1ed6f', + title: 'moltdj', + description: 'moltdj skill for Claude workflows from 
BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/polaroteam/moltdj-skill/blob/main/SKILL.md', + }, + { + id: 'skill_more-io-apple-bridges_6abe3d3493', + title: 'more-io/apple-bridges', + description: + 'more-io/apple-bridges skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/more-io/claude-apple-bridges/blob/main/SKILL.md', + }, + { + id: 'skill_motion_0100ca6070', + title: 'motion', + description: 'motion skill for Claude workflows from onmax/nuxt-skills.', + kind: 'skill', + link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/motion/SKILL.md', + }, + { + id: 'skill_move-code-quality-skill_c05e68f549', + title: 'move-code-quality-skill', + description: + 'move-code-quality-skill skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/1NickPappas/move-code-quality-skill/blob/main/SKILL.md', + }, + { + id: 'skill_mssql_d44f28acc4', + title: 'mssql', + description: 'mssql skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/sanjay3290/ai-skills/blob/main/skills/mssql/SKILL.md', + }, + { + id: 'skill_mukul975-anthropic-cybersecurity-skills_0c3aed5ce8', + title: 'mukul975/Anthropic-Cybersecurity-Skills', + description: + 'mukul975/Anthropic-Cybersecurity-Skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/mukul975/Anthropic-Cybersecurity-Skills/blob/main/SKILL.md', + }, + { + id: 'skill_muratcankoylan-context-compression_1cf304b7dd', + title: 'muratcankoylan/context-compression', + description: + 'muratcankoylan/context-compression skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/blob/main/skills/context-compression/SKILL.md', + }, + { + id: 
'skill_muratcankoylan-context-degradation_b97d170571', + title: 'muratcankoylan/context-degradation', + description: + 'muratcankoylan/context-degradation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/blob/main/skills/context-degradation/SKILL.md', + }, + { + id: 'skill_muratcankoylan-context-fundamentals_2dbf14f256', + title: 'muratcankoylan/context-fundamentals', + description: + 'muratcankoylan/context-fundamentals skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/blob/main/skills/context-fundamentals/SKILL.md', + }, + { + id: 'skill_muratcankoylan-context-optimization_d208d35cd4', + title: 'muratcankoylan/context-optimization', + description: + 'muratcankoylan/context-optimization skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/blob/main/skills/context-optimization/SKILL.md', + }, + { + id: 'skill_muratcankoylan-evaluation_6e6b81e14e', + title: 'muratcankoylan/evaluation', + description: + 'muratcankoylan/evaluation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/blob/main/skills/evaluation/SKILL.md', + }, + { + id: 'skill_muratcankoylan-memory-systems_79684035fe', + title: 'muratcankoylan/memory-systems', + description: + 'muratcankoylan/memory-systems skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/blob/main/skills/memory-systems/SKILL.md', + }, + { + id: 'skill_muratcankoylan-multi-agent-patterns_7cfc2cf53a', + title: 'muratcankoylan/multi-agent-patterns', + description: + 
'muratcankoylan/multi-agent-patterns skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/blob/main/skills/multi-agent-patterns/SKILL.md', + }, + { + id: 'skill_muratcankoylan-tool-design_44e6a4df50', + title: 'muratcankoylan/tool-design', + description: + 'muratcankoylan/tool-design skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/blob/main/skills/tool-design/SKILL.md', + }, + { + id: 'skill_muthuishere-hand-drawn-diagrams_2bcca3162e', + title: 'muthuishere/hand-drawn-diagrams', + description: + 'muthuishere/hand-drawn-diagrams skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/muthuishere/hand-drawn-diagrams/blob/main/SKILL.md', + }, + ], + // Page 8 + [ + { + id: 'skill_mysql_966e56ee10', + title: 'mysql', + description: 'mysql skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/sanjay3290/ai-skills/blob/main/skills/mysql/SKILL.md', + }, + { + id: 'skill_neolabhq-code-review_7d21baa163', + title: 'NeoLabHQ/code-review', + description: + 'NeoLabHQ/code-review skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/NeoLabHQ/context-engineering-kit/blob/master/plugins/code-review/SKILL.md', + }, + { + id: 'skill_neolabhq-ddd_ef8278fd4e', + title: 'NeoLabHQ/ddd', + description: 'NeoLabHQ/ddd skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/NeoLabHQ/context-engineering-kit/blob/master/plugins/ddd/SKILL.md', + }, + { + id: 'skill_neolabhq-kaizen_01289f3637', + title: 'NeoLabHQ/kaizen', + description: + 'NeoLabHQ/kaizen skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/NeoLabHQ/context-engineering-kit/blob/master/plugins/kaizen/SKILL.md', + }, + { + id: 'skill_neolabhq-prompt-engineering_054cff736e', + title: 'NeoLabHQ/prompt-engineering', + description: + 'NeoLabHQ/prompt-engineering skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/NeoLabHQ/context-engineering-kit/blob/master/plugins/customaize-agent/skills/prompt-engineering/SKILL.md', + }, + { + id: 'skill_neolabhq-reflexion_22369b8642', + title: 'NeoLabHQ/reflexion', + description: + 'NeoLabHQ/reflexion skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/NeoLabHQ/context-engineering-kit/blob/master/plugins/reflexion/SKILL.md', + }, + { + id: 'skill_neolabhq-sadd_a85ca9fb0b', + title: 'NeoLabHQ/sadd', + description: 'NeoLabHQ/sadd skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/NeoLabHQ/context-engineering-kit/blob/master/plugins/sadd/SKILL.md', + }, + { + id: 'skill_neolabhq-sdd_bb31434767', + title: 'NeoLabHQ/sdd', + description: 'NeoLabHQ/sdd skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/NeoLabHQ/context-engineering-kit/blob/master/plugins/sdd/SKILL.md', + }, + { + id: 'skill_neolabhq-write-concisely_143e522188', + title: 'NeoLabHQ/write-concisely', + description: + 'NeoLabHQ/write-concisely skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/NeoLabHQ/context-engineering-kit/blob/master/plugins/docs/skills/write-concisely/SKILL.md', + }, + { + id: 'skill_neondatabase-claimable-postgres_13f27f8c0a', + title: 'neondatabase/claimable-postgres', + description: + 'neondatabase/claimable-postgres skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/neondatabase/agent-skills/blob/main/skills/claimable-postgres/SKILL.md', + }, + { + id: 'skill_neondatabase-neon-postgres_7dcf26a717', + title: 'neondatabase/neon-postgres', + description: + 'neondatabase/neon-postgres skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/neondatabase/agent-skills/blob/main/skills/neon-postgres/SKILL.md', + }, + { + id: 'skill_neondatabase-neon-postgres-egress-optimizer_6338ce546c', + title: 'neondatabase/neon-postgres-egress-optimizer', + description: + 'neondatabase/neon-postgres-egress-optimizer skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/neondatabase/agent-skills/blob/main/skills/neon-postgres-egress-optimizer/SKILL.md', + }, + { + id: 'skill_netlify-netlify-ai-gateway_cbe5dd6b3d', + title: 'netlify/netlify-ai-gateway', + description: + 'netlify/netlify-ai-gateway skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/netlify/context-and-tools/blob/main/skills/netlify-ai-gateway/SKILL.md', + }, + { + id: 'skill_netlify-netlify-blobs_0112f8e957', + title: 'netlify/netlify-blobs', + description: + 'netlify/netlify-blobs skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/netlify/context-and-tools/blob/main/skills/netlify-blobs/SKILL.md', + }, + { + id: 'skill_netlify-netlify-caching_77e4c1635a', + title: 'netlify/netlify-caching', + description: + 'netlify/netlify-caching skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/netlify/context-and-tools/blob/main/skills/netlify-caching/SKILL.md', + }, + { + id: 'skill_netlify-netlify-cli-and-deploy_5049bf1ff8', + title: 'netlify/netlify-cli-and-deploy', + description: + 'netlify/netlify-cli-and-deploy skill for Claude workflows from VoltAgent/awesome-agent-skills.', + 
kind: 'skill', + link: 'https://github.com/netlify/context-and-tools/blob/main/skills/netlify-cli-and-deploy/SKILL.md', + }, + { + id: 'skill_netlify-netlify-config_e656136926', + title: 'netlify/netlify-config', + description: + 'netlify/netlify-config skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/netlify/context-and-tools/blob/main/skills/netlify-config/SKILL.md', + }, + { + id: 'skill_netlify-netlify-db_74acd9b3a6', + title: 'netlify/netlify-db', + description: + 'netlify/netlify-db skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/netlify/context-and-tools/blob/main/skills/netlify-db/SKILL.md', + }, + { + id: 'skill_netlify-netlify-deploy_8910a359a7', + title: 'netlify/netlify-deploy', + description: + 'netlify/netlify-deploy skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/netlify/context-and-tools/blob/main/skills/netlify-deploy/SKILL.md', + }, + { + id: 'skill_netlify-netlify-edge-functions_c83dd98731', + title: 'netlify/netlify-edge-functions', + description: + 'netlify/netlify-edge-functions skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/netlify/context-and-tools/blob/main/skills/netlify-edge-functions/SKILL.md', + }, + { + id: 'skill_netlify-netlify-forms_20648d1243', + title: 'netlify/netlify-forms', + description: + 'netlify/netlify-forms skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/netlify/context-and-tools/blob/main/skills/netlify-forms/SKILL.md', + }, + { + id: 'skill_netlify-netlify-frameworks_9627e7a0d3', + title: 'netlify/netlify-frameworks', + description: + 'netlify/netlify-frameworks skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/netlify/context-and-tools/blob/main/skills/netlify-frameworks/SKILL.md', + }, + { + id: 'skill_netlify-netlify-functions_dddb35dde7', + title: 'netlify/netlify-functions', + description: + 'netlify/netlify-functions skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/netlify/context-and-tools/blob/main/skills/netlify-functions/SKILL.md', + }, + { + id: 'skill_netlify-netlify-image-cdn_5cfcfb9f36', + title: 'netlify/netlify-image-cdn', + description: + 'netlify/netlify-image-cdn skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/netlify/context-and-tools/blob/main/skills/netlify-image-cdn/SKILL.md', + }, + { + id: 'skill_nextlevelbuilder-ui-ux-pro-max-skill_b66d7c37a9', + title: 'nextlevelbuilder/ui-ux-pro-max-skill', + description: + 'nextlevelbuilder/ui-ux-pro-max-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/nextlevelbuilder/ui-ux-pro-max-skill/blob/main/SKILL.md', + }, + { + id: 'skill_noizai-skills_f4c561ae3b', + title: 'NoizAI/skills', + description: + 'NoizAI/skills skill for Claude workflows from BehiSecc/awesome-claude-skills, VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/NoizAI/skills/blob/main/SKILL.md', + }, + { + id: 'skill_notebooklm_6f545133ef', + title: 'notebooklm', + description: 'notebooklm skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/sanjay3290/ai-skills/blob/main/skills/notebooklm/SKILL.md', + }, + { + id: 'skill_notion-cookbook_a068bb7013', + title: 'notion-cookbook', + description: + 'notion-cookbook skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/makenotion/notion-cookbook/blob/main/skills/claude/SKILL.md', + }, + { + id: 
'skill_notmyself-claude-win11-speckit-update-skill_157294b0c8', + title: 'NotMyself/claude-win11-speckit-update-skill', + description: + 'NotMyself/claude-win11-speckit-update-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/NotMyself/claude-win11-speckit-update-skill/blob/main/SKILL.md', + }, + { + id: 'skill_nuxt_20679c5671', + title: 'nuxt', + description: 'nuxt skill for Claude workflows from onmax/nuxt-skills.', + kind: 'skill', + link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/nuxt/SKILL.md', + }, + { + id: 'skill_nuxt-better-auth_011933dd43', + title: 'nuxt-better-auth', + description: 'nuxt-better-auth skill for Claude workflows from onmax/nuxt-skills.', + kind: 'skill', + link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/nuxt-better-auth/SKILL.md', + }, + { + id: 'skill_nuxt-content_3301a10923', + title: 'nuxt-content', + description: 'nuxt-content skill for Claude workflows from onmax/nuxt-skills.', + kind: 'skill', + link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/nuxt-content/SKILL.md', + }, + { + id: 'skill_nuxt-modules_c07efa15d5', + title: 'nuxt-modules', + description: 'nuxt-modules skill for Claude workflows from onmax/nuxt-skills.', + kind: 'skill', + link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/nuxt-modules/SKILL.md', + }, + { + id: 'skill_nuxt-seo_ae88328481', + title: 'nuxt-seo', + description: 'nuxt-seo skill for Claude workflows from onmax/nuxt-skills.', + kind: 'skill', + link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/nuxt-seo/SKILL.md', + }, + { + id: 'skill_nuxt-ui_e51f1876f5', + title: 'nuxt-ui', + description: 'nuxt-ui skill for Claude workflows from onmax/nuxt-skills.', + kind: 'skill', + link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/nuxt-ui/SKILL.md', + }, + { + id: 'skill_nuxthub_e90901b023', + title: 'nuxthub', + description: 'nuxthub skill for Claude workflows from onmax/nuxt-skills.', 
+ kind: 'skill', + link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/nuxthub/SKILL.md', + }, + { + id: 'command_obra-commands_83858fae0e', + title: 'obra/commands', + description: + 'obra/commands command for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'command', + link: 'https://github.com/obra/superpowers/blob/main/skills/commands/SKILL.md', + }, + { + id: 'skill_obra-condition-based-waiting_7a895f138d', + title: 'obra/condition-based-waiting', + description: + 'obra/condition-based-waiting skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/condition-based-waiting/SKILL.md', + }, + { + id: 'skill_obra-dispatching-parallel-agents_97f1ad148e', + title: 'obra/dispatching-parallel-agents', + description: + 'obra/dispatching-parallel-agents skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/dispatching-parallel-agents/SKILL.md', + }, + { + id: 'skill_obra-executing-plans_b65c2cffe1', + title: 'obra/executing-plans', + description: + 'obra/executing-plans skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/executing-plans/SKILL.md', + }, + { + id: 'skill_obra-receiving-code-review_9dd0537832', + title: 'obra/receiving-code-review', + description: + 'obra/receiving-code-review skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/receiving-code-review/SKILL.md', + }, + { + id: 'skill_obra-requesting-code-review_a4812899a1', + title: 'obra/requesting-code-review', + description: + 'obra/requesting-code-review skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/obra/superpowers/blob/main/skills/requesting-code-review/SKILL.md', + }, + { + id: 'skill_obra-root-cause-tracing_5052784211', + title: 'obra/root-cause-tracing', + description: + 'obra/root-cause-tracing skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/root-cause-tracing/SKILL.md', + }, + { + id: 'skill_obra-sharing-skills_0572b6dc87', + title: 'obra/sharing-skills', + description: + 'obra/sharing-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/sharing-skills/SKILL.md', + }, + { + id: 'agent_obra-subagent-driven-development_b6b2a3f8ac', + title: 'obra/subagent-driven-development', + description: + 'obra/subagent-driven-development agent for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'agent', + link: 'https://github.com/obra/superpowers/blob/main/skills/subagent-driven-development/SKILL.md', + }, + { + id: 'skill_obra-superpowers-lab_b3a909020d', + title: 'obra/superpowers-lab', + description: + 'obra/superpowers-lab skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers-lab/blob/main/SKILL.md', + }, + { + id: 'skill_obra-systematic-debugging_a703bdbec0', + title: 'obra/systematic-debugging', + description: + 'obra/systematic-debugging skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/systematic-debugging/SKILL.md', + }, + { + id: 'skill_obra-test-driven-development_7038c27281', + title: 'obra/test-driven-development', + description: + 'obra/test-driven-development skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/test-driven-development/SKILL.md', + }, + { + id: 
'skill_obra-testing-anti-patterns_f90216ace8', + title: 'obra/testing-anti-patterns', + description: + 'obra/testing-anti-patterns skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/testing-anti-patterns/SKILL.md', + }, + { + id: 'skill_obra-using-git-worktrees_22fe7c2a32', + title: 'obra/using-git-worktrees', + description: + 'obra/using-git-worktrees skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/using-git-worktrees/SKILL.md', + }, + { + id: 'skill_obra-using-superpowers_36fc574a0f', + title: 'obra/using-superpowers', + description: + 'obra/using-superpowers skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/using-superpowers/SKILL.md', + }, + { + id: 'skill_obra-verification-before-completion_dbfc6d4e57', + title: 'obra/verification-before-completion', + description: + 'obra/verification-before-completion skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/verification-before-completion/SKILL.md', + }, + { + id: 'skill_obra-writing-plans_7b5f863cdf', + title: 'obra/writing-plans', + description: + 'obra/writing-plans skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/writing-plans/SKILL.md', + }, + { + id: 'skill_obra-writing-skills_b300f918e7', + title: 'obra/writing-skills', + description: + 'obra/writing-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/obra/superpowers/blob/main/skills/writing-skills/SKILL.md', + }, + { + id: 'skill_octav-api-skill_5b459a8304', + title: 'octav-api-skill', + description: + 'octav-api-skill skill 
for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/Octav-Labs/octav-api-skill/blob/main/SKILL.md', + }, + { + id: 'skill_ognjengt-founder-skills_48f06e6cc1', + title: 'ognjengt/founder-skills', + description: + 'ognjengt/founder-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/ognjengt/founder-skills/blob/main/SKILL.md', + }, + { + id: 'skill_oiloil-ui-ux-guide_f5448e746c', + title: 'oiloil-ui-ux-guide', + description: + 'oiloil-ui-ux-guide skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/oil-oil/oiloil-ui-ux-guide/blob/main/SKILL.md', + }, + { + id: 'skill_omkamal-pypict-skill_48418c82cc', + title: 'omkamal/pypict-skill', + description: + 'omkamal/pypict-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/omkamal/pypict-claude-skill/blob/main/SKILL.md', + }, + { + id: 'skill_op7418-nanobanana-ppt-skills_f7c494fd73', + title: 'op7418/NanoBanana-PPT-Skills', + description: + 'op7418/NanoBanana-PPT-Skills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/op7418/NanoBanana-PPT-Skills/blob/main/SKILL.md', + }, + { + id: 'skill_op7418-youtube-clipper-skill_c3bef67ca1', + title: 'op7418/Youtube-clipper-skill', + description: + 'op7418/Youtube-clipper-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/op7418/Youtube-clipper-skill/blob/main/SKILL.md', + }, + { + id: 'skill_open-an-issue_7efefa837a', + title: 'open an issue', + description: 'open an issue skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/VoltAgent/awesome-agent-skills/issues', + }, + { + id: 'skill_openai-aspnet-core_d2e5a38901', + title: 'openai/aspnet-core', + description: + 
'openai/aspnet-core skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/aspnet-core/SKILL.md', + }, + { + id: 'skill_openai-chatgpt-apps_e54a4ff0bd', + title: 'openai/chatgpt-apps', + description: + 'openai/chatgpt-apps skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/chatgpt-apps/SKILL.md', + }, + { + id: 'skill_openai-cloudflare-deploy_bfb721fca8', + title: 'openai/cloudflare-deploy', + description: + 'openai/cloudflare-deploy skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/cloudflare-deploy/SKILL.md', + }, + { + id: 'skill_openai-develop-web-game_f3911a7f17', + title: 'openai/develop-web-game', + description: + 'openai/develop-web-game skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/develop-web-game/SKILL.md', + }, + { + id: 'skill_openai-doc_d8b559c292', + title: 'openai/doc', + description: 'openai/doc skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/doc/SKILL.md', + }, + { + id: 'skill_openai-figma_9055e9d13c', + title: 'openai/figma', + description: 'openai/figma skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/figma/SKILL.md', + }, + { + id: 'skill_openai-figma-code-connect-components_de0c483bcc', + title: 'openai/figma-code-connect-components', + description: + 'openai/figma-code-connect-components skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/openai/skills/blob/main/skills/.curated/figma-code-connect-components/SKILL.md', + }, + { + id: 'skill_openai-figma-create-design-system-rules_774e3eec76', + title: 'openai/figma-create-design-system-rules', + description: + 'openai/figma-create-design-system-rules skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/figma-create-design-system-rules/SKILL.md', + }, + { + id: 'skill_openai-figma-create-new-file_ba197a46db', + title: 'openai/figma-create-new-file', + description: + 'openai/figma-create-new-file skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/figma-create-new-file/SKILL.md', + }, + { + id: 'skill_openai-figma-generate-design_2444f105cd', + title: 'openai/figma-generate-design', + description: + 'openai/figma-generate-design skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/figma-generate-design/SKILL.md', + }, + { + id: 'skill_openai-figma-generate-library_06dc80735e', + title: 'openai/figma-generate-library', + description: + 'openai/figma-generate-library skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/figma-generate-library/SKILL.md', + }, + { + id: 'skill_openai-figma-implement-design_fe3b1aa331', + title: 'openai/figma-implement-design', + description: + 'openai/figma-implement-design skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/figma-implement-design/SKILL.md', + }, + { + id: 'skill_openai-figma-use_c0602f864f', + title: 'openai/figma-use', + description: + 'openai/figma-use skill for Claude workflows from 
VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/figma-use/SKILL.md', + }, + { + id: 'skill_openai-frontend-skill_d152b3bc67', + title: 'openai/frontend-skill', + description: + 'openai/frontend-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/frontend-skill/SKILL.md', + }, + { + id: 'skill_openai-gh-address-comments_fdbbce5b28', + title: 'openai/gh-address-comments', + description: + 'openai/gh-address-comments skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/gh-address-comments/SKILL.md', + }, + { + id: 'skill_openai-gh-fix-ci_9ca1a74bb5', + title: 'openai/gh-fix-ci', + description: + 'openai/gh-fix-ci skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/gh-fix-ci/SKILL.md', + }, + { + id: 'skill_openai-imagegen_eb4037a9c1', + title: 'openai/imagegen', + description: + 'openai/imagegen skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/imagegen/SKILL.md', + }, + { + id: 'skill_openai-jupyter-notebook_d44211320a', + title: 'openai/jupyter-notebook', + description: + 'openai/jupyter-notebook skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/jupyter-notebook/SKILL.md', + }, + { + id: 'skill_openai-linear_7427a06ca6', + title: 'openai/linear', + description: 'openai/linear skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/linear/SKILL.md', + }, + { + id: 'skill_openai-netlify-deploy_a39d1e56d7', + 
title: 'openai/netlify-deploy', + description: + 'openai/netlify-deploy skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/netlify-deploy/SKILL.md', + }, + { + id: 'skill_openai-notion-knowledge-capture_dbe293be26', + title: 'openai/notion-knowledge-capture', + description: + 'openai/notion-knowledge-capture skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/notion-knowledge-capture/SKILL.md', + }, + { + id: 'skill_openai-notion-meeting-intelligence_d59bdb0a4b', + title: 'openai/notion-meeting-intelligence', + description: + 'openai/notion-meeting-intelligence skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/notion-meeting-intelligence/SKILL.md', + }, + { + id: 'skill_openai-notion-research-documentation_21246b6293', + title: 'openai/notion-research-documentation', + description: + 'openai/notion-research-documentation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/notion-research-documentation/SKILL.md', + }, + { + id: 'skill_openai-notion-spec-to-implementation_02ea735a4c', + title: 'openai/notion-spec-to-implementation', + description: + 'openai/notion-spec-to-implementation skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/notion-spec-to-implementation/SKILL.md', + }, + { + id: 'skill_openai-pdf_b28827945a', + title: 'openai/pdf', + description: 'openai/pdf skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/pdf/SKILL.md', + }, + { + id: 
'skill_openai-playwright-interactive_9d14462f7a', + title: 'openai/playwright-interactive', + description: + 'openai/playwright-interactive skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/playwright-interactive/SKILL.md', + }, + { + id: 'skill_openai-render-deploy_58d9dd3dfa', + title: 'openai/render-deploy', + description: + 'openai/render-deploy skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/render-deploy/SKILL.md', + }, + { + id: 'skill_openai-screenshot_69cb1f17a1', + title: 'openai/screenshot', + description: + 'openai/screenshot skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/screenshot/SKILL.md', + }, + { + id: 'skill_openai-security-best-practices_2cb7677ef1', + title: 'openai/security-best-practices', + description: + 'openai/security-best-practices skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/security-best-practices/SKILL.md', + }, + { + id: 'skill_openai-security-ownership-map_d6aa0a1e8e', + title: 'openai/security-ownership-map', + description: + 'openai/security-ownership-map skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/security-ownership-map/SKILL.md', + }, + { + id: 'skill_openai-security-threat-model_b3d6b0d78f', + title: 'openai/security-threat-model', + description: + 'openai/security-threat-model skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/security-threat-model/SKILL.md', + }, + { + id: 'skill_openai-sentry_6ba5f38e60', + title: 
'openai/sentry', + description: 'openai/sentry skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/sentry/SKILL.md', + }, + { + id: 'skill_openai-slides_e12e3fa621', + title: 'openai/slides', + description: 'openai/slides skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/slides/SKILL.md', + }, + { + id: 'skill_openai-sora_7ac56949cb', + title: 'openai/sora', + description: 'openai/sora skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/sora/SKILL.md', + }, + { + id: 'skill_openai-speech_66f4ed4e99', + title: 'openai/speech', + description: 'openai/speech skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/speech/SKILL.md', + }, + { + id: 'skill_openai-spreadsheet_cc4b854662', + title: 'openai/spreadsheet', + description: + 'openai/spreadsheet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/spreadsheet/SKILL.md', + }, + ], + // Page 9 + [ + { + id: 'skill_openai-transcribe_682b0b847c', + title: 'openai/transcribe', + description: + 'openai/transcribe skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/transcribe/SKILL.md', + }, + { + id: 'skill_openai-vercel-deploy_a0b5a56751', + title: 'openai/vercel-deploy', + description: + 'openai/vercel-deploy skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/vercel-deploy/SKILL.md', + }, + { + id: 'skill_openai-winui-app_d0f910d44e', + 
title: 'openai/winui-app', + description: + 'openai/winui-app skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/winui-app/SKILL.md', + }, + { + id: 'skill_openai-yeet_c03eaddc3d', + title: 'openai/yeet', + description: 'openai/yeet skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/openai/skills/blob/main/skills/.curated/yeet/SKILL.md', + }, + { + id: 'skill_openpaw_85daee1422', + title: 'OpenPaw', + description: 'OpenPaw skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/daxaur/openpaw/blob/main/SKILL.md', + }, + { + id: 'skill_orchestra-research-ai-research-skills_03010e0e01', + title: 'Orchestra-Research/AI-research-SKILLs', + description: + 'Orchestra-Research/AI-research-SKILLs skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/Orchestra-Research/AI-research-SKILLs/blob/main/SKILL.md', + }, + { + id: 'skill_outline_804067216e', + title: 'outline', + description: 'outline skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/sanjay3290/ai-skills/blob/main/skills/outline/SKILL.md', + }, + { + id: 'skill_owasp-security_76e77e1492', + title: 'owasp-security', + description: 'owasp-security skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/agamm/claude-code-owasp/blob/main/SKILL.md', + }, + { + id: 'skill_paper-search_1f11f2b718', + title: 'paper-search', + description: 'paper-search skill for Claude workflows from BehiSecc/awesome-claude-skills.', + kind: 'skill', + link: 'https://github.com/ykdojo/paper-search/blob/main/SKILL.md', + }, + { + id: 'skill_paramchoudhary-resumeskills_70501404ee', + title: 'Paramchoudhary/ResumeSkills', + description: + 
'Paramchoudhary/ResumeSkills skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/Paramchoudhary/ResumeSkills/blob/main/SKILL.md', + }, + { + id: 'skill_pawe-huryn_afb0f747e4', + title: 'Paweł Huryn', + description: 'Paweł Huryn skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn', + }, + { + id: 'skill_phuryn-ab-test-analysis_186f3ed04a', + title: 'phuryn/ab-test-analysis', + description: + 'phuryn/ab-test-analysis skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-data-analytics/skills/ab-test-analysis/SKILL.md', + }, + { + id: 'skill_phuryn-analyze-feature-requests_94f37d8e2a', + title: 'phuryn/analyze-feature-requests', + description: + 'phuryn/analyze-feature-requests skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-discovery/skills/analyze-feature-requests/SKILL.md', + }, + { + id: 'skill_phuryn-ansoff-matrix_80f9e4b70e', + title: 'phuryn/ansoff-matrix', + description: + 'phuryn/ansoff-matrix skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-strategy/skills/ansoff-matrix/SKILL.md', + }, + { + id: 'skill_phuryn-beachhead-segment_1e9b46dae9', + title: 'phuryn/beachhead-segment', + description: + 'phuryn/beachhead-segment skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-go-to-market/skills/beachhead-segment/SKILL.md', + }, + { + id: 'skill_phuryn-brainstorm-experiments-existing_b3d71b0856', + title: 'phuryn/brainstorm-experiments-existing', + description: + 'phuryn/brainstorm-experiments-existing skill for Claude workflows from VoltAgent/awesome-agent-skills.', + 
kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-discovery/skills/brainstorm-experiments-existing/SKILL.md', + }, + { + id: 'skill_phuryn-brainstorm-experiments-new_c08b3f1b12', + title: 'phuryn/brainstorm-experiments-new', + description: + 'phuryn/brainstorm-experiments-new skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-discovery/skills/brainstorm-experiments-new/SKILL.md', + }, + { + id: 'skill_phuryn-brainstorm-ideas-existing_40302e5f73', + title: 'phuryn/brainstorm-ideas-existing', + description: + 'phuryn/brainstorm-ideas-existing skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-discovery/skills/brainstorm-ideas-existing/SKILL.md', + }, + { + id: 'skill_phuryn-brainstorm-ideas-new_6405aded2e', + title: 'phuryn/brainstorm-ideas-new', + description: + 'phuryn/brainstorm-ideas-new skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-discovery/skills/brainstorm-ideas-new/SKILL.md', + }, + { + id: 'skill_phuryn-brainstorm-okrs_87bf679366', + title: 'phuryn/brainstorm-okrs', + description: + 'phuryn/brainstorm-okrs skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/brainstorm-okrs/SKILL.md', + }, + { + id: 'skill_phuryn-business-model_30fee60567', + title: 'phuryn/business-model', + description: + 'phuryn/business-model skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-strategy/skills/business-model/SKILL.md', + }, + { + id: 'skill_phuryn-cohort-analysis_7ff1a87d64', + title: 'phuryn/cohort-analysis', + description: + 
'phuryn/cohort-analysis skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-data-analytics/skills/cohort-analysis/SKILL.md', + }, + { + id: 'skill_phuryn-competitive-battlecard_f77b6aac37', + title: 'phuryn/competitive-battlecard', + description: + 'phuryn/competitive-battlecard skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-go-to-market/skills/competitive-battlecard/SKILL.md', + }, + { + id: 'skill_phuryn-competitor-analysis_c01ff68c34', + title: 'phuryn/competitor-analysis', + description: + 'phuryn/competitor-analysis skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-market-research/skills/competitor-analysis/SKILL.md', + }, + { + id: 'skill_phuryn-create-prd_2e0eef37bc', + title: 'phuryn/create-prd', + description: + 'phuryn/create-prd skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/create-prd/SKILL.md', + }, + { + id: 'skill_phuryn-customer-journey-map_48d4c56d63', + title: 'phuryn/customer-journey-map', + description: + 'phuryn/customer-journey-map skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-market-research/skills/customer-journey-map/SKILL.md', + }, + { + id: 'skill_phuryn-draft-nda_a74c1a0dd9', + title: 'phuryn/draft-nda', + description: + 'phuryn/draft-nda skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-toolkit/skills/draft-nda/SKILL.md', + }, + { + id: 'skill_phuryn-dummy-dataset_daf35e00c3', + title: 'phuryn/dummy-dataset', + description: + 'phuryn/dummy-dataset skill for Claude workflows 
from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/dummy-dataset/SKILL.md', + }, + { + id: 'skill_phuryn-grammar-check_a39ef18067', + title: 'phuryn/grammar-check', + description: + 'phuryn/grammar-check skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-toolkit/skills/grammar-check/SKILL.md', + }, + { + id: 'skill_phuryn-growth-loops_fdf694c3d1', + title: 'phuryn/growth-loops', + description: + 'phuryn/growth-loops skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-go-to-market/skills/growth-loops/SKILL.md', + }, + { + id: 'skill_phuryn-gtm-motions_e5096dc511', + title: 'phuryn/gtm-motions', + description: + 'phuryn/gtm-motions skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-go-to-market/skills/gtm-motions/SKILL.md', + }, + { + id: 'skill_phuryn-gtm-strategy_67a4f791ec', + title: 'phuryn/gtm-strategy', + description: + 'phuryn/gtm-strategy skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-go-to-market/skills/gtm-strategy/SKILL.md', + }, + { + id: 'skill_phuryn-ideal-customer-profile_87019ad8aa', + title: 'phuryn/ideal-customer-profile', + description: + 'phuryn/ideal-customer-profile skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-go-to-market/skills/ideal-customer-profile/SKILL.md', + }, + { + id: 'skill_phuryn-identify-assumptions-existing_8870d44375', + title: 'phuryn/identify-assumptions-existing', + description: + 'phuryn/identify-assumptions-existing skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', 
+ link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-discovery/skills/identify-assumptions-existing/SKILL.md', + }, + { + id: 'skill_phuryn-identify-assumptions-new_b3acfd4d9f', + title: 'phuryn/identify-assumptions-new', + description: + 'phuryn/identify-assumptions-new skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-discovery/skills/identify-assumptions-new/SKILL.md', + }, + { + id: 'skill_phuryn-interview-script_498d8026d5', + title: 'phuryn/interview-script', + description: + 'phuryn/interview-script skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-discovery/skills/interview-script/SKILL.md', + }, + { + id: 'skill_phuryn-job-stories_ece8f7d00f', + title: 'phuryn/job-stories', + description: + 'phuryn/job-stories skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/job-stories/SKILL.md', + }, + { + id: 'skill_phuryn-lean-canvas_155d2bd7a5', + title: 'phuryn/lean-canvas', + description: + 'phuryn/lean-canvas skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-strategy/skills/lean-canvas/SKILL.md', + }, + { + id: 'skill_phuryn-market-segments_c0ed4e2360', + title: 'phuryn/market-segments', + description: + 'phuryn/market-segments skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-market-research/skills/market-segments/SKILL.md', + }, + { + id: 'skill_phuryn-market-sizing_fcae6e2b6b', + title: 'phuryn/market-sizing', + description: + 'phuryn/market-sizing skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/phuryn/pm-skills/blob/main/pm-market-research/skills/market-sizing/SKILL.md', + }, + { + id: 'skill_phuryn-marketing-ideas_5b7b6c1785', + title: 'phuryn/marketing-ideas', + description: + 'phuryn/marketing-ideas skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-marketing-growth/skills/marketing-ideas/SKILL.md', + }, + { + id: 'skill_phuryn-metrics-dashboard_a189c2088d', + title: 'phuryn/metrics-dashboard', + description: + 'phuryn/metrics-dashboard skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-discovery/skills/metrics-dashboard/SKILL.md', + }, + { + id: 'skill_phuryn-monetization-strategy_62b0f09979', + title: 'phuryn/monetization-strategy', + description: + 'phuryn/monetization-strategy skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-strategy/skills/monetization-strategy/SKILL.md', + }, + { + id: 'skill_phuryn-north-star-metric_97a081fa11', + title: 'phuryn/north-star-metric', + description: + 'phuryn/north-star-metric skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-marketing-growth/skills/north-star-metric/SKILL.md', + }, + { + id: 'skill_phuryn-opportunity-solution-tree_3356f27d08', + title: 'phuryn/opportunity-solution-tree', + description: + 'phuryn/opportunity-solution-tree skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-discovery/skills/opportunity-solution-tree/SKILL.md', + }, + { + id: 'skill_phuryn-outcome-roadmap_1a5c43fbed', + title: 'phuryn/outcome-roadmap', + description: + 'phuryn/outcome-roadmap skill for Claude workflows from 
VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/outcome-roadmap/SKILL.md', + }, + { + id: 'skill_phuryn-pestle-analysis_40f09f3967', + title: 'phuryn/pestle-analysis', + description: + 'phuryn/pestle-analysis skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-strategy/skills/pestle-analysis/SKILL.md', + }, + { + id: 'skill_phuryn-porters-five-forces_b067185fd8', + title: 'phuryn/porters-five-forces', + description: + 'phuryn/porters-five-forces skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-strategy/skills/porters-five-forces/SKILL.md', + }, + { + id: 'skill_phuryn-positioning-ideas_1de405be32', + title: 'phuryn/positioning-ideas', + description: + 'phuryn/positioning-ideas skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-marketing-growth/skills/positioning-ideas/SKILL.md', + }, + { + id: 'skill_phuryn-pre-mortem_bc55ed15cb', + title: 'phuryn/pre-mortem', + description: + 'phuryn/pre-mortem skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/pre-mortem/SKILL.md', + }, + { + id: 'skill_phuryn-pricing-strategy_d8fc5a29c6', + title: 'phuryn/pricing-strategy', + description: + 'phuryn/pricing-strategy skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-strategy/skills/pricing-strategy/SKILL.md', + }, + { + id: 'skill_phuryn-prioritization-frameworks_2e5cd8893f', + title: 'phuryn/prioritization-frameworks', + description: + 'phuryn/prioritization-frameworks skill for Claude workflows from 
VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/prioritization-frameworks/SKILL.md', + }, + { + id: 'skill_phuryn-prioritize-assumptions_87a1562aac', + title: 'phuryn/prioritize-assumptions', + description: + 'phuryn/prioritize-assumptions skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-discovery/skills/prioritize-assumptions/SKILL.md', + }, + { + id: 'skill_phuryn-prioritize-features_3d0af9fee6', + title: 'phuryn/prioritize-features', + description: + 'phuryn/prioritize-features skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-discovery/skills/prioritize-features/SKILL.md', + }, + { + id: 'skill_phuryn-privacy-policy_574bfa94ee', + title: 'phuryn/privacy-policy', + description: + 'phuryn/privacy-policy skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-toolkit/skills/privacy-policy/SKILL.md', + }, + { + id: 'skill_phuryn-product-name_56289a38c8', + title: 'phuryn/product-name', + description: + 'phuryn/product-name skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-marketing-growth/skills/product-name/SKILL.md', + }, + { + id: 'skill_phuryn-product-strategy_0e1ed77299', + title: 'phuryn/product-strategy', + description: + 'phuryn/product-strategy skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-strategy/skills/product-strategy/SKILL.md', + }, + { + id: 'skill_phuryn-product-vision_212063ad57', + title: 'phuryn/product-vision', + description: + 'phuryn/product-vision skill for Claude workflows from 
VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-strategy/skills/product-vision/SKILL.md', + }, + { + id: 'skill_phuryn-release-notes_c70dba4874', + title: 'phuryn/release-notes', + description: + 'phuryn/release-notes skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/release-notes/SKILL.md', + }, + { + id: 'skill_phuryn-retro_83fefc907d', + title: 'phuryn/retro', + description: 'phuryn/retro skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/retro/SKILL.md', + }, + { + id: 'skill_phuryn-review-resume_c2e3235c11', + title: 'phuryn/review-resume', + description: + 'phuryn/review-resume skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-toolkit/skills/review-resume/SKILL.md', + }, + { + id: 'skill_phuryn-sentiment-analysis_3d8cf439cb', + title: 'phuryn/sentiment-analysis', + description: + 'phuryn/sentiment-analysis skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-market-research/skills/sentiment-analysis/SKILL.md', + }, + { + id: 'skill_phuryn-sprint-plan_d46a31fecd', + title: 'phuryn/sprint-plan', + description: + 'phuryn/sprint-plan skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/sprint-plan/SKILL.md', + }, + { + id: 'skill_phuryn-sql-queries_2da80aa802', + title: 'phuryn/sql-queries', + description: + 'phuryn/sql-queries skill for Claude workflows from VoltAgent/awesome-agent-skills.', + kind: 'skill', + link: 
'https://github.com/phuryn/pm-skills/blob/main/pm-data-analytics/skills/sql-queries/SKILL.md',
+    },
+    {
+      id: 'skill_phuryn-stakeholder-map_ca1de1c6cf',
+      title: 'phuryn/stakeholder-map',
+      description:
+        'phuryn/stakeholder-map skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/stakeholder-map/SKILL.md',
+    },
+    {
+      id: 'skill_phuryn-startup-canvas_70927ae938',
+      title: 'phuryn/startup-canvas',
+      description:
+        'phuryn/startup-canvas skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-strategy/skills/startup-canvas/SKILL.md',
+    },
+    {
+      id: 'skill_phuryn-summarize-interview_80aff2af32',
+      title: 'phuryn/summarize-interview',
+      description:
+        'phuryn/summarize-interview skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-discovery/skills/summarize-interview/SKILL.md',
+    },
+    {
+      id: 'skill_phuryn-summarize-meeting_1024e9531e',
+      title: 'phuryn/summarize-meeting',
+      description:
+        'phuryn/summarize-meeting skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/summarize-meeting/SKILL.md',
+    },
+    {
+      id: 'skill_phuryn-swot-analysis_03cdb0385e',
+      title: 'phuryn/swot-analysis',
+      description:
+        'phuryn/swot-analysis skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-strategy/skills/swot-analysis/SKILL.md',
+    },
+    {
+      id: 'skill_phuryn-test-scenarios_2e75736a96',
+      title: 'phuryn/test-scenarios',
+      description:
+        'phuryn/test-scenarios skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/test-scenarios/SKILL.md',
+    },
+    {
+      id: 'skill_phuryn-user-personas_c32f4b4dd5',
+      title: 'phuryn/user-personas',
+      description:
+        'phuryn/user-personas skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/phuryn/pm-skills/blob/main/pm-market-research/skills/user-personas/SKILL.md',
+    },
+    {
+      id: 'skill_phuryn-user-segmentation_bb6c7e1772',
+      title: 'phuryn/user-segmentation',
+      description:
+        'phuryn/user-segmentation skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/phuryn/pm-skills/blob/main/pm-market-research/skills/user-segmentation/SKILL.md',
+    },
+    {
+      id: 'skill_phuryn-user-stories_38fd93d90c',
+      title: 'phuryn/user-stories',
+      description:
+        'phuryn/user-stories skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/user-stories/SKILL.md',
+    },
+    {
+      id: 'skill_phuryn-value-prop-statements_d094a08b9d',
+      title: 'phuryn/value-prop-statements',
+      description:
+        'phuryn/value-prop-statements skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/phuryn/pm-skills/blob/main/pm-marketing-growth/skills/value-prop-statements/SKILL.md',
+    },
+    {
+      id: 'skill_phuryn-value-proposition_a2f647310e',
+      title: 'phuryn/value-proposition',
+      description:
+        'phuryn/value-proposition skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/phuryn/pm-skills/blob/main/pm-product-strategy/skills/value-proposition/SKILL.md',
+    },
+    {
+      id: 'skill_phuryn-wwas_b41d39e474',
+      title: 'phuryn/wwas',
+      description: 'phuryn/wwas skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/phuryn/pm-skills/blob/main/pm-execution/skills/wwas/SKILL.md',
+    },
+    {
+      id: 'skill_plannotator_7664219c3d',
+      title: 'plannotator',
+      description: 'plannotator skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/backnotprop/plannotator/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_playwright-skill_f4e074814b',
+      title: 'Playwright Skill',
+      description:
+        'Playwright Skill skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/testdino-hq/playwright-skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_pleaseprompto-notebooklm-skill_95516b80b3',
+      title: 'PleasePrompto/notebooklm-skill',
+      description:
+        'PleasePrompto/notebooklm-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/PleasePrompto/notebooklm-skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_plugin-authoring_98d83f1485',
+      title: 'plugin-authoring',
+      description:
+        'plugin-authoring skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/ivan-magda/claude-code-plugin-template/blob/main/plugins/plugin-development/skills/plugin-authoring/SKILL.md',
+    },
+    {
+      id: 'skill_pm-skills_c12d6262ac',
+      title: 'pm-skills',
+      description: 'pm-skills skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/product-on-purpose/pm-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_polaris-datainsight-doc-extract_7340dfc762',
+      title: 'polaris-datainsight-doc-extract',
+      description:
+        'polaris-datainsight-doc-extract skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/jacob-g-park/polaris-datainsight-doc-extract/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_product-manager-skills_d16380f90b',
+      title: 'Product-Manager-Skills',
+      description:
+        'Product-Manager-Skills skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/deanpeters/Product-Manager-Skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_prompt-security-clawsec_fe08dd4bed',
+      title: 'prompt-security/clawsec',
+      description:
+        'prompt-security/clawsec skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/prompt-security/clawsec/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_pspdfkit-labs-nutrient-agent-skill_d3aa1fcc54',
+      title: 'PSPDFKit-labs/nutrient-agent-skill',
+      description:
+        'PSPDFKit-labs/nutrient-agent-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/PSPDFKit-labs/nutrient-agent-skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_raintree-technology-apple-hig-skills_57435a90f6',
+      title: 'raintree-technology/apple-hig-skills',
+      description:
+        'raintree-technology/apple-hig-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/raintree-technology/apple-hig-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_rameerez-claude-code-startup-skills_9b248291ae',
+      title: 'rameerez/claude-code-startup-skills',
+      description:
+        'rameerez/claude-code-startup-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/rameerez/claude-code-startup-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_ramzesenok-ios-accessibility-audit-skill_9caf45bfe4',
+      title: 'ramzesenok/iOS-Accessibility-Audit-Skill',
+      description:
+        'ramzesenok/iOS-Accessibility-Audit-Skill skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/ramzesenok/iOS-Accessibility-Audit-Skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_recommendations_b047a7f629',
+      title: 'recommendations',
+      description:
+        'recommendations skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/tasteray/skills/blob/main/recommendations/SKILL.md',
+    },
+    {
+      id: 'skill_reka-ui_254fe673ff',
+      title: 'reka-ui',
+      description: 'reka-ui skill for Claude workflows from onmax/nuxt-skills.',
+      kind: 'skill',
+      link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/reka-ui/SKILL.md',
+    },
+    {
+      id: 'skill_remotion-dev-remotion_52c9877c81',
+      title: 'remotion-dev/remotion',
+      description:
+        'remotion-dev/remotion skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/remotion-dev/skills/blob/main/skills/remotion/SKILL.md',
+    },
+    {
+      id: 'skill_replicate-replicate_7cb868336e',
+      title: 'replicate/replicate',
+      description:
+        'replicate/replicate skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/replicate/skills/blob/main/skills/replicate/SKILL.md',
+    },
+    {
+      id: 'skill_resciencelab-opc-skills_d0fb13408f',
+      title: 'ReScienceLab/opc-skills',
+      description:
+        'ReScienceLab/opc-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/ReScienceLab/opc-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_revealjs-skill_8a74443f95',
+      title: 'revealjs-skill',
+      description: 'revealjs-skill skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/ryanbbrown/revealjs-skill/tree/main',
+    },
+    {
+      id: 'skill_review-claudemd_24ad43ee2f',
+      title: 'review-claudemd',
+      description:
+        'review-claudemd skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/ykdojo/claude-code-tips/blob/main/skills/review-claudemd/SKILL.md',
+    },
+    {
+      id: 'skill_review-implementing_263c232479',
+      title: 'review-implementing',
+      description:
+        'review-implementing skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/mhattingpete/claude-skills-marketplace/blob/main/engineering-workflow-plugin/skills/review-implementing/SKILL.md',
+    },
+    {
+      id: 'skill_robzolkos-skill-rails-upgrade_40bac86173',
+      title: 'robzolkos/skill-rails-upgrade',
+      description:
+        'robzolkos/skill-rails-upgrade skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/robzolkos/skill-rails-upgrade/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_rootly-mcp-server_434f4bf055',
+      title: 'Rootly MCP Server',
+      description:
+        'Rootly MCP Server skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/Rootly-AI-Labs/Rootly-MCP-server/blob/main/SKILL.md',
+    },
+  ],
+  // Page 10
+  [
+    {
+      id: 'skill_rootly-ai-labs-rootly-incident-responder_c8452f11e2',
+      title: 'Rootly-AI-Labs/rootly-incident-responder',
+      description:
+        'Rootly-AI-Labs/rootly-incident-responder skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/Rootly-AI-Labs/Rootly-MCP-server/blob/main/examples/skills/rootly-incident-responder.md',
+    },
+    {
+      id: 'skill_roundtable02-tutor-skills_e0ccffd5d9',
+      title: 'RoundTable02/tutor-skills',
+      description:
+        'RoundTable02/tutor-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/RoundTable02/tutor-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_rudrankriyam-app-store-connect-cli-skills_5a1ac648c3',
+      title: 'rudrankriyam/app-store-connect-cli-skills',
+      description:
+        'rudrankriyam/app-store-connect-cli-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/rudrankriyam/app-store-connect-cli-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_sanitize_c2279daccd',
+      title: 'sanitize',
+      description: 'sanitize skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/openclaw/skills/blob/main/skills/agentward-ai/sanitize/SKILL.md',
+    },
+    {
+      id: 'skill_sanity-io-content-experimentation-best-practices_fad179f1c8',
+      title: 'sanity-io/content-experimentation-best-practices',
+      description:
+        'sanity-io/content-experimentation-best-practices skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/sanity-io/agent-toolkit/blob/main/skills/content-experimentation-best-practices/SKILL.md',
+    },
+    {
+      id: 'skill_sanity-io-content-modeling-best-practices_f0cf6b6d53',
+      title: 'sanity-io/content-modeling-best-practices',
+      description:
+        'sanity-io/content-modeling-best-practices skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/sanity-io/agent-toolkit/blob/main/skills/content-modeling-best-practices/SKILL.md',
+    },
+    {
+      id: 'skill_sanity-io-sanity-best-practices_75bb24bf17',
+      title: 'sanity-io/sanity-best-practices',
+      description:
+        'sanity-io/sanity-best-practices skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/sanity-io/agent-toolkit/blob/main/skills/sanity-best-practices/SKILL.md',
+    },
+    {
+      id: 'skill_sanity-io-seo-aeo-best-practices_03efb4040e',
+      title: 'sanity-io/seo-aeo-best-practices',
+      description:
+        'sanity-io/seo-aeo-best-practices skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/sanity-io/agent-toolkit/blob/main/skills/seo-aeo-best-practices/SKILL.md',
+    },
+    {
+      id: 'skill_scarletkc-vexor_ce10dea14c',
+      title: 'scarletkc/vexor',
+      description:
+        'scarletkc/vexor skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/scarletkc/vexor/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_seanzor-claude-speed-reader_80a566f409',
+      title: 'SeanZoR/claude-speed-reader',
+      description:
+        'SeanZoR/claude-speed-reader skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/SeanZoR/claude-speed-reader/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_shadowpr0-beautiful-prose_391e822c41',
+      title: 'SHADOWPR0/beautiful_prose',
+      description:
+        'SHADOWPR0/beautiful_prose skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/SHADOWPR0/beautiful_prose/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_shadowpr0-security-bluebook-builder_7126bc3ef0',
+      title: 'SHADOWPR0/security-bluebook-builder',
+      description:
+        'SHADOWPR0/security-bluebook-builder skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/SHADOWPR0/security-bluebook-builder/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_ship-learn-next_016166dd46',
+      title: 'ship-learn-next',
+      description:
+        'ship-learn-next skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/michalparkola/tapestry-skills-for-claude-code/blob/main/ship-learn-next/SKILL.md',
+    },
+    {
+      id: 'skill_shpigford-readme_7edab20b6d',
+      title: 'Shpigford/readme',
+      description:
+        'Shpigford/readme skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/Shpigford/skills/blob/main/readme/SKILL.md',
+    },
+    {
+      id: 'skill_shpigford-screenshots_e48ffc7f36',
+      title: 'Shpigford/screenshots',
+      description:
+        'Shpigford/screenshots skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/Shpigford/skills/blob/main/screenshots/SKILL.md',
+    },
+    {
+      id: 'skill_shunsukehayashi-agent-skill-bus_8525a695c8',
+      title: 'ShunsukeHayashi/agent-skill-bus',
+      description:
+        'ShunsukeHayashi/agent-skill-bus skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/ShunsukeHayashi/agent-skill-bus/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_skill-seekers_275784ada3',
+      title: 'Skill_Seekers',
+      description: 'Skill_Seekers skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/yusufkaraaslan/Skill_Seekers/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_smixs-creative-director-skill_552593b6f4',
+      title: 'smixs/creative-director-skill',
+      description:
+        'smixs/creative-director-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/smixs/creative-director-skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_stripe-stripe-best-practices_1bf53a393a',
+      title: 'stripe/stripe-best-practices',
+      description:
+        'stripe/stripe-best-practices skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/stripe/ai/blob/main/skills/stripe-best-practices/SKILL.md',
+    },
+    {
+      id: 'skill_stripe-upgrade-stripe_8002eda12d',
+      title: 'stripe/upgrade-stripe',
+      description:
+        'stripe/upgrade-stripe skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/stripe/ai/blob/main/skills/upgrade-stripe/SKILL.md',
+    },
+    {
+      id: 'skill_supabase-postgres-best-practices_7babb8def0',
+      title: 'supabase/postgres-best-practices',
+      description:
+        'supabase/postgres-best-practices skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/supabase/agent-skills/blob/main/skills/supabase-postgres-best-practices/SKILL.md',
+    },
+    {
+      id: 'skill_synk-skill-security-scanner_80fcf38ffb',
+      title: 'Synk Skill Security Scanner',
+      description:
+        'Synk Skill Security Scanner skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/snyk/agent-scan/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_tapestry_54e01f0c64',
+      title: 'tapestry',
+      description: 'tapestry skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/michalparkola/tapestry-skills-for-claude-code/blob/main/tapestry/SKILL.md',
+    },
+    {
+      id: 'skill_task-observer_ca54719d4b',
+      title: 'task-observer',
+      description: 'task-observer skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/rebelytics/one-skill-to-rule-them-all/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_test-fixing_cce702983a',
+      title: 'test-fixing',
+      description: 'test-fixing skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/mhattingpete/claude-skills-marketplace/blob/main/engineering-workflow-plugin/skills/test-fixing/SKILL.md',
+    },
+    {
+      id: 'skill_tinybirdco-tinybird-best-practices_80af21b7e9',
+      title: 'tinybirdco/tinybird-best-practices',
+      description:
+        'tinybirdco/tinybird-best-practices skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/tinybirdco/tinybird-agent-skills/blob/main/skills/tinybird-best-practices/SKILL.md',
+    },
+    {
+      id: 'skill_tinybirdco-tinybird-cli-guidelines_e6fff12c17',
+      title: 'tinybirdco/tinybird-cli-guidelines',
+      description:
+        'tinybirdco/tinybird-cli-guidelines skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/tinybirdco/tinybird-agent-skills/blob/main/skills/tinybird-cli-guidelines/SKILL.md',
+    },
+    {
+      id: 'skill_tinybirdco-tinybird-python-sdk-guidelines_572ac0d7f1',
+      title: 'tinybirdco/tinybird-python-sdk-guidelines',
+      description:
+        'tinybirdco/tinybird-python-sdk-guidelines skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/tinybirdco/tinybird-agent-skills/blob/main/skills/tinybird-python-sdk-guidelines/SKILL.md',
+    },
+    {
+      id: 'skill_tinybirdco-tinybird-typescript-sdk-guidelines_72d3a0a6a3',
+      title: 'tinybirdco/tinybird-typescript-sdk-guidelines',
+      description:
+        'tinybirdco/tinybird-typescript-sdk-guidelines skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/tinybirdco/tinybird-agent-skills/blob/main/skills/tinybird-typescript-sdk-guidelines/SKILL.md',
+    },
+    {
+      id: 'skill_trail-of-bits-security-skills_d4add2d7e6',
+      title: 'Trail of Bits Security Skills',
+      description:
+        'Trail of Bits Security Skills skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-ask-questions-if-underspecified_bcc0393393',
+      title: 'trailofbits/ask-questions-if-underspecified',
+      description:
+        'trailofbits/ask-questions-if-underspecified skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/ask-questions-if-underspecified/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-audit-context-building_3a2b79faa5',
+      title: 'trailofbits/audit-context-building',
+      description:
+        'trailofbits/audit-context-building skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/audit-context-building/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-building-secure-contracts_a7b24d0a53',
+      title: 'trailofbits/building-secure-contracts',
+      description:
+        'trailofbits/building-secure-contracts skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/building-secure-contracts/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-burpsuite-project-parser_3922d23b0f',
+      title: 'trailofbits/burpsuite-project-parser',
+      description:
+        'trailofbits/burpsuite-project-parser skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/burpsuite-project-parser/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-claude-in-chrome-troubleshooting_ffae3adb26',
+      title: 'trailofbits/claude-in-chrome-troubleshooting',
+      description:
+        'trailofbits/claude-in-chrome-troubleshooting skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/claude-in-chrome-troubleshooting/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-constant-time-analysis_b4e580c960',
+      title: 'trailofbits/constant-time-analysis',
+      description:
+        'trailofbits/constant-time-analysis skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/constant-time-analysis/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-culture-index_4327d8e6f7',
+      title: 'trailofbits/culture-index',
+      description:
+        'trailofbits/culture-index skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/culture-index/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-differential-review_f2cd2b7e42',
+      title: 'trailofbits/differential-review',
+      description:
+        'trailofbits/differential-review skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/differential-review/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-dwarf-expert_c5e3a35271',
+      title: 'trailofbits/dwarf-expert',
+      description:
+        'trailofbits/dwarf-expert skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/dwarf-expert/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-entry-point-analyzer_616367e9b7',
+      title: 'trailofbits/entry-point-analyzer',
+      description:
+        'trailofbits/entry-point-analyzer skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/entry-point-analyzer/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-firebase-apk-scanner_5472505403',
+      title: 'trailofbits/firebase-apk-scanner',
+      description:
+        'trailofbits/firebase-apk-scanner skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/firebase-apk-scanner/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-fix-review_090a0b876b',
+      title: 'trailofbits/fix-review',
+      description:
+        'trailofbits/fix-review skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/fix-review/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-insecure-defaults_38acdb55d7',
+      title: 'trailofbits/insecure-defaults',
+      description:
+        'trailofbits/insecure-defaults skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/insecure-defaults/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-modern-python_2d69ad0a67',
+      title: 'trailofbits/modern-python',
+      description:
+        'trailofbits/modern-python skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/modern-python/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-property-based-testing_bfc9c10df3',
+      title: 'trailofbits/property-based-testing',
+      description:
+        'trailofbits/property-based-testing skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/property-based-testing/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-semgrep-rule-creator_345f731d93',
+      title: 'trailofbits/semgrep-rule-creator',
+      description:
+        'trailofbits/semgrep-rule-creator skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/semgrep-rule-creator/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-semgrep-rule-variant-creator_0710bf4feb',
+      title: 'trailofbits/semgrep-rule-variant-creator',
+      description:
+        'trailofbits/semgrep-rule-variant-creator skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/semgrep-rule-variant-creator/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-sharp-edges_1e17020c6d',
+      title: 'trailofbits/sharp-edges',
+      description:
+        'trailofbits/sharp-edges skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/sharp-edges/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-spec-to-code-compliance_16f8beee1c',
+      title: 'trailofbits/spec-to-code-compliance',
+      description:
+        'trailofbits/spec-to-code-compliance skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/spec-to-code-compliance/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-static-analysis_93704a301e',
+      title: 'trailofbits/static-analysis',
+      description:
+        'trailofbits/static-analysis skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/static-analysis/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-testing-handbook-skills_70f63267a8',
+      title: 'trailofbits/testing-handbook-skills',
+      description:
+        'trailofbits/testing-handbook-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/testing-handbook-skills/SKILL.md',
+    },
+    {
+      id: 'skill_trailofbits-variant-analysis_08a1a91120',
+      title: 'trailofbits/variant-analysis',
+      description:
+        'trailofbits/variant-analysis skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/trailofbits/skills/blob/main/plugins/variant-analysis/SKILL.md',
+    },
+    {
+      id: 'skill_transloadit-skills_0fc8a06cfa',
+      title: 'transloadit/skills',
+      description:
+        'transloadit/skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/transloadit/skills/blob/main/skills/SKILL.md',
+    },
+    {
+      id: 'skill_tresjs_62ae98fe69',
+      title: 'tresjs',
+      description: 'tresjs skill for Claude workflows from onmax/nuxt-skills.',
+      kind: 'skill',
+      link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/tresjs/SKILL.md',
+    },
+    {
+      id: 'skill_truongduy2611-app-store-preflight-skills_d7fa29ce93',
+      title: 'truongduy2611/app-store-preflight-skills',
+      description:
+        'truongduy2611/app-store-preflight-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/truongduy2611/app-store-preflight-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_ts-library_421c561c0f',
+      title: 'ts-library',
+      description: 'ts-library skill for Claude workflows from onmax/nuxt-skills.',
+      kind: 'skill',
+      link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/ts-library/SKILL.md',
+    },
+    {
+      id: 'skill_tsdown_efa1984ce4',
+      title: 'tsdown',
+      description: 'tsdown skill for Claude workflows from onmax/nuxt-skills.',
+      kind: 'skill',
+      link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/tsdown/SKILL.md',
+    },
+    {
+      id: 'skill_typefully-typefully_0e39cce7ef',
+      title: 'typefully/typefully',
+      description:
+        'typefully/typefully skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/typefully/agent-skills/blob/main/skills/typefully/SKILL.md',
+    },
+    {
+      id: 'skill_uucz-moyu_e748e00085',
+      title: 'uucz/moyu',
+      description: 'uucz/moyu skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/uucz/moyu/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_varlock-claude-skill_2d954835c4',
+      title: 'varlock-claude-skill',
+      description:
+        'varlock-claude-skill skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/wrsmith108/varlock-claude-skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_vercel-labs-react-native-skills_3362601f42',
+      title: 'vercel-labs/react-native-skills',
+      description:
+        'vercel-labs/react-native-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/vercel-labs/agent-skills/blob/main/skills/react-native-skills/SKILL.md',
+    },
+    {
+      id: 'skill_vercel-labs-vercel-deploy-claimable_f47be426fa',
+      title: 'vercel-labs/vercel-deploy-claimable',
+      description:
+        'vercel-labs/vercel-deploy-claimable skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/vercel-labs/agent-skills/blob/main/skills/claude.ai/vercel-deploy-claimable/SKILL.md',
+    },
+    {
+      id: 'skill_video-db-skills_c62766d61b',
+      title: 'video-db/skills',
+      description:
+        'video-db/skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/video-db/skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_video-downloader_c76f972422',
+      title: 'video-downloader',
+      description:
+        'video-downloader skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/ComposioHQ/awesome-claude-skills/blob/master/video-downloader/SKILL.md',
+    },
+    {
+      id: 'skill_video-prompting-skill_826e74d01e',
+      title: 'video-prompting-skill',
+      description:
+        'video-prompting-skill skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/Square-Zero-Labs/video-prompting-skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_voltagent-create-voltagent_fe17ec0e0b',
+      title: 'voltagent/create-voltagent',
+      description:
+        'voltagent/create-voltagent skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/VoltAgent/skills/blob/main/skills/create-voltagent/SKILL.md',
+    },
+    {
+      id: 'skill_voltagent-voltagent-best-practices_cc6c8e5bf6',
+      title: 'voltagent/voltagent-best-practices',
+      description:
+        'voltagent/voltagent-best-practices skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/VoltAgent/skills/blob/main/skills/voltagent-best-practices/SKILL.md',
+    },
+    {
+      id: 'skill_voltagent-voltagent-core-reference_2a56409ea8',
+      title: 'voltagent/voltagent-core-reference',
+      description:
+        'voltagent/voltagent-core-reference skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/VoltAgent/skills/blob/main/skills/voltagent-core-reference/SKILL.md',
+    },
+    {
+      id: 'skill_voltagent-voltagent-docs-bundle_63d500b941',
+      title: 'voltagent/voltagent-docs-bundle',
+      description:
+        'voltagent/voltagent-docs-bundle skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/VoltAgent/skills/blob/main/skills/voltagent-docs-bundle/SKILL.md',
+    },
+    {
+      id: 'skill_vue_0f8021ca1e',
+      title: 'vue',
+      description: 'vue skill for Claude workflows from onmax/nuxt-skills.',
+      kind: 'skill',
+      link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/vue/SKILL.md',
+    },
+    {
+      id: 'skill_vueuse_7bf8bc61f2',
+      title: 'vueuse',
+      description: 'vueuse skill for Claude workflows from onmax/nuxt-skills.',
+      kind: 'skill',
+      link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/vueuse/SKILL.md',
+    },
+    {
+      id: 'skill_wanshuiyin-auto-claude-code-research-in-sleep_ddb4b5f5d6',
+      title: 'wanshuiyin/Auto-claude-code-research-in-sleep',
+      description:
+        'wanshuiyin/Auto-claude-code-research-in-sleep skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_wondelai-skills_5480bff7f7',
+      title: 'wondelai/skills',
+      description:
+        'wondelai/skills skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/wondelai/skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_wordpress-wordpress-router_488b19db1e',
+      title: 'WordPress/wordpress-router',
+      description:
+        'WordPress/wordpress-router skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/WordPress/agent-skills/blob/trunk/skills/wordpress-router/SKILL.md',
+    },
+    {
+      id: 'skill_wordpress-wp-abilities-api_69f3ef8234',
+      title: 'WordPress/wp-abilities-api',
+      description:
+        'WordPress/wp-abilities-api skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/WordPress/agent-skills/blob/trunk/skills/wp-abilities-api/SKILL.md',
+    },
+    {
+      id: 'skill_wordpress-wp-block-development_fe737d49c5',
+      title: 'WordPress/wp-block-development',
+      description:
+        'WordPress/wp-block-development skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/WordPress/agent-skills/blob/trunk/skills/wp-block-development/SKILL.md',
+    },
+    {
+      id: 'skill_wordpress-wp-block-themes_8afdb820ff',
+      title: 'WordPress/wp-block-themes',
+      description:
+        'WordPress/wp-block-themes skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/WordPress/agent-skills/blob/trunk/skills/wp-block-themes/SKILL.md',
+    },
+    {
+      id: 'skill_wordpress-wp-interactivity-api_047fb36dd8',
+      title: 'WordPress/wp-interactivity-api',
+      description:
+        'WordPress/wp-interactivity-api skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/WordPress/agent-skills/blob/trunk/skills/wp-interactivity-api/SKILL.md',
+    },
+    {
+      id: 'skill_wordpress-wp-performance_b461154584',
+      title: 'WordPress/wp-performance',
+      description:
+        'WordPress/wp-performance skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/WordPress/agent-skills/blob/trunk/skills/wp-performance/SKILL.md',
+    },
+    {
+      id: 'skill_wordpress-wp-phpstan_86c8b64ef7',
+      title: 'WordPress/wp-phpstan',
+      description:
+        'WordPress/wp-phpstan skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/WordPress/agent-skills/blob/trunk/skills/wp-phpstan/SKILL.md',
+    },
+    {
+      id: 'skill_wordpress-wp-playground_3aaa1b540b',
+      title: 'WordPress/wp-playground',
+      description:
+        'WordPress/wp-playground skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/WordPress/agent-skills/blob/trunk/skills/wp-playground/SKILL.md',
+    },
+    {
+      id: 'skill_wordpress-wp-plugin-development_5b441ac4f1',
+      title: 'WordPress/wp-plugin-development',
+      description:
+        'WordPress/wp-plugin-development skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/WordPress/agent-skills/blob/trunk/skills/wp-plugin-development/SKILL.md',
+    },
+    {
+      id: 'skill_wordpress-wp-project-triage_9bff04c1a7',
+      title: 'WordPress/wp-project-triage',
+      description:
+        'WordPress/wp-project-triage skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/WordPress/agent-skills/blob/trunk/skills/wp-project-triage/SKILL.md',
+    },
+    {
+      id: 'skill_wordpress-wp-rest-api_07bbdb2105',
+      title: 'WordPress/wp-rest-api',
+      description:
+        'WordPress/wp-rest-api skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/WordPress/agent-skills/blob/trunk/skills/wp-rest-api/SKILL.md',
+    },
+    {
+      id: 'skill_wordpress-wp-wpcli-and-ops_da1b4469ce',
+      title: 'WordPress/wp-wpcli-and-ops',
+      description:
+        'WordPress/wp-wpcli-and-ops skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/WordPress/agent-skills/blob/trunk/skills/wp-wpcli-and-ops/SKILL.md',
+    },
+    {
+      id: 'skill_wordpress-wpds_da755ae9d7',
+      title: 'WordPress/wpds',
+      description: 'WordPress/wpds skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/WordPress/agent-skills/blob/trunk/skills/wpds/SKILL.md',
+    },
+    {
+      id: 'skill_writing-web-documentation_1343ac4cf7',
+      title: 'writing-web-documentation',
+      description: 'writing-web-documentation skill for Claude workflows from onmax/nuxt-skills.',
+      kind: 'skill',
+      link: 'https://github.com/onmax/nuxt-skills/blob/main/skills/writing-web-documentation/SKILL.md',
+    },
+    {
+      id: 'skill_wshuyi-x-article-publisher-skill_df26085f63',
+      title: 'wshuyi/x-article-publisher-skill',
+      description:
+        'wshuyi/x-article-publisher-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/wshuyi/x-article-publisher-skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_x-twitter-scraper_636dd26124',
+      title: 'x-twitter-scraper',
+      description:
+        'x-twitter-scraper skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/Xquik-dev/x-twitter-scraper/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_xquik-dev-tweetclaw_eba5544ff8',
+      title: 'Xquik-dev/tweetclaw',
+      description:
+        'Xquik-dev/tweetclaw skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/Xquik-dev/tweetclaw/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_youtube-transcript_19f0be0aa1',
+      title: 'youtube-transcript',
+      description:
+        'youtube-transcript skill for Claude workflows from BehiSecc/awesome-claude-skills.',
+      kind: 'skill',
+      link: 'https://github.com/michalparkola/tapestry-skills-for-claude-code/blob/main/youtube-transcript/SKILL.md',
+    },
+    {
+      id: 'skill_zarazhangrui-frontend-slides_b46841a180',
+      title: 'zarazhangrui/frontend-slides',
+      description:
+        'zarazhangrui/frontend-slides skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/zarazhangrui/frontend-slides/blob/main/SKILL.md',
+    },
+  ],
+  // Page 11
+  [
+    {
+      id: 'skill_zechenzhangagi-ai-research-skills_4c108420fd',
+      title: 'zechenzhangAGI/AI-research-SKILLs',
+      description:
+        'zechenzhangAGI/AI-research-SKILLs skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/zechenzhangAGI/AI-research-SKILLs/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_zhanghandong-makepad-skills_95190fc981',
+      title: 'ZhangHanDong/makepad-skills',
+      description:
+        'ZhangHanDong/makepad-skills skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/ZhangHanDong/makepad-skills/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_zscole-model-hierarchy-skill_9991b45dae',
+      title: 'zscole/model-hierarchy-skill',
+      description:
+        'zscole/model-hierarchy-skill skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/zscole/model-hierarchy-skill/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_zw008-vmware-aiops_e46b4b5f64',
+      title: 'zw008/VMware-AIops',
+      description:
+        'zw008/VMware-AIops skill for Claude workflows from VoltAgent/awesome-agent-skills.',
+      kind: 'skill',
+      link: 'https://github.com/zw008/VMware-AIops/blob/main/SKILL.md',
+    },
+    {
+      id: 'skill_ab-test-setup_e64f7b2ccb',
+      title: 'ab-test-setup',
+      description:
+        'ab-test-setup skill for Claude workflows from
joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/ab-test-setup/SKILL.md', + }, + { + id: 'skill_ad-creative_436b6e5154', + title: 'ad-creative', + description: 'ad-creative skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/ad-creative/SKILL.md', + }, + { + id: 'skill_adapt_ee6a328acb', + title: 'adapt', + description: 'adapt skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/adapt/SKILL.md', + }, + { + id: 'skill_ai-seo_af97bf2481', + title: 'ai-seo', + description: 'ai-seo skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/ai-seo/SKILL.md', + }, + { + id: 'skill_analytics-tracking_5e14613b3f', + title: 'analytics-tracking', + description: + 'analytics-tracking skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/analytics-tracking/SKILL.md', + }, + { + id: 'skill_animate_fb610bc4f8', + title: 'animate', + description: 'animate skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/animate/SKILL.md', + }, + { + id: 'skill_arrange_1a59e51054', + title: 'arrange', + description: 'arrange skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/arrange/SKILL.md', + }, + { + id: 'skill_audit_161ce9f888', + title: 'audit', + description: 'audit skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 
'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/audit/SKILL.md', + }, + { + id: 'skill_backtest-expert_5b099cfb5c', + title: 'backtest-expert', + description: + 'backtest-expert skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/backtest-expert/SKILL.md', + }, + { + id: 'skill_bolder_c2a59de16b', + title: 'bolder', + description: 'bolder skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/bolder/SKILL.md', + }, + { + id: 'skill_breadth-chart-analyst_18a59424d0', + title: 'breadth-chart-analyst', + description: + 'breadth-chart-analyst skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/breadth-chart-analyst/SKILL.md', + }, + { + id: 'skill_business-growth_b5e094c50a', + title: 'business-growth', + description: + 'business-growth skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/business-growth/SKILL.md', + }, + { + id: 'skill_c-level-advisor_ac2033d72f', + title: 'c-level-advisor', + description: + 'c-level-advisor skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/c-level-advisor/SKILL.md', + }, + { + id: 'skill_canslim-screener_fc3ecc501d', + title: 'canslim-screener', + description: + 'canslim-screener skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/canslim-screener/SKILL.md', + }, + { + id: 'skill_churn-prevention_88912e537c', + title: 'churn-prevention', + description: + 
'churn-prevention skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/churn-prevention/SKILL.md', + }, + { + id: 'skill_clarify_815253a6e2', + title: 'clarify', + description: 'clarify skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/clarify/SKILL.md', + }, + { + id: 'skill_cold-email_35bf3b271e', + title: 'cold-email', + description: 'cold-email skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/cold-email/SKILL.md', + }, + { + id: 'skill_colorize_7f9d89a8ba', + title: 'colorize', + description: 'colorize skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/colorize/SKILL.md', + }, + { + id: 'skill_competitor-alternatives_76b9b1a571', + title: 'competitor-alternatives', + description: + 'competitor-alternatives skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/competitor-alternatives/SKILL.md', + }, + { + id: 'skill_content-strategy_1eb0601433', + title: 'content-strategy', + description: + 'content-strategy skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/content-strategy/SKILL.md', + }, + { + id: 'skill_copy-editing_14a6691bcb', + title: 'copy-editing', + description: 'copy-editing skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/copy-editing/SKILL.md', + }, + { + id: 
'skill_copywriting_0ed6f8266f', + title: 'copywriting', + description: 'copywriting skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/copywriting/SKILL.md', + }, + { + id: 'skill_critique_70a9e884fb', + title: 'critique', + description: 'critique skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/critique/SKILL.md', + }, + { + id: 'skill_data-quality-checker_aa649d81fb', + title: 'data-quality-checker', + description: + 'data-quality-checker skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/data-quality-checker/SKILL.md', + }, + { + id: 'skill_delight_30fbf92535', + title: 'delight', + description: 'delight skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/delight/SKILL.md', + }, + { + id: 'skill_distill_bdb5bac6fc', + title: 'distill', + description: 'distill skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/distill/SKILL.md', + }, + { + id: 'skill_dividend-growth-pullback-screener_a1c9f358e8', + title: 'dividend-growth-pullback-screener', + description: + 'dividend-growth-pullback-screener skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/dividend-growth-pullback-screener/SKILL.md', + }, + { + id: 'skill_dual-axis-skill-reviewer_b479363e27', + title: 'dual-axis-skill-reviewer', + description: + 'dual-axis-skill-reviewer skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 
'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/dual-axis-skill-reviewer/SKILL.md', + }, + { + id: 'skill_earnings-calendar_3ad62b13d4', + title: 'earnings-calendar', + description: + 'earnings-calendar skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/earnings-calendar/SKILL.md', + }, + { + id: 'skill_earnings-trade-analyzer_38411838cd', + title: 'earnings-trade-analyzer', + description: + 'earnings-trade-analyzer skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/earnings-trade-analyzer/SKILL.md', + }, + { + id: 'skill_economic-calendar-fetcher_00defe1392', + title: 'economic-calendar-fetcher', + description: + 'economic-calendar-fetcher skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/economic-calendar-fetcher/SKILL.md', + }, + { + id: 'skill_edge-candidate-agent_43bbd39dda', + title: 'edge-candidate-agent', + description: + 'edge-candidate-agent skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/edge-candidate-agent/SKILL.md', + }, + { + id: 'skill_edge-concept-synthesizer_04583a57de', + title: 'edge-concept-synthesizer', + description: + 'edge-concept-synthesizer skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/edge-concept-synthesizer/SKILL.md', + }, + { + id: 'skill_edge-hint-extractor_cab5de0863', + title: 'edge-hint-extractor', + description: + 'edge-hint-extractor skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 
'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/edge-hint-extractor/SKILL.md', + }, + { + id: 'skill_edge-pipeline-orchestrator_4a09c9f784', + title: 'edge-pipeline-orchestrator', + description: + 'edge-pipeline-orchestrator skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/edge-pipeline-orchestrator/SKILL.md', + }, + { + id: 'skill_edge-signal-aggregator_7f227a9ba5', + title: 'edge-signal-aggregator', + description: + 'edge-signal-aggregator skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/edge-signal-aggregator/SKILL.md', + }, + { + id: 'skill_edge-strategy-designer_daeecead88', + title: 'edge-strategy-designer', + description: + 'edge-strategy-designer skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/edge-strategy-designer/SKILL.md', + }, + { + id: 'skill_edge-strategy-reviewer_5d10caa3df', + title: 'edge-strategy-reviewer', + description: + 'edge-strategy-reviewer skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/edge-strategy-reviewer/SKILL.md', + }, + { + id: 'skill_email-sequence_3f72ac5f50', + title: 'email-sequence', + description: + 'email-sequence skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/email-sequence/SKILL.md', + }, + { + id: 'skill_engineering_9c5755ad84', + title: 'engineering', + description: 'engineering skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 
'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/engineering/SKILL.md', + }, + { + id: 'skill_exposure-coach_a66a011117', + title: 'exposure-coach', + description: + 'exposure-coach skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/exposure-coach/SKILL.md', + }, + { + id: 'skill_extract_3da1884388', + title: 'extract', + description: 'extract skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/extract/SKILL.md', + }, + { + id: 'skill_finance_c2ceee4567', + title: 'finance', + description: 'finance skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/finance/SKILL.md', + }, + { + id: 'skill_finviz-screener_21f6dcc4fa', + title: 'finviz-screener', + description: + 'finviz-screener skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/finviz-screener/SKILL.md', + }, + { + id: 'skill_form-cro_310fe9796f', + title: 'form-cro', + description: 'form-cro skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/form-cro/SKILL.md', + }, + { + id: 'skill_free-tool-strategy_4b2d7b328a', + title: 'free-tool-strategy', + description: + 'free-tool-strategy skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/free-tool-strategy/SKILL.md', + }, + { + id: 'skill_frontend-design_e97b67472c', + title: 'frontend-design', + description: + 'frontend-design skill for Claude workflows from 
joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/frontend-design/SKILL.md', + }, + { + id: 'skill_ftd-detector_4c9e76d1e3', + title: 'ftd-detector', + description: 'ftd-detector skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/ftd-detector/SKILL.md', + }, + { + id: 'skill_harden_35a97a398e', + title: 'harden', + description: 'harden skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/harden/SKILL.md', + }, + { + id: 'skill_institutional-flow-tracker_72bca147c4', + title: 'institutional-flow-tracker', + description: + 'institutional-flow-tracker skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/institutional-flow-tracker/SKILL.md', + }, + { + id: 'skill_kanchi-dividend-review-monitor_12bb1d3d82', + title: 'kanchi-dividend-review-monitor', + description: + 'kanchi-dividend-review-monitor skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/kanchi-dividend-review-monitor/SKILL.md', + }, + { + id: 'skill_kanchi-dividend-sop_f854455001', + title: 'kanchi-dividend-sop', + description: + 'kanchi-dividend-sop skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/kanchi-dividend-sop/SKILL.md', + }, + { + id: 'skill_kanchi-dividend-us-tax-accounting_c533e59160', + title: 'kanchi-dividend-us-tax-accounting', + description: + 'kanchi-dividend-us-tax-accounting skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + 
link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/kanchi-dividend-us-tax-accounting/SKILL.md', + }, + { + id: 'skill_launch-strategy_b765aec039', + title: 'launch-strategy', + description: + 'launch-strategy skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/launch-strategy/SKILL.md', + }, + { + id: 'skill_lead-magnets_2357332ec3', + title: 'lead-magnets', + description: 'lead-magnets skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/lead-magnets/SKILL.md', + }, + { + id: 'skill_macro-regime-detector_d577aba99c', + title: 'macro-regime-detector', + description: + 'macro-regime-detector skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/macro-regime-detector/SKILL.md', + }, + { + id: 'skill_market-breadth-analyzer_3b1559d794', + title: 'market-breadth-analyzer', + description: + 'market-breadth-analyzer skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/market-breadth-analyzer/SKILL.md', + }, + { + id: 'skill_market-environment-analysis_b69ad94401', + title: 'market-environment-analysis', + description: + 'market-environment-analysis skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/market-environment-analysis/SKILL.md', + }, + { + id: 'skill_market-news-analyst_7b93098ec2', + title: 'market-news-analyst', + description: + 'market-news-analyst skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 
'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/market-news-analyst/SKILL.md', + }, + { + id: 'skill_market-top-detector_cb59ff646f', + title: 'market-top-detector', + description: + 'market-top-detector skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/market-top-detector/SKILL.md', + }, + { + id: 'skill_marketing-ideas_21a1c20e25', + title: 'marketing-ideas', + description: + 'marketing-ideas skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/marketing-ideas/SKILL.md', + }, + { + id: 'skill_marketing-psychology_3d0904bc1f', + title: 'marketing-psychology', + description: + 'marketing-psychology skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/marketing-psychology/SKILL.md', + }, + { + id: 'skill_marketing-skill_d602604090', + title: 'marketing-skill', + description: + 'marketing-skill skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/marketing-skill/SKILL.md', + }, + { + id: 'skill_normalize_666c986e32', + title: 'normalize', + description: 'normalize skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/normalize/SKILL.md', + }, + { + id: 'skill_onboard_291c338a35', + title: 'onboard', + description: 'onboard skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/onboard/SKILL.md', + }, + { + id: 'skill_onboarding-cro_7b46a8ecfe', + title: 'onboarding-cro', + description: + 
'onboarding-cro skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/onboarding-cro/SKILL.md', + }, + { + id: 'skill_optimize_a713d5cf36', + title: 'optimize', + description: 'optimize skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/optimize/SKILL.md', + }, + { + id: 'skill_options-strategy-advisor_41525c498d', + title: 'options-strategy-advisor', + description: + 'options-strategy-advisor skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/options-strategy-advisor/SKILL.md', + }, + { + id: 'skill_orchestration_2c0056f892', + title: 'orchestration', + description: + 'orchestration skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/orchestration/SKILL.md', + }, + { + id: 'skill_overdrive_bcc7bfc980', + title: 'overdrive', + description: 'overdrive skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/overdrive/SKILL.md', + }, + { + id: 'skill_page-cro_2e24a52ef2', + title: 'page-cro', + description: 'page-cro skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/page-cro/SKILL.md', + }, + { + id: 'skill_paid-ads_17e89e35a2', + title: 'paid-ads', + description: 'paid-ads skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/paid-ads/SKILL.md', + }, + { + id: 'skill_pair-trade-screener_e6ec33799f', + title: 
'pair-trade-screener', + description: + 'pair-trade-screener skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/pair-trade-screener/SKILL.md', + }, + { + id: 'skill_paywall-upgrade-cro_1fdd87322f', + title: 'paywall-upgrade-cro', + description: + 'paywall-upgrade-cro skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/paywall-upgrade-cro/SKILL.md', + }, + { + id: 'skill_pead-screener_677ff13931', + title: 'pead-screener', + description: + 'pead-screener skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/pead-screener/SKILL.md', + }, + { + id: 'skill_polish_b59534da58', + title: 'polish', + description: 'polish skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/polish/SKILL.md', + }, + { + id: 'skill_popup-cro_e41a63b189', + title: 'popup-cro', + description: 'popup-cro skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/popup-cro/SKILL.md', + }, + { + id: 'skill_portfolio-manager_a5e8980833', + title: 'portfolio-manager', + description: + 'portfolio-manager skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/portfolio-manager/SKILL.md', + }, + { + id: 'skill_position-sizer_794294233f', + title: 'position-sizer', + description: + 'position-sizer skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 
'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/position-sizer/SKILL.md', + }, + { + id: 'skill_pricing-strategy_3938e47803', + title: 'pricing-strategy', + description: + 'pricing-strategy skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/pricing-strategy/SKILL.md', + }, + { + id: 'skill_product-marketing-context_02392341fb', + title: 'product-marketing-context', + description: + 'product-marketing-context skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/product-marketing-context/SKILL.md', + }, + { + id: 'skill_product-team_cac07e9ba3', + title: 'product-team', + description: 'product-team skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/product-team/SKILL.md', + }, + { + id: 'skill_programmatic-seo_abd90635ce', + title: 'programmatic-seo', + description: + 'programmatic-seo skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/programmatic-seo/SKILL.md', + }, + { + id: 'skill_project-management_e9a59d3348', + title: 'project-management', + description: + 'project-management skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/project-management/SKILL.md', + }, + { + id: 'skill_quieter_8d9a11a4b9', + title: 'quieter', + description: 'quieter skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/quieter/SKILL.md', + }, + { + id: 'skill_referral-program_b5c66e797c', + title: 
'referral-program', + description: + 'referral-program skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/referral-program/SKILL.md', + }, + { + id: 'skill_revops_9cdc93c969', + title: 'revops', + description: 'revops skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/revops/SKILL.md', + }, + { + id: 'skill_sales-enablement_72bda5cba3', + title: 'sales-enablement', + description: + 'sales-enablement skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/sales-enablement/SKILL.md', + }, + { + id: 'skill_scenario-analyzer_0b33e02c29', + title: 'scenario-analyzer', + description: + 'scenario-analyzer skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/scenario-analyzer/SKILL.md', + }, + { + id: 'skill_schema-markup_c31a63357f', + title: 'schema-markup', + description: + 'schema-markup skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/schema-markup/SKILL.md', + }, + { + id: 'skill_sector-analyst_bf9303d371', + title: 'sector-analyst', + description: + 'sector-analyst skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/sector-analyst/SKILL.md', + }, + { + id: 'skill_seo-audit_1d9706c4e9', + title: 'seo-audit', + description: 'seo-audit skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/seo-audit/SKILL.md', 
+ }, + { + id: 'skill_signal-postmortem_6f86d7cec9', + title: 'signal-postmortem', + description: + 'signal-postmortem skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/signal-postmortem/SKILL.md', + }, + { + id: 'skill_signup-flow-cro_448f19c609', + title: 'signup-flow-cro', + description: + 'signup-flow-cro skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/signup-flow-cro/SKILL.md', + }, + { + id: 'skill_site-architecture_e51f55a757', + title: 'site-architecture', + description: + 'site-architecture skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/site-architecture/SKILL.md', + }, + { + id: 'skill_skill-designer_cdc5c2a709', + title: 'skill-designer', + description: + 'skill-designer skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/skill-designer/SKILL.md', + }, + ], + // Page 12 + [ + { + id: 'skill_skill-idea-miner_76a527cad4', + title: 'skill-idea-miner', + description: + 'skill-idea-miner skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/skill-idea-miner/SKILL.md', + }, + { + id: 'skill_skill-integration-tester_f7049ad9e0', + title: 'skill-integration-tester', + description: + 'skill-integration-tester skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/skill-integration-tester/SKILL.md', + }, + { + id: 'skill_social-content_e66aea3e5e', + title: 'social-content', + description: + 
'social-content skill for Claude workflows from joeking-ly/claude-skills-arsenal.', + kind: 'skill', + link: 'https://github.com/joeking-ly/claude-skills-arsenal/blob/main/skills/social-content/SKILL.md', + }, + { + id: 'skill_gdpr-dsgvo-expert_608edc217d', + title: 'gdpr-dsgvo-expert', + description: + 'gdpr-dsgvo-expert skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/ra-qm-team/gdpr-dsgvo-expert/SKILL.md', + }, + { + id: 'skill_soc2-compliance_e2145adcd2', + title: 'soc2-compliance', + description: 'soc2-compliance skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/ra-qm-team/soc2-compliance/SKILL.md', + }, + { + id: 'skill_information-security-manager-iso27001_e758e19331', + title: 'information-security-manager-iso27001', + description: + 'information-security-manager-iso27001 skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/ra-qm-team/information-security-manager-iso27001/SKILL.md', + }, + { + id: 'skill_isms-audit-expert_02d0bcaa96', + title: 'isms-audit-expert', + description: + 'isms-audit-expert skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/ra-qm-team/isms-audit-expert/SKILL.md', + }, + { + id: 'skill_risk-management-specialist_5575ade4e4', + title: 'risk-management-specialist', + description: + 'risk-management-specialist skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/ra-qm-team/risk-management-specialist/SKILL.md', + }, + { + id: 'skill_quality-manager-qms-iso13485_91badd271f', + title: 'quality-manager-qms-iso13485', + description: + 'quality-manager-qms-iso13485 
skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/ra-qm-team/quality-manager-qms-iso13485/SKILL.md', + }, + { + id: 'skill_quality-manager-qmr_34a0e08d76', + title: 'quality-manager-qmr', + description: + 'quality-manager-qmr skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/ra-qm-team/quality-manager-qmr/SKILL.md', + }, + { + id: 'skill_quality-documentation-manager_ed23a66e93', + title: 'quality-documentation-manager', + description: + 'quality-documentation-manager skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/ra-qm-team/quality-documentation-manager/SKILL.md', + }, + { + id: 'skill_qms-audit-expert_2823e7d81a', + title: 'qms-audit-expert', + description: 'qms-audit-expert skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/ra-qm-team/qms-audit-expert/SKILL.md', + }, + { + id: 'skill_capa-officer_b375913319', + title: 'capa-officer', + description: 'capa-officer skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/ra-qm-team/capa-officer/SKILL.md', + }, + { + id: 'skill_fda-consultant-specialist_35167fd7d7', + title: 'fda-consultant-specialist', + description: + 'fda-consultant-specialist skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + link: 'https://github.com/alirezarezvani/claude-skills/blob/main/ra-qm-team/fda-consultant-specialist/SKILL.md', + }, + { + id: 'skill_mdr-745-specialist_5e2646c41f', + title: 'mdr-745-specialist', + description: + 'mdr-745-specialist skill for Claude workflows from alirezarezvani/claude-skills.', + kind: 'skill', + 
link: 'https://github.com/alirezarezvani/claude-skills/blob/main/ra-qm-team/mdr-745-specialist/SKILL.md', + }, + ], +]; + +export const skills: CatalogItem[] = skillsCatalogPages.flat(); +export const skillsTotalItems = skills.length; +export const skillsTotalPages = skillsCatalogPages.length; diff --git a/website/src/content/data/types.ts b/website/src/content/data/types.ts new file mode 100644 index 00000000..eeb3f7af --- /dev/null +++ b/website/src/content/data/types.ts @@ -0,0 +1,8 @@ +export type CatalogKind = 'skill' | 'rule' | 'agent' | 'command'; +export interface CatalogItem { + id: string; + title: string; + description: string; + kind: CatalogKind; + link: string; +} diff --git a/website/src/content/docs/index.mdx b/website/src/content/docs/index.mdx index 88c3d1b7..7789772b 100644 --- a/website/src/content/docs/index.mdx +++ b/website/src/content/docs/index.mdx @@ -1,10 +1,10 @@ --- draft: false -title: AgentsMesh — One Config. Nine AI Tools. -description: AgentsMesh maintains a single canonical configuration in .agentsmesh/ and syncs it bidirectionally to Claude Code, Cursor, Copilot, Continue, Junie, Gemini CLI, Cline, Codex CLI, and Windsurf. +title: AgentsMesh — One Config. Every Target. +description: AgentsMesh keeps a single canonical configuration under .agentsmesh/ and syncs it bidirectionally to the AI coding tools you enable. See the supported tools matrix for native vs embedded feature coverage per target. template: splash hero: - tagline: One canonical source of truth for AI coding tool configuration — synced bidirectionally to Claude Code, Cursor, Copilot, Gemini CLI, Cline, Codex CLI, Windsurf, Continue, and Junie. + tagline: One canonical source of truth under .agentsmesh/ — import native tool configs, generate target outputs, and keep teams aligned. How each feature maps to each tool is documented in the supported tools matrix. 
image: dark: ../../assets/logo-dark.svg light: ../../assets/logo-light.svg @@ -13,36 +13,34 @@ hero: link: /agentsmesh/getting-started/installation/ icon: right-arrow variant: primary + - text: Supported tools matrix + link: /agentsmesh/reference/supported-tools/ + icon: document + variant: secondary - text: CLI Reference link: /agentsmesh/cli/ icon: open-book variant: secondary - - text: View on GitHub - link: https://github.com/sampleXbro/agentsmesh - icon: github - variant: minimal --- import { Card, CardGrid, Badge, Aside } from '@astrojs/starlight/components'; +import CatalogExplorer from '../../components/catalog-explorer/CatalogExplorer.astro'; + +
Requires Node.js 20+  ·  MIT License  ·  Zero runtime dependencies
-
- npm version - npm downloads - CI - License: MIT -
+
npm version · npm downloads · CI · License: MIT
## The Problem -Every AI coding tool has its own config format. `.claude/rules/`, `.cursor/rules/`, `.github/copilot-instructions.md`, `.gemini/`, `.cline/`... Change a rule in one place and forget the others. Now your agents behave differently depending on who's editing. +Every AI coding assistant has its own config layout and filenames. Change a rule in one place and forget the others, and behavior drifts between editors and workflows. -Scale that across a team where some use Cursor, some use Claude Code, and some switch between them — and config drift becomes a real problem. +Scale that across a team where people use different tools — and keeping instructions, skills, and automation aligned becomes a maintenance problem. ## The Solution @@ -60,11 +58,8 @@ Scale that across a team where some use Cursor, some use Claude Code, and some s ``` ```bash -# Generate correct output for every tool +# Generate outputs for every target in your config agentsmesh generate - -# Claude Code, Cursor, Copilot, Gemini CLI, Cline, -# Codex CLI, Windsurf, Continue, and Junie — all in sync. ``` ## Why AgentsMesh @@ -90,23 +85,11 @@ agentsmesh generate -## Supported Tools - -| Tool | Rules | Commands | Agents | Skills | MCP | Hooks | Ignore | Permissions | -|------|:-----:|:--------:|:------:|:------:|:---:|:-----:|:------:|:-----------:| -| Claude Code | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | -| Cursor | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 〜 | -| Copilot | ✅ | ✅ | ✅ | ✅ | — | 〜 | — | — | -| Gemini CLI | ✅ | ✅ | ✅ | ✅ | ✅ | 〜 | ✅ | 〜 | -| Cline | ✅ | ✅ | 〜 | ✅ | ✅ | — | ✅ | — | -| Codex CLI | ✅ | 〜 | ✅ | ✅ | ✅ | — | — | — | -| Windsurf | ✅ | ✅ | 〜 | ✅ | 〜 | ✅ | ✅ | — | -| Continue | ✅ | 〜 | — | 〜 | ✅ | — | — | — | -| Junie | ✅ | 〜 | 〜 | 〜 | ✅ | — | ✅ | — | +## Tool compatibility -_✅ Native   〜 Embedded/Partial   — Not supported_ +Targets differ in what they support natively versus what AgentsMesh embeds or projects for round-trip import. 
The **[supported tools matrix](/agentsmesh/reference/supported-tools/)** is the source of truth for rules, commands, agents, skills, MCP, hooks, ignore patterns, and permissions — per tool, with the full legend. -See the [full feature matrix](/agentsmesh/reference/supported-tools/) for details. +For a generated view from your local install, run [`agentsmesh matrix`](/agentsmesh/cli/matrix/) and compare with the docs matrix when upgrading. ## Quick Start @@ -140,7 +123,7 @@ npx agentsmesh generate "@context": "https://schema.org", "@type": "SoftwareApplication", "name": "AgentsMesh", - "description": "One canonical source of truth for AI coding tool configuration — synced bidirectionally to Claude Code, Cursor, Copilot, Continue, Junie, Gemini CLI, Cline, Codex CLI, and Windsurf.", + "description": "Canonical AI coding assistant configuration under .agentsmesh/ with bidirectional import and generate across enabled targets; see the supported tools matrix for per-target feature coverage.", "applicationCategory": "DeveloperApplication", "operatingSystem": "Node.js 20+", "offers": { diff --git a/website/src/lib/catalog-rows.ts b/website/src/lib/catalog-rows.ts new file mode 100644 index 00000000..927edce0 --- /dev/null +++ b/website/src/lib/catalog-rows.ts @@ -0,0 +1,20 @@ +/** Compact JSON row for homepage catalog (id, title, description, kind, link). 
*/ +export type CatalogRow = { i: string; t: string; d: string; k: string; l: string }; + +type RowSource = { + id: string; + title: string; + description: string; + kind: string; + link: string; +}; + +export function toCatalogRows(items: readonly RowSource[]): CatalogRow[] { + return items.map((x) => ({ + i: x.id, + t: x.title, + d: x.description, + k: x.kind, + l: x.link, + })); +} diff --git a/website/src/styles/catalog-explorer-table.css b/website/src/styles/catalog-explorer-table.css new file mode 100644 index 00000000..cf1fabee --- /dev/null +++ b/website/src/styles/catalog-explorer-table.css @@ -0,0 +1,181 @@ +/* Virtual browse: one table, sticky thead, fixed col % (~3 body rows visible) */ +.am-catalog-table-wrap { + width: 100%; + display: flex; + flex-direction: column; + border: 1px solid var(--sl-color-gray-5); + border-radius: 0.5rem; + background: var(--sl-color-bg-nav); + text-align: left; + overflow: hidden; +} + +.am-catalog-table { + width: 100%; + border-collapse: collapse; + font-size: 0.8rem; + table-layout: fixed; +} + +.am-catalog-table-scroll { + flex-shrink: 0; + overflow-x: auto; + overflow-y: auto; + border-top: 1px solid var(--sl-color-gray-5); + -webkit-overflow-scrolling: touch; + scrollbar-gutter: stable; +} + +.am-catalog-table-scroll--virtual { + border-top: none; +} + +.am-catalog-thead-sticky th { + position: sticky; + top: 0; + z-index: 2; + background: var(--sl-color-gray-6); + box-shadow: 0 1px 0 var(--sl-color-gray-5); +} + +.am-catalog-table thead { + background: var(--sl-color-gray-6); +} + +.am-catalog-table th { + padding: 0.5rem 1.1rem; + font-weight: 600; + text-align: left; + border-bottom: 1px solid var(--sl-color-gray-5); +} + +.am-catalog-table td { + padding: 0 1.1rem; + vertical-align: middle; + border-bottom: 1px solid var(--sl-color-gray-6); +} + +/* + * Catalog lives inside `.sl-markdown-content`; Starlight/prose table rules often zero + * `padding-inline-start` on the first column — restore symmetric 
horizontal padding. + */ +.am-catalog-explorer .am-catalog-table th, +.am-catalog-explorer .am-catalog-table td { + padding-inline: 1.1rem; +} + +.am-catalog-explorer .am-catalog-table thead th { + padding-block: 0.5rem; +} + +.am-catalog-explorer .am-catalog-table tbody td { + padding-block: 0; +} + +.am-catalog-explorer .am-catalog-table td.am-catalog-vspacer-cell { + padding: 0 !important; +} + +.am-catalog-explorer .am-catalog-table td.am-catalog-table-empty { + padding: 1rem 1.1rem; +} + +.am-col-title { + width: 26%; +} + +.am-col-desc { + width: 54%; +} + +.am-col-link { + width: 20%; +} + +.am-catalog-td-title { + font-weight: 600; + overflow: hidden; + text-overflow: ellipsis; + white-space: nowrap; +} + +.am-catalog-td-desc { + color: var(--sl-color-gray-2); + line-height: 1.35; + overflow: hidden; + display: -webkit-box; + -webkit-box-orient: vertical; + -webkit-line-clamp: 2; + line-clamp: 2; +} + +.am-catalog-td-link { + white-space: nowrap; +} + +.am-catalog-src-link { + color: var(--sl-color-accent-high); + text-decoration: underline; + text-underline-offset: 2px; +} + +.am-catalog-tr { + cursor: pointer; + height: 52px; + min-height: 52px; + max-height: 52px; + box-sizing: border-box; +} + +.am-catalog-tr:hover, +.am-catalog-tr:focus { + outline: none; + background: var(--sl-color-accent-low); +} + +.am-catalog-tr--selected { + background: var(--sl-color-accent-low); + box-shadow: inset 3px 0 0 0 var(--sl-color-accent); +} + +.am-catalog-vspacer-cell { + padding: 0 !important; + border: none !important; + line-height: 0; + font-size: 0; + vertical-align: top; +} + +.am-catalog-table-empty { + padding: 1rem 1.1rem; + text-align: center; + color: var(--sl-color-gray-3); + font-style: italic; +} + +.am-catalog-live { + min-height: 1.25rem; + margin: 0.5rem 0 0; + font-size: 0.8rem; + color: var(--sl-color-gray-3); + text-align: center; +} + +/* Keep virtual body as a real table on small screens (card layout breaks spacer rows) */ +@media (max-width: 
768px) { + .am-catalog-table-scroll .am-catalog-table--body tr { + display: table-row; + } + + .am-catalog-table-scroll .am-catalog-table--body td { + display: table-cell; + } + + .am-catalog-table-scroll .am-catalog-table--body td::before { + content: none !important; + } + + .am-catalog-table-scroll { + overflow-x: auto; + } +} diff --git a/website/src/styles/catalog-explorer.css b/website/src/styles/catalog-explorer.css new file mode 100644 index 00000000..5aae630c --- /dev/null +++ b/website/src/styles/catalog-explorer.css @@ -0,0 +1,164 @@ +/* Homepage catalog — shell, copy grid, tabs, search */ +.am-catalog-explorer { + position: relative; + z-index: 5; + width: 100%; + max-width: 56rem; + margin: -1.25rem auto 2.5rem; + padding: 0 1rem; + box-sizing: border-box; +} + +.am-catalog-inner { + width: 100%; + max-width: 52rem; + margin: 0 auto; + display: flex; + flex-direction: column; + align-items: stretch; +} + +.am-catalog-copy-block { + margin-bottom: 1rem; + width: 100%; +} + +.am-catalog-copy-label { + display: block; + font-size: 0.8rem; + font-weight: 600; + color: var(--sl-color-gray-3); + margin-bottom: 0.35rem; + text-align: center; +} + +.am-catalog-copy-row { + display: grid; + grid-template-columns: 1fr auto; + align-items: center; + gap: 0.5rem; + width: 100%; +} + +.am-catalog-copy-input { + min-width: 0; + width: 100%; + height: 2.5rem; + font-family: var(--__sl-font-mono, ui-monospace, monospace); + font-size: 0.8rem; + line-height: 1.35; + padding: 0 0.65rem; + border: 1px solid var(--sl-color-gray-5); + border-radius: 0.5rem; + background: var(--sl-color-bg); + color: var(--sl-color-text); + box-sizing: border-box; + text-align: left; + overflow: hidden; + text-overflow: ellipsis; + white-space: nowrap; +} + +.am-catalog-copy-btn { + flex-shrink: 0; + font-size: 0.85rem; + font-weight: 600; + min-height: 2.5rem; + padding: 0 1rem; + border-radius: 0.5rem; + border: 1px solid var(--sl-color-accent); + background: var(--sl-color-accent); + 
color: #fff; + cursor: pointer; + white-space: nowrap; + display: inline-flex; + align-items: center; + justify-content: center; + box-sizing: border-box; +} + +.am-catalog-copy-btn--copied { + background: #16a34a; + border-color: #15803d; +} + +.am-catalog-copy-btn:focus-visible { + outline: 2px solid var(--sl-color-accent-high); + outline-offset: 2px; +} + +.am-catalog-copy-btn:hover { + filter: brightness(1.05); +} + +.am-catalog-tabs { + display: flex; + gap: 0.35rem; + align-items: stretch; + justify-content: center; + margin-bottom: 0.75rem; + flex-wrap: wrap; + width: 100%; +} + +.am-catalog-tabs [role='tab'] { + font-size: 0.85rem; + font-weight: 600; + height: 2.5rem; + min-height: 2.5rem; + max-height: 2.5rem; + padding: 0 1rem; + margin: 0; + border-radius: 999px; + border-width: 1px; + border-style: solid; + border-color: var(--sl-color-gray-5); + background: transparent; + color: var(--sl-color-text); + cursor: pointer; + display: inline-flex; + align-items: center; + justify-content: center; + box-sizing: border-box; + line-height: 1.2; + white-space: nowrap; + font-family: inherit; + -webkit-appearance: none; + appearance: none; +} + +.am-catalog-tabs [role='tab'][aria-selected='true'] { + border-color: var(--sl-color-accent); + border-width: 1px; + background: var(--sl-color-accent-low); + color: var(--sl-color-accent-high); +} + +.am-catalog-search-wrap { + position: relative; + width: 100%; + margin-bottom: 0.5rem; +} + +.am-catalog-search-label { + position: absolute; + width: 1px; + height: 1px; + padding: 0; + margin: -1px; + overflow: hidden; + clip: rect(0, 0, 0, 0); + white-space: nowrap; + border: 0; +} + +.am-catalog-search { + width: 100%; + box-sizing: border-box; + font-size: 0.9rem; + padding: 0.5rem 0.65rem; + border: 1px solid var(--sl-color-gray-5); + border-radius: 0.5rem; + background: var(--sl-color-bg); + color: var(--sl-color-text); +} diff --git a/website/src/styles/custom.css b/website/src/styles/custom.css index 
5e2ff5ed..3ea6cba2 100644 --- a/website/src/styles/custom.css +++ b/website/src/styles/custom.css @@ -12,9 +12,9 @@ --sl-color-accent-high: #a5b4fc; } -/* Hero section enhancements */ +/* Hero section enhancements — tighter so homepage catalog fits above the fold */ .hero { - padding-block: 4rem; + padding-block: 2.5rem; } .hero > img { @@ -44,6 +44,28 @@ letter-spacing: 0.02em; } +/* + * Homepage hero badges: MDX inserts whitespace text nodes between tags; as flex items they + * pick up line-height and throw off alignment (first badge looks shifted / “margin bottom”). + */ +.am-hero-badges { + display: flex; + flex-wrap: wrap; + gap: 0.5rem; + justify-content: center; + align-items: center; + margin-top: auto; + font-size: 0; + line-height: 0; +} + +.am-hero-badges img { + display: block; + height: 20px; + width: auto; + margin-top: auto; +} + /* Aside / callout */ .starlight-aside { border-radius: 0.5rem;