feat: add Coderrr Doctor diagnostic tool #128
Conversation
Pull request overview
This PR introduces "Coderrr Doctor", a diagnostic tool that helps users verify their development environment is properly configured for AI coding. The feature adds system checks for Python, Node.js, .env files, and backend connectivity.
Changes:
- Added core diagnostic functionality to check Python, Node.js, .env file presence, and backend connectivity
- Created a CLI command (`coderrr doctor`) to run diagnostics with colored output
- Added supporting utilities and basic test coverage for the Node.js version check
Reviewed changes
Copilot reviewed 5 out of 6 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| src/doctor.js | Implements core diagnostic checks using a singleton class pattern |
| src/doctorUI.js | Provides CLI interface for running diagnostics with formatted output |
| src/utils/doctorHelpers.js | Adds report generation utility (currently unused) |
| test/doctor.test.js | Adds basic test for Node.js version detection |
| bin/coderrr.js | Integrates the doctor command into the CLI |
| docs/doctor.md | Provides basic documentation for the doctor feature |
```js
try {
  const version = execSync('python --version').toString().trim();
  return { status: true, version };
} catch {
  return { status: false, version: 'Not found' };
}
```
The checkPython function only tries the 'python' command, which may not work on all systems. On some Linux/macOS systems, Python 3 is accessed via 'python3' instead of 'python'. Consider trying both commands (python and python3) to ensure broader compatibility, similar to how other cross-platform tools handle this.
Suggested change:

```js
const commands = ['python', 'python3'];
for (const cmd of commands) {
  try {
    const version = execSync(`${cmd} --version`).toString().trim();
    return { status: true, version };
  } catch {
    // Try the next candidate command
  }
}
return { status: false, version: 'Not found' };
```
```js
const generateReport = (results) => {
  return `Coderrr Diagnostic Report - ${new Date().toISOString()}\n` +
    `Status: ${results.allPassed ? 'HEALTHY' : 'ISSUES DETECTED'}\n`;
```
The generateReport function is incomplete and doesn't include the actual diagnostic results details (like Python status, Node status, backend status, etc.). Currently, it only shows a timestamp and overall health status. Consider including the individual check results to make the report actually useful for debugging or sharing with others.
Suggested change:

```js
const generateReport = (results = {}) => {
  const lines = [];
  // Header and overall status (preserve existing behavior)
  lines.push(`Coderrr Diagnostic Report - ${new Date().toISOString()}`);
  lines.push(`Status: ${results.allPassed ? 'HEALTHY' : 'ISSUES DETECTED'}`);
  // Detailed per-check results
  lines.push('');
  lines.push('Checks:');
  // If we only have allPassed or no entries, avoid printing an empty section
  const meaningfulEntries = Object.entries(results).filter(
    ([key]) => key !== 'allPassed'
  );
  if (meaningfulEntries.length === 0) {
    lines.push('- (no detailed checks available)');
  } else {
    for (const [key, value] of meaningfulEntries) {
      // Handle structured check objects
      if (value && typeof value === 'object') {
        const statusValue = value.ok ?? value.passed ?? value.healthy ?? value.status;
        const messageValue = value.message ?? value.error ?? value.details;
        let line = `- ${key}:`;
        if (typeof statusValue !== 'undefined') {
          const statusLabel =
            typeof statusValue === 'boolean'
              ? (statusValue ? 'OK' : 'FAIL')
              : String(statusValue);
          line += ` ${statusLabel}`;
        }
        if (messageValue) {
          line += ` - ${String(messageValue)}`;
        }
        // If neither status nor message was found, fall back to JSON
        if (line === `- ${key}:`) {
          line += ` ${JSON.stringify(value)}`;
        }
        lines.push(line);
      } else {
        // Simple scalar values
        lines.push(`- ${key}: ${String(value)}`);
      }
    }
  }
  return lines.join('\n');
};
```
```js
console.log(`${hasEnv ? chalk.green('✔') : chalk.red('✘')} Local .env file detected`);

console.log(chalk.yellow('\nChecking Backend Connectivity...'));
const backendStatus = await doctor.checkBackend(backendUrl || 'https://coderrr-backend.vercel.app');
```
According to the architecture documentation, the default backend URL is 'http://localhost:8000' for local development (as seen in .env.example) but the code is checking against 'https://coderrr-backend.vercel.app' which is the hosted production backend. The default should align with the environment context. Consider using the same fallback pattern as Agent class: process.env.CODERRR_BACKEND || 'https://coderrr-backend.vercel.app' to match the existing convention in src/agent.js:33-34.
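A minimal sketch of that fallback pattern (`resolveBackendUrl` is a hypothetical helper name for illustration; the `CODERRR_BACKEND` variable and the hosted URL are taken from the review comment):

```javascript
// Sketch only: mirrors the fallback convention the reviewer cites from
// src/agent.js. `resolveBackendUrl` is a hypothetical helper name.
const DEFAULT_BACKEND = 'https://coderrr-backend.vercel.app';

function resolveBackendUrl(explicitUrl) {
  // Prefer an explicit argument, then the CODERRR_BACKEND env var, then the hosted default.
  return explicitUrl || process.env.CODERRR_BACKEND || DEFAULT_BACKEND;
}
```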
```js
const doctor = require('../src/doctor');

describe('Doctor Module', () => {
  test('should detect node version', () => {
    const node = doctor.checkNode();
    expect(node.status).toBe(true);
    expect(node.version).toContain('v');
  });
});
```
Test coverage is missing for the Python version check, .env file check, and backend connectivity check. The test file only covers the Node.js check. Given that the codebase has comprehensive Jest-based testing (as seen in test/insights.test.js and test/recipes.test.js), these other checks should also have test coverage to ensure they work correctly across different environments.
```js
 * Helps format diagnostic logs for potential export
 */
const generateReport = (results) => {
  return `Coderrr Diagnostic Report - ${new Date().toISOString()}\n` +
    `Status: ${results.allPassed ? 'HEALTHY' : 'ISSUES DETECTED'}\n`;
};

module.exports = { generateReport };
```
The generateReport function in doctorHelpers.js is not being used anywhere in the codebase. The runDiagnostics function in doctorUI.js directly outputs to the console without calling generateReport. Either integrate this function into the diagnostic flow or remove it if it's not needed yet.
Suggested change:

```js
/**
 * Helper exports for doctor utilities.
 *
 * Note: This module currently does not expose any helpers.
 * If diagnostic reporting helpers are reintroduced in the future,
 * they should be added here and wired into the diagnostic flow.
 */
module.exports = {};
```
````markdown
# Coderrr Doctor

A built-in diagnostic tool to ensure your environment is ready for AI coding.

## Usage
Run the following to check your setup:
```bash
coderrr doctor
```
````
The documentation is incomplete. It only shows the basic usage command but doesn't explain what the doctor command does, what checks it performs, what output to expect, or how to interpret the results. Compare this to other documentation in the project which typically includes more comprehensive information about features and their usage.
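For comparison, a fuller `docs/doctor.md` could cover the checks this PR performs and how to read the output. A sketch based only on what's in this diff (exact wording is up to the author):

````markdown
# Coderrr Doctor

A built-in diagnostic tool to ensure your environment is ready for AI coding.

## Usage

Run the following to check your setup:

```bash
coderrr doctor
```

## What it checks

- **Python** – whether a Python interpreter is on your PATH, and its version
- **Node.js** – the installed Node.js version
- **.env file** – whether a local `.env` file is present
- **Backend connectivity** – whether the Coderrr backend is reachable

## Interpreting the output

Each check is printed with a ✔ (pass) or ✘ (fail) marker. If the backend is
unreachable, check your internet connection or your custom `CODERRR_BACKEND`
environment variable.
````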
```js
try {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), 3000);
  const response = await fetch(url, { signal: controller.signal });
  clearTimeout(timeoutId);
  return response.ok;
} catch {
  return false;
```
The backend connectivity check may fail silently without providing useful diagnostic information. When the check fails, users only see "Backend: Unreachable" but don't know if it's due to timeout, network error, wrong URL, or server error. Consider capturing and displaying the error type (similar to how errorHandler.js categorizes network errors) to help users diagnose the issue.
Suggested change:

```js
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 3000);
try {
  const response = await fetch(url, { signal: controller.signal });
  return response.ok;
} catch (error) {
  let reason = 'Unknown error';
  if (error && error.name === 'AbortError') {
    reason = 'Timeout while connecting to backend';
  } else if (error && typeof error.code === 'string') {
    if (error.code === 'ECONNREFUSED') {
      reason = 'Connection refused by backend (is it running and is CODERRR_BACKEND correct?)';
    } else if (error.code === 'ENOTFOUND' || error.code === 'EAI_AGAIN') {
      reason = 'Backend host could not be resolved (check the CODERRR_BACKEND URL)';
    } else {
      reason = `Network error (${error.code}) while connecting to backend`;
    }
  } else if (error && typeof error.message === 'string' && error.message) {
    reason = error.message;
  }
  console.error(`[Coderrr Doctor] Backend connectivity check failed for ${url}: ${reason}`);
  return false;
} finally {
  // Clear the timer on both success and failure paths.
  clearTimeout(timeoutId);
}
```
```js
const chalk = require('chalk');
const doctor = require('./doctor');

async function runDiagnostics(backendUrl) {
  console.log('\n' + chalk.blue.bold('🩺 CODERRR DOCTOR - SYSTEM DIAGNOSTICS'));
  console.log(chalk.gray('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━'));

  const python = doctor.checkPython();
  const node = doctor.checkNode();
  const hasEnv = doctor.checkEnv();

  console.log(`${python.status ? chalk.green('✔') : chalk.red('✘')} Python: ${python.version}`);
  console.log(`${node.status ? chalk.green('✔') : chalk.red('✘')} Node.js: ${node.version}`);
  console.log(`${hasEnv ? chalk.green('✔') : chalk.red('✘')} Local .env file detected`);

  console.log(chalk.yellow('\nChecking Backend Connectivity...'));
  const backendStatus = await doctor.checkBackend(backendUrl || 'https://coderrr-backend.vercel.app');
  console.log(`${backendStatus ? chalk.green('✔') : chalk.red('✘')} Backend: ${backendStatus ? 'Connected' : 'Unreachable'}`);

  if (!backendStatus) {
    console.log(chalk.red('\n[!] Advice: Check your internet or custom CODERRR_BACKEND variable.'));
  }
  console.log(chalk.gray('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n'));
}

module.exports = { runDiagnostics };
```
The UI module doesn't use the consistent UI patterns established in the codebase. Other UI modules like insightsUI.js and recipeUI.js use the ui module from src/ui.js for consistent symbols, colors, and formatting. The doctor UI directly uses chalk without importing the ui module, leading to inconsistent styling (e.g., using '✔'/'✘' instead of the defined SYMBOLS.success/error). Consider importing and using the ui module for consistency.
```js
try {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), 3000);
  const response = await fetch(url, { signal: controller.signal });
```
The codebase consistently uses axios for HTTP requests (as seen in src/agent.js). Using the native fetch API here introduces inconsistency and creates a dependency on Node.js 18+ without explicitly handling potential polyfill requirements. Consider using axios instead for consistency with the rest of the codebase and better error handling capabilities.
This pull request introduces a new diagnostic tool called "Coderrr Doctor" to help users verify that their development environment is properly set up for AI coding. The main changes include implementing the core diagnostic logic, a user-friendly CLI interface, supporting utilities for report generation, and an initial test for the Node.js version check.
Introduction of Coderrr Doctor diagnostic tool:
- `docs/doctor.md`, describing its purpose and usage instructions.
- `src/doctor.js`, including checks for the presence of a `.env` file, Python installation, Node.js version, and backend connectivity.
- `src/doctorUI.js`, which runs diagnostics, displays results with colored output, and provides user advice if issues are detected.

Supporting utilities and tests:
- `src/utils/doctorHelpers.js` for formatting diagnostic logs into a report format.
- `test/doctor.test.js` to verify that the Node.js version check works as expected.