
HireSense - AI Interview Coach


HireSense is an AI-powered interview coaching platform that provides real-time feedback, content moderation, personalized suggestions, progress tracking, and audio support to help job seekers improve their interview performance.

Features

🎤 Audio & Voice Support

  • Whisper Integration: Record answers using your microphone
  • Real-time Transcription: Convert speech to text using OpenAI Whisper
  • Audio Analysis: Get feedback on both text and voice responses
  • Professional Audio Processing: Noise suppression and echo cancellation
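The noise suppression and echo cancellation mentioned above map directly onto standard browser media-capture constraints. The sketch below uses the standard `getUserMedia` constraint names; the exact constraints used by HireSense's own recording code may differ:

```typescript
// Standard MediaTrackConstraints for cleaner microphone capture (browser API).
// These constraint names come from the Media Capture spec, not this repo's source.
const audioConstraints = {
  audio: {
    noiseSuppression: true, // filter background hiss and hum
    echoCancellation: true, // prevent speaker-to-mic feedback
    autoGainControl: true,  // normalize recording volume
  },
} as const;

// In the browser:
//   const stream = await navigator.mediaDevices.getUserMedia(audioConstraints);
// then feed the resulting MediaStream into a MediaRecorder.
```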

🤖 AI-Powered Analysis

  • Multi-provider support: Groq (primary), OpenAI, Anthropic, Google AI
  • Real-time feedback with detailed scoring (1-10 scale)
  • Smart fallback system with automatic provider switching
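The fallback system can be pictured as an ordered loop over providers that returns the first success. This is an illustrative sketch only; the `Provider` interface and `analyzeWithFallback` function are hypothetical names, not the actual API in `ai-providers-simple.ts`:

```typescript
// Hypothetical sketch of a multi-provider fallback loop.
interface Provider {
  name: string;
  analyze(prompt: string): Promise<string>;
}

// Try each provider in priority order; return the first success.
async function analyzeWithFallback(
  providers: Provider[],
  prompt: string,
): Promise<{ provider: string; result: string }> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      return { provider: p.name, result: await p.analyze(prompt) };
    } catch (err) {
      // Record the failure and fall through to the next provider.
      errors.push(`${p.name}: ${(err as Error).message}`);
    }
  }
  throw new Error(`All providers failed: ${errors.join("; ")}`);
}
```

With providers ordered Groq → OpenAI → Anthropic, a Groq quota error falls through to OpenAI without surfacing an error to the user.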

📊 Progress Tracking & Analytics

  • Session History: Automatic saving of all interview sessions
  • Performance Metrics: Track scores across different question categories
  • Improvement Analytics: Identify strengths and areas for improvement
  • Goal Setting: Weekly and monthly session targets
  • Progress Visualization: Charts and trends for performance monitoring

🛡️ Content Moderation

  • Advanced filtering for inappropriate or unprofessional responses
  • Professional standards enforcement for interview scenarios
  • Safety mechanisms to prevent harmful content

📝 Comprehensive Feedback

  • Detailed scoring with strengths, weaknesses, and actionable suggestions
  • Category-specific feedback for behavioral, technical, and situational questions
  • Industry-standard interview evaluation criteria

💾 Database Integration

  • SQLite Database: Local data storage for development
  • Session Management: Automatic saving of questions, answers, and feedback
  • User Progress: Persistent tracking across sessions
  • Analytics Dashboard: Comprehensive performance insights

⚡ Performance & Reliability

  • Sub-second response times with Groq AI
  • High availability with robust error handling
  • Scalable architecture built on Next.js 15
  • Real-time audio processing and transcription

Technology Stack

  • Frontend: Next.js 15, React 19, TypeScript, Tailwind CSS
  • AI/ML: Groq, OpenAI (GPT-4 + Whisper), Anthropic Claude, Google Gemini
  • Database: Prisma ORM with SQLite (development) / PostgreSQL (production)
  • Audio Processing: Web Audio API, MediaRecorder, OpenAI Whisper
  • Deployment: Vercel, Netlify, or any Node.js hosting platform

Quick Start

Prerequisites

  • Node.js 18+ and npm
  • Groq API key (free tier: 14,400 requests/day)
  • OpenAI API key (optional, for Whisper audio transcription)

Installation

  1. Clone and install dependencies:
git clone https://github.com/gupta-nu/HireSense.git
cd HireSense
npm install
  2. Database setup:
npx prisma generate
npx prisma db push
  3. Environment setup:
cp .env.example .env.local

Add your API keys to .env.local:

# Database
DATABASE_URL="file:./dev.db"

# Primary Provider (Required)
GROQ_API_KEY=gsk_your_groq_api_key_here

# OpenAI for Whisper (Audio Transcription)
OPENAI_API_KEY=sk-your_openai_key_here

# Optional Fallback Providers
ANTHROPIC_API_KEY=your_anthropic_key_here
GOOGLE_API_KEY=your_google_ai_key_here

# Demo Mode (set to 'true' to use without API keys)
NEXT_PUBLIC_DEMO_MODE=false
  4. Start development server:
npm run dev

Visit http://localhost:3000 to start practicing interviews.

Configuration

AI Providers

HireSense supports multiple AI providers with automatic fallback:

Provider     Speed     Free Tier     Best For
Groq         Fastest   14,400/day    Primary choice
OpenAI       Fast      Limited       High quality
Anthropic    Good      Limited       Detailed analysis
Google AI    Good      Generous      Backup option

Getting API Keys

Groq (Recommended - FREE):

  1. Visit console.groq.com
  2. Sign up with Google/GitHub
  3. Create API key (starts with gsk_)

OpenAI (Optional):

  1. Visit platform.openai.com
  2. Create account and add billing
  3. Generate API key (starts with sk-)

API Reference

Interview Analysis Endpoint

POST /api/interview/analyze

// Request
{
  "question": "Tell me about yourself",
  "answer": "I am a software engineer...",
  "category": "general" | "behavioral" | "technical" | "situational",
  "questionId": "unique-question-id",
  "duration": 120, // seconds
  "userId": "user-123",
  "isAudioAnswer": false,
  "transcript": "transcribed audio text" // if audio
}

// Response
{
  "success": true,
  "feedback": {
    "score": 8,
    "strengths": ["Clear communication", "Relevant experience"],
    "weaknesses": ["Could add specific examples"],
    "suggestions": ["Include quantifiable achievements"],
    "overallFeedback": "Strong response with room for improvement..."
  }
}
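A client can call this endpoint with a plain fetch. The helper below is a sketch built from the request/response shapes documented above; the fetch implementation is injectable so the example can be exercised without a running server:

```typescript
// Minimal typed client for POST /api/interview/analyze.
// Field names follow the API reference above.
interface AnalyzeRequest {
  question: string;
  answer: string;
  category: "general" | "behavioral" | "technical" | "situational";
  questionId: string;
  duration: number; // seconds
  userId: string;
  isAudioAnswer: boolean;
  transcript?: string; // only for audio answers
}

interface AnalyzeResponse {
  success: boolean;
  feedback?: {
    score: number;
    strengths: string[];
    weaknesses: string[];
    suggestions: string[];
    overallFeedback: string;
  };
  error?: string;
}

async function analyzeAnswer(
  req: AnalyzeRequest,
  fetchImpl: typeof fetch = fetch, // injectable for testing
): Promise<AnalyzeResponse> {
  const res = await fetchImpl("/api/interview/analyze", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  return (await res.json()) as AnalyzeResponse;
}
```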

Audio Transcription Endpoint

POST /api/audio/transcribe

// Request (FormData)
audio: File // Audio file (WebM, MP3, WAV, etc.)

// Response
{
  "success": true,
  "transcript": "Transcribed text from audio",
  "duration": 45, // seconds
  "wordCount": 67
}
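The wordCount field can be approximated client-side for instant display while the upload is in flight. This is a sketch; the server's exact counting rules are not documented here, so treat it as an approximation:

```typescript
// Approximate the transcription response's wordCount from a transcript string.
function countWords(transcript: string): number {
  // Split on any whitespace run; filter(Boolean) drops the empty string
  // produced by splitting an empty or whitespace-only input.
  return transcript.trim().split(/\s+/).filter(Boolean).length;
}
```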

User Progress Endpoint

GET /api/user/progress?userId=user-123

// Response
{
  "success": true,
  "progress": {
    "totalSessions": 15,
    "averageScore": 7.2,
    "categoryScores": {
      "behavioral": 8.1,
      "technical": 6.8,
      "situational": 7.0,
      "general": 7.5
    },
    "improvementAreas": ["Adding specific examples", "Quantifying achievements"],
    "strengths": ["Clear communication", "Technical knowledge"],
    "recentSessions": [...] // Last 10 sessions
  }
}
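The averageScore and categoryScores fields are straightforward aggregates over stored sessions. A sketch of how they could be computed (field names follow the response above; the repo's actual query logic in database.ts may differ):

```typescript
interface SessionRecord {
  category: "behavioral" | "technical" | "situational" | "general";
  score: number; // 1-10, as returned by the analysis endpoint
}

// Aggregate overall and per-category averages from session history.
function computeProgress(sessions: SessionRecord[]) {
  const byCategory = new Map<string, number[]>();
  for (const s of sessions) {
    const scores = byCategory.get(s.category) ?? [];
    scores.push(s.score);
    byCategory.set(s.category, scores);
  }
  const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const categoryScores: Record<string, number> = {};
  for (const [cat, scores] of byCategory) categoryScores[cat] = avg(scores);
  return {
    totalSessions: sessions.length,
    averageScore: sessions.length ? avg(sessions.map((s) => s.score)) : 0,
    categoryScores,
  };
}
```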

Error Handling

// Error Response
{
  "success": false,
  "error": "Error message",
  "errorType": "quota_exceeded" | "invalid_api_key" | "rate_limit"
}
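Clients can switch on errorType to decide between retrying and surfacing a message. An illustrative mapping (the three errorType values come from the reference above; the messages and retry policy here are examples, not the repo's actual UI copy):

```typescript
type ErrorType = "quota_exceeded" | "invalid_api_key" | "rate_limit";

// Map an API errorType to a user-facing message and a retry decision.
function describeError(errorType: ErrorType): { message: string; retryable: boolean } {
  switch (errorType) {
    case "rate_limit":
      return { message: "Too many requests. Retrying shortly.", retryable: true };
    case "quota_exceeded":
      return { message: "Daily quota reached. Falling back to another provider.", retryable: true };
    case "invalid_api_key":
      return { message: "API key is invalid. Check your .env.local.", retryable: false };
  }
}
```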

Architecture

The system uses a multi-provider AI architecture with automatic fallback:

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  🎤 Audio Input  │───▶│  🔗 Next.js     │───▶│  🤖 AI Provider │
│  + Text Input   │    │  API Routes     │    │  Manager        │
│  (Whisper)      │    │  /analyze       │    │  (Multi-LLM)    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  🗄️ Database     │    │  📊 Response    │    │  ⚡ Groq AI      │
│  (SQLite/       │    │  Parser &       │    │  (Primary)      │
│  PostgreSQL)    │    │  Validator      │    │  14.4k req/day  │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  📈 Progress    │    │  🎯 Frontend    │    │  🔄 Fallback    │
│  Analytics &    │◀───│  Interview      │◀───│  OpenAI →       │
│  Tracking       │    │  Simulator      │    │  Anthropic →    │
│  Dashboard      │    │  (React)        │    │  Demo Mode      │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                │
                                ▼
                       ┌─────────────────┐
                       │  🛡️ Content     │
                       │  Moderation     │
                       │  & Safety       │
                       └─────────────────┘

Key Components

  • Audio Processing: Web Audio API + OpenAI Whisper for speech-to-text
  • AI Provider Manager: Multi-provider system with intelligent fallback
  • Database Layer: Prisma ORM with SQLite/PostgreSQL for session storage
  • Progress Analytics: Real-time tracking and performance visualization
  • Content Moderation: Advanced filtering and safety checks
  • Response Parser: Standardized feedback format validation
  • Interview Simulator: React component with audio/text input modes

Project Structure

HireSense/
├── src/
│   ├── app/
│   │   ├── api/
│   │   │   ├── interview/analyze/     # Interview analysis endpoint
│   │   │   ├── audio/transcribe/      # Whisper audio transcription
│   │   │   ├── user/progress/         # User progress tracking
│   │   │   └── analytics/             # Platform analytics
│   │   ├── globals.css               # Global styles
│   │   ├── layout.tsx                # Root layout
│   │   └── page.tsx                  # Home page
│   ├── components/
│   │   ├── InterviewSimulator.tsx    # Main interview interface
│   │   └── ProgressDashboard.tsx     # Progress visualization
│   ├── lib/
│   │   ├── ai-providers-simple.ts    # Multi-provider AI system
│   │   ├── database.ts               # Database operations
│   │   ├── demo-feedback.ts          # Demo mode responses
│   │   └── interview-utils.ts        # Shared utilities
│   └── types/
│       └── interview.ts              # TypeScript definitions
├── prisma/
│   └── schema.prisma                 # Database schema
├── public/                           # Static assets
├── .env.example                      # Environment template
├── package.json                      # Dependencies
└── README.md                         # Documentation

Contributing

We welcome contributions! Please read our Contributing Guidelines for details on:

  • Code style and standards
  • Development workflow
  • Pull request process
  • Issue reporting

Development Setup

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes and add tests
  4. Commit your changes (git commit -m 'Add amazing feature')
  5. Push to the branch (git push origin feature/amazing-feature)
  6. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.
