RageBait-AI: The Ultimate Reactive Systems Simulator

Built with Vite + React · AI-Powered · Persistent Leaderboards


🏛 Executive Overview

RageBait-AI is a high-intensity, cyber-ops themed reaction engine designed to stress-test cognitive load under simulated system failure conditions. It integrates a sophisticated React-based game kernel with a distributed serverless backend, leveraging Google's Gemini 2.0 Flash for dynamic, state-aware psychological feedback (colloquially known as "trash-talk").

This isn't just a game; it's a diagnostic tool for the modern operator's reaction latency, wrapped in a premium, glassmorphic aesthetic.


🛠 Technical Stack & Architecture

The system is built upon a modern, distributed architecture designed for low-latency feedback loops and high scalability.

Core Frontend (The Kinetic Layer)

  • Framework: React 18 (Strict Mode enabled).
  • Build System: Vite for near-instant HMR and optimized production bundles.
  • State Orchestration: Custom GameContext using a reducer pattern to manage complex state transitions (Idle → Boot → Tutorial → Playing → Game Over).
  • Visuals & Motion: Tailwind CSS for utility-first styling and Framer Motion for fluid, hardware-accelerated animations.
  • Componentry: Radix UI primitives for accessible, unstyled foundation components.
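The reducer-driven state orchestration described above can be sketched as follows. This is a minimal illustration, not the project's actual GameContext: the phase and action names are assumptions, though the phase sequence (Idle → Boot → Tutorial → Playing → Game Over) comes straight from the list above.

```typescript
// Sketch of a GameContext-style reducer. Phase and action names are
// illustrative; the real reducer manages more state than this.
type Phase = "idle" | "boot" | "tutorial" | "playing" | "gameOver";

interface GameState {
  phase: Phase;
  score: number;
  stability: number; // 0-100, drops as the simulated system degrades
}

type Action =
  | { type: "BOOT" }
  | { type: "START_TUTORIAL" }
  | { type: "START_GAME" }
  | { type: "SCORE"; points: number }
  | { type: "CRASH" };

const initialState: GameState = { phase: "idle", score: 0, stability: 100 };

function gameReducer(state: GameState, action: Action): GameState {
  switch (action.type) {
    case "BOOT":
      // Boot can only follow the idle phase.
      return state.phase === "idle" ? { ...state, phase: "boot" } : state;
    case "START_TUTORIAL":
      return state.phase === "boot" ? { ...state, phase: "tutorial" } : state;
    case "START_GAME":
      // Fresh run: reset score and stability.
      return { ...initialState, phase: "playing" };
    case "SCORE":
      return state.phase === "playing"
        ? { ...state, score: state.score + action.points }
        : state;
    case "CRASH":
      return { ...state, phase: "gameOver" };
    default:
      return state;
  }
}
```

In the app this reducer would be wired up via React's `useReducer` and exposed through a context provider, so any component can dispatch actions without prop drilling.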

Backend Services (The Logic Layer)

  • Runtime: Vercel Serverless Functions (TypeScript).
  • AI Integration: Gemini 2.0 Flash via the Google AI SDK, providing real-time, context-aware commentary based on player performance metrics (stability, entropy, streak).
  • Persistence: Firebase Cloud Firestore for the global leaderboard, accessed exclusively via the Firebase Admin SDK for maximum security.
  • Validation: Schema-driven request validation using Zod.

🚀 Key Features

1. Dynamic Cognitive Load Balancing

The engine utilizes a custom "Entropy" model. As the session progresses, the difficulty doesn't just increase linearly; it scales based on the frequency of interactions and current system stability, simulating a real-world cascading failure.
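A minimal sketch of how such an entropy-driven curve might be computed. The formula and names here are illustrative assumptions, not the engine's actual model; the point is that difficulty compounds with interaction frequency and instability rather than ticking up linearly.

```typescript
// Illustrative entropy model: difficulty rises with interaction
// frequency and with instability (low stability). Not the engine's
// exact math.
function nextDifficulty(
  base: number,            // current difficulty level
  clicksPerSecond: number, // recent interaction frequency
  stability: number        // 0-100, current system stability
): number {
  // Entropy grows when the player interacts frantically while the
  // system is already unstable.
  const entropy = clicksPerSecond * (1 - stability / 100);
  // Non-linear scaling: entropy compounds on the current level.
  return base * (1 + 0.1 * entropy);
}
```

With this shape, the same click rate is far more punishing at 20% stability than at 80%, which is what makes the curve feel like a cascading failure rather than a timer.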

2. Contextual AI Commentary

Unlike traditional "canned" responses, our system analyzes seven distinct game-state vectors (Score, Stability %, Streak, Queue Size, Difficulty, Entropy, Tier) and forwards them to Gemini to generate high-fidelity, PG-13 roasts that feel personal and immediate.
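The seven vectors might be bundled into a prompt along these lines. The field names and prompt wording below are assumptions for illustration; the actual payload the serverless function sends to Gemini may differ.

```typescript
// Hypothetical snapshot of the seven game-state vectors forwarded to the AI.
interface RoastContext {
  score: number;
  stabilityPct: number;
  streak: number;
  queueSize: number;
  difficulty: number;
  entropy: number;
  tier: string;
}

// Builds the text prompt handed to the Gemini API on the server side.
function buildRoastPrompt(ctx: RoastContext): string {
  return [
    "You are a sarcastic systems operator. Keep it PG-13 and under 25 words.",
    `Score: ${ctx.score}, Stability: ${ctx.stabilityPct}%, Streak: ${ctx.streak},`,
    `Queue: ${ctx.queueSize}, Difficulty: ${ctx.difficulty}, ` +
      `Entropy: ${ctx.entropy.toFixed(2)}, Tier: ${ctx.tier}`,
    "Roast the player's performance.",
  ].join("\n");
}
```

Because the full context travels with every request, the model can comment on the specific moment ("a 12-streak at 9% stability?") instead of cycling generic taunts.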

3. Progressive Accessibility (Inclusive Design)

We've implemented a comprehensive Blind Mode. This isn't a bolt-on; it's a core system that provides:

  • High-fidelity audio cues for all visual alerts.
  • Screen-reader optimized status updates.
  • ARIA-live announcements for critical system changes.
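For example, a status update in Blind Mode might be formatted once and routed to both the screen reader and the audio layer. The helper below is a hedged sketch with hypothetical names, not the project's actual code.

```typescript
// Sketch: formats a status update destined for an aria-live region.
// Level names and message wording are illustrative assumptions.
type AlertLevel = "info" | "warning" | "critical";

function formatAnnouncement(level: AlertLevel, message: string): string {
  const prefix: Record<AlertLevel, string> = {
    info: "Status",
    warning: "Warning",
    critical: "Critical alert",
  };
  return `${prefix[level]}: ${message}`;
}

// In the browser, the result would be written into a live region, e.g.:
//   document.getElementById("sr-status")!.textContent =
//     formatAnnouncement("critical", "Stability below 20 percent");
// An element with aria-live="assertive" announces the change immediately.
```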

4. Zero-Trust Persistence

The client has no direct access to the database. All leaderboard submissions and retrievals are brokered through an authenticated, rate-limited API layer, ensuring the integrity of the high scores.
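On the client side, that brokered flow might look like the sketch below. The endpoint path, field names, and sanitization rules are assumptions; the server re-validates everything regardless, since zero trust cuts both ways.

```typescript
// Sketch of the client-side submission flow: the browser never touches
// Firestore, it only calls the brokered API. Names are illustrative.
interface ScoreSubmission {
  name: string;
  score: number;
}

// Clamp and strip the payload before it leaves the client.
function buildSubmission(name: string, score: number): ScoreSubmission {
  return {
    name: name.replace(/[<>]/g, "").slice(0, 24).trim(),
    score: Math.max(0, Math.floor(score)),
  };
}

async function submitScore(name: string, score: number): Promise<void> {
  await fetch("/api/leaderboard", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildSubmission(name, score)),
  });
}
```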


🔧 Infrastructure & Setup

If you're here to build, follow the protocol.

Prerequisites

  • Node.js (v18 or higher)
  • A Firebase Project (with Firestore enabled)
  • A Google AI (Gemini) API Key

1. Environment Configuration

The system requires specific environmental markers. Copy the blueprint:

cp .env.example .env.local

Key                      Description
FIREBASE_PROJECT_ID      Your Firebase Project Identifier.
FIREBASE_CLIENT_EMAIL    Service Account email for server-side operations.
FIREBASE_PRIVATE_KEY     The RSA private key (ensure \n line breaks are preserved).
GEMINI_API_KEY           Your Google AI Studio API Key.
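The \n caveat on FIREBASE_PRIVATE_KEY matters because most dashboards store the key as a single line with escaped newlines; server code typically restores the real line breaks before handing the key to the Admin SDK. A hedged sketch (the initialization snippet in the comment follows the standard Firebase Admin SDK pattern):

```typescript
// Restores real newlines in FIREBASE_PRIVATE_KEY before it is passed
// to the Firebase Admin SDK. Handles both escaped single-line values
// ("\\n") and values that already contain real newlines.
function normalizePrivateKey(raw: string): string {
  return raw.replace(/\\n/g, "\n");
}

// Usage (sketch):
//   admin.initializeApp({
//     credential: admin.credential.cert({
//       projectId: process.env.FIREBASE_PROJECT_ID,
//       clientEmail: process.env.FIREBASE_CLIENT_EMAIL,
//       privateKey: normalizePrivateKey(process.env.FIREBASE_PRIVATE_KEY ?? ""),
//     }),
//   });
```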

2. Development Workflow

We utilize a split development environment to mirror production serverless behavior.

# Install dependencies
npm install

# Option A: Standard Frontend Dev
npm run dev

# Option B: Full Stack Dev (Recommended)
# Simulates Vercel functions locally on port 3000
npx vercel dev

3. Production Deployment

The project is architected for Vercel. Simply connect your repository, and the api/ directory will be automatically provisioned as edge/serverless functions.


🔒 Security & Performance

  • Rate Limiting: IP-based throttling (5 requests / 60 seconds) on all AI and Leaderboard endpoints to prevent abuse and manage API costs.
  • Validation: Strict input sanitization prevents XSS and injection attacks on the leaderboard.
  • Edge Caching: Leaderboard GET requests utilize stale-while-revalidate caching on the Vercel CDN for sub-100ms response times.
  • Graceful Degradation: If the AI service or Database is unreachable, the system automatically falls back to local "canned" responses and session-only scoring, ensuring zero interruption to the user experience.
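The 5-requests-per-60-seconds throttle above can be sketched as a sliding-window limiter. This is illustrative only: a real Vercel deployment would need shared storage (e.g. a KV store) rather than per-instance memory, since serverless instances don't share state.

```typescript
// Minimal in-memory sliding-window rate limiter: 5 requests / 60 s per IP.
// Illustrative sketch; production serverless code needs shared storage.
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 5;
const hits = new Map<string, number[]>();

function isRateLimited(ip: string, now: number = Date.now()): boolean {
  // Keep only timestamps still inside the window.
  const recent = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) {
    hits.set(ip, recent);
    return true; // caller should respond with HTTP 429
  }
  recent.push(now);
  hits.set(ip, recent);
  return false;
}
```

A sliding window (rather than a fixed-minute bucket) prevents a burst of 10 requests straddling a minute boundary from slipping through.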

📜 License & Contribution

This protocol is released under the RageBait-AI Protective Open Source License (RB-POL v1.0).

Usage Summary:

  • Commercial use is prohibited without a separate commercial license.
  • Personal and educational use is permitted.
  • Attribution is required: "RageBait-AI — Created by Raghav Sethi".
  • Redistribution must remain open-source under this same license.

For the full legal text, please refer to the LICENSE file. For commercial inquiries, contact Sethi.Raghav006@gmail.com.


“Code is temporary; architecture is forever. Build for the failure, and the success will follow.”
