# ELARA – Explainable Language-Driven AI-Based Recommendation Assistant

Built using React + FastAPI + RAG (Retrieval-Augmented Generation)

UI/UX by Priyanshi · Backend & Architecture by Sarah · Data by Adyasha
```bash
# Navigate to frontend
cd ui

# Install dependencies
npm install

# Start development server
npm run dev
# → Opens at http://localhost:3000

# Build for production
npm run build
```

---

## Project Structure
```
ELARA/
├── backend/            # FastAPI backend (RAG + APIs)
├── ui/                 # React frontend
│   ├── index.html
│   ├── vite.config.js  # Proxy config (/api → backend)
│   ├── package.json
│   └── src/
│       ├── main.jsx
│       ├── App.jsx
│       └── api.js
└── data/
    └── data.csv        # Dataset used for recommendations
```
User Input → React UI → API Call → FastAPI Backend → Data / RAG → Response → UI Render

The Vite proxy automatically routes:

```
/api → http://localhost:8000
```
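In `ui/vite.config.js` that routing is typically declared like this (a sketch of the proxy section only; the actual file in this repo may differ):

```javascript
// Hypothetical sketch of ui/vite.config.js.
// Forwards any /api request from the Vite dev server (port 3000)
// to the FastAPI backend running on port 8000.
export default {
  server: {
    port: 3000,
    proxy: {
      "/api": {
        target: "http://localhost:8000", // FastAPI dev server
        changeOrigin: true,
      },
    },
  },
};
```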
Update API calls in `App.jsx`:

```js
fetch("https://your-backend.onrender.com/api/recommend")
```

**Request:**

```json
{ "query": "string" }
```

**Response:**

```json
{
  "recommendations": [
    {
      "id": 1,
      "title": "string",
      "type": "Movie",
      "year": 2020,
      "tags": ["string"],
      "score": 90,
      "explanation": "string"
    }
  ]
}
```

Health check response:

```json
{ "status": "ok" }
```

| Feature | Status |
|---|---|
| Natural language query input | ✅ |
| Mood / Genre / Era filters | ✅ |
| RAG pipeline visualization | ✅ |
| Recommendation cards with score | ✅ |
| Expandable explanation panel | ✅ |
| Responsive layout | ✅ |
| Reset / new search flow | ✅ |
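The `/api/recommend` response shape can be normalized on the frontend before rendering cards; a minimal sketch (the `toCards` helper and its field choices are assumptions, only the response shape comes from the API contract):

```javascript
// Illustrative helper in the spirit of ui/src/api.js — names are assumptions.
function toCards(response) {
  // Guard against a missing or empty recommendations array (empty state).
  const recs = Array.isArray(response?.recommendations)
    ? response.recommendations
    : [];
  // Highest-scoring recommendations first.
  return [...recs]
    .sort((a, b) => b.score - a.score)
    .map((r) => ({
      id: r.id,
      title: `${r.title} (${r.year})`,
      score: r.score,
      explanation: r.explanation,
    }));
}

// Example usage with the documented response shape:
const cards = toCards({
  recommendations: [
    { id: 2, title: "B", type: "Movie", year: 2019, tags: [], score: 70, explanation: "x" },
    { id: 1, title: "A", type: "Movie", year: 2020, tags: [], score: 90, explanation: "y" },
  ],
});
console.log(cards[0].title); // "A (2020)"
```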
- Context-aware recommendations using natural language queries
- Explainable outputs powered by LLM logic
- Data-driven filtering via dataset (`data.csv`)
- Modular full-stack architecture
- Designed for extensibility into full RAG pipeline
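The data-driven filtering idea can be sketched as tag matching over dataset rows (the two sample rows and the match-count scoring are illustrative assumptions; the real matching lives in the backend):

```javascript
// Hypothetical rows as they might come from data.csv.
const rows = [
  { id: 1, title: "Inception", type: "Movie", year: 2010, tags: ["sci-fi", "thriller"] },
  { id: 2, title: "The Office", type: "Series", year: 2005, tags: ["comedy", "sitcom"] },
];

// Score each row by how many query terms match its tags; keep matches only.
function recommend(query, dataset) {
  const terms = query.toLowerCase().split(/\s+/);
  return dataset
    .map((row) => ({
      ...row,
      score: row.tags.filter((t) => terms.includes(t)).length,
    }))
    .filter((row) => row.score > 0)
    .sort((a, b) => b.score - a.score);
}

console.log(recommend("sci-fi thriller", rows).map((r) => r.title)); // [ 'Inception' ]
```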
- Dataset stored in: `data/data.csv`
- Used for:
  - Filtering and matching user queries
  - Generating recommendations
- Prepared and cleaned before backend ingestion
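Ingestion starts by turning `data.csv` rows into records; a minimal sketch (the column names here are assumptions — the real schema lives in `data/data.csv`, and quoted fields would need a proper CSV parser):

```javascript
// Tiny CSV-to-records sketch: header row + simple comma-separated values,
// no quoted fields. Real ingestion may need a full CSV library.
function parseCsv(text) {
  const [header, ...lines] = text.trim().split("\n");
  const cols = header.split(",");
  return lines.map((line) => {
    const cells = line.split(",");
    return Object.fromEntries(cols.map((c, i) => [c, cells[i]]));
  });
}

const sample = "id,title,year\n1,Inception,2010\n2,The Office,2005";
console.log(parseCsv(sample)[0].title); // "Inception"
```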
```
feat(ui): add recommendation card with score ring
fix(ui): handle empty state when no results returned
style(ui): polish filters and layout
chore: add API integration layer
docs: update README
```
### Sarah – Backend & Architecture

- Defines overall system architecture
- Implements RAG pipeline and LLM integration
- Designs and develops backend APIs (FastAPI)
- Handles recommendation and explanation logic
- Implements embeddings, vector database, and retrieval logic
- Performs retrieval tuning and evaluation
- Manages GitHub repository (branching, structure, commits)
- Leads system integration and ensures frontend-backend connectivity
- Prepares architecture explanation and viva

**Owns:** Backend + RAG + Retrieval + Logic + Integration
### Adyasha – Data

- Dataset sourcing and validation
- Data cleaning and preprocessing
- Data formatting and structuring for ingestion
- Preparing datasets for embedding and backend usage
- Maintaining dataset consistency and documentation

**Owns:** Data Preparation Layer
### Priyanshi – UI/UX

- Designs and implements user interface
- Builds query input and recommendation display
- Develops explanation UI
- Handles frontend-backend API integration
- Manages UX flow and usability
- Implements error handling and empty states
- Prepares demo-ready interface

**Owns:** User Experience + Frontend + Integration Layer
- CO4: Implementation of advanced LLM + RAG system + VectorDB
- CO1: Application of DevOps practices (Git, modular architecture)
| Component | Platform |
|---|---|
| Frontend | GitHub Pages / Vercel |
| Backend | Render / Railway |
| Data | CSV / Vector DB |
- GitHub Pages hosts only the frontend (static files)
- Backend must be deployed separately
- Replace all `localhost` API calls before deployment
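One way to avoid hand-editing every call is a small base-URL helper; a sketch (the helper name and the Render URL are placeholders, not part of the repo):

```javascript
// Hypothetical helper for ui/src/api.js: resolve the backend base URL once.
// In dev the Vite proxy handles /api, so the base can stay empty;
// in production it must point at the deployed backend.
function apiBase(isProd, prodUrl) {
  return isProd ? prodUrl : ""; // "" lets the /api dev proxy do the routing
}

const url = `${apiBase(true, "https://your-backend.onrender.com")}/api/recommend`;
console.log(url); // "https://your-backend.onrender.com/api/recommend"
```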
ELARA is designed to move beyond traditional recommendation systems by providing:
- Explainable recommendations (not black-box output)
- Context-aware reasoning based on user input
- Integration of retrieval + generation (RAG concept)
- A clean, intuitive user interface
| Member | Contribution |
|---|---|
| Sarah | Backend, RAG pipeline, API, architecture |
| Adyasha | Data preparation, dataset pipeline |
| Priyanshi | UI, UX, frontend integration |
ELARA demonstrates a complete AI-powered full-stack system, combining:
- React frontend
- FastAPI backend
- Data pipeline
- Explainable recommendation logic