An AI-powered interactive portfolio that lets visitors chat with an intelligent assistant trained on my real resume, projects, and experience. Built using Retrieval-Augmented Generation (RAG) to deliver grounded, accurate responses.
Live Demo: danchen.dev
This project implements a Retrieval-Augmented Generation (RAG) pipeline that connects structured professional data with natural conversation:
- Embedding Creation: Resume and project content are embedded using OpenAI’s text-embedding-3-small model.
- Vector Storage: The embeddings are stored in Pinecone, enabling fast, high-dimensional semantic search.
- Query Processing: When a visitor asks a question, the query is converted into an embedding and compared against the stored vectors.
- Context Retrieval: The top matching entries are retrieved as context for response generation.
- AI Response: OpenAI’s GPT-5-mini model generates conversational, context-grounded answers referencing the retrieved data.
This ensures highly relevant, fact-based answers grounded in my actual background.
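The retrieval steps above can be sketched in a runnable, self-contained form. This is an illustrative toy, not the production code: a bag-of-words `embed()` over a tiny vocabulary stands in for OpenAI's text-embedding-3-small, and an in-memory array with cosine similarity stands in for Pinecone's hosted index. The document texts and vocabulary are made up for the example.

```typescript
// Toy RAG retrieval: embed documents once, then rank them against a query
// embedding by cosine similarity and return the top-K as context.

type Entry = { text: string; vector: number[] };

// Stand-in embedding: bag-of-words counts over a fixed vocabulary
// (an assumption for this sketch, not OpenAI's embedding model).
const VOCAB = ["resume", "project", "rag", "pinecone", "nextjs"];
function embed(text: string): number[] {
  const words = text.toLowerCase().split(/\W+/);
  return VOCAB.map((v) => words.filter((w) => w === v).length);
}

// Cosine similarity: the same metric Pinecone commonly uses for semantic search.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const na = Math.sqrt(a.reduce((s, x) => s + x * x, 0));
  const nb = Math.sqrt(b.reduce((s, x) => s + x * x, 0));
  return na && nb ? dot / (na * nb) : 0;
}

// Index step: embed each content chunk and keep the vector alongside the text.
const store: Entry[] = [
  "Resume: built a RAG project with Pinecone",
  "Project: portfolio site in Nextjs",
].map((text) => ({ text, vector: embed(text) }));

// Query step: embed the question, rank stored entries, return top matches
// to be passed as context to the response-generation prompt.
function retrieve(query: string, topK = 1): string[] {
  const q = embed(query);
  return [...store]
    .sort((a, b) => cosine(b.vector, q) - cosine(a.vector, q))
    .slice(0, topK)
    .map((e) => e.text);
}
```

In the real pipeline, `embed()` is an OpenAI API call and the ranking happens inside Pinecone; only the retrieved texts come back to build the prompt.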
- Frontend: Next.js, React
- Styling: Tailwind CSS
- AI / ML: OpenAI API (GPT-5 + Embeddings)
- Database / Vector Store: Pinecone
- Deployment: Vercel
- Version Control: GitHub
User Query
│
▼
[Embedding via OpenAI API]
│
▼
[Vector Search in Pinecone]
│
▼
[Top Matches Retrieved as Context]
│
▼
[GPT-5 Response Generation]
│
▼
Response → Chat UI (Next.js)
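The flow in the diagram can be wired up as one server-side handler. The sketch below uses injected dependencies in place of the real OpenAI and Pinecone clients (their setup and function names here are assumptions, not code from this repo), which keeps the orchestration logic visible and testable on its own.

```typescript
// Orchestration of the diagram: embed -> search -> generate.
// The three network calls are injected so real clients (OpenAI, Pinecone)
// or mocks can be supplied; names and signatures are illustrative.
type Deps = {
  embed: (text: string) => Promise<number[]>;                      // OpenAI embedding call
  search: (vector: number[], topK: number) => Promise<string[]>;   // Pinecone vector query
  generate: (query: string, context: string[]) => Promise<string>; // GPT chat completion
};

async function answer(query: string, deps: Deps): Promise<string> {
  const vector = await deps.embed(query);       // 1. embed the user query
  const context = await deps.search(vector, 5); // 2. retrieve top-5 matches as context
  return deps.generate(query, context);         // 3. grounded response back to the chat UI
}
```

In the deployed app this would sit behind a Next.js API route, with the chat UI posting the user's message and rendering the returned answer.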