
LangChain AI Chat App


LangChain-powered conversational AI — memory, tools, and reasoning chains in a production-ready chat application.


Topics: langchain · deep-learning · generative-ai · large-language-models · neural-networks · chatbot · context-aware-responses · conversational-ai · llm · retrieval-augmented-generation

Overview

This application demonstrates LangChain in a fully functional, deployable conversational AI context. LangChain provides the composable primitives: LLMs, prompt templates, memory, tools, and chains — assembled here into a multi-turn conversational assistant with persistent memory and tool-augmented reasoning.

The chat interface provides multi-turn conversation with a memory backend that stores the conversation history in a vector store (FAISS or Chroma), enabling semantic retrieval of relevant earlier context rather than simple linear buffering. This is particularly important for long conversations where the full history would exceed the model's context window.
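The idea behind vector-store memory can be sketched in a few lines. This is a toy illustration only: it uses a bag-of-words "embedding" and cosine similarity in place of the real embedding model and FAISS/Chroma index the app uses, but the retrieval pattern — embed the query, rank stored turns by similarity, return the top k — is the same.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding', standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticMemory:
    """Store every turn; retrieve the k most similar past turns for a query."""
    def __init__(self):
        self.turns = []

    def add(self, turn: str) -> None:
        self.turns.append(turn)

    def retrieve(self, query: str, k: int = 2):
        q = embed(query)
        ranked = sorted(self.turns, key=lambda t: cosine(embed(t), q), reverse=True)
        return ranked[:k]

memory = SemanticMemory()
memory.add("user: my dog is called Biscuit")
memory.add("user: I prefer answers in French")
memory.add("user: what's the weather like today?")
print(memory.retrieve("what is my dog's name?", k=1))
```

Unlike a linear buffer, this retrieves the *relevant* turn even if hundreds of messages have passed since it was said.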

Tool use is a first-class feature: the agent can invoke a Python REPL for computation, a web search tool for current information, a Wikipedia lookup for factual grounding, and a calculator for arithmetic. The tool selection and invocation are handled by the LLM's function-calling interface, with results fed back into the reasoning loop before the final response is generated.


Motivation

LangChain represents the current frontier of LLM application development: moving beyond single-shot prompt-response patterns toward structured, stateful, multi-step reasoning agents. This project was built to explore and demonstrate these patterns in a real, running application — not just a tutorial notebook.


Architecture

```
User Message
        │
  ConversationBufferWindowMemory
  + VectorStoreRetrieverMemory
        │
  ReAct Agent (LLM + Tools)
  ┌──────────────────────────┐
  │ Tools: Search, Wiki, Math│
  │        Python REPL       │
  └──────────────────────────┘
        │
  Final Response → Chat UI
```

Features

Multi-Turn Conversation with Memory

Conversation history is stored in a vector store with semantic retrieval, allowing the agent to reference relevant earlier context even in long conversations that exceed the model's context window.

ReAct Agent Loop

The agent iteratively reasons about which tool to use, invokes it, observes the result, and continues reasoning until it has enough information to respond.
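The loop structure can be sketched as follows. The `scripted_llm` here is a stand-in that returns canned decisions; in the real app, LangChain's agent executor calls the actual model, which decides between a tool action and a final answer based on the scratchpad of prior observations.

```python
# Toy tool registry. The calculator uses eval() for brevity only; a real
# implementation would use a proper sandboxed math evaluator.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def scripted_llm(question, scratchpad):
    """Stand-in for the LLM: compute first, then answer once a result exists."""
    if not scratchpad:
        return {"action": "calculator", "input": "17 * 23"}
    return {"final": f"17 * 23 = {scratchpad[-1]}"}

def react_loop(question, llm, tools, max_iterations=10):
    scratchpad = []
    for _ in range(max_iterations):
        step = llm(question, scratchpad)
        if "final" in step:                       # model has enough information
            return step["final"]
        observation = tools[step["action"]](step["input"])
        scratchpad.append(observation)            # feed result back into the loop
    return "Stopped: iteration limit reached."

print(react_loop("What is 17 * 23?", scripted_llm, TOOLS))  # → 17 * 23 = 391
```

The `max_iterations` guard is the same safety valve exposed as `MAX_ITERATIONS` in the configuration below.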

Tool-Augmented Reasoning

The agent can invoke web search (SerpAPI / DuckDuckGo), Wikipedia lookup, a Python REPL for computation, and a calculator — selecting tools based on the LLM's assessment of the question.

Streaming Response Output

LLM response tokens are streamed to the UI character by character, providing low-latency perceived response time even for long outputs.
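The streaming pattern amounts to yielding the response incrementally from a generator, which Streamlit's `st.write_stream` can consume directly. A minimal sketch (chunking by word here for simplicity; the real app streams model tokens as they arrive):

```python
import time

def stream_tokens(text, delay=0.0):
    """Yield the response in small chunks — the shape st.write_stream expects."""
    for word in text.split():
        time.sleep(delay)        # simulates per-token latency from the LLM
        yield word + " "

# In the Streamlit app this would be: st.write_stream(stream_tokens(response))
print("".join(stream_tokens("Paris is the capital of France.")).strip())
```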

Multi-Model Backend Support

Switch between OpenAI GPT-4o, Google Gemini, Anthropic Claude, or a locally running Ollama model via environment variable configuration.
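Environment-driven backend switching reduces to a small factory keyed on the variable. A hypothetical sketch — the constructors below are placeholder strings standing in for the actual LangChain chat-model classes each provider would use:

```python
import os

def make_llm(backend=None):
    """Pick an LLM backend from MODEL_BACKEND (placeholder return values)."""
    backend = backend or os.environ.get("MODEL_BACKEND", "openai")
    factories = {
        "openai": lambda: "ChatOpenAI(model='gpt-4o')",
        "gemini": lambda: "ChatGoogleGenerativeAI(model='gemini-2.0-flash')",
        "ollama": lambda: "ChatOllama(model='llama3')",
    }
    if backend not in factories:
        raise ValueError(f"Unknown MODEL_BACKEND: {backend}")
    return factories[backend]()

os.environ["MODEL_BACKEND"] = "gemini"
print(make_llm())  # selects the Gemini factory
```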

Conversation Export

Export the full conversation history as a formatted Markdown file or JSON transcript for sharing or review.
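Export is a straightforward serialization of the message list. A sketch, assuming a simple role/content message format (the app's actual internal format may differ):

```python
import json

def to_markdown(history):
    """Render a role/content message list as a readable Markdown transcript."""
    return "\n\n".join(f"**{m['role'].title()}:** {m['content']}" for m in history)

def to_json(history):
    """Render the same history as a JSON transcript."""
    return json.dumps({"messages": history}, indent=2)

history = [
    {"role": "user", "content": "What is LangChain?"},
    {"role": "assistant", "content": "A framework for building LLM applications."},
]
print(to_markdown(history))
```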

System Prompt Editor

Sidebar text area for customising the agent's system prompt — persona, task focus, language, and response style — without code changes.

Token Usage Tracking

Real-time token counter in the sidebar tracks prompt and completion tokens per message and cumulative session totals.
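The accounting behind the sidebar counter can be sketched as a running total per session; the per-message counts themselves would come from the provider's usage metadata or a tokenizer such as tiktoken:

```python
class TokenTracker:
    """Accumulate prompt/completion token counts across a chat session."""
    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def record(self, prompt_tokens, completion_tokens):
        self.prompt_tokens += prompt_tokens
        self.completion_tokens += completion_tokens

    @property
    def total(self):
        return self.prompt_tokens + self.completion_tokens

tracker = TokenTracker()
tracker.record(prompt_tokens=120, completion_tokens=45)   # message 1
tracker.record(prompt_tokens=210, completion_tokens=80)   # message 2
print(tracker.total)  # → 455
```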


Tech Stack

| Library / Tool | Role | Why This Choice |
|---|---|---|
| LangChain / LangGraph | Agent framework | Chains, graphs, memory, tool interfaces |
| Streamlit | Chat UI | st.chat_message, st.chat_input for ChatGPT-style interface |
| FAISS / Chroma | Vector memory store | Semantic conversation history retrieval |
| OpenAI / Gemini SDK | LLM backend | GPT-4o / Gemini function-calling support |
| SerpAPI / DuckDuckGo | Web search tool | Real-time web search for current information |
| python-dotenv | Env management | API key loading from .env file |

Key packages detected in this repo: streamlit · google-generativeai · langchain · langchain-google-genai · langchain-community · pypdf · faiss-cpu · tiktoken · docx2txt · unstructured


Getting Started

Prerequisites

  • Python 3.9+
  • pip package manager
  • Relevant API keys (see Configuration section)

Installation

```shell
git clone https://github.com/Devanik21/LangChain-AI-Chat-App-.git
cd LangChain-AI-Chat-App-
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt

# Create .env file
echo 'OPENAI_API_KEY=sk-...' > .env
echo 'SERPAPI_API_KEY=...' >> .env  # optional, for web search

streamlit run app.py
```

Usage

```shell
# Basic chat
streamlit run app.py

# CLI test
python agent_cli.py --query 'What is the capital of France and its current population?'

# Switch model via environment
MODEL=gemini-2.0-flash streamlit run app.py
```

Configuration

| Variable | Default | Description |
|---|---|---|
| OPENAI_API_KEY | (required) | OpenAI API key for GPT models |
| GOOGLE_API_KEY | (optional) | Google API key for Gemini models |
| MODEL_BACKEND | openai | LLM backend: openai, gemini, ollama |
| MEMORY_TYPE | vector | Memory backend: buffer, vector, summary |
| MAX_ITERATIONS | 10 | Maximum ReAct agent tool-use iterations |

Copy .env.example to .env and populate all required values before running.
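For reference, a .env sketch covering the variables above (values are placeholders; only the key for your chosen backend is strictly required):

```
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=...
SERPAPI_API_KEY=...
MODEL_BACKEND=openai
MEMORY_TYPE=vector
MAX_ITERATIONS=10
```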


Project Structure

LangChain-AI-Chat-App-/
├── README.md
├── requirements.txt
├── app.py
└── ...

Roadmap

  • Long-term user memory with persistent vector store (PostgreSQL + pgvector)
  • Multi-agent orchestration: specialist sub-agents for code, research, and creative tasks
  • Document upload and RAG mode: chat over user-provided PDFs
  • Voice I/O integration with Whisper (speech-to-text) and ElevenLabs (text-to-speech)
  • Tracing and observability integration with LangSmith for debugging agent reasoning

Contributing

Contributions, issues, and feature requests are welcome. Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/your-feature)
  3. Commit your changes (git commit -m 'feat: add your feature')
  4. Push to your branch (git push origin feature/your-feature)
  5. Open a Pull Request

Please follow conventional commit messages and ensure any new code is documented.


Notes

API keys for the selected LLM backend and optional tools (web search) are required. Tool use increases latency and token consumption. The ReAct loop has a configurable maximum iterations limit to prevent runaway tool calls.


Author

Devanik Debnath
B.Tech, Electronics & Communication Engineering
National Institute of Technology Agartala

GitHub LinkedIn


License

This project is open source and available under the MIT License.


Crafted with curiosity, precision, and a belief that good software is worth building well.
