A supportive and encouraging chat application that provides positive, uplifting responses to help users feel good about themselves and their achievements. Built with a clean separation of frontend (Gradio) and backend (FastAPI) components.
- Hosted on: https://cheerupbot.djhuang.dev/
- Real-time chat interface with Gradio
- RESTful API with FastAPI backend
- Supportive AI responses using GPT-4o-mini
- Cost and token usage tracking
- Modern, responsive UI
- Modular architecture with clear separation of concerns
- In-memory session tracking (no database required)
- Python 3.13+
- uv - Fast Python package manager
- FastAPI - Backend API framework
- Gradio - Frontend web interface
- OpenAI API - For AI responses (gpt-4o-mini)
- Pydantic - Data validation and settings management
- Stateless architecture - No persistent storage required
praising_chatbot/
├── src/ # Source code
│ ├── backend/ # Backend components
│ │ ├── api/ # FastAPI routes and app
│ │ │ ├── app.py # FastAPI app factory
│ │ │ └── routes.py # API endpoints
│ │ ├── models/ # Data models (Pydantic)
│ │ │ └── chat.py # Chat-related models
│ │ └── services/ # Business logic
│ │ ├── openai_service.py # OpenAI API integration
│ │ ├── demo_service.py # Demo/mock service (no API calls)
│ │ └── stats_service.py # Usage statistics tracking
│ ├── frontend/ # Frontend components
│ │ └── components/ # UI components
│ │ └── chat_interface.py # Gradio chat interface
│ └── config/ # Configuration
│ └── settings.py # Environment variables, constants
├── main.py # Application entry point
├── pyproject.toml # Project configuration
├── requirements.txt # Python dependencies
└── .env # Environment variables (not in git)
- Python 3.13 or higher
- uv for dependency management (see the uv docs for an installation guide)
- OpenAI API key (required only for production mode; app runs in demo mode by default)
- Clone the repository:
  ```
  git clone [repository-url]
  cd praising_chatbot
  ```
- (Optional) Create a `.env` file in the root directory:
- For demo/testing (default, no API key needed):

  ```
  # No .env file needed! Demo mode is the default.
  # Or explicitly set:
  DEMO_MODE=true
  ```

- For production (with OpenAI API):

  ```
  OPENAI_API_KEY=your_openai_api_key_here
  DEMO_MODE=false
  ```

- Sync dependencies and run the application:

  ```
  uv sync
  uv run uvicorn main:app
  ```
The application will start on http://localhost:8000.

Endpoints:
- Gradio UI: http://localhost:8000/gradio
- API docs: http://localhost:8000/docs
- Health check: http://localhost:8000/health
- Stats API: http://localhost:8000/api/stats
- Open the application in your browser
- Type your message in the text box
- Click "Send" or press Enter
- Receive supportive and encouraging responses
- View usage statistics in the accordion section
- Clear chat history anytime with the "Clear Chat" button
The application runs in demo mode by default, allowing you to test the interface without making actual OpenAI API calls. This is perfect for:
- Testing the application without an API key
- Development and debugging
- Demonstrations and presentations
- Avoiding API costs during testing
- No API calls: Uses predefined encouraging responses instead of calling OpenAI
- Mock tokens: Simulates token usage for cost tracking (approximately 1 token per 4 characters)
- Same interface: The UI and API work identically to production mode
- Clear indicators: Console logs and UI banner clearly show when demo mode is active
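The "roughly 1 token per 4 characters" rule above can be sketched as follows. This is a hypothetical illustration, not the actual code in `src/backend/services/demo_service.py`; the function name `estimate_mock_tokens` and the rounding behavior are assumptions:

```python
def estimate_mock_tokens(text: str) -> int:
    """Approximate token usage in demo mode: about 1 token per 4 characters.

    Hypothetical sketch of the rule described above; the real
    demo_service.py may count differently.
    """
    # Floor-divide by 4, but report at least 1 token so even very
    # short messages register some usage in the stats.
    return max(1, len(text) // 4)


if __name__ == "__main__":
    print(estimate_mock_tokens("You are doing great!"))  # 20 chars -> 5
```

Because demo mode feeds these mock counts through the same stats pipeline as real usage, the cost display behaves identically in both modes.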
To use real OpenAI API responses, create a `.env` file with:

```
# In .env file
OPENAI_API_KEY=your_actual_api_key_here
DEMO_MODE=false
```

To switch back to demo mode:

```
# In .env file
DEMO_MODE=true
```

Or simply remove both `DEMO_MODE` and `OPENAI_API_KEY` from your `.env` file (demo is the default).
You can modify the chatbot's behavior by editing the `SYSTEM_PROMPT` in `src/config/settings.py`:

```python
SYSTEM_PROMPT = """You are a supportive and encouraging friend. Your role is to provide positive,
uplifting responses that make the user feel good about themselves and their achievements.
Always maintain a positive, humorous and fluffy tone and keep the responses within 50 words. No emoji."""
```

Other configuration options in `src/config/settings.py`:
- `DEMO_MODE`: Enable/disable demo mode (default: `true`)
- `OPENAI_MODEL`: Change the AI model (default: `gpt-4o-mini`)
- `HOST` and `PORT`: Server configuration (default: `0.0.0.0:8000`)
- `COST_PER_MILLION_TOKENS`: Adjust cost calculations
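As a rough, stdlib-only illustration of how these options might be read from the environment (the real `settings.py` uses Pydantic, so names and defaults below are assumptions):

```python
import os
from dataclasses import dataclass, field


@dataclass
class Settings:
    """Hypothetical sketch of src/config/settings.py using only the stdlib."""

    # Demo mode is on unless DEMO_MODE is explicitly set to "false".
    demo_mode: bool = field(
        default_factory=lambda: os.getenv("DEMO_MODE", "true").lower() != "false"
    )
    openai_model: str = field(
        default_factory=lambda: os.getenv("OPENAI_MODEL", "gpt-4o-mini")
    )
    host: str = field(default_factory=lambda: os.getenv("HOST", "0.0.0.0"))
    port: int = field(default_factory=lambda: int(os.getenv("PORT", "8000")))
    cost_per_million_tokens: float = 0.15


settings = Settings()
```

Defaulting `demo_mode` to true unless explicitly disabled matches the README's "demo is the default" behavior: a missing or empty `.env` yields a working app with no API key.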
The application tracks:
- Total tokens used
- Total cost incurred (based on GPT-4o-mini pricing: $0.15 per million tokens)
Note: Statistics are stored in-memory and reset when the server restarts.
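A minimal sketch of what such in-memory tracking might look like (the class and method names are assumptions, not the actual `stats_service.py`):

```python
class StatsService:
    """Hypothetical in-memory usage tracker; all state is lost on restart."""

    COST_PER_MILLION_TOKENS = 0.15  # GPT-4o-mini pricing, per the README

    def __init__(self) -> None:
        self.total_tokens = 0

    def record_usage(self, tokens: int) -> None:
        self.total_tokens += tokens

    @property
    def total_cost(self) -> float:
        # Cost scales linearly: tokens / 1,000,000 * price per million.
        return self.total_tokens / 1_000_000 * self.COST_PER_MILLION_TOKENS


stats = StatsService()
stats.record_usage(2_000_000)
print(stats.total_cost)  # 0.3
```

Because the service holds a plain Python attribute rather than writing to a database, restarting the server (or a Heroku dyno) starts the counters from zero again.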
This application is ready to deploy on Heroku. Follow these steps:
- A Heroku account
- Heroku CLI installed
- Login to Heroku:

  ```
  heroku login
  ```

- Create a new Heroku app:

  ```
  heroku create your-app-name
  ```

- Deploy to Heroku:

  ```
  git push heroku main
  ```

- Open your application:

  ```
  heroku open
  ```

Your app will be available at https://your-app-name.herokuapp.com/gradio
Demo Mode (Default): The app deploys in demo mode by default - no API key required!
Production Mode with OpenAI API: To enable real OpenAI responses, set your API key:
```
heroku config:set OPENAI_API_KEY=your_openai_api_key_here
heroku config:set DEMO_MODE=false
```

Other Configuration Options:

```
# Change the OpenAI model
heroku config:set OPENAI_MODEL=gpt-4
# Adjust logging level
heroku config:set LOG_LEVEL=debug
```

To view logs:

```
heroku logs --tail
```

- UV Support: This project uses `uv` for dependency management. Heroku automatically detects `uv.lock` and uses native `uv` support for faster, more reliable builds.
- Python Version: Heroku uses Python 3.13 (specified in `.python-version`). The app is compatible with Python 3.13+.
- Port Configuration: Heroku automatically sets the `PORT` environment variable, which the app uses.
- Persistent Storage: The app uses in-memory storage, so stats reset on dyno restart.
- Free Tier: Heroku's free tier may cause the app to sleep after 30 minutes of inactivity.
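The `PORT` handling mentioned above typically boils down to a one-liner like this (a sketch of what the entry point might do, not necessarily the exact code in `main.py`):

```python
import os

# Heroku injects PORT into the environment at runtime;
# fall back to 8000 for local development.
port = int(os.environ.get("PORT", "8000"))
print(f"Binding to 0.0.0.0:{port}")
```

Reading the port this way lets the same entry point serve both local runs and Heroku dynos without any code change.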
MIT License - Feel free to use and modify as needed.
Contributions are welcome! Feel free to open issues or submit pull requests.
The new modular structure provides:
- Separation of Concerns: Frontend and backend are clearly separated
- Testability: Each module can be tested independently
- Scalability: Easy to add new features or replace components
- Maintainability: Clear organization makes code easier to understand
- Reusability: Services can be reused across different interfaces
- Add conversation history (with user opt-in)
- Support for multiple AI models
- Customizable themes
- Export chat history
- Multi-language support
- Database integration for persistent storage
- User authentication and profiles