```text
    _    ___      _            _     _              _     ____        _
   / \  |_ _|    / \   ___ ___(_)___| |_ __ _ _ __ | |_  | __ )  ___ | |_
  / _ \  | |    / _ \ / __/ __| / __| __/ _` | '_ \| __| |  _ \ / _ \| __|
 / ___ \ | |   / ___ \\__ \__ \ \__ \ || (_| | | | | |_  | |_) | (_) | |_
/_/   \_\___| /_/   \_\___/___/_|___/\__\__,_|_| |_|\__| |____/ \___/ \__|
```
Telegram bot powered by OpenAI with conversation memory, streaming responses, and multiple AI personas
## Features

- Streaming Responses — Real-time, token-by-token output directly in Telegram
- Conversation History — SQLite-backed persistent message storage per user
- Context Window Management — Automatic trimming and summarization to stay within token limits
- Rate Limiting — Per-user request throttling to prevent abuse
- Multiple AI Personas — Switch between 5 built-in personas or create your own
- FSM-based State Management — Persona selection flow using aiogram finite state machines
- Docker Ready — Production Dockerfile included
## Tech Stack

| Component | Technology |
|---|---|
| Bot Framework | aiogram 3.x |
| AI Backend | OpenAI API (GPT-4o) |
| Database | SQLite via aiosqlite |
| Language | Python 3.11+ |
| Containerization | Docker |
## Project Structure

```text
ai-assistant-bot/
├── src/ai_assistant_bot/
│   ├── handlers/                 # Telegram command & message handlers
│   │   ├── start.py              # /start, /help, /stats, /clear, /context
│   │   ├── chat.py               # Main chat handler with streaming
│   │   └── persona.py            # Persona selection FSM
│   ├── services/                 # Business logic layer
│   │   ├── database.py           # SQLite operations
│   │   ├── openai_service.py     # OpenAI API client
│   │   ├── rate_limiter.py       # Per-user rate limiting
│   │   └── context_manager.py    # Context window management
│   ├── models/                   # Data models
│   │   ├── user.py               # User model
│   │   ├── message.py            # Message model with roles
│   │   └── persona.py            # Persona definitions & FSM states
│   ├── middlewares/              # aiogram middlewares
│   │   ├── rate_limit.py         # Rate limit middleware
│   │   └── logging_middleware.py # Request logging
│   ├── bot.py                    # Bot initialization & startup
│   └── config.py                 # Configuration management
├── tests/                        # Test suite
├── Dockerfile
├── Makefile
├── requirements.txt
└── .env.example
```
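The trimming done by `context_manager.py` can be sketched as: keep the system prompt, then walk the history newest-first until the token budget runs out. The chars/4 token heuristic below is an assumption for illustration; a real implementation would use a proper tokenizer such as tiktoken:

```python
def trim_context(messages, max_tokens, count_tokens=None):
    """Keep the system prompt plus as many recent messages as fit max_tokens."""
    if count_tokens is None:
        # Rough heuristic: ~4 characters per token.
        count_tokens = lambda m: len(m["content"]) // 4 + 1
    system = [m for m in messages if m["role"] == "system"]
    history = [m for m in messages if m["role"] != "system"]
    budget = max_tokens - sum(count_tokens(m) for m in system)
    kept = []
    for m in reversed(history):  # newest first
        cost = count_tokens(m)
        if cost > budget:
            break
        kept.append(m)
        budget -= cost
    return system + list(reversed(kept))  # restore chronological order
```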
## Prerequisites

- Python 3.11 or higher
- Telegram Bot Token (from @BotFather)
- OpenAI API Key (from platform.openai.com)
## Installation

```bash
git clone https://github.com/N3XT3R1337/ai-assistant-bot.git
cd ai-assistant-bot
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
make install
cp .env.example .env
```

Edit `.env` with your credentials:

```env
TELEGRAM_BOT_TOKEN=your-telegram-bot-token
OPENAI_API_KEY=sk-your-openai-api-key
OPENAI_MODEL=gpt-4o
```

## Running

```bash
make run
```

Or with Docker:

```bash
make docker-build
make docker-run
```

## Usage

Send any message to the bot and it will respond using GPT-4o with streaming output.
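Streaming in Telegram usually means editing one message as chunks arrive, throttled so the bot does not hit Telegram's edit-rate limits. A sketch of the buffering logic (names and thresholds are illustrative; the real handler lives in `chat.py`):

```python
def buffer_stream(chunks, min_chars=20):
    """Accumulate streamed token chunks and yield a snapshot of the full
    text whenever at least `min_chars` new characters have arrived, plus
    a final snapshot. In the real bot, each snapshot would be pushed to
    Telegram via message.edit_text()."""
    text, last_sent = "", 0
    for chunk in chunks:
        text += chunk
        if len(text) - last_sent >= min_chars:
            last_sent = len(text)
            yield text
    if len(text) != last_sent:
        yield text  # flush whatever arrived after the last edit
```

Feeding it the deltas from an OpenAI streaming response gives a small, bounded number of Telegram edits instead of one per token.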
## Commands

| Command | Description |
|---|---|
| `/start` | Initialize the bot and see the welcome message |
| `/help` | List all available commands |
| `/persona` | Switch between AI personas |
| `/stats` | View your usage statistics |
| `/context` | Check context window utilization |
| `/clear` | Reset your conversation history |
## Personas

The bot supports multiple AI personas that change its behavior:

- 🤖 Default — General-purpose helpful assistant
- 💻 Coder — Expert software engineer
- ✍️ Writer — Creative storyteller
- 📊 Analyst — Data analysis specialist
- 🎓 Tutor — Patient teacher
- ✨ Custom — Define your own persona
Switch personas with /persona and select from the inline keyboard, or choose "Custom" to define your own system prompt.
## Example

```text
User: /persona
Bot:  Choose a Persona 🎭
      [💻 Code Expert] [✍️ Creative Writer]
      [📊 Data Analyst] [🎓 Patient Tutor]
      [✨ Custom Persona]

User: [selects 💻 Code Expert]
Bot:  💻 Persona Changed
      Now using: Code Expert
      Expert programmer and software engineer

User: How do I implement a binary search in Python?
Bot:  def binary_search(arr, target):
          left, right = 0, len(arr) - 1
          while left <= right:
              mid = (left + right) // 2
              if arr[mid] == target:
                  return mid
              elif arr[mid] < target:
                  left = mid + 1
              else:
                  right = mid - 1
          return -1
      ...
```
## Testing

```bash
make test
```

Run with coverage:

```bash
make test-cov
```

## License

This project is licensed under the MIT License — see the LICENSE file for details.
Built with ❤️ by panaceya