This backend runs a Telegram webhook bot that can:
- Reply to `updates` commands with status-count summaries (all-time by default; use `last Nh` for time filtering)
- Answer free-form questions using your configured Lark Base table as the knowledge source
- Stay grounded to Lark data (no general fallback answers)

Behavior:
- In group/supergroup chats, the bot replies only when mentioned (for example `@GRID_TASK_UPDATES_BOT`).
- In private chats, any non-empty message is processed.
- If a question cannot be answered from Lark Base, the bot replies with a strict not-found message and asks the user to rephrase.
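The mention-gating rules above can be sketched as a small predicate. This is an illustrative sketch, not the repo's actual code; the function name and the hardcoded default username are assumptions:

```python
import re

BOT_USERNAME = "GRID_TASK_UPDATES_BOT"  # assumed default; the real app reads TELEGRAM_BOT_USERNAME

def should_process(chat_type: str, text: str, bot_username: str = BOT_USERNAME) -> bool:
    """Decide whether an incoming message should be handled.

    Group/supergroup chats require an @mention of the bot;
    private chats accept any non-empty message.
    """
    text = (text or "").strip()
    if not text:
        return False
    if chat_type in ("group", "supergroup"):
        # Case-insensitive mention check, e.g. "@GRID_TASK_UPDATES_BOT updates"
        return re.search(rf"@{re.escape(bot_username)}\b", text, re.IGNORECASE) is not None
    return True  # private chat: any non-empty message is processed
```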
`backend/app` is organized as:
- `api/v1/endpoints/`: HTTP routes (`/api/v1/health`, `/api/v1/telegram/webhook`)
- `api/dependencies.py`: FastAPI dependency providers
- `core/`: typed settings, DI container, logging, app lifecycle
- `services/`: application business logic (intent, updates, KB answers, orchestration)
- `repositories/`: data access adapters
- `clients/`: Telegram, Lark, and OpenAI-compatible integrations
- `models/`: domain and DTO models
- `utils/`: parsing, formatting, and retrieval helpers
```bash
cd backend
cp .env.example .env
pip install -r requirements.txt
uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
```

Telegram:
- `TELEGRAM_BOT_USERNAME`
- `TELEGRAM_BOT_TOKEN`
- `TELEGRAM_WEBHOOK_SECRET`
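Telegram echoes the `secret_token` registered via `setWebhook` back in the `X-Telegram-Bot-Api-Secret-Token` header of every delivery, so the webhook route can reject spoofed requests. A minimal sketch of such a check (the function name is illustrative, not from the repo):

```python
import hmac

def verify_webhook_secret(headers: dict[str, str], expected_secret: str) -> bool:
    """Validate Telegram's webhook secret header against TELEGRAM_WEBHOOK_SECRET."""
    received = headers.get("X-Telegram-Bot-Api-Secret-Token", "")
    # hmac.compare_digest gives a constant-time comparison
    return hmac.compare_digest(received, expected_secret)
```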
Optional access control:
- `ALLOWED_CHAT_IDS` (leave empty to allow all chats)
- `ALLOWED_USER_IDS` (leave empty to allow all users)
Lark:
- `LARK_API_BASE` (default `https://open.larksuite.com`)
- `LARK_APP_ID`
- `LARK_APP_SECRET`
- `LARK_BASE_APP_TOKEN`
- `LARK_BASE_TABLE_ID`
- `LARK_BASE_VIEW_ID` (optional)
Core behavior defaults:
- `DEFAULT_WINDOW_HOURS` (default `24`)
- `MAX_ITEMS` (default `10`)
- `REQUEST_TIMEOUT_SECONDS` (default `15`)
Conversational QA controls:
- `ENABLE_CONVERSATIONAL_QA` (default `true`)
- `QA_TOP_K` (default `5`)
- `QA_MIN_RELEVANCE_SCORE` (default `0.5`)
- `QA_MAX_CONTEXT_CHARS` (default `5000`)
- `QA_NOT_FOUND_MESSAGE` (custom fallback text)
Optional OpenAI-compatible LLM (Groq-compatible):
- `GROQ_API_BASE` (default `https://api.groq.com/openai`)
- `GROQ_API_KEY`
- `GROQ_MODEL` (default `openai/gpt-oss-20b`)
- `ENABLE_LLM_RESPONSE_POLISH` (`false` by default)
Optional alias names also supported:
- `LLM_API_BASE`
- `LLM_API_KEY`
- `LLM_MODEL`
- Deploy `backend/` as your web service.
- Fill required environment variables.
- Set the Telegram webhook URL to `https://<your-service>/api/v1/telegram/webhook`.
- Set the webhook secret to match `TELEGRAM_WEBHOOK_SECRET`.
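The last two steps map onto a single call to Telegram's `setWebhook` method, which accepts `url` and `secret_token` parameters. A stdlib-only sketch that builds that request (the helper name is hypothetical; sending it is left as a comment):

```python
import urllib.parse

TELEGRAM_API_BASE = "https://api.telegram.org"

def build_set_webhook_request(bot_token: str, webhook_url: str, secret: str) -> tuple[str, bytes]:
    """Build the (endpoint, form body) for Telegram's setWebhook call.

    Telegram later echoes secret_token back in the
    X-Telegram-Bot-Api-Secret-Token header of every webhook delivery.
    """
    endpoint = f"{TELEGRAM_API_BASE}/bot{bot_token}/setWebhook"
    body = urllib.parse.urlencode({"url": webhook_url, "secret_token": secret}).encode()
    return endpoint, body

# To actually send it:
#   import urllib.request
#   urllib.request.urlopen(urllib.request.Request(endpoint, data=body), timeout=15)
```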
```bash
python3 backend/scripts/production_smoke.py \
  --base-url https://<your-service>.onrender.com \
  --webhook-secret <TELEGRAM_WEBHOOK_SECRET> \
  --bot-username <TELEGRAM_BOT_USERNAME> \
  --authorized-chat-id <ALLOWLISTED_CHAT_ID> \
  --authorized-user-id <ALLOWLISTED_USER_ID>
```

Manual checks in Telegram:
- `@<TELEGRAM_BOT_USERNAME> updates`
- `@<TELEGRAM_BOT_USERNAME> how many done tickets for stefano?`
- `@<TELEGRAM_BOT_USERNAME> what is blocking grid release?`
The backend expects these Base columns:
- `Text` (or equivalent title field)
- `Status`
- `Date Created` (used as update time when `UpdatedAt` is unavailable)
- `Assigned to` (for assignee filtering)

Also supported when present:
- `Numbering`
- `Ticket Type`
- `Estimated Deadline`
- `TicketURL` (or `Ticket URL`, `URL`, `Link`)
If your table differs, adjust the normalization in `backend/app/clients/lark_client.py`.