A Python web application that fetches and displays daily trending X/Twitter posts about AI productivity.
- Multi-source data fetching: Uses twikit for X/Twitter data (no API key required)
- Smart content filtering: Keyword matching + semantic similarity using sentence-transformers
- Trending algorithm: Engagement-based scoring with time decay, velocity, and virality bonuses
- Background scheduling: Automatic fetching and score updates via APScheduler
- Interactive dashboard: Streamlit-based UI for exploring trending posts
- Backend: FastAPI (async support, background tasks)
- Frontend: Streamlit (rapid dashboard development)
- Database: SQLite + SQLAlchemy (async)
- Data Source: twikit (X/Twitter GraphQL)
- Scheduling: APScheduler
- NLP: sentence-transformers (all-MiniLM-L6-v2)
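Given the stack above, `requirements.txt` presumably looks roughly like the following (package names inferred from the components listed; exact contents and version pins are not shown in the source):

```
fastapi
uvicorn
streamlit
sqlalchemy
aiosqlite
twikit
apscheduler
sentence-transformers
pydantic
python-dotenv
```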
```
ai-productivity-trends/
├── app/
│   ├── main.py                    # FastAPI entry point
│   ├── config.py                  # Configuration
│   ├── api/routes.py              # API endpoints
│   ├── services/
│   │   ├── data_fetcher.py        # Multi-source fetching
│   │   ├── content_filter.py      # Topic relevance
│   │   ├── trending_calculator.py # Scoring logic
│   │   └── scheduler.py           # Background jobs
│   ├── sources/
│   │   ├── base.py                # Abstract interface
│   │   └── twikit_source.py       # twikit implementation
│   └── models/
│       ├── database.py            # SQLAlchemy setup
│       └── schemas.py             # Pydantic models
├── dashboard/
│   └── app.py                     # Streamlit dashboard
├── requirements.txt
└── README.md
```
- Clone the repository and enter it:

  ```bash
  cd ai-productivity-trends
  ```

- Create a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Configure credentials (copy `.env.example` to `.env`):

  ```bash
  cp .env.example .env
  ```

- Edit `.env` with your X/Twitter credentials:

  ```
  TWITTER_USERNAME=your_username
  TWITTER_EMAIL=your_email
  TWITTER_PASSWORD=your_password
  ```
```bash
uvicorn app.main:app --reload
```

The API will be available at http://localhost:8000

- API docs: http://localhost:8000/docs
- Health check: http://localhost:8000/api/health

In a separate terminal:

```bash
streamlit run dashboard/app.py
```

The dashboard will be available at http://localhost:8501
| Endpoint | Method | Description |
|---|---|---|
| `/api/health` | GET | Health check |
| `/api/posts` | GET | Get paginated posts |
| `/api/trending` | GET | Get trending posts |
| `/api/posts/{id}` | GET | Get post details |
| `/api/stats` | GET | Get statistics |
| `/api/fetch/status` | GET | Fetch status |
| `/api/fetch/trigger` | POST | Trigger manual fetch |
```
Score = (likes × 1 + retweets × 2 + replies × 1.5) × time_decay + velocity_bonus + virality_bonus
```
- Time decay: Exponential decay over 24 hours
- Velocity: Engagement growth rate between checks
- Virality: Retweet-to-like ratio bonus
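The formula above can be sketched in plain Python. Note this is a minimal illustration: the exact decay constant, the virality weight, and how the velocity bonus is computed between checks are assumptions, not the values in `trending_calculator.py`.

```python
import math
from datetime import datetime, timezone

def trending_score(likes, retweets, replies, posted_at, now=None,
                   velocity_bonus=0.0, virality_weight=10.0,
                   half_life_hours=24.0):
    """Engagement score with time decay, velocity, and virality bonuses.

    half_life_hours and virality_weight are illustrative assumptions.
    """
    now = now or datetime.now(timezone.utc)
    # Weighted engagement: likes x1, retweets x2, replies x1.5
    base = likes * 1.0 + retweets * 2.0 + replies * 1.5
    age_hours = max((now - posted_at).total_seconds() / 3600.0, 0.0)
    # Exponential decay over 24 hours (assumed half-life)
    time_decay = 0.5 ** (age_hours / half_life_hours)
    # Virality: retweet-to-like ratio bonus (weight assumed)
    virality_bonus = virality_weight * (retweets / likes) if likes else 0.0
    return base * time_decay + velocity_bonus + virality_bonus
```

A fresh post with 100 likes, 10 retweets, and 4 replies scores 127.0; the same post 24 hours later has its base engagement halved by the decay term.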
| Job | Interval | Description |
|---|---|---|
| Full fetch | 4 hours | Fetch new posts from all sources |
| Engagement update | 30 minutes | Update metrics for recent posts |
| Score recalculation | 15 minutes | Recalculate trending scores |
| Cleanup | Daily | Remove posts older than 7 days |
Environment variables (set in .env):
| Variable | Default | Description |
|---|---|---|
| `TWITTER_USERNAME` | - | X/Twitter username |
| `TWITTER_EMAIL` | - | X/Twitter email |
| `TWITTER_PASSWORD` | - | X/Twitter password |
| `DATABASE_URL` | `sqlite+aiosqlite:///./data/posts.db` | Database URL |
| `POST_RETENTION_DAYS` | 7 | Days to keep posts |
| `MIN_RELEVANCE_SCORE` | 0.3 | Minimum relevance threshold |
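For illustration, the variables and defaults above could be loaded with a small stdlib-only helper like this (the project's actual `config.py` may use pydantic or another settings library instead; only the names and defaults come from the table):

```python
import os

def load_config(env=os.environ):
    """Read settings from environment variables, applying the documented defaults."""
    return {
        "TWITTER_USERNAME": env.get("TWITTER_USERNAME"),
        "TWITTER_EMAIL": env.get("TWITTER_EMAIL"),
        "TWITTER_PASSWORD": env.get("TWITTER_PASSWORD"),
        "DATABASE_URL": env.get("DATABASE_URL",
                                "sqlite+aiosqlite:///./data/posts.db"),
        "POST_RETENTION_DAYS": int(env.get("POST_RETENTION_DAYS", "7")),
        "MIN_RELEVANCE_SCORE": float(env.get("MIN_RELEVANCE_SCORE", "0.3")),
    }
```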
Posts are filtered for AI + productivity relevance using:
- Keyword matching: AI terms (GPT, Claude, LLM, etc.) and productivity terms (workflow, automation, etc.)
- Semantic similarity: sentence-transformers model comparing against reference phrases
A post needs both AI-related and productivity-related content to score high.
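The keyword-matching half of this filter can be sketched in a few lines. The term lists and score values below are illustrative only, and the semantic-similarity half (sentence-transformers embeddings compared against reference phrases) is omitted to keep the example dependency-free:

```python
# Hypothetical keyword sets; the real content_filter.py lists are not shown
AI_TERMS = {"gpt", "claude", "llm", "ai", "chatgpt", "copilot"}
PRODUCTIVITY_TERMS = {"workflow", "automation", "productivity", "tasks", "efficiency"}

def keyword_relevance(text: str) -> float:
    """Score a post: high only when it touches both AI and productivity."""
    words = set(text.lower().split())
    has_ai = bool(words & AI_TERMS)
    has_prod = bool(words & PRODUCTIVITY_TERMS)
    if has_ai and has_prod:
        return 1.0   # both topics present
    if has_ai or has_prod:
        return 0.4   # only one topic present (partial score is assumed)
    return 0.0

def is_relevant(text: str, min_score: float = 0.3) -> bool:
    # min_score mirrors the MIN_RELEVANCE_SCORE default of 0.3
    return keyword_relevance(text) >= min_score
```

In the full pipeline this keyword score would be combined with the cosine similarity from the all-MiniLM-L6-v2 embeddings before comparing against `MIN_RELEVANCE_SCORE`.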
MIT