AI-powered RSS reader, deployable to GitHub Pages or with Docker
- 🪶 **No Bloat**: Say goodbye to forced logins and app downloads; a responsive static page covers all your feed needs
- 🤖 **Efficiency First**: AI automatically generates article summaries, helping you grasp the key points
- ⚙️ **Customizable**: Full control over RSS sources and AI configuration
- 🚀 **Deploy Freely**: Zero-cost deployment to GitHub Pages or Docker
- **Aggregation & Summaries**: Integrates multi-source RSS feeds with LLM-powered automatic summaries
- **Auto Updates**: Keeps content fresh via GitHub Actions or cron jobs
- **Flexible Deployment**: Zero-cost static hosting on GitHub Pages, or self-hosted with Docker
- **Modern Experience**: Responsive design with light/dark themes
This project is powered by Alibaba Cloud ESA for acceleration, computing, and protection.
This project uses GitHub Actions for automatic deployment to GitHub Pages, with a single workflow handling both data updates and website deployment.
1. **Fork or Clone the Repository** to your GitHub account

2. **Set GitHub Secrets**
   Add the following secrets under your repository's Settings → Secrets and variables → Actions:
   - `LLM_API_KEY`: API key for AI summary generation
   - `LLM_API_BASE`: API base URL for the LLM service
   - `LLM_NAME`: Name of the model to use

3. **Enable GitHub Pages**
   In the repository settings, choose to deploy from GitHub Actions

4. **Manually Trigger the Workflow** (optional)
   Trigger the "Update Data and Deploy" workflow manually from the Actions page of your GitHub repository
**Update Data and Deploy** (`update-deploy.yml`):

- Trigger conditions:
  - Scheduled execution (every 3 hours)
  - Code push
  - Manual trigger
- Execution content:
  - Single build process: fetches RSS content, generates summaries, and builds the static website in one go
  - Multi-platform deployment:
    - Automatically deploys to GitHub Pages
    - Pushes build artifacts to the `deploy` branch for platforms like Vercel to monitor
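The trigger conditions above can be sketched in standard GitHub Actions syntax. The branch name and exact schedule below are assumptions for illustration; the repository's `.github/workflows/update-deploy.yml` is authoritative:

```yaml
on:
  schedule:
    - cron: '0 */3 * * *'   # scheduled execution every 3 hours
  push:
    branches: [main]        # run on code push
  workflow_dispatch:        # allow manual triggering from the Actions page
```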
1. **Customize RSS Sources**: Edit the `src/config/rss-config.js` file to modify or add RSS sources. Each source should include:
   - Name
   - URL
   - Category

2. **Modify Update Frequency**: Edit the cron expression in `.github/workflows/update-deploy.yml`:

   ```yaml
   # For example, change to update once daily at midnight
   cron: '0 0 * * *'
   ```

3. **Adjust Retained Items**: Modify the `maxItemsPerFeed` value in `src/config/rss-config.js`

4. **Customize Summary Generation**: To customize how summaries are generated, such as enforcing a specific format or switching the summary language, modify the `prompt` variable in `scripts/update-feeds.js`
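Putting the source fields and the retained-items setting together, a config file along these lines would cover the options above. The field names and the example feed are illustrative assumptions, not the repository's actual contents; check `src/config/rss-config.js` for the real schema:

```javascript
// Illustrative sketch only: field names are assumptions, not FeedMe's actual schema.
const rssConfig = {
  // How many of the most recent items to keep per source
  maxItemsPerFeed: 10,
  feeds: [
    {
      name: "Hacker News",                     // display name
      url: "https://news.ycombinator.com/rss", // feed URL
      category: "Tech",                        // grouping category
    },
  ],
};

module.exports = rssConfig;
```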
- Go to the Vercel Import page, select "GitHub" and authorize access
- Select your forked FeedMe repository and click "Deploy". An initial deployment failure is expected, since the default branch is `main`
- Refer to Deploying Git Repositories with Vercel to change the production branch to `deploy`, configure it to build the production branch only, then redeploy

GitHub Actions will push to the `deploy` branch after each build, and Vercel will automatically detect and deploy it.
- Go to the Alibaba Cloud ESA Console and open the Pages service
- Click "New Application", select "GitHub" and authorize access
- Select your forked FeedMe repository and configure it as follows:
  - Production Branch: `deploy`
  - Assets Directory: `.` (a single dot)
  - Install Command: leave empty
  - Build Command: leave empty
- Click "Deploy"

GitHub Actions will push to the `deploy` branch after each build, and Alibaba Cloud ESA Pages will automatically detect and deploy it. Thanks to Alibaba Cloud ESA's edge acceleration, the application loads quickly worldwide.
This method uses Docker to run FeedMe locally or on a server. It uses an in-container cron job for automatic data updates and rebuilds, independent of GitHub Actions.

1. **Clone the Repository**

   ```bash
   git clone https://github.com/Seanium/feedme.git
   cd feedme
   ```

2. **Configure Environment Variables**
   Copy the `.env.example` file to `.env` and fill in the necessary API keys:

   ```bash
   cp .env.example .env
   ```

   Edit the `.env` file:

   ```
   LLM_API_KEY=your_api_key
   LLM_API_BASE=LLM_service_api_base_url
   LLM_NAME=model_name_to_use
   ```

3. **Build and Start the Docker Container**

   ```bash
   docker-compose up --build
   ```

4. **Access the Application**
   The application will be available at http://localhost:3000.

5. **Automatic Updates**
   The container automatically runs `pnpm update-feeds` and `pnpm build`, then restarts the server on the schedule defined in `src/config/crontab-docker` (defaults to every 3 hours). To change the update frequency, edit the cron expression in the `src/config/crontab-docker` file (e.g., `0 */6 * * *` for updates every 6 hours).
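For reference, a crontab entry driving the update cycle described above could look like the following. The working directory and exact command chain are assumptions for illustration; the `src/config/crontab-docker` file in the repository is authoritative:

```
# Every 3 hours: refresh feeds, then rebuild the static site
0 */3 * * * cd /app && pnpm update-feeds && pnpm build
```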
1. **Clone the Repository**

   ```bash
   git clone https://github.com/Seanium/feedme.git
   cd feedme
   ```

2. **Install Dependencies**

   ```bash
   pnpm install
   ```

3. **Configure Environment Variables**
   Copy the example environment file and edit it:

   ```bash
   cp .env.example .env
   ```

   Fill in the following content:

   ```
   LLM_API_KEY=your_api_key
   LLM_API_BASE=LLM service API base URL (e.g., https://api.siliconflow.cn/v1)
   LLM_NAME=model name (e.g., THUDM/GLM-4-9B-0414)
   ```

   These environment variables configure the article summary generation feature and need to be obtained from an LLM service provider.

4. **Update RSS Data**

   ```bash
   pnpm update-feeds
   ```

   This command fetches RSS sources, generates summaries, and saves the results to the `public/data` directory.

5. **Start the Development Server**

   ```bash
   pnpm dev
   ```

   Visit http://localhost:3000 to view the application.