A sophisticated, containerized Telegram bot that scrapes free proxy details from onlineproxy.io. This tool is designed for users who need quick access to fresh proxies from various countries directly within their Telegram messenger, with easy deployment thanks to Docker.
- Dynamic Web Scraping: Utilizes Selenium to navigate a JavaScript-heavy website, demonstrating advanced scraping techniques.
- Interactive Telegram Bot: A user-friendly bot interface built with the `python-telegram-bot` library.
- Multi-Country Support: Easily fetch proxies from a predefined list of countries.
- Containerized Deployment: Comes with a `Dockerfile` for easy, consistent, and isolated deployment.
- Secure Configuration: Employs environment variables for API tokens, keeping your credentials safe.
- Robust Error Handling: Gracefully manages web scraping timeouts and bot command issues.
- Asynchronous Operations: Leverages `asyncio` for non-blocking execution, ensuring a responsive bot.
You can run this bot either locally with a Python environment or using Docker. The Docker method is recommended for ease of use and consistency.
- For Docker Deployment: Docker installed on your system.
- For Local Development:
  - Python 3.9+
  - Google Chrome browser
  - `pip` for package management
Containerizing the bot simplifies dependency management and provides a stable environment.
- Clone the Repository:

  ```bash
  git clone https://github.com/amharnisfar/tg-proxy-scraper-bot.git
  cd tg-proxy-scraper-bot
  ```

- Build the Docker Image:

  This command builds the image using the provided `Dockerfile`. It will install all necessary system and Python dependencies.

  ```bash
  docker build -t proxy-scraper-bot .
  ```

- Run the Docker Container:

  You must provide your Telegram Bot Token as an environment variable (`-e`) when running the container.

  ```bash
  docker run -d --name proxy-bot-instance -e TELEGRAM_BOT_TOKEN="YOUR_TELEGRAM_BOT_TOKEN" proxy-scraper-bot
  ```

  - `-d`: Runs the container in detached mode (in the background).
  - `--name proxy-bot-instance`: Assigns a memorable name to your container.

Your bot is now running inside a Docker container!

- View Logs:

  ```bash
  docker logs -f proxy-bot-instance
  ```

- Stop the Container:

  ```bash
  docker stop proxy-bot-instance
  ```

- Restart the Container:

  ```bash
  docker start proxy-bot-instance
  ```
If you prefer to run the bot directly on your machine, follow these steps.
- Clone the repository:

  ```bash
  git clone https://github.com/amharnisfar/tg-proxy-scraper-bot.git
  cd tg-proxy-scraper-bot
  ```

- Create a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows, use venv\Scripts\activate
  ```

- Install the required dependencies:

  The `requirements.txt` file contains all the necessary Python packages.

  ```bash
  pip install -r requirements.txt
  ```

- Set up your Telegram Bot Token:

  Talk to the BotFather on Telegram to create a new bot and get your API token. Then, set it as an environment variable.

  On Linux/macOS:

  ```bash
  export TELEGRAM_BOT_TOKEN="YOUR_TELEGRAM_BOT_TOKEN"
  ```

  On Windows:

  ```bash
  set TELEGRAM_BOT_TOKEN="YOUR_TELEGRAM_BOT_TOKEN"
  ```

- Run the Bot:

  ```bash
  python bot.py
  ```
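The contents of `bot.py` are not shown here, but a common pattern for the "Secure Configuration" feature described above is to read the token from the environment at startup and fail fast if it is missing. This is a minimal sketch; the helper name `load_token` is illustrative, not necessarily what the project uses.

```python
import os


def load_token(var_name: str = "TELEGRAM_BOT_TOKEN") -> str:
    """Read the bot token from the environment, failing fast if it is unset."""
    token = os.environ.get(var_name)
    if not token:
        raise RuntimeError(
            f"{var_name} is not set. Export it before starting the bot."
        )
    return token
```

Failing at startup with a clear message beats a cryptic authentication error later, and keeps the token out of source control.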
Interact with the bot on Telegram using the following commands:
- `/start` - Displays a welcome message and the list of available countries.
- `/help` - Provides instructions on how to use the `/proxy` command.
- `/proxy <country_name>` - Scrapes and returns up to four proxies for the specified country.

Example: `/proxy united-states`
The bot is built using the `python-telegram-bot` library. It listens for commands from users and triggers the appropriate actions. It uses `asyncio` to handle multiple user requests concurrently without blocking.
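One common way to keep an `asyncio` bot responsive while Selenium does blocking browser work is to push each scrape onto a worker thread with `asyncio.to_thread`. The sketch below simulates the blocking scrape with `time.sleep` rather than showing the project's actual code; the function names are illustrative.

```python
import asyncio
import time


def blocking_scrape(country: str) -> str:
    """Stand-in for the Selenium scrape, which blocks its thread."""
    time.sleep(0.2)  # simulate slow browser work
    return f"proxies for {country}"


async def handle_request(country: str) -> str:
    # Offload the blocking call so the event loop keeps serving other users.
    return await asyncio.to_thread(blocking_scrape, country)


async def main() -> list:
    # Two concurrent requests finish in roughly the time of one scrape.
    start = time.monotonic()
    results = await asyncio.gather(
        handle_request("united-states"),
        handle_request("germany"),
    )
    assert time.monotonic() - start < 0.4  # concurrent, not sequential
    return results
```

Without the thread offload, a single slow scrape would stall every other user's command until it finished.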
The web scraper is built with Selenium. When a user requests proxies, the scraper:
- Launches a headless Chromium browser inside the container.
- Navigates to the corresponding URL on onlineproxy.io.
- Locates the active proxy listings and avoids archived ones.
- Iteratively clicks on each proxy to open a details modal.
- Interacts with the modal to reveal the full proxy credentials.
- Extracts the protocol, IP, port, login, and password.
- Returns the scraped data to the bot, which then formats and sends it to the user.
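The final step above, formatting the scraped fields into a Telegram reply, might look like the sketch below. The field names match the extraction list (protocol, IP, port, login, password), but the function name and output format are illustrative assumptions, not the project's actual code.

```python
def format_proxies(proxies: list) -> str:
    """Turn scraped proxy records into a readable Telegram reply."""
    if not proxies:
        return "No proxies found for that country."
    lines = []
    for p in proxies[:4]:  # the bot returns at most four proxies
        lines.append(
            f"{p['protocol']}://{p['login']}:{p['password']}@{p['ip']}:{p['port']}"
        )
    return "\n".join(lines)
```

The `scheme://user:pass@host:port` form shown here is a convenient one because most proxy clients accept it directly.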
```
your_repository/
│
├── Dockerfile          # Instructions for building the Docker image
├── bot.py              # Main Python script for the bot and scraper
├── requirements.txt    # List of Python dependencies
└── README.md           # This file
```
Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
© 2025 Amhar Nisfer Dev, Inc.
