Automated EPG (Electronic Program Guide) data generator for LG ProCentric hospitality TV systems. Fetches, processes, and formats TV guide data from multiple sources into LG ProCentric-compatible bundles ready for FTP deployment.
This tool streamlines EPG data management for LG ProCentric servers by:
- Fetching live EPG data from Sky NZ (GraphQL) and XMLTV.net sources
- Processing raw data into structured, validated models
- Formatting output to meet LG ProCentric JSON specifications
- Packaging bundles as dated ZIP files with proper naming conventions
- Deploying via FTP server (Docker mode) or local output (development mode)
- New Zealand: Sky NZ (all channels) - 3 days of EPG data
- Australia: 8 capital cities + 40+ regional cities and areas - ~9 days of EPG data
Note: The amount of EPG data varies by region due to different data sources:
- New Zealand data is fetched from Sky NZ's GraphQL API, configured to retrieve 3 days (today + 2 days)
- Australian data is fetched from XMLTV.net XML feeds, which typically provide approximately 9 days of programming
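The New Zealand window ("today + 2 days") can be expressed as a simple date range. A minimal sketch for illustration, not the project's actual fetch code:

```python
from datetime import date, timedelta

def nz_fetch_dates(today: date) -> list[date]:
    """Return the 3-day EPG window: today plus the next two days."""
    return [today + timedelta(days=offset) for offset in range(3)]

dates = nz_fetch_dates(date(2025, 2, 7))
print([d.isoformat() for d in dates])  # ['2025-02-07', '2025-02-08', '2025-02-09']
```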
The easiest way to run this project is using Docker. See DOCKER.md for complete instructions.
Quick start:
```
docker-compose up -d
```

This Compose stack now pulls prebuilt images from GHCR by default:
- `ghcr.io/hcaldicott/procentric-epg-generator:latest`
- `ghcr.io/hcaldicott/procentric-epg-admin:latest`
To build `epg_generator` and `epg_admin` locally from source instead:

```
docker compose -f docker-compose.yml -f docker-compose.build.yml up -d --build
```

This will:
- Run EPG generation automatically at midnight daily (configurable)
- Expose generated bundles via FTP server
- Expose SFTPGo and per-user staleness metrics for Prometheus/Grafana
- Automatically manage bundle cleanup and updates
Kubernetes manifests are available under k8s/ with Kustomize support. See KUBERNETES.md for full setup.
Quick start:

```
kubectl apply -k k8s/base
```

The default Kubernetes manifests use single-node scheduling with ReadWriteOnce shared storage for bundle data.
For local development and testing on macOS, use the helper script. See LOCAL_TESTING.md for details.
Quick start:

```
./epg_generator/run_local.sh
```

This will:
- Set up Python virtual environment automatically
- Install all dependencies
- Run EPG generation locally
- Display results and generated bundles
To run EPG generation automatically on a schedule using the local script (instead of Docker), add a cron job:

```
# Edit crontab
crontab -e

# Run daily at 2:00 AM
0 2 * * * cd /path/to/ProCentricEPG && ./epg_generator/run_local.sh >> /var/log/epg_cron.log 2>&1

# Run every 6 hours
0 */6 * * * cd /path/to/ProCentricEPG && ./epg_generator/run_local.sh >> /var/log/epg_cron.log 2>&1
```

Replace `/path/to/ProCentricEPG` with your actual project path.
Once bundles are generated, they must be deployed to an FTP server accessible by your LG ProCentric devices.
Each bundle is a ZIP file containing a single JSON file:
ZIP naming convention:

```
Procentric_EPG_{COUNTRY_CODE}_{DATE}.zip
```

Example: `Procentric_EPG_NZL_20250207.zip`

JSON filename (inside ZIP): `Procentric_EPG.json`
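Packaging follows directly from these conventions. A minimal sketch (function and variable names are illustrative, not the project's actual code) that zips a `Procentric_EPG.json` payload under the dated bundle name:

```python
import json
import tempfile
import zipfile
from datetime import date
from pathlib import Path

def write_bundle(output_dir: Path, country_code: str, guide: dict, day: date) -> Path:
    """Write guide data as Procentric_EPG.json inside a dated, correctly named ZIP."""
    bundle_name = f"Procentric_EPG_{country_code}_{day.strftime('%Y%m%d')}.zip"
    bundle_path = output_dir / bundle_name
    with zipfile.ZipFile(bundle_path, "w", zipfile.ZIP_DEFLATED) as zf:
        # The JSON must sit at the ZIP root, not in a subdirectory.
        zf.writestr("Procentric_EPG.json", json.dumps(guide, indent=2))
    return bundle_path

bundle = write_bundle(Path(tempfile.mkdtemp()), "NZL",
                      {"version": "0.1", "channels": []}, date(2025, 2, 7))
print(bundle.name)  # Procentric_EPG_NZL_20250207.zip
```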
JSON structure:

```json
{
  "filetype": "Pro:Centric JSON Program Guide Data NZL",
  "version": "0.1",
  "fetchTime": "2025-02-07T13:22:44+1200",
  "maxMinutes": 60,
  "channels": [
    {
      "channelID": "1",
      "name": "TVNZ 1",
      "resolution": "HD",
      "events": [
        {
          "eventID": "334242",
          "title": "6 News",
          "eventDescription": "TVNZ New Zealand News",
          "rating": "TV-MA",
          "date": "2025-02-07",
          "startTime": "1800",
          "length": "60",
          "genre": "News"
        }
      ]
    }
  ]
}
```

When a ProCentric system logs into a remote FTP server, it cannot read bundles directly from the server's root directory. This tool therefore outputs bundles organized by ISO country code in subdirectories:
```
/EPG/
├── NZL/
│   └── Procentric_EPG_NZL_20250207.zip
├── AUS/
│   ├── SYD/
│   │   └── Procentric_EPG_SYD_20250207.zip
│   ├── MEL/
│   │   └── Procentric_EPG_MEL_20250207.zip
│   └── BNE/
│       └── Procentric_EPG_BNE_20250207.zip
```
Your FTP server should present the bundles to ProCentric systems using the same subdirectory structure, or at minimum within a subdirectory of some kind.
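If you manage your own server, uploads mirroring this layout can be scripted with Python's standard `ftplib`. A sketch under assumed host and credentials, not part of this tool:

```python
import ftplib
from pathlib import Path

def remote_dir_for(country_code: str) -> str:
    """Remote directory for a bundle, per the layout above."""
    return f"/EPG/{country_code}"

def upload_bundle(host: str, user: str, password: str, bundle: Path, country_code: str) -> None:
    """Upload a bundle into /EPG/<COUNTRY_CODE>/, creating the directory if needed."""
    with ftplib.FTP(host) as ftp:
        ftp.login(user, password)
        remote_dir = remote_dir_for(country_code)
        try:
            ftp.mkd(remote_dir)  # ignore the error if the directory already exists
        except ftplib.error_perm:
            pass
        ftp.cwd(remote_dir)
        with bundle.open("rb") as fh:
            ftp.storbinary(f"STOR {bundle.name}", fh)

# Example (hypothetical host/credentials):
# upload_bundle("ftp.example.com", "user", "secret",
#               Path("Procentric_EPG_NZL_20250207.zip"), "NZL")
```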
When using Docker Compose or Kubernetes deployment, SFTPGo is automatically configured:
- Host: your server IP
- Port: 21 (configurable in `docker-compose.yml` or Kubernetes service manifests)
- User: create per-customer FTP users in `epg-admin` (`/admin/login`)
- Password: set per customer in `epg-admin` (manual or auto-generated)
- Root: `/srv/epg/` (mapped from the generated bundles volume)
Bundles are automatically placed in the correct directories and old bundles are cleaned up on each run.
epg-admin UI:
- URL: `http://<host>:8081/admin/login`
- Auth: existing SFTPGo admin credentials
- Workflow: create customer users with automatic read-only EPG folder/group mapping

SFTPGo's built-in Web Admin UI is disabled by default. Set `SFTPGO_ENABLE_WEB_ADMIN_UI=1` to enable it.
The stack includes an epg-admin service that polls SFTPGo's REST API and emits Prometheus metrics.
The purpose of the staleness metrics is to alert when Pro:Centric devices/sites have not downloaded fresh EPG data. For reliable alerting, use one FTP account per ProCentric installation.
- Merged metrics (single scrape target): `http://<host>:8081/metrics`
- Exporter health: `http://<host>:8081/healthz`
- SFTPGo native metrics are merged internally by `epg-admin` and not published separately by default.

Tracked per-user metrics include:
- `sftpgo_user_last_login_timestamp`
- `sftpgo_user_seconds_since_last_login`
- `sftpgo_user_stale`
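For an ad-hoc check of which users are stale, the exposition text from the metrics endpoint can be parsed directly. A sketch using a sample payload (usernames are made up; metric names follow the list above):

```python
# Sample Prometheus exposition text; fetch the real payload from http://<host>:8081/metrics.
sample_metrics = """\
sftpgo_user_seconds_since_last_login{username="hotel-akl"} 512
sftpgo_user_stale{username="hotel-akl"} 0
sftpgo_user_seconds_since_last_login{username="hotel-syd"} 90500
sftpgo_user_stale{username="hotel-syd"} 1
"""

def stale_users(metrics_text: str) -> list[str]:
    """Return usernames whose sftpgo_user_stale gauge is 1."""
    stale = []
    for line in metrics_text.splitlines():
        if line.startswith("sftpgo_user_stale{") and line.rstrip().endswith(" 1"):
            # Extract the username label value.
            stale.append(line.split('username="')[1].split('"')[0])
    return stale

print(stale_users(sample_metrics))  # ['hotel-syd']
```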
Important note:
- Staleness is derived from the SFTPGo user `last_login` field via the API.
- Use one FTP account per customer for accurate alerting.

Default alert timing:
- The staleness threshold defaults to `24` hours via `STALE_AFTER_HOURS=24` in `epg-admin`.
- The bundled Grafana alert rule fires immediately when staleness is detected (`"for": "0s"` in `grafana_templates/sftpgo-alert-rule.json`).
- Effective default alert latency is approximately `24h` plus up to the exporter poll interval (default `60s`).
How to customize:
- Change the stale threshold:
  - Docker Compose: set `STALE_AFTER_HOURS` for `epg-admin` in `docker-compose.yml`.
  - Kubernetes: set `STALE_AFTER_HOURS` in `k8s/base/deployment-epg-admin.yaml`.
- Change the alert wait time before firing:
  - Edit the alert rule `for` value in `grafana_templates/sftpgo-alert-rule.json` (for example `5m`, `30m`, `1h`) and re-import the rule.
Grafana assets:
- `grafana_templates/sftpgo-observability-dashboard.json`
- `grafana_templates/sftpgo-alert-rule.json`
- `grafana_templates/import-alert-rule.sh` (helper script to import the alert rule with datasource UID resolution)
Quick Grafana setup:
- Import the dashboard:
  - Grafana -> Dashboards -> New -> Import
  - Upload `grafana_templates/sftpgo-observability-dashboard.json`
  - Select your Prometheus datasource
- Import the alert rule with the helper script:

```
./grafana_templates/import-alert-rule.sh \
  --grafana-url http://localhost:3000 \
  --api-token <grafana-api-token> \
  --datasource-name Prometheus
```
- Login to your LG ProCentric server web admin panel.
- Navigate to the Settings tab.
- In the left menu, expand the "External Service" section.
- Click "EPG".
- Configure FTP connection:
- FTP Site: Your FTP server IP/hostname
- Site Directory: `/EPG/{COUNTRY_CODE}/`
- Site User: FTP server username
- Site Password: FTP server password
- Set "Hours of EPG Data" based on your region:
- New Zealand: Set to "72 hours (3 Days)"
- Australia: Set to "168 hours (7 Days)"
- Test connection and verify EPG data loads.
If not using Docker's built-in FTP server:

```
# Generate bundles locally
./epg_generator/run_local.sh

# Upload to your FTP server
ftp your-ftp-server.com
> cd /EPG/NZL
> put epg_generator/output/EPG/NZL/Procentric_EPG_NZL_20250207.zip
> quit
```

Or use scp/rsync for automated deployment:

```
# After running ./epg_generator/run_local.sh
rsync -avz epg_generator/output/EPG/ user@server:/home/procentric/EPG/
```

- Multi-source aggregation: Sky NZ GraphQL API + XMLTV.net feeds
- Timezone handling: Automatic conversion for Australian regions (AEST, AEDT, AWST, ACST, ACDT)
- Data validation: Pydantic models ensure data integrity
- Error resilience: Continues processing if individual cities fail
- Webhook notifications: Real-time alerts to Teams, Discord, or Slack
- Automated scheduling: Built-in cron support (Docker) or manual cron setup (local)
- FTP deployment: Automatic bundle hosting via integrated FTP server (Docker mode)
- Download observability: SFTPGo per-user stale metrics for Prometheus/Grafana alerting
The repository is organized into clear components:
- `epg_generator/`: EPG generation application code, Dockerfile, dependencies, and local runner implementation
- `epg_admin/`: Admin UI + Prometheus exporter service for SFTPGo account management and staleness monitoring
- `k8s/`: Kubernetes/Kustomize manifests
- `grafana_templates/`: Grafana dashboard and alert import templates
- Root docs (`README.md`, `DOCKER.md`, `KUBERNETES.md`, `LOCAL_TESTING.md`): deployment and operations guidance

Compatibility note:
- The local helper script is `./epg_generator/run_local.sh`.
This repository includes GitHub Actions automation for CI, release management, and image publishing.
- Conventional commit enforcement (PR title format) via `.github/workflows/conventional-commits.yml`
- Automatic linting with Ruff via `.github/workflows/lint.yml`
- Separate semantic versioning and release PRs, using Release Please (`.github/workflows/release-please.yml`), for:
  - `epg_generator` (`epg-generator-vX.Y.Z` tags, `epg_generator/CHANGELOG.md`)
  - `epg_admin` (`epg-admin-vX.Y.Z` tags, `epg_admin/CHANGELOG.md`)
- Automatic container builds/pushes to GHCR via `.github/workflows/containers.yml`
  - Pushes to `main` publish `edge` + `sha-<shortsha>` tags
- Release image publishing via `.github/workflows/release-please.yml`
  - Component releases publish `<version>` + `latest` tags for each released component

Published images:
- `ghcr.io/<org-or-user>/procentric-epg-generator`
- `ghcr.io/<org-or-user>/procentric-epg-admin`
- Enable branch protection on `main`.
- Require these status checks before merge:
  - `Lint / ruff`
  - `Conventional Commits / validate-pr-title`
- Prefer squash merges so PR title becomes the release commit message.
- Ensure repository Actions are allowed to publish packages to GHCR.
Receive real-time alerts for processing errors and completion status.
Set the following environment variables to enable webhook notifications:

```
WEBHOOK_URL="your-webhook-url"     # Required: your webhook URL
WEBHOOK_TYPE="auto"                # Optional: auto (default), teams, discord, slack, generic
WEBHOOK_NOTIFY_SUCCESS="false"     # Optional: set to "true" to notify on success
```

Microsoft Teams:
- In Teams, go to the channel where you want notifications
- Click "..." → "Connectors" → "Incoming Webhook"
- Configure webhook and copy the URL
- Set `WEBHOOK_URL` to the copied URL

Example: `WEBHOOK_URL="https://outlook.office.com/webhook/..."`

Discord:
- In Discord, go to Server Settings → Integrations → Webhooks
- Click "New Webhook" and configure
- Copy the webhook URL
- Set `WEBHOOK_URL` to the copied URL

Example: `WEBHOOK_URL="https://discord.com/api/webhooks/..."`

Slack:
- Go to https://api.slack.com/apps and create an app
- Enable "Incoming Webhooks" and add webhook to workspace
- Copy the webhook URL
- Set `WEBHOOK_URL` to the copied URL

Example: `WEBHOOK_URL="https://hooks.slack.com/services/..."`

Notification types:
- Error Notifications: immediate alerts when city processing fails
- Warning Summary: end-of-run summary if any errors occurred
- Success Notifications: optional completion confirmations (set `WEBHOOK_NOTIFY_SUCCESS=true`)
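For reference, payload shapes differ per service. A sketch of building a Discord-style message (the `content` field follows Discord's webhook API; this helper is illustrative, not the tool's internal notifier):

```python
import json

def build_discord_payload(level: str, message: str) -> dict:
    """Build a minimal Discord webhook payload; Discord expects a 'content' field."""
    return {"content": f"[{level.upper()}] {message}"}

payload = build_discord_payload("error", "EPG fetch failed for Sydney")
print(json.dumps(payload))  # {"content": "[ERROR] EPG fetch failed for Sydney"}

# You would then POST it to the webhook URL, e.g.:
# requests.post(webhook_url, json=payload, timeout=10)
```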
Docker (`docker-compose.yml`):

```yaml
environment:
  WEBHOOK_URL: "https://outlook.office.com/webhook/your-webhook-url"
  WEBHOOK_TYPE: "teams"
  WEBHOOK_NOTIFY_SUCCESS: "true"
```

Local:

```
export WEBHOOK_URL="https://discord.com/api/webhooks/your-webhook-url"
export WEBHOOK_TYPE="discord"
./epg_generator/run_local.sh
```

Dependencies:
- `requests`: HTTP client for API/XML fetching
- `pydantic`: Data validation and modeling
- `pytz`: Timezone conversions
- `xml.etree.ElementTree`: XML parsing (XMLTV feeds)
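For illustration, here is how one programme can be pulled from an XMLTV fragment with `xml.etree.ElementTree`. The fragment and field handling are simplified; real XMLTV feeds carry more elements and attributes:

```python
import xml.etree.ElementTree as ET

# Minimal XMLTV-style fragment (channel ID and values are made up).
xmltv_fragment = """
<tv>
  <programme start="20250207180000 +1100" stop="20250207190000 +1100" channel="abc1.sydney">
    <title>ABC News</title>
    <desc>National news bulletin.</desc>
    <category>News</category>
  </programme>
</tv>
"""

root = ET.fromstring(xmltv_fragment)
for prog in root.iter("programme"):
    title = prog.findtext("title")
    genre = prog.findtext("category")
    start = prog.get("start")
    print(title, genre, start)  # ABC News News 20250207180000 +1100
```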
| Region | Source | Type | Coverage | EPG Duration |
|---|---|---|---|---|
| New Zealand | Sky NZ | GraphQL API | All channels | 3 days |
| Australia | XMLTV.net | XML feeds | 8 capitals + 40+ regional | ~9 days |
Capital Cities: Sydney, Melbourne, Brisbane, Perth, Adelaide, Canberra, Hobart, Darwin
Regional: Albany, Albury/Wodonga, Ballarat, Bendigo, Broken Hill, Bunbury, Cairns, Central Coast, Coffs Harbour, Geelong, Gippsland, Gold Coast, Griffith, Jurien Bay, Launceston, Lismore, Mackay, Mandurah, Mildura/Sunraysia, Newcastle, Orange/Dubbo, Port Augusta, Renmark, Riverland, Rockhampton, Shepparton, South Coast NSW, South East SA, Spencer Gulf, Sunshine Coast, Tamworth, Taree/Port Macquarie, Toowoomba, Townsville, Wagga Wagga, Wide Bay, Wollongong
Regional Bundles: NSW Regional, NT Regional, QLD Regional, SA Regional, TAS Regional, WA Regional
No bundles generated
- Check internet connectivity to data sources
- Verify the `epg_generator/output/EPG/` directory exists and is writable
- Review logs for API/XML fetch errors
LG ProCentric not loading EPG
- Verify FTP server is accessible from LG device
- Check file naming matches convention exactly
- Ensure the ZIP contains `Procentric_EPG.json` (not nested in subdirectories)
- Confirm the JSON structure matches the LG schema
Timezone issues
- Australian cities use configured timezone offsets (see `epg_generator/src/main.py`)
- New Zealand uses NZDT/NZST automatically
- Verify `fetchTime` in the JSON uses the correct timezone format
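When debugging timezone output, the `fetchTime` offset style (e.g. `+1200`) can be reproduced with a short conversion. A sketch using the standard-library `zoneinfo` for self-containment (the project itself uses `pytz`); the helper name is illustrative:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def format_fetch_time(dt_utc: datetime, tz_name: str) -> str:
    """Render a UTC datetime in the target zone with a +HHMM offset, fetchTime-style."""
    local = dt_utc.astimezone(ZoneInfo(tz_name))
    return local.strftime("%Y-%m-%dT%H:%M:%S%z")

utc = datetime(2025, 2, 7, 1, 22, 44, tzinfo=timezone.utc)
# Early February falls in NZ daylight time, so the offset is +1300.
print(format_fetch_time(utc, "Pacific/Auckland"))  # 2025-02-07T14:22:44+1300
```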
```
# Enable debug logging
LOG_LEVEL=DEBUG ./epg_generator/run_local.sh

# Check debug output
cat epg_generator/debug/debug_skynz.json
```

Thanks to garethcheyne for the original code that helped shape the evolution of this project.
See CONTRIBUTING.md for contribution guidelines, conventional commit/versioning rules, and release/container workflow details.