Fully automated burn-severity workflow for Digital Earth Australia. This CLI pulls fire footprints from Postgres, loads Sentinel‑2 ARD, computes delta NBR with landcover-aware thresholds, vectorises severity classes, and optionally ships results to S3—idempotently and with rich QA logging.
- 🛰️ Data ingest: Sentinel‑2 ARD (`ga_s2am_ard_3`, `ga_s2bm_ard_3`, `ga_s2cm_ard_3`) with S2 Cloudless masking, loaded via `dea_tools.load_ard`.
- 🧭 Temporal windows: configurable pre/post windows (defaults: `pre_fire_buffer_days=50`, `post_fire_start_days=15`, `post_fire_window_days=60`) tuned for burn assessments.
- 🌿 Landcover-aware severity: delta NBR against `ga_ls_landcover_class_cyear_3`, with grass-class thresholds defined in `grass_classes`.
- 🧪 Quality signals: cloud/contiguity/water masking, pixel counts, and per-fire logs for fast QA.
- 🧩 Geo outputs: severity polygons dissolved by class, plus preview/debug COGs, ready for downstream GIS or dashboards.
- 🚛 Distribution: local outputs under `products/` by default, with optional S3 upload + local cleanup when `upload_to_s3` is enabled.
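The temporal-window defaults above translate into simple date arithmetic. A minimal sketch (the parameter names come from the packaged YAML; the exact anchoring of each window on the ignition vs extinguish date is an assumption for illustration, as is the `burn_windows` helper name):

```python
from datetime import date, timedelta

def burn_windows(ignition_date, extinguish_date,
                 pre_fire_buffer_days=50,
                 post_fire_start_days=15,
                 post_fire_window_days=60):
    """Derive pre/post acquisition windows from fire dates.

    Parameter names and defaults mirror the packaged YAML; the precise
    anchoring of each window is illustrative, not the pipeline's code.
    """
    # Pre-fire baseline: a buffer ending the day before ignition.
    pre = (ignition_date - timedelta(days=pre_fire_buffer_days),
           ignition_date - timedelta(days=1))
    # Post-fire window: starts a settling period after extinguishment.
    post_start = extinguish_date + timedelta(days=post_fire_start_days)
    post = (post_start, post_start + timedelta(days=post_fire_window_days))
    return pre, post
```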
```shell
# 1) Install
pip install -e .  # editable makes iteration easy; requires dea_tools + datacube deps

# 2) Set DB credentials (env var names are fixed)
export FIRE_DB_HOSTNAME=...
export FIRE_DB_NAME=...
export FIRE_DB_USERNAME=...
export FIRE_DB_PASSWORD=...
export DB_PORT=5432

# 3) Run with defaults (uses packaged YAML)
dea-burn-severity

# 4) Or point at your own config
dea-burn-severity --config /path/to/dea_burn_severity_processing.yaml
```

Tip: `DEA_BURN_SEVERITY_*` env vars mirror CLI flags (e.g. `DEA_BURN_SEVERITY_OUTPUT_DIR`), since Click’s `auto_envvar_prefix` is set.
- ✅ Postgres/PostGIS table containing fire footprints; geometry is read via `ST_AsGeoJSON`.
- ✅ DB creds from env: `FIRE_DB_HOSTNAME`, `FIRE_DB_NAME`, `FIRE_DB_USERNAME`, `FIRE_DB_PASSWORD`, `DB_PORT` (defaults to 5432).
- ✅ Optional S3 credentials if uploading outputs.
- ✅ Datacube configured with Sentinel‑2 ARD + landcover products; `dea_tools` available on the Python path.
- ✅ `psycopg2-binary` installed when using DB loading (the CLI does not vendor it).
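Footprint loading boils down to querying the table with `ST_AsGeoJSON` and building a GeoDataFrame. A hedged sketch, assuming `psycopg2` and `geopandas` are installed; `footprint_query` and `load_fire_footprints` are illustrative helper names, not the CLI's internals:

```python
import json
import os

def footprint_query(table, columns, geom_col="geom"):
    """Build the footprint SELECT; geometry is fetched as GeoJSON text."""
    cols = ", ".join(columns)
    return f"SELECT {cols}, ST_AsGeoJSON({geom_col}) AS geojson FROM {table}"

def load_fire_footprints(table, columns, geom_col="geom"):
    """Illustrative loader: creds come from the fixed FIRE_DB_* env vars."""
    import geopandas as gpd  # optional deps, imported lazily
    import psycopg2
    from shapely.geometry import shape

    conn = psycopg2.connect(
        host=os.environ["FIRE_DB_HOSTNAME"],
        dbname=os.environ["FIRE_DB_NAME"],
        user=os.environ["FIRE_DB_USERNAME"],
        password=os.environ["FIRE_DB_PASSWORD"],
        port=int(os.environ.get("DB_PORT", "5432")),
    )
    with conn, conn.cursor() as cur:
        cur.execute(footprint_query(table, columns, geom_col))
        rows = cur.fetchall()
    records = [
        dict(zip(columns, row[:-1]), geometry=shape(json.loads(row[-1])))
        for row in rows
    ]
    return gpd.GeoDataFrame(records, geometry="geometry")
```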
- 📦 Packaged defaults: `src/dea_burn_severity/config/dea_burn_severity_processing.yaml` mirrors the legacy shipped YAML.
- 🔄 Merge order: defaults → optional YAML (`--config` local/http(s)/s3) → CLI flags → `DEA_BURN_SEVERITY_*` env for those flags. DB creds always come from the fixed env var names above.
- 🔑 Key fields (see `config/dea_burn_severity_processing.yaml` for all):
  - `output_dir`: base folder (`products` by default).
  - `s2_products`, `s2_measurements`: Sentinel‑2 collections + bands (passed as lists when calling `load_ard`).
  - `output_crs`, `resolution`: reprojection and pixel size (default EPSG:3577, -10/10 m).
  - `pre_fire_buffer_days`, `post_fire_start_days`, `post_fire_window_days`: temporal windows.
  - `grass_classes`: landcover codes treated as grass; determines thresholds.
  - `db_table`, `db_columns`, `db_geom_column`, `db_output_crs`: how footprints are read and reprojected.
  - `upload_to_s3`, `upload_to_s3_prefix`: enable S3 publishing + cleanup of local run dirs.
Minimal custom YAML example:

```yaml
output_dir: /data/burns
upload_to_s3: true
upload_to_s3_prefix: s3://dea-public-data-dev/projects/burn_cube/derivative/dea_burn_severity/result
db_table: nli_lastboundaries_trigger
db_columns: [fire_id, fire_name, ignition_date, capt_date, capt_method, state, agency, date_retrieved, date_processed]
db_geom_column: geom
```

- Polygon ingest: load a GeoDataFrame from Postgres, dissolving by `fire_id` when present; ensure `fire_id` exists for downstream naming.
- Date wiring: derive `ignition_date` (or fall back to the capture date) and `extinguish_date`; compute pre/post windows from config.
- Baseline stack: call `load_ard` with `min_gooddata=0.99`; if empty, retry with mask dilation + `min_gooddata=0.20` and build a latest-valid composite.
- Post-fire stack: call `load_ard_with_fallback` with decreasing `min_gooddata` thresholds (0.99 → 0.90 by default).
- Landcover: load `ga_ls_landcover_class_cyear_3` for the year before ignition.
- Indices: compute pre/post NBR via `calculate_indices`; derive delta NBR.
- Severity classification: apply grass/woody thresholds (`calculate_severity`), generate a debug mask (cloud/water/contiguity), and set masked pixels to class `6`.
- Vectorisation: convert the severity raster to vectors, clip to the fire footprint, dissolve by class, and attach metadata (`fire_id`, `fire_name`, dates, plus all other attributes from `db_columns`).
- Outputs:
  - GeoJSON: `products/results/DEA_burn_severity_<fire_id>_<date>.json`
  - Preview COG: `products/s2_postfire_preview_<fire_slug>.tif` (first post-fire scene)
  - Debug COG: `products/debug_mask_raster_<fire_slug>.tif`
  - Optional run log: per-fire pixel counts, baseline/post scene counts, masked/valid stats.
- Distribution: if `upload_to_s3` is true, the per-fire folder is uploaded to `upload_to_s3_prefix` and removed locally after verification.
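The indices and classification steps reduce to NBR differencing plus per-class thresholding. A minimal sketch: the NBR formula is standard, but the breakpoints below are placeholders, not the values baked into `calculate_severity`, and `classify_severity` is a hypothetical helper name:

```python
import numpy as np

def nbr(nir, swir):
    """Normalised Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir)

def classify_severity(dnbr, is_grass, invalid):
    """Map delta NBR (pre NBR minus post NBR) to classes 1-5,
    with class 6 reserved for masked pixels.

    The bin edges are hypothetical; the real grass/woody thresholds
    live in `calculate_severity` and are selected via `grass_classes`.
    """
    woody_bins = np.array([0.10, 0.27, 0.44, 0.66])  # hypothetical breakpoints
    grass_bins = np.array([0.05, 0.20, 0.35, 0.50])  # hypothetical breakpoints
    severity = np.where(is_grass,
                        np.digitize(dnbr, grass_bins),
                        np.digitize(dnbr, woody_bins)) + 1
    # Cloud/water/contiguity-masked pixels get the sentinel class 6.
    return np.where(invalid, 6, severity)
```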
- `--config PATH|URL`: external YAML to merge.
- `--output-dir PATH`: override base output folder.
- `--force-rebuild true|false`: ignore existing outputs.
- `--upload-to-s3 true|false`: toggle publishing + cleanup.
- `--upload-to-s3-prefix s3://bucket/prefix`: target prefix.
- `--app-name NAME`: datacube app name.
- `--db-table NAME`: override table (columns come from YAML).

Use `dea-burn-severity --help` for the live list; `DEA_BURN_SEVERITY_OUTPUT_DIR` etc. mirror these options.
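The flag/env mirroring is plain Click behaviour: setting `auto_envvar_prefix` makes every option read a matching `PREFIX_OPTION_NAME` variable. A toy sketch (not the real CLI definition):

```python
import click

@click.command(context_settings={"auto_envvar_prefix": "DEA_BURN_SEVERITY"})
@click.option("--output-dir", default="products", show_default=True)
def cli(output_dir):
    """Toy command showing how the env-var mirroring works."""
    # With the prefix set, --output-dir also reads DEA_BURN_SEVERITY_OUTPUT_DIR.
    click.echo(output_dir)
```

So `DEA_BURN_SEVERITY_OUTPUT_DIR=/data/burns dea-burn-severity` behaves like passing `--output-dir /data/burns`, with the explicit flag taking precedence.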
- GeoJSON: severity polygons dissolved by class, CRS `EPSG:4283`.
- COGs: one post-fire preview (first time slice) and one debug mask per fire.
- Logs: when `log_path` is provided internally, each fire logs scene counts, grid size, valid/burn/masked pixel totals, and baseline/post contiguity stats.
- Idempotency: existing per-fire outputs are skipped unless `--force-rebuild` is set.
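The idempotency check amounts to testing for the per-fire GeoJSON before doing any work. A minimal sketch using the output naming from above; `should_process` is an illustrative helper name, not the CLI's internal function:

```python
from pathlib import Path

def should_process(fire_id, run_date, output_dir, force_rebuild=False):
    """Return True when a fire still needs processing.

    Checks for products/results/DEA_burn_severity_<fire_id>_<date>.json,
    mirroring the --force-rebuild flag's skip behaviour.
    """
    out = Path(output_dir) / "results" / f"DEA_burn_severity_{fire_id}_{run_date}.json"
    return force_rebuild or not out.exists()
```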
- Install: `pip install -e .` (ensure `dea_tools`, `datacube`, `psycopg2-binary`, and the GDAL stack are available).
- Docker: `docker build -t dea-burn-severity .`, then `docker run --rm dea-burn-severity dea-burn-severity --help`.
- Common hiccups:
  - 🔑 Missing DB creds → set `FIRE_DB_*` env vars.
  - 🌥️ No baseline scenes → falls back to the relaxed composite; still skips the fire if empty.
  - 📦 Missing `dea_tools`/`datacube` imports → install into the same environment.
  - 📡 S3 upload failures → outputs remain on disk for manual retry.
- Install dev extras: `pip install -e .[test]`
- Run locally: `pytest`
- Docker build and CI run the same suite during image builds.
Happy mapping! 🎉
