This repository contains baseline models for fracture detection and segmentation from paired RGB + DEM outcrop imagery. It provides classical computer-vision filters and deep-learning baselines (U‑Net and SegFormer) for geological fracture mapping. Models are trained on the FraXet dataset (10.5281/zenodo.17069947).
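Both baselines consume a 4‑channel input built by stacking RGB with a co‑registered DEM band. A minimal sketch of that stacking (the function name, shapes, and normalization here are illustrative assumptions, not the repository's actual loader):

```python
import numpy as np

def stack_rgb_dem(rgb: np.ndarray, dem: np.ndarray) -> np.ndarray:
    """Stack a 3-band RGB image with a single-band DEM into an (H, W, 4)
    array. Hypothetical helper; normalization choices are assumptions."""
    assert rgb.shape[:2] == dem.shape, "RGB and DEM must be co-registered"
    rgb = rgb.astype(np.float32) / 255.0              # scale RGB to [0, 1]
    dem = (dem - dem.min()) / (np.ptp(dem) + 1e-8)    # min-max normalize DEM
    return np.dstack([rgb, dem])                      # DEM becomes channel 4

x = stack_rgb_dem(np.zeros((64, 64, 3), np.uint8), np.random.rand(64, 64))
print(x.shape)  # (64, 64, 4)
```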
- Computer vision filters
- Model baselines for fracture segmentation:
  - U‑Net (CNN encoder–decoder)
  - SegFormer (vision transformer)

  Both models take 4‑channel inputs (RGB + DEM) and produce pixel‑wise fracture probability maps (huggingface.co/ayoubft/fraXteX, 10.5281/zenodo.17866853).
- Inference tools for patch‑based prediction on arbitrary imagery.
- Evaluation scripts for standard metrics (IoU, accuracy, F1, etc.).
- Demo datasets and example scripts for easy use.
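For binary fracture masks, IoU and F1 reduce to counts of true positives, false positives, and false negatives. A minimal, dependency-light sketch (not the repository's actual evaluation code):

```python
import numpy as np

def iou_f1(pred: np.ndarray, gt: np.ndarray):
    """Compute IoU and F1 for binary masks (illustrative helper)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    iou = tp / (tp + fp + fn + 1e-8)      # intersection over union
    f1 = 2 * tp / (2 * tp + fp + fn + 1e-8)
    return iou, f1

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
iou, f1 = iou_f1(pred, gt)
print(round(iou, 2), round(f1, 2))  # 0.5 0.67
```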
To install:

```bash
git clone https://github.com/ayoubft/fractex2D.pt.git
cd fractex2D.pt
pip install -r requirements.txt
```

There are several ways to run inference:
- **Online:** try it on the Hugging Face Space.
- **From the CLI:**
  ```bash
  python infer.py \
    --image path/to/rgb.png \
    --dem path/to/dem.tif \
    --model unet \
    --output pred_mask.png
  ```

- **From Python:**
  ```python
  from infer_function import run_fracture_inference

  mask = run_fracture_inference(
      "rgb.png",
      "dem.tif",
      model_name="segformer",
      output_path="pred.png",
  )
  mask.show()
  ```

To train a model:
```bash
# configuration file: config/main.yaml
python train.py
```

Known limitations:

- Predictions depend on data quality, lighting, and texture conditions.
- Not suitable for safety‑critical use without expert validation.
If you use the FraXet2D baselines in academic work, please cite:

> Fatihi, A., Caldeira, J., Beucler, T., Thiele, S. T., & Samsu, A. Towards robust fracture mapping: Benchmarking automatic fracture mapping in 2D outcrop imagery. *Solid Earth* (preprint coming soon).