This repository contains the code for training models on the CanadaFireSat benchmark, which is available online. With this benchmark, we investigate the potential of deep learning with multiple sensors for high-resolution wildfire forecasting.
- 💿 Dataset on Hugging Face
- 📝 Paper on ArXiv
- 📊 Data repository on GitHub
- 🤖 Model Weights on Hugging Face
Summary Representation:
In this repository, we train models following two deep learning architecture families: CNN-based models with ResNet encoders and Transformer-based models with ViT encoders.
These models are trained across three data settings:
| Setting | Source | Format | Type |
|---|---|---|---|
| SITS ONLY | Sentinel-2 | Spatial | Multi-Spectral Images |
| ENV ONLY | MODIS | Spatial | Environmental Products |
| | ERA5-Land | Spatial | Climate Reanalysis |
| | CEMS | Spatial | Fire Indices |
| Multi-Modal | Sentinel-2 | Spatial | Multi-Spectral Images |
| | MODIS | Tabular | Environmental Products |
| | ERA5-Land | Tabular | Climate Reanalysis |
| | CEMS | Tabular | Fire Indices |
CNN-Based Multi-Modal Architecture
ViT-Based Multi-Modal Architecture
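The figures above summarize the two multi-modal architectures. As a rough illustration of the CNN-based setting only, the sketch below combines a ResNet encoder over a multi-spectral patch with an MLP over tabular environmental drivers before a pixel-wise head. It is not the repository's implementation: the single-image input (instead of a full Sentinel-2 time series), the fusion by concatenation, and all sizes and module names are simplifying assumptions for illustration.

```python
# Minimal, illustrative sketch of a CNN-based multi-modal segmenter.
# All shapes, names, and the concatenation-based fusion are assumptions,
# not the actual CanadaFireSat model.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class ToyMultiModalSegmenter(nn.Module):
    def __init__(self, n_bands: int = 12, n_env_features: int = 32, env_dim: int = 256):
        super().__init__()
        backbone = resnet50(weights=None)
        # Adapt the stem to multi-spectral input and drop the classification head.
        backbone.conv1 = nn.Conv2d(n_bands, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # -> (B, 2048, H/32, W/32)
        self.env_mlp = nn.Sequential(
            nn.Linear(n_env_features, env_dim), nn.ReLU(), nn.Linear(env_dim, env_dim)
        )
        self.head = nn.Sequential(
            nn.Conv2d(2048 + env_dim, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(256, 1, kernel_size=1),  # binary fire / no-fire logits
        )

    def forward(self, s2: torch.Tensor, env: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(s2)                               # (B, 2048, h, w)
        env_emb = self.env_mlp(env)                            # (B, env_dim)
        env_map = env_emb[:, :, None, None].expand(-1, -1, *feats.shape[-2:])
        logits = self.head(torch.cat([feats, env_map], dim=1))
        # Upsample the logits back to the input resolution.
        return nn.functional.interpolate(logits, size=s2.shape[-2:], mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = ToyMultiModalSegmenter()
    out = model(torch.randn(2, 12, 264, 264), torch.randn(2, 32))
    print(out.shape)  # torch.Size([2, 1, 264, 264])
```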
- In order to log the model training, you need to set up a WandB profile or switch model loggers. You can specify your WandB information in `global_config.yaml`.
- Then, you also need to install the Python virtual environment:

  ```bash
  python -m venv fire-env
  source fire-env/bin/activate
  pip install -r requirements/requirements.txt --extra-index-url https://download.pytorch.org/whl/cu117
  ```

- You can then download the data from Hugging Face 🤗 leveraging `src.huggingface.download` | Config: `download.yaml` (see the sketch after this list).
- Specify the data paths in `global_config.yaml`.
- Training: Run the `src.train.segmentation_training` script with your selected training config: `ResNet_MULTI.yaml`, `ViT_MULTI.yaml`, ...
- Evaluation: Run the `src.eval.eval` script with your selected evaluation config: `eval.yaml`, `eval_tab.yaml`, ... The model config used for evaluation should match the one used for training.
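If you prefer to fetch the data manually instead of using the provided `src.huggingface.download` script, a rough equivalent with `huggingface_hub` is sketched below. The repo id and target directory are placeholders, not values taken from this repository; use the dataset id shown on its Hugging Face page and the paths expected by `global_config.yaml`.

```python
# Sketch of a manual download with huggingface_hub; the provided
# src.huggingface.download script (config: download.yaml) is the supported path.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="<hf-org>/CanadaFireSat",   # placeholder: copy the id from the dataset page
    repo_type="dataset",
    local_dir="data/CanadaFireSat",     # placeholder: match the paths set in global_config.yaml
)
print(f"Dataset downloaded to: {local_path}")
```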
📊 Performance Analysis: In this table, we report the models' performance across data settings and architectures.
| Encoder | Modality | Params (M) | Val PRAUC | Val F1 | Test PRAUC | Test F1 | Test Hard PRAUC | Test Hard F1 | Avg PRAUC | Avg F1 |
|---|---|---|---|---|---|---|---|---|---|---|
| ResNet-50 | SITS Only | 52.2 | 45.9 | 49.4 | 54.0 | 59.9 | 26.2 | 36.7 | 42.0 | 48.7 |
| | ENV Only | 97.5 | 41.6 | 46.7 | 50.8 | 55.2 | 24.5 | 33.1 | 39.0 | 45.0 |
| | Multi-Modal | 52.2 | 46.1 | 51.2 | 57.0 | 60.3 | 27.1 | 37.4 | 43.4 | 49.6 |
| ViT-S | SITS Only | 36.5 | 45.2 | 50.6 | 51.2 | 51.9 | 25.7 | 33.8 | 40.7 | 45.2 |
| | ENV Only | 54.8 | 34.8 | 45.7 | 49.2 | 59.9 | 21.2 | 35.1 | 35.1 | 46.9 |
| | Multi-Modal | 37.7 | 43.9 | 50.0 | 56.2 | 59.2 | 24.7 | 35.6 | 41.6 | 48.3 |
| Baseline (FWI) | ENV Only | - | 20.0 | 32.7 | 43.1 | 50.3 | 21.1 | 32.7 | 28.1 | 38.6 |
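All scores are in percent. As a rough illustration of how such pixel-wise metrics can be computed, the snippet below uses scikit-learn's average precision as the PRAUC and thresholds probabilities at 0.5 for the F1; the repository's `src.eval.eval` script may threshold and aggregate differently, so treat this only as a sketch.

```python
# Sketch of pixel-wise PRAUC / F1 computation, assuming PRAUC is the average
# precision over fire probabilities and F1 is taken at a 0.5 threshold.
import numpy as np
from sklearn.metrics import average_precision_score, f1_score


def pixel_metrics(probs: np.ndarray, labels: np.ndarray, threshold: float = 0.5) -> dict:
    """probs, labels: arrays of shape (N, H, W) with fire probabilities and {0, 1} masks."""
    y_true = labels.reshape(-1)
    y_prob = probs.reshape(-1)
    prauc = average_precision_score(y_true, y_prob)
    f1 = f1_score(y_true, (y_prob >= threshold).astype(int))
    return {"PRAUC": 100 * prauc, "F1": 100 * f1}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = (rng.random((4, 264, 264)) < 0.05).astype(int)             # sparse fire pixels
    probs = np.clip(labels * 0.6 + rng.random((4, 264, 264)) * 0.4, 0, 1)
    print(pixel_metrics(probs, labels))
```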
🗺️ Use Cases on a large ROI: We plot a large target area in Québec where a wildfire occurred in 2023, the fire polygons corresponding to this wildfire, and our model predictions across the region.
Figure 1: Sentinel-2 tile from 2023/06/28 of size 14 km × 26 km before a large wildfire in Québec.
Figure 2: Fire polygons for the large wildfire on 2023/07/05 over the same tile.
Figure 3: Binary model predictions (in red) over the 2.64 km × 2.64 km center-cropped positive samples outlined in black.
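As a hedged illustration of how such regional maps can be produced, the sketch below tiles a large scene into 264 × 264 px patches (2.64 km at 10 m resolution) and runs a model on each patch; the single-image model interface, the 0.5 threshold, and the dummy network used in the usage example are assumptions, and the repository's actual inference pipeline (which consumes Sentinel-2 time series and environmental inputs) may differ.

```python
# Sketch of sliding-window inference over a large ROI with non-overlapping
# 264 x 264 px tiles; border remainders are ignored for brevity.
import torch


def predict_large_roi(model: torch.nn.Module, scene: torch.Tensor, patch: int = 264) -> torch.Tensor:
    """scene: (C, H, W) reflectance tensor; returns an (H, W) binary fire-risk map."""
    c, h, w = scene.shape
    out = torch.zeros(h, w)
    model.eval()
    with torch.no_grad():
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                tile = scene[:, i:i + patch, j:j + patch].unsqueeze(0)
                logits = model(tile)                                # (1, 1, patch, patch)
                out[i:i + patch, j:j + patch] = (logits.sigmoid()[0, 0] > 0.5).float()
    return out


if __name__ == "__main__":
    dummy = torch.nn.Conv2d(12, 1, kernel_size=1)   # stand-in for a trained network
    scene = torch.randn(12, 1400, 2600)             # ~14 km x 26 km at 10 m resolution
    risk_map = predict_large_roi(dummy, scene)
    print(risk_map.shape, risk_map.mean())
```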
```bibtex
@article{porta2025canadafiresat,
  title={CanadaFireSat: Toward high-resolution wildfire forecasting with multiple modalities},
  author={Porta, Hugo and Dalsasso, Emanuele and McCarty, Jessica L and Tuia, Devis},
  journal={arXiv preprint arXiv:2506.08690},
  year={2025}
}
```