Over the years, research projects and papers have led to various pieces of code and software tools. They are made available here for the sake of reproducible research, and so that you can build new extensions on top of them.

PhilEO-Bench: a benchmark for geospatial foundation models


Project page with links to the GitHub repository, the Hugging Face page, and the dataset of downstream tasks.

The PhilEO Bench is a benchmark for geospatial foundation models (e.g. trained on Sentinel-2 or HLS data) with three tasks (land cover classification, road segmentation, and building density regression) on the same massive, global dataset (400 GB). This makes it possible to genuinely evaluate EO foundation models and their assumptions of universality (valid everywhere on Earth) and genericity (useful for multiple tasks). In the paper (pdf) we compare various foundation models such as Prithvi, SeCo and Satlas to our own foundation model: PhilEO. Weights are on the Hugging Face page!
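For a rough idea of how such an evaluation works, here is a minimal PyTorch sketch of the usual protocol: freeze the pretrained encoder and train a lightweight head on a downstream task. All names (encoder, embed_dim, train_loader) are placeholders for illustration, not the actual PhilEO-Bench API; see the GitHub repository for the real evaluation code.

import torch
import torch.nn as nn

def probe_downstream_task(encoder, train_loader, embed_dim, n_outputs=1):
    """Train a small head on a frozen foundation-model encoder (illustrative)."""
    for p in encoder.parameters():
        p.requires_grad = False                      # probe, do not fine-tune
    head = nn.Sequential(nn.Linear(embed_dim, 256), nn.ReLU(),
                         nn.Linear(256, n_outputs))
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()                           # e.g. building density regression
    for images, targets in train_loader:
        with torch.no_grad():
            feats = encoder(images)                  # assumed (B, embed_dim) embeddings
        loss = loss_fn(head(feats), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head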

If using this code, please cite: PhilEO Bench: Evaluating Geo-Spatial Foundation Models, Casper Fibaek, Luke Camilleri, Andreas Luyts, Nikolaos Dionelis, Bertrand Le Saux, IGARSS, July 2024.

@inproceedings{fibaek2024-phileo-bench,
 author = {Fibaek, Casper and Camilleri, Luke and Luyts, Andreas and Dionelis, Nikolaos and {Le Saux}, Bertrand},
 title = {PhilEO Bench: Evaluating Geo-Spatial Foundation Models},
 booktitle = {IGARSS},
 month = {July},
 year = {2024},
}

AerialMTL: Multi-Task Learning for Aerial Images


AerialMTL code on GitHub

With Marcela Carvalho, we developed this approach for the joint estimation of 3D (Digital Height Models) and semantics (urban cartography) from aerial images. [ Related: GRSL article / preprint / DFC 2018 data ]

It consists of a deep network for multi-task learning, and we have shown that each task helps the other, yielding better results on both the ISPRS Vaihingen and IEEE GRSS Data Fusion Contest 2018 datasets.
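To give the flavour of the approach, here is a toy PyTorch sketch of hard parameter sharing for multi-task learning: a shared encoder feeds two task heads, one regressing height and one predicting semantic classes, trained with a weighted sum of the two losses. Layer sizes and the loss weighting are illustrative assumptions, not the actual AerialMTL architecture.

import torch
import torch.nn as nn

class ToyMultiTaskNet(nn.Module):
    """Shared encoder with two task-specific heads (illustrative only)."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.height_head = nn.Conv2d(64, 1, 1)        # height regression
        self.sem_head = nn.Conv2d(64, n_classes, 1)   # semantic segmentation

    def forward(self, x):
        feats = self.encoder(x)                       # features shared by both tasks
        return self.height_head(feats), self.sem_head(feats)

# Joint training minimizes a weighted sum of the per-task losses, e.g.:
# loss = nn.functional.l1_loss(height, dsm) \
#      + 0.5 * nn.functional.cross_entropy(logits, labels)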

If using this code, please cite: Multitask learning of Height and Semantics From Aerial Images, M. Pinheiro de Carvalho, B. Le Saux, P. Trouvé-Peloux, F. Champagnat, A. Almansa, IEEE Geoscience and Remote Sensing Letters (GRSL), Nov. 2019.

@article{carvalho-2019grsl-mtl3D,
 author = {Carvalho, Marcela and {Le Saux}, Bertrand and Trouv{\'e}-Peloux, Pauline and Champagnat, Fr{\'e}d{\'e}ric and Almansa, Andr{\'e}s},
 title = {Multitask learning of Height and Semantics From Aerial Images},
 journal = {IEEE Geoscience and Remote Sensing Letters},
 month = {November},
 year = {2019},
}

HyperGANs: GANs for Hyperspectral Toolbox


HyperGANs code

The HyperGANs toolbox by Nicolas Audebert implements generative adversarial networks (GANs) for the synthesis of realistic hyperspectral spectra (conditioned by material class / after mixing). [Related: GANs for hyperspectral paper with Sébastien Lefèvre]

It generates spectra for a given hyperspectral sensor which are likely with respect to the distribution of the original training dataset. Moreover, it comes in a class-conditional flavour, which makes it possible to synthesise realistic samples of pure material spectra. It can easily be adapted to new datasets.
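For intuition, a class-conditional generator for spectra can be as simple as concatenating a noise vector with a one-hot class label and decoding it to one value per band. The PyTorch sketch below is a toy stand-in (band count, class count and layer sizes are made-up), not the toolbox's actual architecture.

import torch
import torch.nn as nn

class ToySpectrumGenerator(nn.Module):
    """Class-conditional generator for 1D spectra (illustrative only)."""
    def __init__(self, noise_dim=32, n_classes=16, n_bands=224):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, n_bands), nn.Sigmoid(),    # reflectance in [0, 1]
        )

    def forward(self, z, y_onehot):
        # Condition the generated spectrum on the material class
        return self.net(torch.cat([z, y_onehot], dim=1))

# Sampling 8 spectra of class 3:
# z = torch.randn(8, 32)
# y = nn.functional.one_hot(torch.full((8,), 3), 16).float()
# spectra = ToySpectrumGenerator()(z, y)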

If using this code, please cite: Generative adversarial networks for realistic synthesis of hyperspectral samples, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, Proc. IGARSS 2018.

@inproceedings{audebert_generative_2018,
 title = {Generative adversarial networks for realistic synthesis of hyperspectral samples},
 booktitle = {Proc. IGARSS},
 author = {Audebert, Nicolas and {Le Saux}, Bertrand and Lef{\`e}vre, S{\'e}bastien},
 month = {July},
 year = {2018},
}

D3Net: An encoder-decoder FCN with dense blocks


D3-Net code on GitHub

Marcela Carvalho’s code for depth estimation from a single image: it ranked among the top of the state of the art on the NYUv2 dataset, while being simpler to train, in a single phase, than most competitors. [ Related: ICIP 2018 paper / ECCV/W 2018 paper / video ]

It has a fully-convolutional network architecture which incorporates the nice features of densely connected conv nets and skip connections à la U-Net in an encoder-decoder network. Moreover, upsampling is simpler than in Tiramisu, which results in a smaller model that is usable on most GPUs.
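As an example of the regression losses compared in this line of work, below is a minimal PyTorch sketch of the berHu (reverse Huber) loss often used for deep depth estimation: L1 for small residuals, quadratic beyond a threshold. The threshold rule (20% of the maximum absolute batch error) is the common convention from the depth literature, stated here as an assumption rather than the paper's exact variant.

import torch

def berhu_loss(pred, target, scale=0.2):
    """berHu (reverse Huber) loss: L1 near zero, quadratic for large errors."""
    abs_err = (pred - target).abs()
    c = (scale * abs_err.max()).detach().clamp(min=1e-6)  # batch-adaptive threshold
    quad = (abs_err ** 2 + c ** 2) / (2 * c)              # quadratic branch beyond c
    return torch.where(abs_err <= c, abs_err, quad).mean()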

If using this code, please cite: On Regression Losses for Deep Depth Estimation, M. Pinheiro de Carvalho, B. Le Saux, P. Trouvé-Peloux, F. Champagnat, A. Almansa, IEEE Int. Conf. on Image Processing (ICIP’2018), Athens, Greece, October 2018.

@inproceedings{carvalho-18icip-losses,
 author = {Carvalho, Marcela and {Le Saux}, Bertrand and Trouv{\'e}-Peloux, Pauline and Champagnat, Fr{\'e}d{\'e}ric and Almansa, Andr{\'e}s},
 title = {On Regression Losses for Deep Depth Estimation},
 booktitle = {IEEE Int. Conf. on Image Processing ({ICIP})},
 address = {Athens, Greece},
 year = {2018},
}

DeepHyperX: Deep Learning for Hyperspectral Imaging Toolbox


DeepHyperX code

Nicolas Audebert coded this toolbox, which implements various machine learning and deep neural network approaches for hyperspectral imaging, in support of a review we wrote with Sébastien Lefèvre. [ Related: Deep Learning for Hyperspectral review in GRSM ]

It contains various models, from SVMs to convolutional networks, including 1D, 2D and 3D CNNs as well as multi-scale and semi-supervised variants, reproducing several approaches from the state of the art. Various standard datasets are already included (Indian Pines, Pavia and DFC 2018 among them), and there is a tutorial for adding your own. The most straightforward way to get started with deep learning on hyperspectral data!
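For instance, the simplest spectral models in this family are 1D CNNs that convolve along the band axis of each pixel's spectrum. The PyTorch sketch below is a generic example of that kind of model; layer sizes are arbitrary, and it is not the code of any specific model in DeepHyperX.

import torch
import torch.nn as nn

class ToySpectral1DCNN(nn.Module):
    """1D CNN over the spectral axis of single-pixel spectra (illustrative)."""
    def __init__(self, n_bands=200, n_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 20, kernel_size=11), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(20, 40, kernel_size=11), nn.ReLU(), nn.MaxPool1d(2),
        )
        with torch.no_grad():                        # infer the flattened feature size
            flat = self.features(torch.zeros(1, 1, n_bands)).numel()
        self.classifier = nn.Linear(flat, n_classes)

    def forward(self, x):                            # x: (B, 1, n_bands)
        return self.classifier(self.features(x).flatten(1))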

If using this code, please cite: Deep Learning for Classification of Hyperspectral Data: A Comparative Review, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, IEEE Geoscience and Remote Sensing Magazine, vol. 7 (2), 2019. https://arxiv.org/abs/1904.10674

@article{audebert-19grsm-deep-hyper-X,
 author = {Audebert, Nicolas and {Le Saux}, Bertrand and Lef{\`e}vre, S{\'e}bastien},
 title = {Deep Learning for Classification of Hyperspectral Data: A Comparative Review},
 journal = {IEEE Geoscience and Remote Sensing Magazine},
 volume = {7},
 number = {2},
 year = {2019},
 month={June},
}

SnapNet: Multi-view conv net for 3D semantic labeling


SnapNet code on GitHub

With Alexandre Boulch, we conceived SnapNet, a multi-view convolutional network for semantic labeling of unstructured 3D point clouds. For more than a year, it led the Semantic3D leaderboard for 3D urban mapping, and it still ranks among the top entries. [ Related: CaG 2017 paper / ICCV/W 2017 paper ]

In particular, it is computationally efficient and can process large datasets in tractable time. With Joris Guerry, we developed a variant which was applied to robotics datasets such as NYUv2 and SunRGBD with excellent classification results.
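The key final step in a multi-view pipeline like this is back-projection: per-pixel class scores predicted on the 2D snapshots are accumulated onto the 3D points they came from, and each point takes the class with the highest total score. Here is a NumPy sketch of that fusion, with hypothetical inputs (a pixel-to-point index map per view); it is not the repository's actual implementation.

import numpy as np

def fuse_multiview_scores(point_ids_per_view, probs_per_view, n_points, n_classes):
    """Back-project per-pixel class scores from 2D views onto 3D points.

    point_ids_per_view: per view, an (H*W,) array mapping pixels to point
    indices (-1 for pixels showing no point); probs_per_view: per view, an
    (H*W, n_classes) array of class probabilities. Illustrative only.
    """
    scores = np.zeros((n_points, n_classes))
    for ids, probs in zip(point_ids_per_view, probs_per_view):
        valid = ids >= 0
        np.add.at(scores, ids[valid], probs[valid])   # sum over all views seeing a point
    return scores.argmax(axis=1)                      # one label per 3D point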

If using this code, please cite: SnapNet: 3D point cloud semantic labeling with 2D deep segmentation networks, Alexandre Boulch, Joris Guerry, Bertrand Le Saux, Nicolas Audebert, Computers & Graphics, 2017.

@article{boulch-17cag-snapnet,
  title={SnapNet: 3D point cloud semantic labeling with 2D deep segmentation networks},
  author={Boulch, Alexandre and Guerry, Joris and {Le Saux}, Bertrand and Audebert, Nicolas},
  journal={Computers \& Graphics},
  year={2017},
  publisher={Elsevier}
}

DeepNetsForEO: Deep learning for Earth Observation


DeepNetsForEO code

With Nicolas Audebert and Sébastien Lefèvre, we released DeepNetsForEO, deep learning Python software for semantic labeling of Earth Observation images. [ Related: ISPRS Journal of Photogrammetry 2017 paper / ACCV 2016 paper ]

It is a deep neural network based on the SegNet architecture, with weights pre-trained on various public remote sensing datasets such as ISPRS Vaihingen and ISPRS Potsdam. The v1 (Caffe with a Python interface) was the first deep learning model for Earth observation data available in the Caffe model zoo. The v2 is pure Python built on PyTorch, and comes with a handy Python notebook.
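Remote sensing tiles are usually far too large for a single forward pass, so the typical notebook workflow predicts overlapping windows and averages the scores. Below is a generic PyTorch sketch of that sliding-window inference; window size, stride and class count are placeholder values, and this is not the repository's exact code.

import torch

def sliding_window_inference(model, image, n_classes=6, window=256, stride=128):
    """Average softmax scores over overlapping windows of a large (C, H, W) tile.

    For simplicity, assumes H and W are fully covered by the window grid.
    """
    _, H, W = image.shape
    scores = torch.zeros(n_classes, H, W)
    counts = torch.zeros(1, H, W)
    model.eval()
    with torch.no_grad():
        for y in range(0, H - window + 1, stride):
            for x in range(0, W - window + 1, stride):
                patch = image[:, y:y + window, x:x + window].unsqueeze(0)
                probs = torch.softmax(model(patch), dim=1)[0]
                scores[:, y:y + window, x:x + window] += probs
                counts[:, y:y + window, x:x + window] += 1
    return (scores / counts.clamp(min=1)).argmax(dim=0)   # per-pixel class map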

If using this code, please cite: Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, ISPRS Journal of Photogrammetry and Remote Sensing, 2018. https://arxiv.org/abs/1711.08681

@article{audebert_beyondRGB_2018,
 title = {Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks},
 journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
 year = {2018},
 issn = {0924-2716},
 doi = {10.1016/j.isprsjprs.2017.11.011},
 author = {Audebert, Nicolas and {Le Saux}, Bertrand and Lef{\`e}vre, S{\'e}bastien},
 keywords = {Deep learning, Remote sensing, Semantic mapping, Data fusion},
}