Over the years, research projects and papers have led to various pieces of code and software tools. They are made available here for the sake of reproducible research, and so that you can build new extensions on top of them.

D3Net: An encoder-decoder FCN with dense blocks


D3-Net code

Marcela Carvalho designed a fully-convolutional network architecture which combines the dense connectivity of densely connected conv nets with skip connections à la U-Net in an encoder-decoder network. Moreover, upsampling is simpler than in Tiramisu, which results in a smaller model, usable on most GPUs.

It was successfully applied to depth estimation from a single image, and ranked among the top state-of-the-art methods on the NYUv2 dataset, while being simpler to train, in a single phase, than most competitors.
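To illustrate the dense-connectivity idea behind D3-Net (this is a toy numpy sketch, not the actual D3-Net code): each layer in a dense block concatenates its new feature maps onto everything computed before it, so the channel count grows linearly with the growth rate.

```python
import numpy as np

def dense_block(x, num_layers=4, growth_rate=12, rng=None):
    """Toy dense block: each 'layer' is a random linear map over channels
    (standing in for conv+BN+ReLU); its growth_rate outputs are concatenated
    onto the running feature stack, as in DenseNet-style blocks."""
    if rng is None:
        rng = np.random.default_rng(0)
    feats = x  # shape (channels, H, W)
    for _ in range(num_layers):
        w = rng.standard_normal((growth_rate, feats.shape[0]))
        new = np.maximum(np.einsum('oc,chw->ohw', w, feats), 0)  # ReLU
        feats = np.concatenate([feats, new], axis=0)  # dense connectivity
    return feats

x = np.zeros((64, 8, 8))
print(dense_block(x).shape)  # (112, 8, 8): 64 + 4 * 12 channels
```

This channel growth is why dense blocks stay compact: each layer only produces `growth_rate` new maps, yet sees all earlier features as input.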

If using this code, please cite: On Regression Losses for Deep Depth Estimation, M. Pinheiro de Carvalho, B. Le Saux, P. Trouvé-Peloux, F. Champagnat, A. Almansa, IEEE Int. Conf. on Image Processing (ICIP 2018), Athens, Greece, October 2018.

@inproceedings{carvalho2018regression,
 author = {Carvalho, Marcela and {Le Saux}, Bertrand and Trouv{\'e}-Peloux, Pauline and Champagnat, Fr{\'e}d{\'e}ric and Almansa, Andr{\'e}s},
 title = {On Regression Losses for Deep Depth Estimation},
 booktitle = {IEEE Int. Conf. on Image Processing ({ICIP})},
 address = {Athens, Greece},
 year = {2018},
}

[ Related: ICIP 2018 paper / ECCV/W 2018 paper / video ]

DeepHyperX: Deep Learning for Hyperspectral Imaging Toolbox


DeepHyperX code

Nicolas Audebert coded this toolbox of various machine learning approaches for hyperspectral imaging, in support of a review we wrote with Sébastien Lefèvre (to be published soon). It contains various models, from SVMs to convolutional networks, including 1D, 2D and 3D CNNs, as well as multi-scale and semi-supervised variants, reproducing several approaches from the state of the art. Various standard datasets are already included (Indian Pines, Pavia, DFC 2018...), and a tutorial explains how to add your own.

The most straightforward way to get started with deep learning on hyperspectral data!
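The 1D vs. 2D/3D distinction above boils down to what each model sees as input. As a hedged illustration (toy numpy code, not DeepHyperX's actual API; `extract_patch` is a hypothetical helper): 1D models classify each pixel from its spectrum alone, while 2D/3D CNNs additionally see a spatial neighbourhood.

```python
import numpy as np

# Toy hyperspectral cube: H x W pixels, B spectral bands
H, W, B = 6, 5, 103  # 103 bands, as in the Pavia University scene
cube = np.random.default_rng(42).random((H, W, B))

# 1D models (SVM, 1D CNN) see each pixel as a spectrum of length B
spectra = cube.reshape(-1, B)  # (H*W, B)

def extract_patch(cube, row, col, size=3):
    """Hypothetical helper: size x size spatial neighbourhood, all bands,
    as fed to 2D/3D CNNs (borders would need padding in practice)."""
    r = size // 2
    return cube[row - r:row + r + 1, col - r:col + r + 1, :]

patch = extract_patch(cube, 2, 2)
print(spectra.shape, patch.shape)  # (30, 103) (3, 3, 103)
```

A 2D CNN convolves over the patch's spatial axes treating bands as channels, whereas a 3D CNN convolves jointly over space and spectrum.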

SnapNet: Multi-view conv net for 3D semantic labeling


SnapNet code

With Alexandre Boulch, we conceived SnapNet, a multi-view conv net for semantic labeling of unstructured 3D point clouds. For more than a year, it led the semantic3D leaderboard for 3D urban mapping, and it still ranks among the top entries. In particular, it is computationally efficient and can process large datasets in tractable time. With Joris Guerry, we developed a variant which was applied to robotics datasets such as NYUv2 or SunRGBD with excellent classification results.
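The multi-view idea is to render 2D snapshots of the point cloud, label them with a 2D segmentation network, then back-project the pixel labels onto the 3D points, accumulating votes across views. A simplified sketch of the voting step (toy numpy code with a hypothetical `back_project_votes` helper and a plain orthographic projection, not SnapNet's actual rendering pipeline):

```python
import numpy as np

def back_project_votes(points, pixel_labels, num_classes, view_axes=(0, 1), res=1.0):
    """Accumulate per-point class votes from one labelled snapshot.
    Orthographic projection: drop one coordinate, quantise the other two
    into pixel indices of `pixel_labels` (a 2D array of class ids)."""
    ij = np.floor(points[:, view_axes] / res).astype(int)
    votes = np.zeros((len(points), num_classes), dtype=int)
    h, w = pixel_labels.shape
    valid = (ij[:, 0] >= 0) & (ij[:, 0] < h) & (ij[:, 1] >= 0) & (ij[:, 1] < w)
    labels = pixel_labels[ij[valid, 0], ij[valid, 1]]
    votes[np.flatnonzero(valid), labels] += 1  # one vote per point per view
    return votes

pts = np.array([[0.2, 0.7, 3.0], [1.5, 0.1, 2.0]])
img = np.array([[0, 1], [2, 2]])  # 2x2 labelled snapshot
v = back_project_votes(pts, img, num_classes=3)
print(v.argmax(axis=1))  # per-point class from this single view
```

With several views, the vote matrices are simply summed before the argmax, which is what makes the multi-view aggregation robust to occlusions in any single snapshot.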

If using this code, please cite: SnapNet: 3D point cloud semantic labeling with 2D deep segmentation networks, Alexandre Boulch, Joris Guerry, Bertrand Le Saux, Nicolas Audebert, Computers & Graphics, 2017

@article{boulch2017snapnet,
  title={SnapNet: 3D point cloud semantic labeling with 2D deep segmentation networks},
  author={Boulch, Alexandre and Guerry, Joris and {Le Saux}, Bertrand and Audebert, Nicolas},
  journal={Computers \& Graphics},
  year={2017},
}

[ Related: CaG 2017 paper / ICCV/W 2017 paper ]

DeepNetsForEO: Deep learning for Earth Observation


DeepNetsForEO code

With Nicolas Audebert and Sébastien Lefèvre, we released DeepNetsForEO, a deep learning software for semantic labeling of Earth Observation images. It is a deep neural network based on the SegNet architecture, with weights pre-trained on various public remote sensing datasets such as ISPRS Vaihingen and ISPRS Potsdam. The v1 (Caffe with a Python interface) was the first deep learning model for Earth-observation data available in the Caffe model zoo. The v2 is pure Python built on PyTorch, and comes with a handy Python notebook.
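Remote sensing tiles like the ISPRS ones are far too large to feed to a network whole, so inference is typically run on overlapping windows whose predictions are stitched back together. A minimal sketch of the window layout (hypothetical `sliding_windows` helper, not DeepNetsForEO's actual code):

```python
def sliding_windows(height, width, size=256, stride=128):
    """Hypothetical helper: top-left coordinates of overlapping windows
    covering a large tile, with the last row/column clamped in bounds."""
    xs = list(range(0, max(height - size, 0) + 1, stride))
    ys = list(range(0, max(width - size, 0) + 1, stride))
    if height > size and xs[-1] != height - size:
        xs.append(height - size)  # cover the bottom edge
    if width > size and ys[-1] != width - size:
        ys.append(width - size)   # cover the right edge
    return [(x, y) for x in xs for y in ys]

# A 600x600 tile covered with 256x256 windows at 50% overlap
wins = sliding_windows(600, 600, 256, 128)
print(len(wins), wins[0], wins[-1])  # 16 (0, 0) (344, 344)
```

Overlap matters because predictions near window borders lack context; averaging overlapping predictions smooths out the resulting seams.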

If using this code, please cite: Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, ISPRS Journal of Photogrammetry and Remote Sensing, 2018. https://arxiv.org/abs/1711.08681

@article{audebert2018beyond,
  title = "Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks",
  journal = "ISPRS Journal of Photogrammetry and Remote Sensing",
  year = "2018",
  issn = "0924-2716",
  doi = "10.1016/j.isprsjprs.2017.11.011",
  author = "Audebert, Nicolas and {Le Saux}, Bertrand and Lef{\`e}vre, S{\'e}bastien",
  keywords = "Deep learning, Remote sensing, Semantic mapping, Data fusion"
}

[ Related: ISPRS Journal of Photogrammetry 2017 paper / ACCV 2016 paper ]