My current projects include:

Semantic Change Detection


With the very high resolution now available even from space, local changes can be characterized precisely. Rodrigo Daudt, Alexandre Boulch, Yann Gousseau and I have proposed the first deep neural network architectures for change detection in Earth-observation imagery. We also created and released OSCD, a dataset with reference data for training such networks. The latest evolution of this line of work is Semantic Change Detection, which characterizes modifications of land use; we propose a multi-task learning network to solve this problem automatically.

[ ICIP paper on siamese nets for change detection / code / OSCD dataset / arxiv ]
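
The core idea can be sketched as a siamese encoder: both acquisition dates pass through the same weights, and the resulting features are compared before per-pixel classification. This is a minimal toy sketch, not the exact architecture from the paper (the layer sizes and the absolute-difference comparison are illustrative assumptions):

```python
import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    """Toy siamese net: both dates pass through the SAME encoder,
    features are compared by absolute difference, then classified
    per pixel as change / no-change."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, 2, 1)  # 2 classes: change / no change

    def forward(self, img_t1, img_t2):
        f1 = self.encoder(img_t1)   # shared weights for date 1...
        f2 = self.encoder(img_t2)   # ...and date 2
        diff = torch.abs(f1 - f2)   # compare the two dates
        return self.classifier(diff)  # per-pixel logits

net = SiameseChangeNet()
t1 = torch.randn(1, 3, 64, 64)  # co-registered image at date 1
t2 = torch.randn(1, 3, 64, 64)  # same area at date 2
logits = net(t1, t2)
print(logits.shape)  # torch.Size([1, 2, 64, 64])
```

Weight sharing is what makes the comparison meaningful: both dates are mapped into the same feature space before differencing.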

Depth Estimation from a Single Image


Turning 2D images into depth maps is now possible with a monocular camera, with neither stereo nor an active sensor. With Marcela Carvalho and Pauline Trouvé, we designed a dense network for depth estimation from a single image. We investigated how to model the right loss for such a network, and how defocus blur can help us predict better estimates. This network ranks among the top entries of the state of the art on the NYUv2 dataset, while being simpler to train (in a single phase) than most competitors.

[ ICIP’18 paper / ECCV/W’18 paper / video / code ]
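
One loss commonly studied for depth regression is the reverse Huber (berHu), which behaves like L1 for small residuals and like a scaled L2 for large ones. A minimal sketch (the threshold heuristic `c = 0.2 × max residual` is a common choice in the literature, not necessarily the exact setting from our papers):

```python
import torch

def berhu_loss(pred, target):
    """Reverse Huber (berHu) loss: L1 below threshold c,
    scaled L2 above it. c is set from the current batch."""
    diff = torch.abs(pred - target)
    c = torch.clamp(0.2 * diff.max(), min=1e-6)  # avoid division by zero
    l2 = (diff ** 2 + c ** 2) / (2 * c)          # continuous at diff == c
    return torch.where(diff <= c, diff, l2).mean()

pred = torch.tensor([1.0, 2.0, 4.0])    # predicted depths (toy values)
target = torch.tensor([1.1, 2.0, 2.0])  # ground-truth depths
loss = berhu_loss(pred, target)
print(float(loss))
```

The large residual (4.0 vs 2.0) falls in the quadratic regime and dominates the loss, which is the intended behavior for gross depth errors.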

Joint Use of EO Data and Cartography


Cartography, and especially crowd-sourced geographic information like OpenStreetMap, is a great way to drive a neural network towards a correct classification. With Nicolas Audebert and Sébastien Lefèvre, we built fusion networks able to handle this new input efficiently.

The winner of SpaceNet Challenge round 2 used a similar solution: see his blog post, which mentions our paper. OSM as input is promising!

[ CVPR’17 paper / arxiv ]
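
The simplest way to feed cartography to a network is early fusion: rasterize the OSM layers on the image grid and stack them as extra input channels, so the first convolution sees imagery and maps jointly (the paper also studies deeper fusion schemes; the shapes and layer choices below are illustrative assumptions):

```python
import numpy as np

# Hypothetical inputs: a 4-band satellite tile and rasterized OSM
# layers (e.g. buildings, roads, water as binary masks on the same grid).
image = np.random.rand(4, 256, 256).astype(np.float32)
osm = np.random.randint(0, 2, (3, 256, 256)).astype(np.float32)

# Early fusion: the cartography becomes extra input channels,
# and the network's first conv layer takes 4 + 3 = 7 channels.
fused = np.concatenate([image, osm], axis=0)
print(fused.shape)  # (7, 256, 256)
```

The key requirement is that the OSM rasters are co-registered with the imagery, so that each pixel carries both spectral and cartographic information.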

SnapNet: 3D Semantic Labeling


As 3D sensors become ubiquitous, recognizing stuff and things in 3D data is essential. We therefore developed SnapNet, a multi-view convolutional network for semantic labeling of unstructured 3D point clouds. For more than a year it led the semantic3D leaderboard for 3D urban mapping, and it still ranks among the top entries. The paper was presented at EuroGraphics/3DOR 2017 and has since been published in Computers & Graphics. The code is also available for experimenting with your own data.

[ paper / code ]
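
SnapNet renders 2D snapshots of the point cloud, labels them with a 2D network, and back-projects the pixel predictions onto the 3D points. The back-projection step can be sketched as a per-point vote accumulation (the data layout below is a simplifying assumption for illustration):

```python
import numpy as np

def backproject_votes(n_points, views, n_classes):
    """Accumulate 2D per-pixel predictions back onto 3D points.
    Each view is a pair (pixel_to_point, pixel_labels): for every
    pixel we know which 3D point it images and its predicted class."""
    votes = np.zeros((n_points, n_classes), dtype=int)
    for pixel_to_point, pixel_labels in views:
        for point, label in zip(pixel_to_point, pixel_labels):
            votes[point, label] += 1
    return votes.argmax(axis=1)  # majority class per 3D point

# Two toy views of a 4-point cloud, with 2 classes
views = [
    (np.array([0, 1, 2, 3]), np.array([0, 0, 1, 1])),
    (np.array([0, 1, 2, 3]), np.array([0, 1, 1, 1])),
]
point_labels = backproject_votes(4, views, 2)
print(point_labels)  # [0 0 1 1]
```

Aggregating votes over many snapshots is what makes the 3D labeling robust to occlusions in any single view.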

Object Detection in Remote Sensing


With the accuracy of deep convolutional networks for pixelwise labeling, it is now possible to build powerful object detectors for aerial imagery. We proposed an approach that detects and segments vehicles, then recognizes their type. This work received the award for best contribution to the ISPRS 2D semantic labeling benchmark at GeoBIA’16.

[ Segment-before-detect paper ]
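
The segment-before-detect idea can be illustrated with a toy mask: once the semantic map is computed, each connected component of the vehicle class becomes one object instance (here using `scipy.ndimage` for the components; this is an illustration, not the paper's exact pipeline):

```python
import numpy as np
from scipy import ndimage

# Toy "vehicle" segmentation mask (1 = vehicle pixels)
mask = np.zeros((10, 10), dtype=int)
mask[1:3, 1:4] = 1  # first vehicle blob
mask[6:9, 5:8] = 1  # second vehicle blob

# Segment-before-detect: each connected component of the
# semantic map is treated as one detected object instance.
labeled, n_objects = ndimage.label(mask)
boxes = ndimage.find_objects(labeled)  # one bounding box per instance
print(n_objects)  # 2
```

Each bounding box can then be cropped and passed to a classifier for fine-grained vehicle-type recognition.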

Object Recognition for Robotics


In the context of robotic exploration (using micro-drones or ground robots), we aim to develop efficient object detectors and trackers able to adapt to a new environment. We explore how multimodal RGB-D data offers reliable, complementary ways of sensing in challenging conditions. Joris Guerry has developed multimodal networks that achieve high detection rates for people detection, and he released the ONERA.ROOM dataset. We also proposed SnapNet-R, a multi-view network for 3D-consistent data augmentation: it achieves top state-of-the-art results on the NYUv2 and SUN RGB-D datasets for robotic semantic labeling.

[ ONERA.ROOM / video / ECMR paper about people detection / ICCV/3DRMS paper about robotic semantic labeling with SnapNet-R ]
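
A minimal illustration of why multimodal sensing helps: fusing per-window scores from an RGB stream and a depth stream keeps detections when one modality fails. Score averaging here is a deliberately simple stand-in for the learned fusion in the paper, and all values are made up:

```python
import numpy as np

# Hypothetical per-window "person" scores from two detector streams
rgb_scores = np.array([0.9, 0.2, 0.1])    # RGB misses window 2 (e.g. darkness)
depth_scores = np.array([0.8, 0.9, 0.1])  # depth still sees that person
fused = (rgb_scores + depth_scores) / 2   # simple average (late) fusion
detections = fused > 0.5
print(detections)  # [ True  True False]
```

The second window would be missed by RGB alone; the depth stream recovers it, which is the complementarity the project exploits.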

Search-and-Rescue with 3D captured from UAVs


We are designing classifiers for 3D data captured using lidar sensors or photogrammetry. In the FP7 Inachus project, we build tools for urban search and rescue after natural or industrial disasters: semantic maps (including safe roads and risk maps) and analysis of building damage (as shown in the image on the left: intact/blue to debris/purple). These tools are based on SnapNet, our multi-view convolutional network for 3D point-cloud semantic labeling.

[ code / video ]

Older projects can be found here.