I was previously involved in the following projects (now finished, or which have evolved into new ones):
SnapNet: 3D Semantic Labeling
As 3D sensors become ubiquitous, recognizing stuff and things in 3D data is essential. We therefore developed SnapNet, a multi-view convolutional network for semantic labeling of unstructured 3D point clouds. For more than a year it led the Semantic3D leaderboard for 3D urban mapping, and it remains among the top entries. The paper was presented at EuroGraphics/3DOR 2017 and has since been published in Computers & Graphics. The code is also available, so you can try it on your own data.
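The multi-view idea behind SnapNet can be illustrated with a toy sketch (this is not the actual SnapNet code; the projection, the dummy 2D segmenter, and all function names here are simplified stand-ins): render virtual 2D snapshots of the point cloud, run a 2D segmenter on each snapshot, then back-project the pixel labels onto the 3D points and let the views vote.

```python
import numpy as np

def project_points(points, view, img_size=32):
    """Orthographic projection of 3D points onto a virtual image plane.
    `view` picks which two axes form the plane (a stand-in for a real
    virtual-camera model)."""
    ax_u, ax_v = view
    uv = points[:, [ax_u, ax_v]]
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    # Normalize to integer pixel coordinates.
    return ((uv - lo) / (hi - lo + 1e-9) * (img_size - 1)).astype(int)

def multiview_labeling(points, segment_2d, views, img_size=32, n_classes=3):
    """Label each 3D point by rendering several views, segmenting each
    snapshot in 2D, and back-projecting the pixel labels as votes."""
    votes = np.zeros((len(points), n_classes))
    for view in views:
        pix = project_points(points, view, img_size)
        label_img = segment_2d(pix, img_size)  # (img_size, img_size) int labels
        # Each point votes with the label of the pixel it projects to.
        votes[np.arange(len(points)), label_img[pix[:, 0], pix[:, 1]]] += 1
    return votes.argmax(axis=1)

def dummy_segmenter(pix, img_size):
    """Stand-in for the 2D conv net: labels pixels by horizontal thirds."""
    img = np.zeros((img_size, img_size), dtype=int)
    img[:, img_size // 3: 2 * img_size // 3] = 1
    img[:, 2 * img_size // 3:] = 2
    return img

rng = np.random.default_rng(0)
pts = rng.uniform(size=(500, 3))
labels = multiview_labeling(pts, dummy_segmenter, views=[(0, 1), (0, 2), (1, 2)])
```

In the real system the snapshots are RGB and depth renderings, the segmenter is a trained fully-convolutional network, and the back-projection uses the actual camera geometry; the voting step, however, is the essence of the multi-view fusion.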
Object Detection in Remote Sensing
With the accuracy of deep convolutional networks for pixelwise labeling, it is now possible to build powerful object detectors for aerial imagery. We proposed an approach that detects and segments vehicles, then recognizes their type. Our work received the award for the best contribution to the ISPRS 2D semantic labeling benchmark at GeoBIA’16.
[Segment-before-detect paper]
Object Recognition for Robotics
In the context of robotic exploration (using micro-drones or ground robots), we aim at developing efficient object detectors and trackers that can adapt to a new environment. We explore how multimodal RGB-D data offers reliable and complementary ways of sensing in challenging conditions. Joris Guerry has developed multimodal networks that achieve high detection rates for people detection, and has released the ONERA.ROOM dataset. We also proposed SnapNet-R, a multi-view network for 3D-consistent data augmentation: it achieves state-of-the-art results on the NYUv2 and SUN RGB-D datasets for robotic semantic labeling.
Search-and-Rescue with 3D captured from UAVs
We designed classifiers for 3D data captured using lidar sensors or photogrammetry. In the FP7 Inachus project, we built tools for urban search and rescue after natural or industrial disasters: semantic maps (including safe roads and risk maps) and analysis of building damage (as shown in the image on the left: intact/blue to debris/purple). They are based on SnapNet, our multi-view convolutional network for 3D point-cloud semantic labeling.
UAV Object Detection and Recognition
With Martial Sanfourche, we designed detectors of objects of interest in images obtained from airborne sensors (UAVs and planes), using a mix of geometric template matching and learning-based classifiers. A typical use case is a search-and-rescue mission in an urban environment, with objectives such as cartography, obstacle avoidance, and people and vehicle detection [video]. This research was carried out in the FP7 Darius and Azur projects.
We presented our work on UAV-based 3D modelling and event localization [video] at the 2nd field trial of the FP7 Darius project which simulated an Urban (Earthquake) SaR Demonstration.
Car Detection in Aerial Images
With Hicham Randrianarivo and Marin Ferecatu, we built powerful and fast detectors able to retrieve cars in aerial images. Our discriminatively-trained model mixture (DtMM) encoded the various orientations and appearances of cars for retrieval in highly complex urban environments. It relied on HOG features for description and a hard-negative search for training the linear classifiers.
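The hard-negative search mentioned above is a standard trick for training detectors when negatives vastly outnumber positives. Here is a minimal numpy sketch of the idea (not the DtMM code; the toy hinge-loss trainer stands in for a proper linear SVM solver, and the data is synthetic): train on a small negative cache, score the full negative pool, and fold the highest-scoring false alarms back into the training set.

```python
import numpy as np

def train_linear(X, y, epochs=100, lr=0.1):
    """Tiny linear classifier trained with hinge-loss SGD
    (a stand-in for a real linear SVM solver)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) < 1:  # margin violated: update
                w += lr * yi * xi
    return w

def hard_negative_mining(pos, neg_pool, rounds=3, cache_size=20):
    """Retrain several times, each round adding the negatives the current
    classifier scores highest -- the 'hard' false positives."""
    rng = np.random.default_rng(0)
    neg = neg_pool[rng.choice(len(neg_pool), cache_size, replace=False)]
    for _ in range(rounds):
        X = np.vstack([pos, neg])
        y = np.hstack([np.ones(len(pos)), -np.ones(len(neg))])
        w = train_linear(X, y)
        scores = neg_pool @ w                              # score the whole pool
        hard = neg_pool[np.argsort(scores)[-cache_size:]]  # hardest negatives
        neg = np.vstack([neg, hard])
    return w

# Toy data: positives cluster around +1.5, negatives around -1.5 on axis 0.
rng = np.random.default_rng(1)
pos = rng.normal(loc=[1.5, 0.0], scale=0.5, size=(50, 2))
negs = rng.normal(loc=[-1.5, 0.0], scale=1.0, size=(500, 2))
w = hard_negative_mining(pos, negs)
acc = ((pos @ w > 0).mean() + (negs @ w < 0).mean()) / 2
```

In the actual detector, the feature vectors would be HOG descriptors of image windows rather than 2D points, and the negative pool would be the huge set of background windows sampled from the training images.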
Interactive Learning of Classifiers
Over the years, I worked on developing various methods for interactive and user-friendly design of classifiers and detectors, typically non-parametric methods such as boosting and support vector machines. The main application we investigated in the DGA-funded project Efusion was online learning of patterns of interest (objects or changes) in aerial and satellite images.
[See the ICPR’2014 paper for a synthesis.]
Deformable Part Models in Remote Sensing
With Hicham Randrianarivo, we adapted Felzenszwalb’s famous Deformable Part Models to object detection in aerial images. We first showed they could be used for man-made structures in difficult urban environments [cf. paper at IGARSS 2013], and then extended them to the fusion of multi-resolution, multimodal optical and hyperspectral imagery [cf. paper at IGARSS’14].
3D Reconstruction in Tomographic Imaging
I was once interested in 3D reconstruction in tomographic imaging. The new confocal microscope we worked with was deployed at Institut Pasteur in Spencer Shorte’s team and made possible the observation of non-adherent living cells. We used Bayesian inference, data fusion and deconvolution to produce 3D volumetric images of these living cells. This work was carried out in the FP6 Automation project, with Bernard Chalmond, Jiaping Wang and Alain Trouvé from the Applied Mathematics Lab of ENS Cachan.
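To give a flavor of Bayesian deconvolution in microscopy (this is an illustration, not the algorithm we actually used in the project): Richardson-Lucy is the classic scheme, a maximum-likelihood estimate under a Poisson noise model, sketched below in 1D for brevity.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=30):
    """Richardson-Lucy deconvolution (1D): iteratively refine the estimate
    by comparing the re-blurred estimate with the observed data."""
    psf_flipped = psf[::-1]
    estimate = np.full_like(observed, observed.mean())  # flat initial guess
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)            # data / model
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy example: blur a two-spike signal with a Gaussian PSF, then restore it.
x = np.zeros(64)
x[20] = 1.0
x[40] = 0.5
t = np.arange(-5, 6)
psf = np.exp(-t**2 / 4.0)
psf /= psf.sum()
blurred = np.convolve(x, psf, mode="same")
restored = richardson_lucy(blurred, psf, iterations=50)
```

The real problem was harder on every axis: 3D volumes instead of 1D signals, a spatially varying point-spread function, and the fusion of several acquisitions of the same living cell, which is where the data-fusion and Bayesian-inference machinery came in.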
Image Content Recognition
My postdoctoral project was carried out at the University of Bern with Horst Bunke and at CNR di Pisa with Giuseppe Amato, as a member of the ERCIM fellowship programme. I designed predictors that learn to recognize scene types, such as particular landscapes, sports pictures, or images containing people. Techniques included feature selection, kernel methods, graph matching, and Bayesian combination of classifiers. This was used to generate automatic annotations of multimedia documents and to improve search facilities in digital libraries.
Image and Video Indexing
I did my PhD in the INRIA/Imedia research group, which works on content-based image retrieval. I worked on supervised and unsupervised classification techniques to find and manage categories of visually similar images, and developed an original clustering algorithm: ARC (Adaptive Robust Competition).
Ever since my PhD days and my then Perl-generated, pure-HTML homepage, I have included a link to these nice kittens who play music on the beach. Years later, the link is still up. The Internet is awesome. Enjoy.