We currently have 3 open PhD positions on machine learning for scene perception and understanding. If interested, please e-mail me with a CV and a detailed motivation letter. Update: these positions are now filled or no longer open.
All positions are located at the ONERA centre in Palaiseau, near Paris, France. European or Swiss citizenship may be preferred for funding reasons.
Neural Networks for Multimodal (aerial / streetview / text) Geospatial Analysis
More and more data are now geo-localized, which opens a whole new research area at the intersection of remote sensing (aerial and satellite images), computer vision (standard images shot from the ground), and machine learning (text and structured information). Relating these heterogeneous data raises questions with many practical applications: self-localization for autonomous driving; fake-news disambiguation; fine-grained land-use classification; making land-use classification understandable to humans. Long description
Deep Neural Networks for 3D Prediction in the Wild
Co-supervised with Pauline Trouvé-Peloux and Frédéric Champagnat.
3D estimation is crucial for scene understanding (autonomous driving…) and accurate 3D reconstruction (3D mapping, robotics…). However, open environments, outdoors or indoors ("in the wild"), abound in challenging situations that require robust and efficient systems. With Marcela Carvalho, we developed award-winning, state-of-the-art deep-learning approaches to depth estimation.
The objective of the thesis project is two-fold. First, to push this research further by developing convolutional neural networks (CNNs) that directly estimate 3D point clouds instead of depth rasters: point clouds are the standard output of laser and photogrammetric 3D acquisition, and hence the standard representation in 3D perception. Second, to co-design a smart system able to predict depth in the wild by combining specific camera optics with an adequate deep learning model. Long description
Neural Networks for Multi-temporal 3D Data Semantic Segmentation
Co-supervised with Alexandre Boulch.
3D data are now becoming a common standard for environment perception and analysis. They even replace images in many use cases: autonomous vehicles, robotics, urban cartography, forensics, etc. In the field of 3D scene analysis, we developed SnapNet, one of the leading state-of-the-art approaches for urban semantic segmentation and robotics. This thesis aims at improving 3D scene-understanding techniques for robotics. It will tackle the case of dynamic 3D data, used either as a means of adding temporal consistency to the semantics or as a proxy for detecting changes. Long description