Artificial Intelligence
1 Dec 2020

Vision-based Neural Scene Representations for Spacecraft

A spacecraft's ability to determine its position relative to another object is essential for close-proximity operations such as active space debris removal, automated docking, or exploration missions. Current satellites carry dedicated sensors to measure their surrounding environment. For cooperative objects, spacecraft typically use a radio-frequency (RF) module or a Global Navigation Satellite System (GNSS) receiver. For uncooperative objects, the most common systems are stereoscopic cameras or Light Detection and Ranging (LiDAR) systems.

Most spacecraft embed monocular cameras, but these are rarely used to build a representation of the surrounding space environment. The objective of this project is to use Neural Rendering to perform 3D-aware image synthesis of satellites from a single camera. Following recent advances in computer graphics, we leverage Neural Scene Representations to learn a 3D model of a space object using only 2D images as input. Unlike previous stereo-photogrammetry approaches, Neural Reflectance Fields can account for the varying lighting conditions and specular materials encountered in space.


Project overview

Neural Scene Representations are a way to learn a rich representation of a 3D environment using neural networks. A recent study proposed the Neural Radiance Field (NeRF) [1], which computes a continuous mapping from a 3D location and a 2D viewing direction to an RGB color value and a volume density. The network is a dense multi-layer perceptron, and views are rendered by discretized ray marching through the scene. More recently, a generative model for radiance fields has achieved 3D-aware image synthesis of objects from unposed 2D images. Generative Radiance Fields (GRAF) [2] use a patch-based discriminator that samples images at different scales and learns a continuous representation without pose information. GRAF also learns to disentangle the shape of the object from its appearance, so the generator can render the object from new poses and modify its shape and appearance independently.
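
To make the mapping above concrete, the sketch below (in PyTorch, not the code used in this project) shows a minimal NeRF-style model: an MLP that maps an encoded 3D position and a viewing direction to a color and a density, and a routine that renders a pixel by discretized ray marching, alpha-compositing the samples taken along a ray. The layer sizes, number of encoding frequencies, and near/far bounds are illustrative assumptions.

import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    # Encode each coordinate with sine/cosine features at growing frequencies,
    # as done in NeRF to help the MLP represent high-frequency detail.
    feats = [x]
    for i in range(n_freqs):
        feats += [torch.sin(2.0 ** i * x), torch.cos(2.0 ** i * x)]
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    # Illustrative sizes only; the original NeRF uses a deeper 8-layer MLP.
    def __init__(self, n_freqs=6, hidden=128):
        super().__init__()
        in_dim = 3 * (1 + 2 * n_freqs) + 3  # encoded position + raw view direction
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 volume density
        )

    def forward(self, xyz, viewdir):
        h = self.mlp(torch.cat([positional_encoding(xyz), viewdir], dim=-1))
        return torch.sigmoid(h[..., :3]), torch.relu(h[..., 3])  # color, density

def render_ray(model, origin, direction, near=2.0, far=6.0, n_samples=64):
    # Discretized ray marching: sample points along the ray, query the network,
    # then alpha-composite the colors from front to back.
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction             # (n_samples, 3)
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(points, dirs)
    alpha = 1.0 - torch.exp(-sigma * (t[1] - t[0]))
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                               # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)            # rendered pixel color

Training such a model amounts to minimizing the difference between pixels rendered this way and the corresponding pixels of the input images; GRAF builds on the same radiance-field representation but replaces the per-scene optimization with a generator conditioned on separate shape and appearance codes, trained adversarially against a patch-based discriminator.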

We used an internal tool to generate realistic images from a 3D scene of a spacecraft. We use this dataset to compare these models and their ability to learn the original scene representation. The main challenges of the task stem from the scene's complex lighting conditions and the specular materials used on spacecraft.

Objectives of the proposed approaches

References:

  1. Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2020). NeRF: Representing scenes as neural radiance fields for view synthesis. arXiv preprint arXiv:2003.08934.
  2. Schwarz, K., Liao, Y., Niemeyer, M., & Geiger, A. (2020). GRAF: Generative radiance fields for 3D-aware image synthesis. arXiv preprint arXiv:2007.02442.

Outcome

Conference paper
Vision-based Neural Scene Representations for Spacecraft
Mergy, A., Lecuyer, G., Derksen, D., and Izzo, D.
CVPR - AI in Space workshop, arXiv preprint arXiv:2105.06405 (2021)