Artificial Intelligence
1 Apr 2022

Hyperspectral Image Demosaicing for the HERA Mission

Background

Fig. 1: Filter layout of the Hyperscout sensor [2]

In the world’s first test of asteroid deflection, Hera will perform a detailed post-impact survey of the target asteroid, Dimorphos – the orbiting Moonlet in a binary asteroid system known as Didymos. Once NASA’s DART mission has impacted the moonlet, Hera will turn the grand-scale experiment into a well-understood and repeatable planetary defence technique. [1]

To collect information about the impact, HERA carries a version of the Hyperscout 2 instrument, a hyperspectral imaging camera with 25 spectral bands (channels) ranging from 400 to 1000 nm, which will record images of Dimorphos after DART's impact.

The camera has a filter array in which each sensor pixel records exactly one of the 25 channels (Fig. 1). This allows all channels to be captured at once, without any time delay. The disadvantage, however, is that each channel has spatial information gaps. To obtain a complete hyperspectral image, the missing pixel values need to be "filled in". This process is called image demosaicing.
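To make this sampling concrete, here is a minimal sketch in Python. The 5×5 band layout used here is a hypothetical assumption for illustration, not the actual Hyperscout filter order:

```python
import numpy as np

def mosaic(cube, pattern=5):
    """Reduce a (H, W, 25) hyperspectral cube to the (H, W) raw frame the sensor
    records: each pixel keeps exactly one band, chosen by its position inside a
    repeating pattern x pattern filter tile (layout assumed for illustration)."""
    h, w, c = cube.shape
    assert c == pattern * pattern
    rows, cols = np.indices((h, w))
    band = (rows % pattern) * pattern + (cols % pattern)  # band index per pixel
    return cube[rows, cols, band]

# Demosaicing has to recover the other 24 band values at every pixel.
cube = np.random.rand(100, 100, 25)   # stand-in for a complete hyperspectral cube
raw = mosaic(cube)                    # shape (100, 100): one band per pixel
```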

Project goals

Fig. 2: Hyperspectral quantum efficiency of the Hyperscout sensor [2]
Demosaicing algorithms for true-color (RGB) images have been researched extensively in recent years. Demosaicing hyperspectral images, however, is significantly more difficult. RGB images have 3 channels, so for every known pixel value only 2 missing ones need to be predicted; in our hyperspectral case there are 25 channels, so every known pixel value comes with 24 missing ones. Additionally, the hyperspectral camera sensor introduces crosstalk between the channels (Fig. 2).
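One way to picture such crosstalk is as a band-mixing matrix applied to every pixel's spectrum, so that each recorded band contains a weighted contribution from its spectral neighbours. The sketch below is purely illustrative; the tridiagonal structure and leakage value are assumptions, not the measured Hyperscout response:

```python
import numpy as np

def apply_crosstalk(cube, leakage=0.1):
    """Mix a fraction of every band into its two spectral neighbours.
    cube: (H, W, 25) array; leakage: illustrative crosstalk strength."""
    c = cube.shape[-1]
    mix = np.eye(c) + leakage * (np.eye(c, k=1) + np.eye(c, k=-1))
    mix /= mix.sum(axis=1, keepdims=True)   # row-normalise to conserve signal
    return cube @ mix.T                     # mix along the spectral axis
```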

This project aims to address these issues and to provide a novel, improved way to reconstruct hyperspectral images. It consists of two major steps: (i) creating a realistic dataset of images and (ii) training a neural network.

Dataset creation

Fig. 3: Trajectory of HERA (gray) around Dimorphos (red) with position samples used for image generation (blue).
Since the HERA mission is the first of its kind, no data about Dimorphos or comparable asteroids is available. Therefore, we use a Blender-based simulator, provided by the Institute of Geology of the Czech Academy of Sciences and the University of Helsinki, to generate ground-truth images at the desired spectral bands. Fig. 3 shows the position samples of the HERA spacecraft used as input to the simulator. Afterwards, crosstalk and the spatial gaps due to mosaicing are modelled to obtain the raw, noisy images.
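A possible sketch of this degradation step is shown below. The band layout, crosstalk weights, and noise level are placeholders chosen for illustration, not calibrated sensor values; the actual simulation pipeline may differ in its details:

```python
import numpy as np

def degrade(cube, pattern=5, leakage=0.1, noise_sigma=0.01, seed=0):
    """Turn a simulated ground-truth cube (H, W, 25) into a raw, noisy frame."""
    h, w, c = cube.shape
    # 1. spectral crosstalk (illustrative tridiagonal mixing matrix)
    mix = np.eye(c) + leakage * (np.eye(c, k=1) + np.eye(c, k=-1))
    mix /= mix.sum(axis=1, keepdims=True)
    mixed = cube @ mix.T
    # 2. mosaic sampling: each pixel keeps one band (assumed 5x5 layout)
    rows, cols = np.indices((h, w))
    band = (rows % pattern) * pattern + (cols % pattern)
    raw = mixed[rows, cols, band]
    # 3. additive sensor noise
    raw += np.random.default_rng(seed).normal(0.0, noise_sigma, raw.shape)
    return raw   # the network learns to map this raw frame back to the cube
```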

Network training

We propose a modified version of the Densely Connected Residual Network by Park et al. [3], adapted to the hyperspectral case and trained on our dataset. In a first training run, a model with only 70k trainable parameters and a footprint of less than 1 MB already reconstructs the hyperspectral image with a higher PSNR than the baseline approach (bicubic upsampling). We are currently developing and evaluating the model further.
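For reference, the baseline can be sketched as follows: each band is observed on a regular but sparse grid, so it is bicubically upsampled back to full resolution and scored with PSNR against the ground truth. The 5×5 band layout is the same illustrative assumption as above, and the exact evaluation setup of our pipeline may differ:

```python
import numpy as np
from scipy.ndimage import zoom

def bicubic_baseline(raw, pattern=5):
    """Bicubically upsample each sparsely sampled band of a (H, W) raw frame.
    Assumes H and W are multiples of the pattern size."""
    h, w = raw.shape
    bands = []
    for b in range(pattern * pattern):
        r0, c0 = divmod(b, pattern)              # offset of band b in the tile
        sparse = raw[r0::pattern, c0::pattern]   # pixels that recorded band b
        bands.append(zoom(sparse, pattern, order=3)[:h, :w])
    return np.stack(bands, axis=-1)              # (H, W, 25) reconstruction

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```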

References

[1] European Space Agency. ESA's planetary defence mission. https://www.esa.int/Safety_Security/Hera

[2] Mihoubi, S. (2018). Snapshot multispectral image demosaicing and classification. Doctoral Thesis, Université de Lille. https://hal.archives-ouvertes.fr/tel-01953493

[3] Park, B., & Jeong, J. (2019). Color Filter Array Demosaicking Using Densely Connected Residual Network. IEEE Access. https://doi.org/10.1109/ACCESS.2019.2939578
