Deep Learning in Deep Space
Deep learning is the name given to a set of techniques that make it possible to train neural networks with many hidden layers. It has become a major trend in the machine learning community, for very good reasons. The keys to this success, and many of its applications, are reviewed by three of the field's leading researchers in this recent paper.
‘Shallow’ learning methods allow us to teach a computer to solve different problems, but they usually require carefully handcrafted representations of the input data to achieve good results. Deep neural networks overcome this problem by learning appropriate representations on their own, with a different level of abstraction at each layer. Because of this ability to extract information directly from raw natural data, they are a perfect match for tasks such as image recognition and natural language understanding. The representational capability of these networks is so large that they have been successfully taught to mimic the style of famous artists.
Deep networks with different architectures have been developed to tackle different problems. For instance, convolutional neural networks (CNNs) are especially well suited to locally structured data (e.g. images, audio). Another important architecture is the autoencoder, which can be used with unlabeled data to learn relevant features. Finally, deep networks for reinforcement learning are becoming increasingly popular, as they can learn from a reward signal rather than explicit supervision.
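To make the autoencoder idea concrete, here is a minimal sketch (our illustration, not code from any of the cited work): a single-hidden-layer autoencoder in NumPy that squeezes 8-dimensional data through a 3-unit bottleneck and is trained by plain gradient descent on the reconstruction error. No labels are involved; the hidden layer simply learns a compact representation of the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled data: 200 samples lying near a 2-D subspace of an 8-D space.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 8))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 8))

# One-hidden-layer autoencoder: 8 -> 3 -> 8 (tanh encoder, linear decoder).
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))
lr = 0.05

def forward(X):
    H = np.tanh(X @ W_enc)      # learned low-dimensional representation
    return H, H @ W_dec         # reconstruction of the input

_, X_hat = forward(X)
err_before = np.mean((X - X_hat) ** 2)

for _ in range(1000):
    H, X_hat = forward(X)
    G = 2 * (X_hat - X) / len(X)            # gradient of the MSE w.r.t. X_hat
    G_h = (G @ W_dec.T) * (1 - H ** 2)      # backprop through the tanh encoder
    W_dec -= lr * H.T @ G
    W_enc -= lr * X.T @ G_h

_, X_hat = forward(X)
err_after = np.mean((X - X_hat) ** 2)
print(err_before, err_after)  # reconstruction error drops after training
```

Real autoencoders stack many such layers and use modern optimizers, but the principle is the same: the only training signal is how well the network reproduces its own input.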
Deep learning has been used in a wide range of fields including vision, astronomy and art. The Advanced Concepts Team is pioneering its use in the space sector. Applications such as the classification of stellar data and the planning of landing trajectories are under investigation. The latter is a case where machine learning can be extremely useful: since it is not possible to operate spacecraft in real time, they must be able to react autonomously to unexpected changes or unknown environments, and we can teach them to do that.
Our ongoing research includes the extension of visual landing using deep learning. As shown in this video, it is possible to land a spacecraft using only basic visual observables. A deep network might learn how to extract this relevant information from a camera and use it to control the spacecraft, so that the full pipeline would be handled by a single network. This is particularly interesting if we train the network with reinforcement learning in a simulator (similar work has been done to teach a network how to play Atari games). Using this approach, we would not need to tell the spacecraft how to land: it would discover the best way by itself, taking into account the capabilities of its own sensors.
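To show what "discovering how to land by itself" means, here is a toy sketch (purely illustrative, not the actual spacecraft pipeline): tabular Q-learning on a one-dimensional lander that is only told whether its touchdown was gentle, never how to achieve it. Deep approaches like the Atari player replace the table with a neural network reading raw pixels, but the learning principle is the same.

```python
import random

random.seed(0)

# Toy 1-D "landing": the lander starts at altitude 5 and at each step either
# coasts (gravity speeds it up) or thrusts (slows it down). The only feedback
# is a reward at touchdown: gentle landings are good, crashes are bad.
ALTS, VELS = range(6), range(4)   # discretised altitude and descent speed
ACTIONS = (0, 1)                  # 0 = coast, 1 = thrust

def step(alt, vel, action):
    vel = min(vel + 1, 3) if action == 0 else max(vel - 1, 0)
    alt = max(alt - vel, 0)
    if alt == 0:
        return (alt, vel), (10 if vel <= 1 else -10), True  # soft vs crash
    return (alt, vel), -1, False                            # fuel/time cost

Q = {(a, v): [0.0, 0.0] for a in ALTS for v in VELS}
alpha, gamma, eps = 0.5, 0.95, 0.1

for episode in range(2000):
    state, done = (5, 0), False
    while not done:
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda x: Q[state][x])
        nxt, r, done = step(*state, a)
        target = r if done else r + gamma * max(Q[nxt])
        Q[state][a] += alpha * (target - Q[state][a])
        state = nxt

# Greedy rollout with the learned policy: does it touch down gently?
state, reward = (5, 0), 0
for _ in range(50):  # cap steps to guarantee termination
    a = max(ACTIONS, key=lambda x: Q[state][x])
    state, reward, done = step(*state, a)
    if done:
        break
print("final touchdown reward:", reward)  # 10 means a gentle landing
```

The agent is never told the coast/thrust sequence that lands softly; it finds one purely by trial and error in the simulator, which is exactly the appeal of this approach for a spacecraft whose sensor and actuator limits it can discover for itself.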