Artificial Intelligence
1 Jul 2011

Time-To-Contact for spacecraft landing

Can the time-to-contact be a useful visual cue to achieve robust and mass-efficient spacecraft landing?

Many recent studies on autonomous spacecraft landing use computer vision methods to improve the accuracy of the state estimates used for landing [1-3]. Typically, these studies integrate the vision module with other exteroceptive sensors such as laser or radar altimeters. This is a sensible approach for the main landing system of a large spacecraft. However, for a backup emergency system or for much smaller spacecraft, a robust, lightweight solution based entirely on vision and proprioceptive sensors (e.g. gyros) would be desirable.


Study Description

Small flying animals are capable of safe and accurate landings while relying only on proprioceptive and visual information. Since this capability holds the promise of landing safely with limited sensing and processing, it has served as inspiration for recent spacecraft landing studies [4-6].

The focus of these studies has been on the use of ventral optic flow, a measure of the translational velocity divided by the height. Bees are known to use optic flow to control their speed and height, including when landing [7-9]. In particular, when landing, they follow a strategy of keeping the ventral optic flow constant [10]. Valette et al. study a control law that implements this strategy and simulate landings on the Moon [5].
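As a concrete reference for the quantity involved, the minimal sketch below (in Python, with illustrative variable names and numbers of our own choosing, not taken from [5] or [10]) computes the ventral flow and shows why keeping it constant makes a bee-like lander slow down as it gets lower.

```python
def ventral_flow(v_x: float, h: float) -> float:
    """Ventral optic flow: translational (horizontal) velocity over height.

    With v_x in m/s and h in m, the result is in rad/s: the angular
    rate at which ground features directly below the lander move
    through the field of view.
    """
    return v_x / h

# Keeping the ventral flow constant couples speed to height: flying
# lower forces flying slower, which is the bee-like landing strategy.
print(ventral_flow(v_x=10.0, h=100.0))  # 0.1 rad/s
print(ventral_flow(v_x=1.0, h=10.0))    # 0.1 rad/s: ten times lower, ten times slower
```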

The disadvantages of relying solely on ventral optic flow for landing are two-fold. First, the vertical dynamics of the lander are left unconstrained: the ventral flow can have the same constant value for a trajectory in which the lander ascends while accelerating and a trajectory in which the lander descends while decelerating (see the sketch below). Thus, one has to directly or indirectly assume some type of descent profile, for example by introducing a pitch law for the spacecraft [5]. Without additional exteroceptive information to compute an optimal pitch profile, this leads to a considerable expense of propellant [6] and to undefined final low-gate conditions. Second, in a purely vertical landing the ventral flow is (close to) zero; in such a case, e.g. an asteroid landing scenario, the ventral flow provides no information on how to land the spacecraft.
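The first point can be made concrete with a small numerical sketch: the two vertical profiles below (illustrative numbers of our own, not from [5] or [6]) are very different, yet are indistinguishable to a sensor that only measures ventral flow.

```python
import numpy as np

omega = 0.1                     # constant ventral flow set-point, rad/s (illustrative)
t = np.linspace(0.0, 10.0, 6)   # sample times, s

# Trajectory A: the lander ascends exponentially; to keep the flow
# constant, the horizontal speed must grow, i.e. it accelerates.
h_up = 100.0 * np.exp(0.05 * t)
vx_up = omega * h_up

# Trajectory B: the lander descends exponentially; the horizontal
# speed shrinks accordingly, i.e. it decelerates.
h_down = 100.0 * np.exp(-0.05 * t)
vx_down = omega * h_down

# Both trajectories produce exactly the same constant ventral flow:
assert np.allclose(vx_up / h_up, omega)
assert np.allclose(vx_down / h_down, omega)
```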

In this study, we investigate a landing strategy that complements ventral optic flow with another visually observable measure known to play an important role in animal landings: the time-to-contact (TTC) [11,12]. In the context of landing, the time-to-contact is a measure of the height divided by the vertical velocity. When the lander approaches the landing surface, imaged ground features expand, and this expansion can be measured to estimate the time-to-contact (cf. [13,14]).
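As a rough sketch of how such an estimate can be formed, assume a feature tracker that reports the image scale s of a ground patch in consecutive frames. Since s grows as 1/height during a vertical descent, the TTC equals s divided by its rate of change; the finite-difference estimator below is a simplification of our own, not the method of [13,14].

```python
def ttc_from_scales(s_prev: float, s_curr: float, dt: float) -> float:
    """Estimate time-to-contact from the scale change of a tracked feature.

    The image scale s of a ground patch grows as 1/h while descending,
    so tau = h / |dh/dt| = s / (ds/dt). With a first-order finite
    difference, tau ~= s_curr * dt / (s_curr - s_prev).
    """
    ds = s_curr - s_prev
    if ds <= 0.0:
        raise ValueError("feature is not expanding: no contact predicted")
    return s_curr * dt / ds

# A patch whose apparent size grows by 2% between frames 0.1 s apart:
print(ttc_from_scales(s_prev=1.00, s_curr=1.02, dt=0.1))  # ~5.1 s to contact
```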

Preliminary results indicate that the use of TTC leads to a safe and mass-efficient landing strategy that relies only on proprioception and vision. For example, in a theoretical analysis we show that, assuming the ventral optic flow and time-to-contact to be known from the camera images, a spacecraft can make a soft landing on a planetary surface with a minimal mass penalty with respect to a mass-optimal landing. Our latest experiments involve measuring the TTC from images generated by the realistic space simulator PANGU. They indicate that the landing strategy still functions in the presence of the noise and delay expected in a real application.
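The intuition behind the soft landing can be illustrated with a toy vertical-descent simulation: holding the TTC at a constant set-point forces the vertical speed to be proportional to the height, so both decay exponentially toward touchdown. The sketch below (illustrative numbers and a simple Euler integration of our own, not the guidance law analyzed in the project) demonstrates this behavior.

```python
tau_ref = 5.0    # TTC set-point, s (illustrative)
dt = 0.01        # integration step, s
h = 100.0        # initial height, m

for _ in range(int(20.0 / dt)):
    # Enforce tau = h / |v| by commanding the matching vertical speed;
    # a real lander would track this reference with a thrust controller.
    v = -h / tau_ref
    h += v * dt

# After 20 s (four time constants) both height and speed have decayed
# by a factor exp(-4) ~ 0.018: an asymptotically soft touchdown.
print(f"h = {h:.2f} m, v = {v:.3f} m/s")  # ~1.82 m, ~-0.365 m/s
```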

The landing strategy studied in this project is also relevant to areas outside the space sector, such as the autonomous landing of Unmanned Aerial Vehicles (cf. [15,16]).

References

  1. Johnson, A., Willson, R., Cheng, Y., Goguen, J., Leger, C., San Martin, M., and Matthies, L., Design through operation of an image-based velocity estimation system for Mars landing, International Journal of Computer Vision, Vol. 74, No. 3, 2007, pp. 319-341.
  2. Mourikis, A., Trawny, N., Roumeliotis, S., Johnson, A., Ansar, A., and Matthies, L., Vision-Aided Inertial Navigation for Spacecraft Entry, Descent, and Landing, IEEE Transactions on Robotics, Vol. 25, No. 2, 2009, pp. 264-280.
  3. Sibley, G., Matthies, L., and Sukhatme, G., Sliding window filter with application to planetary landing, Journal of Field Robotics, Special Issue: Visual Mapping and Navigation Outdoors, Vol. 27, No. 5, 2010, pp. 587-608.
  4. Orchard, G., Bartolozzi, C., and Indiveri, G., Applying neuromorphic vision sensors to planetary landing tasks, Proceedings of the Biomedical Circuits and Systems Conference, BIOCAS, IEEE, Nov. 2009, pp. 201-204.
  5. Valette, F., Ruffier, F., Viollet, S., and Seidl, T., Biomimetic optic flow sensing applied to a lunar landing scenario, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2010, pp. 2253-2260.
  6. Izzo, D., Weiss, N., and Seidl, T., Constant-Optic-Flow Lunar Landing: Optimality and Guidance, Journal of Guidance, Control, and Dynamics, Vol. 34, No. 5, 2011, pp. 1383-1395.
  7. Preiss, R., Motion parallax and figural properties of depth control flight speed in an insect, Biological Cybernetics, Vol. 57, No. 1-2, 1987, pp. 1-9.
  8. Baird, E., Srinivasan, M., Zhang, S., and Cowling, A., Visual control of flight speed in honeybees, Journal of Experimental Biology, Vol. 208, 2005, pp. 3895-3905.
  9. Baird, E., Srinivasan, M., Zhang, S., Lamont, R., and Cowling, A., Visual control of flight speed and height in the honeybee, Lecture notes in computer science, 2006, pp. 40-51.
  10. Srinivasan, M., Zhang, S., Lehrer, M., and Collett, T., Honeybee Navigation en Route to the Goal: Visual Flight Control and Odometry, The Journal of Experimental Biology, Vol. 199, 1996, pp. 237-244.
  11. Lee, D., A theory of visual control of braking based on information about time-to-collision, Perception, Vol. 5, 1976, pp. 437-459.
  12. Lee, D., Davies, M., Green, P., and Weel, F. v. d., Visual control of velocity of approach by pigeons when landing, Journal of Experimental Biology, Vol. 180, 1993, pp. 85-104.
  13. Koenderink, J. and Doorn, A. v., Facts on optic flow, Biological Cybernetics, Vol. 56, 1987, pp. 247-254.
  14. Takeda, N., Watanabe, M., and Onoguchi, K., Moving obstacle detection using residual error of FOE estimation, Proceedings of the 1996 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. 3, IEEE, 1996, pp. 1642-1647.
  15. Green, W., Oh, P., Barrows, G., and Sevcik, K., Autonomous Landing for Indoor Flying Robots Using Optic Flow, Proceedings of the ASME International Mechanical Engineering Congress and Exposition (IMECE), Washington, D.C., Vol. 2, 2003, pp. 2347-2352.
  16. Green, W. and Oh, P., Optic Flow Based Collision Avoidance on a Hybrid MAV, IEEE Robotics and Automation Magazine, Vol. 15, No. 1, 2008, pp. 96-103.

Outcome

Peer reviewed article
Izzo, D. and de Croon, G.C.H.E., Landing with Time-to-Contact and Ventral Optic Flow Estimates, Journal of Guidance, Control, and Dynamics, Vol. 35, No. 4, 2012, p. 1362.
Peer reviewed article
de Croon, G.C.H.E., Izzo, D., and Schiavone, G., Time-to-Contact Estimation in Landing Scenarios using Feature Scales, Acta Futura, Vol. 5, 2012, pp. 73-82.