Evolution of adaptive behavior in a gravity-varying environment
Can we design a controller that adapts to changing environmental conditions? If the environment is unknown in advance and dynamic, what is the best strategy for an agent to use, and which methodology should we employ to design it?
Optimal control theory can provide the best control algorithm given perfect knowledge of the environment an agent (rover, spacecraft, etc.) will operate in. However, such conventional design tools fail to produce good control strategies when the environment is dynamic or, in general, not fully predictable. This raises the following questions:
- What is the best strategy an agent could use when prediction is impossible?
- What methodological tools can we employ to design a controller using this strategy?
The aim of this study is to design controllers for autonomous agents that are general enough to produce adaptive behaviour in unknown environments. The design focus thus shifts from optimality to adaptivity.
In particular, we use artificial neural networks (ANNs) designed by artificial evolution as controllers for robots or spacecraft that have to navigate in environments with varying and a priori unknown gravity fields. ANNs are known for their good generalisation capabilities, and training them via evolutionary techniques, an approach known as Evolutionary Robotics, can produce solutions that exploit the interaction between the agent and the environment.
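To make the approach concrete, the following is a minimal illustrative sketch (not the study's actual setup) of evolving the weights of a small feedforward ANN controller. The task, network size, fitness function, and all parameter values here are invented for illustration: a 1-D agent must hover at a target altitude while the gravity level changes between evaluation episodes, so selection favours controllers that generalise across gravity fields rather than optimise for one.

```python
import numpy as np

# Hypothetical toy setup: a 2-4-1 feedforward network controlling
# vertical thrust of a 1-D agent that should hover at altitude 1.0.
N_IN, N_HID = 2, 4
N_PARAMS = N_IN * N_HID + N_HID + N_HID + 1  # weights and biases

def controller(weights, obs):
    """ANN mapping (altitude error, velocity) to a thrust command in [-1, 1]."""
    w1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
    b1 = weights[N_IN * N_HID:N_IN * N_HID + N_HID]
    w2 = weights[N_IN * N_HID + N_HID:N_IN * N_HID + 2 * N_HID]
    b2 = weights[-1]
    h = np.tanh(obs @ w1 + b1)
    return np.tanh(h @ w2 + b2)

def evaluate(weights, gravity, steps=150, dt=0.05):
    """Fitness: negative squared tracking error around the target altitude."""
    pos, vel, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        obs = np.array([pos - 1.0, vel])
        thrust = 3.0 * controller(weights, obs)  # max thrust exceeds gravity
        vel += (thrust - gravity) * dt           # simple Euler integration
        pos += vel * dt
        cost += (pos - 1.0) ** 2
    return -cost

def evolve(gravities, pop_size=24, gens=30, sigma=0.3, seed=0):
    """Elitist evolution of controller weights; fitness is averaged over
    several gravity levels so selection rewards adaptive, general behaviour."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 1.0, (pop_size, N_PARAMS))
    for _ in range(gens):
        fit = np.array([np.mean([evaluate(w, g) for g in gravities]) for w in pop])
        elite = pop[np.argsort(fit)[-pop_size // 4:]]       # keep top quartile
        parents = elite[rng.integers(0, len(elite), pop_size)]
        pop = parents + rng.normal(0.0, sigma, parents.shape)  # Gaussian mutation
        pop[0] = elite[-1]                                  # preserve the best
    fit = np.array([np.mean([evaluate(w, g) for g in gravities]) for w in pop])
    return pop[np.argmax(fit)]
```

Because fitness is averaged over multiple gravity values, the evolved controller cannot simply memorise one constant to cancel; it must react to the observed altitude error and velocity, which is a simple instance of the adaptive (rather than optimal) behaviour described above.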