Taking inspiration from work in psychology and neurobiology, we have already
proposed solutions for robot navigation in small environments (one or two rooms),
based on a dynamical process linking sensory and motor information. Our robotic
architecture codes for places in the environment (based on a place-cell model) and
learns to link this code with a motor command. A robot that learns a few such
place/command couples can exhibit homing behaviour: a competitive mechanism over
these place/command associations tends to create a basin of attraction that
converges to the goal place. The same mechanism can also be used for a patrol
behaviour (a trajectory learned through human interaction).
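The following minimal sketch illustrates this place/command mechanism in Python. The place centres, associated headings, kernel width and step size are illustrative assumptions, not values from our architecture; it only shows how a winner-take-all competition between learned place/command couples yields a trajectory that falls into a basin of attraction around the goal.

```python
import numpy as np

# Illustrative sketch (hypothetical values): a few place/command couples are
# learned around a goal at the origin, and a winner-take-all competition
# between them drives a homing behaviour.

goal = np.array([0.0, 0.0])

# Learned couples: the centre of each place cell and the heading (radians)
# associated with that place while the robot was led towards the goal.
place_centres = np.array([[2.0, 0.0],
                          [0.0, 2.0],
                          [-2.0, 0.0],
                          [0.0, -2.0]])
commands = np.array([np.pi,        # from (2, 0) head towards the goal
                     -np.pi / 2,   # from (0, 2)
                     0.0,          # from (-2, 0)
                     np.pi / 2])   # from (0, -2)

SIGMA = 1.5  # width of the place-cell activity profile (assumption)


def place_cell_activities(position):
    """Each place cell's activity decays with distance to its learned centre."""
    d = np.linalg.norm(place_centres - position, axis=1)
    return np.exp(-(d ** 2) / (2 * SIGMA ** 2))


def select_command(position):
    """Winner-take-all competition between the place/command associations."""
    activities = place_cell_activities(position)
    return commands[np.argmax(activities)]


# Starting anywhere in the learned region, repeatedly following the winning
# command traces a trajectory that converges on the goal place.
position = np.array([1.8, 1.7])
step = 0.2
for _ in range(40):
    if np.linalg.norm(position - goal) < step:
        break  # the goal place has been reached
    heading = select_command(position)
    position = position + step * np.array([np.cos(heading), np.sin(heading)])

print("final position (close to the goal):", np.round(position, 2))
```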
But several questions remain open when performing the same task in large
environments. Autonomous robotic systems face measurement imprecision from their
sensors and uncertainty linked to dynamic environments. Long-term navigation and
exploration are very complex tasks, as emphasized by the complexity of the brain
structures involved in them. Scaling up the size of the environment in which the
robot navigates implies learning and handling a much larger volume of data.
Furthermore, how can the system cope with a signal-to-noise ratio that tends to
decrease as more information has to be learned? How can visual information be
disambiguated to reduce uncertainty on localisation?
Robotic systems aiming at autonomous behaviour have to cope with two main
challenges. First, the system has to actively extract and learn robust information
that it finds relevant for adapting its current behaviour. Second, such a system
must be able to evaluate its own performance in order to detect failures or
deadlocks that may occur while interacting with a complex and dynamic environment
(even when each local elementary decision is correct!). Hence we can define the
autonomy of a robot as its ability to detect and to correct failures in its own
behaviour.
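As a purely illustrative sketch of such self-evaluation (the progress signal, window length and threshold below are assumptions, not part of our architecture), a homing robot could monitor whether a goal-related activity keeps increasing and signal a failure when it stagnates, even though every local decision may have looked correct.

```python
import numpy as np

# Hypothetical self-evaluation signal: the robot expects its goal-related
# activity (e.g. the activity of the goal place cell) to keep increasing
# during homing; a prolonged stagnation is flagged as a failure or deadlock.

WINDOW = 10          # number of recent steps considered (assumption)
MIN_PROGRESS = 1e-3  # minimal expected improvement over the window (assumption)


def detect_failure(goal_activity_history):
    """Flag a failure when the goal-related activity stops increasing."""
    if len(goal_activity_history) < WINDOW:
        return False
    recent = goal_activity_history[-WINDOW:]
    return (recent[-1] - recent[0]) < MIN_PROGRESS


# Example: the activity first rises (progress), then plateaus (deadlock).
history = []
for activity in list(np.linspace(0.1, 0.6, 15)) + [0.6] * 12:
    history.append(activity)
    if detect_failure(history):
        print("failure detected at step", len(history))
        break
```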