My research interests are at the interface between computational neuroscience, psychology and robotics, focusing on cognition and emotions in both biological and artificial autonomous systems.
The goal of my research is twofold:
- First, it strives for a better understanding of the neural machinery underlying cognitive, emotional and social functions in living beings.
- Second, modelling these functions can lead to new control architectures for autonomous mobile robots able to learn and to exhibit flexible, adaptive behaviors.
Navigation is a skill common to most animals and also required for autonomous mobile robots; it thus provides ideal ground for studying both biological and artificial cognition.
I thus study, through neurobiologically inspired robotics (neurorobotics), how the cognitive abilities involved in navigation, from multi-modal perception and sensorimotor coupling to action selection, are built through the robot body's interactions with its physical and social environment. The navigation behaviors of these autonomous robots are evaluated both indoors and in large outdoor environments.
In parallel with this work, I also investigate how emotional signals can modulate several of these cognitive functions (perception, action selection, attention).
Research topics
- Biologically Inspired Vision System
Vision plays a prominent role in our robotic control architectures. Basically, the visual chain is composed of a neural mechanism that learns, for each interesting region (landmark) extracted from the image by an attentional mechanism, the product of the "What" information (its identity) and the "Where" information (its azimuth).
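As a minimal illustration of this "What" x "Where" merge, the sketch below represents it as an outer product of two population codes; the function name, vector sizes and one-hot codes are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def what_where_product(what, where):
    """Merge landmark identity ('what') and azimuth ('where') codes.

    what  : activity vector over learned landmark identities
    where : population code over discretised azimuths
    Returns a 2-D 'landmark x azimuth' activity map whose units can then
    be associated with places or actions by simple Hebbian learning.
    """
    return np.outer(what, where)

# Illustrative sizes: 32 landmark identities, 36 azimuth bins (10 deg each)
what = np.zeros(32);  what[7] = 1.0       # landmark #7 recognised
where = np.zeros(36); where[12] = 1.0     # seen at azimuth ~120 deg
merged = what_where_product(what, where)  # one active unit at (7, 12)
```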
I propose and study two variants of this model:
- Visual scene recognition based on global and local image features (a sketch follows this list):
- Global features extracted to code visual contexts (GIST).
- Contexts drive the recognition of local visual descriptors (hierarchical model).
- Application to outdoor robot localisation.
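A rough sketch of the hierarchical idea, under simplifying assumptions (unit-norm descriptors, plain dot-product matching, hypothetical names): the global GIST code first recognises a context, which then restricts the bank of local landmark prototypes used for recognition.

```python
import numpy as np

def recognise(gist, local_descriptors, contexts, prototypes_per_context):
    """Hierarchical recognition sketch.

    gist                   : global feature vector of the current image
    local_descriptors      : (n, d) array of local descriptors (unit norm assumed)
    contexts               : (c, g) array of learned GIST prototypes
    prototypes_per_context : dict, context index -> (m, d) prototype array
    """
    ctx = int(np.argmax(contexts @ gist))    # coarse context recognition first
    protos = prototypes_per_context[ctx]     # context-specific landmark bank
    sims = local_descriptors @ protos.T      # dot-product similarity
    return ctx, sims.argmax(axis=1)          # context + best landmark per descriptor
```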
- RobotSoc: implementation of attentional mechanisms in an artificial vision system:
- Scale-space vision system.
- Hierarchical neural network for multi-scale feature learning (coarse-to-fine; sketched below).
- Hardware (FPGA) implementation of the scale-space vision system.
- Application to a smart camera for indoor robot localisation and navigation.
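A minimal sketch of the scale-space construction that such a coarse-to-fine system builds on, assuming a simple Gaussian pyramid (the parameters and SciPy-based implementation are illustrative, not the FPGA design):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(image, n_scales=4, sigma=1.6):
    """Build a simple Gaussian scale space: each level is a blurred,
    subsampled copy of the previous one, so features can be learned
    coarse-to-fine (low-resolution levels first, details last)."""
    levels = [image.astype(float)]
    for _ in range(n_scales - 1):
        blurred = gaussian_filter(levels[-1], sigma)
        levels.append(blurred[::2, ::2])  # halve resolution at each octave
    return levels[::-1]                   # coarsest level first

# A hierarchical network would then learn features on levels[0] (the
# coarsest) and progressively refine them on the finer levels.
```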
- Modelling brain structures involved in spatial cognition
In close collaboration with neurobiologists, we develop a model of several brain structures (and their interactions) involved in spatial memory and navigation. This model is evaluated both in simulation and on real robots performing motivated navigation (see next item), and it takes into account biological data about the four following brain structures:
- Entorhinal Cortex (EC):
- Grid cell (GC) model, based on a modulo compression of the path integration signal (sketched after this list).
- Replication of the Hafting et al. (2005) experiment on a robot.
- Hippocampus (HS):
- Place cell (PC) model based on visual input and GC activities.
- Transition cell model: links two successive PCs.
- Hierarchical PC model: PCs with a large place field (local context) modulate PC activities with a narrow place field.
- Prefrontal Cortex (PFC): cognitive map.
- Basal Ganglia (BG): dynamic neural field (DNF) for stable action selection (sketched below).
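A minimal sketch of the modulo idea behind the GC model: the path-integrated position is wrapped onto a periodic lattice, so the modelled cell fires near every lattice node. For simplicity the lattice below is square; obtaining the hexagonal pattern reported by Hafting et al. requires combining modulo projections along non-orthogonal axes. All names and parameters here are illustrative.

```python
import numpy as np

def grid_cell_activity(x, y, spacing, phase=(0.0, 0.0)):
    """Grid-cell sketch via modulo compression of path integration:
    the integrated position (x, y) is wrapped onto a periodic lattice
    of period `spacing`, so the cell fires whenever the robot is near
    any node of the lattice."""
    dx = (x - phase[0]) % spacing
    dy = (y - phase[1]) % spacing
    # distance to the nearest lattice node, wrapped on the torus
    d = np.hypot(np.minimum(dx, spacing - dx), np.minimum(dy, spacing - dy))
    return np.exp(-(d / (0.3 * spacing)) ** 2)  # Gaussian firing bump
```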
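And a minimal sketch of DNF-based action selection, assuming a 1-D circular field over movement directions (an Amari-style field with local excitation and global inhibition; all parameters are illustrative): iterated over time, a single stable activity bump emerges whose peak gives the selected action, even under noisy or transient inputs.

```python
import numpy as np

def dnf_step(u, inp, dt=0.1, tau=1.0, h=-0.5, w_exc=2.0, w_inh=0.9, sigma=3.0):
    """One Euler step of a 1-D dynamic neural field over headings.
    Local excitation plus global inhibition lets one stable bump win,
    yielding smooth, hysteretic action selection."""
    n = len(u)
    f = 1.0 / (1.0 + np.exp(-u))                      # firing-rate nonlinearity
    d = np.abs(np.arange(n)[:, None] - np.arange(n))  # distances on the field
    d = np.minimum(d, n - d)                          # circular topology (headings)
    kernel = w_exc * np.exp(-d**2 / (2 * sigma**2)) - w_inh / n
    du = -u + h + inp + kernel @ f
    return u + dt * du / tau
```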
- Neurorobotics architecture for robot navigation
The models above are applied to robot navigation and localisation following a bottom-up approach: navigation strategies are built in these control architectures from sensorimotor loops learned in the environment, either autonomously or through human interaction.
More recently, I have also been investigating how such models scale up, by evaluating and developing new architectures for self-driving vehicles (PhD thesis of Y. Espada).
- Sensorimotor learning:
- Place cell/action associations.
- Transition cell/action associations.
- Sensorimotor strategies:
- Obstacle avoidance (DNF, "reflex" control mechanism, no learning)
- Road following (vanishing point, "reflex" control mechanism, no learning)
- Route following (place cell/action association, learning); can detect loop closure (sketched after this list).
- Path planning (transition cells, cognitive map, learning, motivations)
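A minimal sketch of the route-following loop under simplifying assumptions (unit-norm visual codes, dot-product place recognition, hypothetical names): a recognised place cell replays its associated action, re-recognition of a stored place is what signals loop closure, and otherwise a new place/action pair is recruited.

```python
import numpy as np

def route_step(visual_code, place_cells, actions, threshold=0.85):
    """Route-following sketch: match the current visual code against the
    learned place cells. A win above threshold replays the associated
    action (and, on a previously stored place, signals loop closure);
    otherwise a new place cell is recruited."""
    if place_cells:
        sims = np.array([pc @ visual_code for pc in place_cells])
        winner = int(sims.argmax())
        if sims[winner] > threshold:
            return actions[winner], winner  # known place: replay its action
    place_cells.append(visual_code)         # recruit a new place cell
    actions.append(None)                    # its action is associated on the spot
    return None, len(place_cells) - 1
```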
- Studying the interplay between cognitive and emotional mechanisms
I investigate the role played by emotions in learning, adaptive behavior and autonomy. Our model is based on a continuous appraisal of both robot sensations and internal state signals (novelty, progress and stagnation). This model can elicit affects (frustration and boredom) that are useful for communication purposes (HRI, learning by imitation) but that can also play a key role in behavior learning and regulation. I thus studied how emotional signals can modulate several functions (a sketch of the appraisal follows the list):
- How a self-assessment mechanism, driven by the evolution of the robot's sensorimotor predictions (novelty detection), can autonomously control the learning and adaptation of sensorimotor couplings to improve the robot's behavioral performance.
- How the same self-regulation mechanism, driven by emotions (frustration and boredom), can modulate decision-making processes in different robotic tasks:
- Selection of the most appropriate navigation strategy to apply in a given context.
- Selection of the visual target by an attention mechanism while the robot performs a visual search task.
- How the emotional valence extracted from sensations can modulate the robot's perception (peripersonal space) and its impact on interactions with others.
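A minimal sketch of the appraisal idea, under strong simplifications (scalar prediction errors, fixed thresholds, hypothetical names): stagnating high error elicits frustration, a long run of near-zero error elicits boredom, and these signals can then gate learning rates or strategy selection.

```python
import numpy as np

def self_assessment(errors, window=20, eps=1e-3):
    """Self-assessment sketch over the recent history of sensorimotor
    prediction errors (most recent last). Persistently high error with
    no progress elicits frustration (switch strategy, ask for help);
    a long run of near-zero error elicits boredom (the skill is
    mastered, attention can move elsewhere)."""
    e = np.asarray(errors[-window:], dtype=float)
    progress = e[0] - e[-1] if e.size > 1 else 0.0
    frustration = e.mean() > 0.5 and progress < eps  # stagnating high error
    boredom = e.mean() < eps                         # nothing left to learn
    return bool(frustration), bool(boredom)
```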
Projects
- 2016-20XX: Neurorobotics architectures for self-driving vehicles (in partnership with the VEDECOM Institute).
- 2014-20XX: RobotSoc: a robotic system to study visual perception.
- 2011-2014: Neurorobot (ANR): coding of information on different time scales for spatial decision-making; grid cell modelling.
- 2010-2013: Auto-eval (DIM LSC): self-assessment of a mobile robot.