A Biomimetic Robotic Head Using a Model of Ocular Tracking
Abstract
This paper describes a biomimetic vision platform that tracks moving targets with self-generated pursuit and saccadic intervals. Extensions to the controller add image-analysis capabilities that provide a measure of prediction and low-level target selection. A model for the bottom-up control of visual attention in primates is presented and experimentally tested on the platform. Given an input image, the system predicts which location in the image will automatically and unconsciously draw a person's attention. Target selection relies on the extraction of a pair of 2D feature maps based on spatial discontinuities in the modalities of intensity and velocity (brightness and slip). The two maps are then combined into a single 2D "saliency map" that encodes the conspicuity of each pixel in the scene, irrespective of which feature detected that location as conspicuous. A winner-take-all mechanism then selects the highest-salience point in the map at any given time and draws the focus of attention to that location. This allows a target to be selected from a visual scene containing multiple distractors without first recognizing the objects. The intensity of the target is also incorporated into the controller gains, modulating the alertness of the anthropomorphic robot according to the brightness of the target. The parallel observation of multiple targets and the tracking of the most salient one further enhance the biomimetic nature of the robot, allowing its controller to judge the significance of a target that suddenly enters its visual field.
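The feature-map combination and winner-take-all selection described above can be illustrated with a minimal sketch. This is not the authors' controller: the function name, the per-map normalization, and the equal-weight summation are assumptions made for illustration; only the overall pipeline (two feature maps, one saliency map, winner-take-all) comes from the abstract.

```python
import numpy as np

def saliency_select(intensity_map, velocity_map):
    """Combine two 2D feature maps into a saliency map and pick the winner.

    intensity_map, velocity_map: 2D arrays of the same shape, encoding
    spatial discontinuities in brightness and in velocity (slip).
    Returns the saliency map and the (row, col) of the winning pixel.
    """
    def normalize(m):
        # Scale each map to [0, 1] so neither modality dominates
        # (assumed normalization scheme, not taken from the paper).
        m = m.astype(float)
        span = m.max() - m.min()
        return (m - m.min()) / span if span > 0 else np.zeros_like(m, dtype=float)

    # Feature-independent combination: the saliency map encodes how
    # conspicuous each pixel is, regardless of which feature flagged it.
    saliency = normalize(intensity_map) + normalize(velocity_map)

    # Winner-take-all: the focus of attention goes to the
    # highest-salience location in the map.
    winner = np.unravel_index(np.argmax(saliency), saliency.shape)
    return saliency, winner
```

For example, a bright, fast-moving region surrounded by static distractors would dominate both maps and therefore win the saliency competition, even though no object in the scene has been recognized.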