Biologically inspired reactive robot navigation based on a combination of central and peripheral vision
In this work, we present a new method for vision-based, reactive robot navigation that enables a robot to move in the middle of the free space by exploiting both central and peripheral vision. The robot employs a forward-looking camera for central vision and two side-looking cameras for sensing the periphery of its visual field. The developed method combines the information acquired by this trinocular vision system and produces low-level motor commands that keep the robot in the middle of the free space. The approach follows the purposive vision paradigm in the sense that vision is not studied in isolation but in the context of the behaviors in which the system is engaged, as well as the environment and the robot's motor capabilities. It is demonstrated that by taking these issues into account, vision processing can be drastically simplified while still giving rise to quite complex behaviors. The proposed method does not make strict assumptions about the environment, requires only low-level information to be extracted from the images, yields robust robot behavior, and is computationally efficient. Results obtained both from simulations and from a prototype on-line implementation demonstrate the effectiveness of the method.
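The core idea of keeping the robot in the middle of the free space by balancing cues from the two peripheral cameras can be sketched in a few lines. The snippet below is an illustrative, hypothetical implementation and not the paper's actual algorithm: it assumes that each side camera yields a field of optical-flow magnitudes (larger flow indicating a nearer surface) and steers away from the side with the larger average flow, a classical balance strategy.

```python
import numpy as np

def steering_command(left_flow, right_flow, gain=0.5):
    """Hypothetical balance-strategy sketch: turn away from the side
    whose peripheral camera sees larger average optical-flow magnitude
    (i.e., the nearer wall). Positive output = turn toward the left side.

    left_flow, right_flow: arrays of flow magnitudes from the two
    side-looking cameras (assumed inputs, not the paper's exact cues).
    """
    l = float(np.mean(np.abs(left_flow)))
    r = float(np.mean(np.abs(right_flow)))
    # Normalized difference of the two sides; the small constant
    # avoids division by zero when both fields are empty of motion.
    return gain * (r - l) / (l + r + 1e-9)
```

When the left wall is closer (larger left flow), the command is negative, turning the robot to the right and back toward the corridor center; equal flow on both sides yields a zero command and straight-ahead motion.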
The geometry of the camera placement.
Actual placement of the peripheral cameras on top of Charlie.
Simulation of the robot behavior.
*** You may also download a video of the above simulation, or download a video (4.37 MB) showing an on-line experiment ***