Robot manipulators require knowledge about their environment in order to perform their desired actions. In several robotic tasks, vision sensors play a critical role by providing the necessary quantity and quality of information regarding the robot's environment. For example, "visual servoing" algorithms may control a robot manipulator in order to track moving objects that are being imaged by a camera. Current visual servoing systems often lack the ability to detect automatically objects that appear within the camera's field of view. In this research, we present a robust "figure/ground" framework for visually detecting objects of interest. An important contribution of this research is a collection of optimization schemes that allow the detection framework to operate within the real-time limits of visual servoing systems. The most significant of these schemes involves the use of "spontaneous" and "continuous" domains. The number and location of continuous domains are allowed to change over time, adjusting to the dynamic conditions of the detection process. We have developed actual servoing systems in order to test the framework's feasibility and to demonstrate its usefulness for visually controlling a robot manipulator. © 1997 Elsevier Science Ltd.
Keywords—robotic visual servoing; tracking; detection; sensor-based robot control; real-time robot vision; frame-differencing
1. INTRODUCTION

1.1. Overview
Flexible robotic systems require sensory information in order to interact effectively with their environment. The information about a robot's environment is important because it provides the raw data with which the robot can perceive, analyze and react to specific objects in the environment. Of all the objects that a robot encounters, only a subset is significant to the task that the robot has to accomplish. We term these the objects of interest, and we concern ourselves with operations that are performed with respect to them.
Acknowledgement—This research is supported by the National Science Foundation through Contracts #IRI-9410003 and #IRI-9502245, the Center for Transportation Studies through Contract #USDOT/DTRS 93-G-0017-01, the Department of Energy (Sandia National Laboratories) through Contracts #AC-3752D and #AL-3021, the Army High Performance Computing Research Center under the auspices of the Department of the Army, Army Research Laboratory Cooperative Agreement Number DAAH04-95-2-0003/Contract Number DAAH04-95-C-0008 (the content of which does not necessarily reflect the position or the policy of the government, and no official endorsement should be inferred), and the McKnight Land-Grant Professorship Award Program at the University of Minnesota.
*Author to whom correspondence should be addressed.
The sensory information may come from any of a variety of sensors (e.g. cameras, global positioning systems, lasers, proximity detectors, radar, and tactile sensors). Cameras often provide information that is richer, more complete, and covers a larger area than the other sensors listed above. In addition, the new CCD cameras are less expensive and more accurate than the older vision sensors. However, additional challenges accompany the advantages of the vision modality. One such challenge involves the fact that the vision sensor provides no inherent signal that an object has moved into its field of view. This can be contrasted with robots that use a tactile sensor, where the presence of a nearby object is part of the sensor's signal.
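Frame-differencing, listed among the keywords above, is one simple way to supply the missing "object has appeared" signal: subtracting consecutive grey-level images highlights pixels that have changed. The listing below is a minimal illustrative sketch of this idea, not the detection framework developed in this paper; the threshold and minimum-region-size values are arbitrary assumptions chosen for the example.

    # Illustrative frame-differencing sketch (not the authors' method).
    # Flags a possible object entering the field of view when enough
    # pixels differ between two consecutive grey-level images.
    import numpy as np

    def detect_motion(prev_frame, curr_frame, threshold=25, min_pixels=50):
        """Return (moved, mask): moved is True if at least min_pixels
        pixels changed by more than threshold grey levels."""
        # Absolute grey-level difference between consecutive frames;
        # cast to a signed type so the subtraction cannot wrap around.
        diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
        mask = diff > threshold
        return mask.sum() >= min_pixels, mask

    # Example usage with synthetic 64x64 frames: a bright square
    # "object" appears in the second frame and is detected.
    prev = np.zeros((64, 64), dtype=np.uint8)
    curr = prev.copy()
    curr[20:30, 20:30] = 200        # simulated object entering the view
    moved, mask = detect_motion(prev, curr)
    print(moved, mask.sum())        # True 100

In a real servoing loop this test would run on each new camera frame against the previous one, and the changed-pixel mask would seed the subsequent figure/ground analysis.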
Existing research has largely focused on other issues, with an understanding that actual implementations would require the addition of an object detection component. Sometimes, the detection problem can be avoided by restricting experimentation to special cases where object detection is trivial. However, this issue must also be given consideration if a robot is to function in unpredictable, natural environments. Therefore, it is the goal of this research to present a method for automatically detecting objects of interest.