
[40] |
Title: Robot hand-eye coordination using neural networks |
Written by: Smagt P van der, Groen F, Kröse B |
in: Oct. 1993 |
Number: CS-93-10 |
Institution: Dept. of Computer Systems, University of Amsterdam |
Abstract: This paper focuses on static hand-eye coordination. The key issue addressed is the construction of a controller that eliminates the need for calibration; instead, the system is self-learning and adapts to changes in its environment. Only positional information is used, hence the qualifier 'static'. Three coordinate domains describe the system: the Cartesian world domain, the vision domain, and the robot domain. The task to be solved is the following: a robot manipulator must be positioned directly above a pre-specified target, such that the target can be grasped. The target is specified in terms of visual parameters. Only the (x, y, z) position of the end-effector relative to the target is taken into account; this suffices for many pick-and-place problems encountered in industry. (In some cases the rotation of the hand is also important, but this rotation can be executed separately from the 3D positioning problem.) The remaining problem thus has 3 degrees of freedom (DoF).
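
The abstract describes a calibration-free, self-learning mapping from visual target parameters to a 3-DoF Cartesian position. A minimal illustrative sketch of that idea is below: a small two-layer network learns the inverse of a toy camera model from examples, so the end-effector position is recovered from vision alone. The camera model, network size, and training setup here are illustrative assumptions, not the paper's actual architecture.

```python
# Sketch of a self-learning visuomotor mapping (illustrative only):
# a two-layer network learns vision -> (x, y, z), replacing explicit
# camera/robot calibration with learning from examples.
import numpy as np

rng = np.random.default_rng(0)

def camera(p):
    # Toy "vision domain": perspective-like projection of a 3-D point
    # p = (x, y, z) to (u, v, apparent size). Stands in for a real camera.
    x, y, z = p
    d = z + 3.0                      # distance from a camera placed at z = -3
    return np.array([x / d, y / d, 1.0 / d])

# Training set: random reachable positions and their visual appearance.
P = rng.uniform(-1.0, 1.0, size=(500, 3))
V = np.array([camera(p) for p in P])

def forward(v):
    h = np.tanh(v @ W1 + b1)         # hidden layer
    return h, h @ W2 + b2            # predicted (x, y, z)

# Two-layer network, trained by full-batch gradient descent on MSE.
H = 32
W1 = rng.normal(0.0, 0.5, (3, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 3)); b2 = np.zeros(3)
lr = 0.05

initial_loss = np.mean((forward(V)[1] - P) ** 2)
for _ in range(3000):
    h, out = forward(V)
    err = out - P
    g_out = 2.0 * err / len(P)       # d(MSE)/d(out)
    g_W2 = h.T @ g_out; g_b2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)
    g_W1 = V.T @ g_h; g_b1 = g_h.sum(0)
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1
final_loss = np.mean((forward(V)[1] - P) ** 2)

# After training, position the end-effector from visual input alone.
test_p = np.array([0.3, -0.2, 0.5])
pred = forward(camera(test_p))[0] @ W2 + b2
```

In the same spirit as the paper's premise, nothing in the loop references the camera parameters directly: the mapping is recovered purely from (vision, position) examples, so a change in the environment only requires retraining.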