Since the reference points used for self-calibration are specified in the controller's coordinate space, linear errors in the kinematic model are effectively bypassed. The system must still cope with any nonlinearities in control, as well as those introduced by strong perspective effects.
We take an iterative approach based on relative positions. The manipulator moves in discrete steps, each motion proportional to the difference between the gripper's perceived position and orientation and those of the target plane. This is equivalent to an integrating control architecture, in which the error term is summed at each time step (figure 3).
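Written out, one step of this scheme can be expressed as follows (the notation is introduced here for illustration only and is not taken from the text):

\[
\mathbf{p}_{k+1} = \mathbf{p}_k + \lambda \left( \mathbf{p}^{*} - \hat{\mathbf{p}}_k \right), \qquad 0 < \lambda < 1,
\]

where \(\mathbf{p}_k\) is the commanded gripper pose at step \(k\), \(\hat{\mathbf{p}}_k\) its position and orientation as perceived by the vision system, \(\mathbf{p}^{*}\) the perceived pose of the target plane, and \(\lambda\) the gain. Summing the proportional corrections over successive steps gives the integrating behaviour referred to above.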
The gain is set below unity to prevent instability, even when the vision system is miscalibrated. The process repeats until the perceived coordinates of the gripper coincide with those of the target. Alternatively, to implement a particular grasping strategy, an offset can be introduced into the control loop to specify the final pose of the gripper relative to the target plane. Irrespective of the accuracy of the stereo and hand-eye calibration, visual feedback ensures that the target and gripper are aligned.
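A minimal sketch of such a loop is given below. The names perceive_gripper, perceive_target, command_step, gain, offset, and tol are placeholders assumed for this illustration and do not correspond to any interface described in the text.

```python
import numpy as np

def servo_to_target(perceive_gripper, perceive_target, command_step,
                    gain=0.5, offset=None, tol=1e-3, max_steps=100):
    """Iterative visual servoing sketch: each commanded motion is proportional
    to the perceived gripper-target error, giving integrating control.

    perceive_gripper(), perceive_target(): return pose vectors (position and
    orientation) in the vision system's coordinates (placeholder callables).
    command_step(delta): sends an incremental motion to the manipulator controller.
    offset: optional final gripper pose relative to the target plane, used to
    implement a particular grasping strategy.
    """
    offset = np.zeros(6) if offset is None else np.asarray(offset)
    for _ in range(max_steps):
        # Error between the (offset) target pose and the perceived gripper pose.
        error = (perceive_target() + offset) - perceive_gripper()
        if np.linalg.norm(error) < tol:      # perceived poses coincide: done
            return True
        command_step(gain * error)           # gain < 1 keeps the loop stable
    return False
```

Because each correction is computed from the currently perceived error, the loop converges even if the stereo or hand-eye calibration is only approximate, provided the gain remains below unity.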