$$\dot{p}(t) = \dot{p}_c(t) + M_c\,\big(p(t) - p_c(t)\big). \tag{5}$$

The tensor (of degree 2) $M_c = \dot{A}(t)\,A(t)^{-1}$ can be characterized in terms of differential invariants [13].
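To make Eq. (5) concrete, here is a minimal sketch in Python (NumPy assumed; function and variable names are illustrative, not from the paper): the image velocity of any patch point is the centroid velocity plus a linear deformation of its offset from the centroid.

```python
import numpy as np

def motion_field(p, p_c, p_c_dot, M_c):
    """First-order motion field of Eq. (5): the velocity of patch point p
    is the centroid velocity plus M_c applied to the offset p - p_c."""
    return p_c_dot + M_c @ (p - p_c)

# Example: a pure-divergence tensor (isotropic expansion of the patch).
M_c = 0.1 * np.eye(2)        # stands in for M_c = dA/dt A(t)^{-1}
p_c = np.zeros(2)            # patch centroid at the image origin
p_c_dot = np.zeros(2)        # centroid at rest
print(motion_field(np.array([10.0, 5.0]), p_c, p_c_dot, M_c))
# -> [1.  0.5]: points recede radially from the centroid
```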

3. Hybrid visual servoing

In this section, a hybrid state space representation of camera–object interaction is first derived, and then a robust control law is synthesized.

3.1. State representation

According to the first-order spatial structure of the motion field of Eq. (5), the dynamic evolution of any image patch enclosing the object has six degrees of freedom, namely the centroid velocity coordinates $v_c$, accounting for rigid translations of the whole patch, and the entries of the $2 \times 2$ tensor $M_c$, related to changes in shape of the patch [13]. Let us choose as the state of the system the 6-vector

$$x = \big[\,p_{x_c},\; p_{y_c},\; \psi - \varphi,\; p,\; q,\; \zeta_c\,\big]^T, \tag{6}$$

which is a hybrid vector, since it includes both image-space 2D information and 3D orientation and distance parameters. Notice that the choice of $\psi - \varphi$ is due to the fact that this quantity is well defined also in the fronto-parallel configuration, which is a singularity of the orientation representation for the angles $\psi$, $\vartheta$, and $\varphi$. We demonstrate below that the state space representation of camera–object interaction can be written as

$$\dot{x} = B(x)\,{}^cV_{c\backslash o}, \tag{7}$$

where the notation ${}^aV_{b\backslash c}$ stands for the relative twist screw of frame $\langle b\rangle$ with respect to frame $\langle c\rangle$, expressed in frame $\langle a\rangle$. The system described by Eq. (7) is a driftless, input-affine nonlinear system, where ${}^cV_{c\backslash o} = {}^cV_{c\backslash a} - {}^cV_{o\backslash a}$ is the relative twist screw of camera and object: ${}^cV_{c\backslash a} = [{}^c v_{c\backslash a}^{T},\, {}^c\omega_{c\backslash a}^{T}]^{T}$ is the control input, ${}^cV_{o\backslash a} = [{}^c v_{o\backslash a}^{T},\, {}^c\omega_{o\backslash a}^{T}]^{T}$ is a disturbance input, and $\langle a\rangle$ is an arbitrary reference frame.
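As a side note, the relative-twist decomposition above reduces to vector subtraction once both twists are expressed in the camera frame $\langle c\rangle$. A minimal sketch, assuming twists are stacked as 6-vectors $[v;\,\omega]$ per the definitions above (variable names are illustrative):

```python
import numpy as np

# Twists as 6-vectors [v; w] (linear and angular velocity),
# both expressed in the camera frame <c>.
V_c_a = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.05])   # camera w.r.t. <a>: control input
V_o_a = np.array([0.0, 0.02, 0.0, 0.0, 0.0, 0.0])   # object w.r.t. <a>: disturbance input

# cV_{c\o} = cV_{c\a} - cV_{o\a}: relative twist of camera and object.
V_c_o = V_c_a - V_o_a
print(V_c_o)   # -> [ 0.1  -0.02  0.    0.    0.    0.05]
```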

Assuming that the object is almost centered in the visual field, and sufficiently far from the camera plane, it holds that $p_x q_x/f^2 \approx 0$ and $p_y q_y/f^2 \approx 0$ for any two imaged object points $p$ and $q$. The centroid dynamics is then given by the following first-order motion field expression [1]:

$$\dot{p}_c(t) = \begin{bmatrix} -f/\zeta_c & 0 & p_{x_c}/\zeta_c & 0 & -f & p_{y_c} \\ 0 & -f/\zeta_c & p_{y_c}/\zeta_c & f & 0 & -p_{x_c} \end{bmatrix} {}^cV_{c\backslash o}. \tag{8}$$
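For concreteness, here is a sketch of Eq. (8) in code, assuming NumPy (the function name and example values are illustrative): the centroid velocity follows by multiplying the $2 \times 6$ matrix by the relative twist.

```python
import numpy as np

def centroid_interaction_matrix(p_xc, p_yc, zeta_c, f):
    """2x6 matrix of Eq. (8), valid when the object is almost
    centered in the visual field and far from the camera plane."""
    return np.array([
        [-f / zeta_c, 0.0, p_xc / zeta_c, 0.0, -f,   p_yc],
        [0.0, -f / zeta_c, p_yc / zeta_c, f,   0.0, -p_xc],
    ])

# Centroid velocity induced by a relative twist cV_{c\o}.
L = centroid_interaction_matrix(p_xc=2.0, p_yc=-1.0, zeta_c=500.0, f=600.0)
V_c_o = np.array([10.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # pure x-translation
print(L @ V_c_o)   # -> [-12.  0.]: the centroid drifts along -x in the image
```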

Abstract: In this paper, the visual servoing problem is addressed by coupling nonlinear control theory with a convenient representation of the visual information used by the robot. The visual representation, based on a linear camera model, is extremely compact so as to comply with active vision requirements. The control law is proven to ensure global asymptotic stability in the Lyapunov sense, assuming an exact model and exact state measurements. It is also shown that, in the presence of bounded uncertainties, the closed-loop behavior is characterized by a global attractor. The well-known pose ambiguity arising from the use of a linear camera model is solved at the control level by choosing a hybrid visual state vector that includes both image-space (2D) information and 3D object parameters. A method for on-line visual state estimation that avoids camera calibration is also set forth. Simulations and real-time experiments validate the theoretical framework in terms of system convergence and control robustness. © 1999 Elsevier Science B.V. All rights reserved.
