I currently have 4 pairs of matching points: the 2D image positions of the features and their corresponding 3D positions. I have also calibrated the camera and obtained the intrinsic and distortion parameters. I then call cv2.solvePnP() in Python with the AP3P method, since it needs the fewest feature points. After obtaining R and T, I compute `-inv(R) * t` to get the camera position in the world coordinate system. My problem is that in the result the x-axis is off by tens of centimetres, while the y-axis is off by only a few centimetres. Why?
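For reference, since the rotation matrix returned by cv2.Rodrigues is orthogonal, -inv(R) * t and -R^T * t give the same camera centre; below is a minimal sketch of that equivalence (the rvec/tvec values are made up for illustration, not my real data):

import cv2
import numpy as np

rvec = np.array([[0.1], [-0.2], [0.05]])   # illustrative rotation vector
tvec = np.array([[3.0], [1.5], [120.0]])   # illustrative translation vector

R, _ = cv2.Rodrigues(rvec)
C_inv = -np.linalg.inv(R) @ tvec   # camera centre via matrix inverse
C_t = -R.T @ tvec                  # same result via transpose (R is orthogonal)
assert np.allclose(C_inv, C_t)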
Below is my test code; any help would be appreciated.
import cv2
import numpy as np
# Camera intrinsics (focal lengths and principal point, in pixels)
IntrinsicMatrix = np.array([[5816.91523, 0., 2634.55839],
                            [0., 5787.34531, 3687.93429],
                            [0., 0., 1.]])
# Distortion coefficients (k1, k2, p1, p2, k3), dimensionless
distCoeffs = np.array([0.0597, -0.0416, 0.005, -0.0058, 0.])
# 3D object points, unit: cm (corners of a 14 cm x 14 cm square centred at the origin)
Points3D = np.zeros((4, 1, 3), dtype=np.float64)
Points3D[0, 0, :] = -7.0, 7.0, 0.0
Points3D[1, 0, :] = 7.0, 7.0, 0.0
Points3D[2, 0, :] = 7.0, -7.0, 0.0
Points3D[3, 0, :] = -7.0, -7.0, 0.0
# Corresponding 2D image points, unit: pixels
Points2D = np.zeros((4, 1, 2), dtype=np.float64)
Points2D[0, 0, :] = 2365., 3668.
Points2D[1, 0, :] = 2766., 3671.
Points2D[2, 0, :] = 2771., 3261.
Points2D[3, 0, :] = 2364., 3258.
found, rvecs, tvecs = cv2.solvePnP(objectPoints=Points3D,
                                   imagePoints=Points2D,
                                   cameraMatrix=IntrinsicMatrix,
                                   distCoeffs=distCoeffs,
                                   flags=cv2.SOLVEPNP_AP3P)
if found:
    R, _ = cv2.Rodrigues(rvecs)
    if np.linalg.matrix_rank(R) == 3:
        # camera position in the world frame: C = -R^-1 * t
        position_result = -np.linalg.inv(R).dot(tvecs)
        print('position :\n', position_result)
My result is as follows:
# unit cm
position :
[[ -39.0898995 ]
[ 2.77867074]
[-192.27239882]]
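For reference, the recovered pose can be sanity-checked by reprojecting the 3D corners with cv2.projectPoints and comparing against the measured pixel coordinates; a minimal sketch, reusing the variables from the test code above:

projected, _ = cv2.projectPoints(Points3D, rvecs, tvecs,
                                 IntrinsicMatrix, distCoeffs)
# per-corner pixel residuals between reprojected and measured points
residuals = projected.reshape(-1, 2) - Points2D.reshape(-1, 2)
print('reprojection residuals (pixels):\n', residuals)
print('RMS reprojection error (pixels):',
      np.sqrt((residuals ** 2).sum(axis=1).mean()))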
I shot the picture with the camera facing it straight on and held level (the picture appears at the centre of the frame). I take the centre of the picture as the origin of the world coordinate system (its axes parallel to the camera's XOY plane), and the order of the four 3D points follows the description in the cv2.solvePnP documentation:
- point 0: [-squareLength / 2,  squareLength / 2, 0]
- point 1: [ squareLength / 2,  squareLength / 2, 0]
- point 2: [ squareLength / 2, -squareLength / 2, 0]
- point 3: [-squareLength / 2, -squareLength / 2, 0]
In theory the result should be roughly [0, 0, 0], so why is the error on the x-axis so large?
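Incidentally, since these are exactly the four corners of a planar square in the documented order, the result could also be compared against the dedicated square solver, assuming an OpenCV build that provides cv2.SOLVEPNP_IPPE_SQUARE; a minimal sketch, again reusing the variables from the test code:

found2, rvec2, tvec2 = cv2.solvePnP(objectPoints=Points3D,
                                    imagePoints=Points2D,
                                    cameraMatrix=IntrinsicMatrix,
                                    distCoeffs=distCoeffs,
                                    flags=cv2.SOLVEPNP_IPPE_SQUARE)
if found2:
    R2, _ = cv2.Rodrigues(rvec2)
    # camera position in the world frame from the alternative solver
    print('position (IPPE_SQUARE):\n', -R2.T @ tvec2)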