Problem Description
I ran a simple test of OpenCV camera pose estimation. Given a photo and the same photo scaled up (zoomed in), I use the pair to detect features, compute the essential matrix, and recover the camera poses.
Mat inliers;

// Estimate the essential matrix from the matched feature points.
Mat E = findEssentialMat(queryPoints, trainPoints, cameraMatrix1, cameraMatrix2,
                         FM_RANSAC, 0.9, MAX_PIXEL_OFFSET, inliers);

// Recover the relative rotation R and translation T between the two views.
size_t inliersCount =
    recoverPose(E, queryGoodPoints, trainGoodPoints, cameraMatrix1, cameraMatrix2,
                R, T, inliers);
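For reference, here is a minimal sketch of the same pipeline written against the single-camera-matrix overloads that OpenCV documents for findEssentialMat and recoverPose. The names pts1, pts2 and K are placeholders, cv::RANSAC is the method flag documented for essential-matrix estimation, and in the zoom test the two views actually have different intrinsics, so sharing one K is only an approximation:

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// pts1/pts2: matched feature locations in the first and second image.
// K: 3x3 camera intrinsic matrix (placeholder; ideally calibrated per view).
void relativePoseSketch(const std::vector<cv::Point2f>& pts1,
                        const std::vector<cv::Point2f>& pts2,
                        const cv::Mat& K,
                        cv::Mat& R, cv::Mat& T)
{
    cv::Mat mask;

    // Essential matrix from point correspondences; 0.999 is the RANSAC
    // confidence, 1.0 the inlier threshold in pixels.
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, mask);

    // Picks the decomposition of E that passes the cheirality check and
    // returns the inlier count; R, T map points from the first camera frame
    // to the second: x2 = R*x1 + T.
    int inlierCount = cv::recoverPose(E, pts1, pts2, K, R, T, mask);
    (void)inlierCount;
}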
When I specify the original image as the first one and the zoomed image as the second one, I get a translation T close to [0; 0; -1]. However, the second (zoomed) camera is virtually closer to the object than the first one, so if the Z-axis goes from the image plane into the scene, the second camera should have a positive offset along Z. For the result I get, the Z-axis would have to point from the image plane towards the camera, which together with the other axes (X right, Y down) forms a left-handed coordinate system. Is that true? Why does this result differ from the coordinate system illustrated here?
Recommended Answer
According to the OpenCV documentation, the algorithm in recoverPose is based on the paper "Nistér, D. An efficient solution to the five-point relative pose problem, CVPR 2003." From the equations in Section 2 of that paper, we know it uses the basic triangle relationship (see the figure here):
x2 = R*x1 + t
Here x1 is a point expressed in the cam1 frame and x2 is the same point in the cam2 frame. Substituting x1 = 0 (the optical center of camera 1) gives x2 = t, so the translation t is the vector from cam2 to cam1, expressed in the cam2 frame. This explains why you get t close to [0; 0; -1]: the zoomed camera moved toward the scene, so camera 1 sits behind it along the viewing direction, at negative Z. Nothing is left-handed; t simply points from the second camera to the first rather than the other way around.
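A quick way to sanity-check this (a minimal sketch, assuming R and T are the outputs of recoverPose above and that R came out close to identity) is to convert the result into the second camera's position in the first camera's frame: inverting x2 = R*x1 + t gives the cam2 center as -R^T * t, which for T close to [0; 0; -1] is close to [0; 0; +1], i.e. the zoomed camera lies in front of the first one along +Z, as expected.

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // Assumed example values: on the zoom test, recoverPose should ideally
    // return R close to identity and a unit-norm T close to [0; 0; -1]
    // (the scale is unobservable from two views).
    cv::Mat R = cv::Mat::eye(3, 3, CV_64F);
    cv::Mat T = (cv::Mat_<double>(3, 1) << 0.0, 0.0, -1.0);

    // x2 = R*x1 + t, so camera 2's center (x2 = 0) in the cam1 frame is -R^T * t.
    cv::Mat C2 = -R.t() * T;

    // Prints [0; 0; 1]: the second (zoomed) camera is at positive Z in the
    // first camera's frame, i.e. closer to the scene.
    std::cout << "cam2 center in cam1 frame:\n" << C2 << std::endl;
    return 0;
}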