Transform a frame to be as if it was taken from above using OpenCV

This article covers how to transform a frame so that it looks as if it was taken from directly above, using OpenCV. The question and answer below may serve as a useful reference for anyone facing the same problem.

Problem Description


I am working on a project for estimating a UAV (quadcopter) location using an optical-flow technique. I currently have code that uses the Farneback algorithm from OpenCV. The current code works fine when the camera is always pointing at the ground.

Now, I want to add support for the case when the camera is not pointing straight down - meaning that the quadcopter now has pitch / roll / yaw (Euler angles). The quadcopter's Euler angles are known, and I am searching for a method to compute and apply the transformation needed based on the current Euler angles, so that the resulting image is as if it was taken from above (see image below).

I found methods that calculate the transformation when given 2 sets (source and destination) of 4 corners, via the findHomography or getPerspectiveTransform functions from OpenCV. But I couldn't find any method that can do it knowing only the Euler angles (because I don't know the destination image corners).

So my question is: what method can I use, and how, in order to transform a frame to be as if it was taken from above, using only the Euler angles and the camera height above ground if necessary?

In order to demonstrate what I need:

The relevant part of my current code is below:

for(;;)
{
    Mat m, disp, warp;
    vector<Point2f> corners;
    // take out frame- still distorted
    cap >> origFrame;
    // undistort the frame using the calibration parameters
    cv::undistort(origFrame, undistortFrame, cameraMatrix, distCoeffs, noArray());
    // lower the process effort by transforming the picture to gray
    cvtColor(undistortFrame, gray, COLOR_BGR2GRAY);

    if( !prevgray.empty() )
    {
        // calculate flow
        calcOpticalFlowFarneback(prevgray, gray, uflow, 0.5, 3/*def 3 */, 10/* def 15*/, 3, 3, 1.2 /* def 1.2*/, 0);
        uflow.copyTo(flow);

        // get average
        calcAvgOpticalFlow(flow, 16, corners);

        // calculate range of view - 2*tan(fov/2)*distance
        rovX = 2*0.44523*distanceSonar*100;     // 2 * tan(48/2) * dist(cm)
        rovY = 2*0.32492*distanceSonar*100;     // 2 * tan(36/2) * dist(cm)

        // calculate final x, y location
        location[0] += (currLocation.x/WIDTH_RES)*rovX;
        location[1] += (currLocation.y/HEIGHT_RES)*rovY;
    }
    //break conditions
    if(waitKey(1)>=0)
        break;
    if(end_run)
        break;
    std::swap(prevgray, gray);
}  

UPDATE:

After successfully adding the rotation, I still need my image to be centered (and not to go outside of the frame window as shown below). I guess I need some kind of translation. I want the center of the source image to be at the center of the destination image. How can I add this as well?

The rotation function that works:

void rotateFrame(const Mat &input, Mat &output, Mat &A , double roll, double pitch, double yaw){
    Mat Rx = (Mat_<double>(3, 3) <<
              1,          0,           0,
              0, cos(roll), -sin(roll),
              0, sin(roll),  cos(roll));
    Mat Ry = (Mat_<double>(3, 3) <<
              cos(pitch), 0, sin(pitch),
              0, 1,          0,
              -sin(pitch), 0,  cos(pitch));
    Mat Rz = (Mat_<double>(3, 3) <<
              cos(yaw), -sin(yaw), 0,
              sin(yaw),  cos(yaw), 0,
              0,          0,           1);

    Mat R = Rx*Ry*Rz;
    Mat trans = A*R*A.inv();

    warpPerspective(input, output, trans, input.size());
}
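The composition trans = A*R*A.inv() used here is the standard homography for a camera that purely rotates about its optical center (sometimes called the infinite homography): a pixel x maps as

```latex
x' \sim A\,R\,A^{-1}\,x
```

Because a pure rotation induces a depth-independent mapping between the two views, Euler angles alone are enough - no scene structure is needed.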

When I run it with rotateFrame(origFrame, processedFrame, cameraMatrix, 0, 0, 0); I get the image as expected:

But when I run it with a 20° roll, rotateFrame(origFrame, processedFrame, cameraMatrix, 20*(M_PI/180), 0, 0);, the image goes outside of the frame window:

Solution

I came to the conclusion that I had to use a 4x4 homography matrix in order to get what I wanted. In order to find the right homography matrix we need:

  1. 3D Rotation matrix R.
  2. Camera calibration intrinsic matrix A2 and its inverse A1.
  3. Translation matrix T.

We can compose the 3D rotation matrix R by multiplying the rotation matrices around axes X,Y,Z:

Mat R = RZ * RY * RX  

In order to apply the transformation to the image and keep it centered, we need to add a translation given by a 4x4 matrix, where dx=0; dy=0; dz=1:

Mat T = (Mat_<double>(4, 4) <<
         1, 0, 0, dx,
         0, 1, 0, dy,
         0, 0, 1, dz,
         0, 0, 0, 1);

Given all these matrices we can compose our homography matrix H:

Mat H = A2 * (T * (R * A1))
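As a dimension check on this composition: A2 is 3x4, T and R are 4x4, and A1 is 4x3, so the product collapses to an ordinary 3x3 homography:

```latex
H_{3\times 3} \;=\; \underbrace{A_2}_{3\times 4}\;\underbrace{T}_{4\times 4}\;\underbrace{R}_{4\times 4}\;\underbrace{A_1}_{4\times 3}
```

A1 lifts a pixel to a normalized ray, R rotates it, T shifts it back in front of the camera (dz=1), and A2 projects it back to pixel coordinates.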

With this homography matrix we can then use the warpPerspective function from OpenCV to apply the transformation.

warpPerspective(input, output, H, input.size(), INTER_LANCZOS4);

To conclude, and for completeness of this solution, here is the full code:

void rotateImage(const Mat &input, UMat &output, double roll, double pitch, double yaw,
                 double dx, double dy, double dz, double f, double cx, double cy)
{
    // Camera Calibration Intrinsics Matrix
    Mat A2 = (Mat_<double>(3,4) <<
              f, 0, cx, 0,
              0, f, cy, 0,
              0, 0, 1,  0);
    // Inverted Camera Calibration Intrinsics Matrix
    Mat A1 = (Mat_<double>(4,3) <<
              1/f, 0,   -cx/f,
              0,   1/f, -cy/f,
              0,   0,   0,
              0,   0,   1);
    // Rotation matrices around the X, Y, and Z axis
    Mat RX = (Mat_<double>(4, 4) <<
              1, 0,         0,          0,
              0, cos(roll), -sin(roll), 0,
              0, sin(roll), cos(roll),  0,
              0, 0,         0,          1);
    Mat RY = (Mat_<double>(4, 4) <<
              cos(pitch),  0, sin(pitch), 0,
              0,           1, 0,          0,
              -sin(pitch), 0, cos(pitch), 0,
              0,           0, 0,          1);
    Mat RZ = (Mat_<double>(4, 4) <<
              cos(yaw), -sin(yaw), 0, 0,
              sin(yaw),  cos(yaw), 0, 0,
              0,          0,       1, 0,
              0,          0,       0, 1);
    // Translation matrix
    Mat T = (Mat_<double>(4, 4) <<
             1, 0, 0, dx,
             0, 1, 0, dy,
             0, 0, 1, dz,
             0, 0, 0, 1);
    // Compose rotation matrix with (RX, RY, RZ)
    Mat R = RZ * RY * RX;
    // Final transformation matrix
    Mat H = A2 * (T * (R * A1));
    // Apply matrix transformation
    warpPerspective(input, output, H, input.size(), INTER_LANCZOS4);
}

Result:
