Blending does not remove seams in OpenCV
This post presents the question and a recommended answer for getting rid of the visible seam when blending two images in OpenCV.

Problem Description

I am trying to blend two images so that the seams between them disappear.

First image:

Second image:

With blending applied:

I used alpha blending; no seam was removed. In fact, the image is still the same, just darker.

Here is the part where I do the blending:

Mat warped1;
warpPerspective(left, warped1, perspectiveTransform, front.size()); // warping may be used for correcting image distortion
imshow("combined1", warped1/2 + front/2);

vector<Mat> imgs;
imgs.push_back(warped1/2);
imgs.push_back(front/2);
double alpha = 0.5;
int min_x = (imgs[0].cols - imgs[1].cols)/2;
int min_y = (imgs[0].rows - imgs[1].rows)/2;
int width, height;
if(min_x < 0) {
    min_x = 0;
    width = imgs.at(0).cols;
}
else
    width = imgs.at(1).cols;
if(min_y < 0) {
    min_y = 0;
    height = imgs.at(0).rows - 1;
}
else
    height = imgs.at(1).rows - 1;
Rect roi = cv::Rect(min_x, min_y, imgs[1].cols, imgs[1].rows);
Mat out_image = imgs[0].clone();
Mat A_roi = imgs[0](roi);
Mat out_image_roi = out_image(roi);
addWeighted(A_roi, alpha, imgs[1], 1-alpha, 0.0, out_image_roi);
imshow("foo", imgs[0](roi));

Recommended Answer

I chose to define the alpha value depending on the distance to the "object center": the further from the object center, the smaller the alpha value. The "object" is defined by a mask.
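To see the weighting concretely, here is a minimal per-pixel sketch (my own illustration, assuming dist1 and dist2 are the two masks' distance-transform values scaled to [0,1]); the actual function below does the same thing with whole-matrix operations:

// dist1, dist2: distance-transform values of the two masks at this pixel, scaled to [0,1]
cv::Vec3f blendPixel(const cv::Vec3f& p1, const cv::Vec3f& p2, float dist1, float dist2)
{
    float sum = dist1 + dist2;
    if (sum == 0.0f)
        return (p1 + p2) * 0.5f; // neither weight set: plain average
    // normalized linear combination: the pixel closer to its object's center dominates
    return (p1 * dist1 + p2 * dist2) * (1.0f / sum);
}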

I've aligned the images with GIMP (similar to your warpPerspective). They need to be in the same coordinate system, and both images must have the same size.
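As a sketch (not from the original answer), the same alignment could be done directly with warpPerspective using the question's variables: warp the left image with the question's homography, and warp an all-white image with the same homography so that the valid warped area becomes its mask.

// 'left', 'front' and 'perspectiveTransform' are the variables from the question
cv::Mat warped1, mask1;
cv::warpPerspective(left, warped1, perspectiveTransform, front.size());
cv::warpPerspective(cv::Mat(left.size(), CV_8UC1, cv::Scalar(255)),
                    mask1, perspectiveTransform, front.size()); // valid area of the warp
cv::Mat mask2(front.size(), CV_8UC1, cv::Scalar(255)); // 'front' fills its whole frame
// warped1/mask1 and front/mask2 can then be passed to computeAlphaBlending below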

My input images look like this:

#include <opencv2/opencv.hpp>

// defined further down in this post
cv::Mat computeAlphaBlending(cv::Mat image1, cv::Mat mask1, cv::Mat image2, cv::Mat mask2);
cv::Mat border(cv::Mat mask);

int main()
{
    // color input images (already aligned, same size)
    cv::Mat i1 = cv::imread("blending/i1_2.png");
    cv::Mat i2 = cv::imread("blending/i2_2.png");

    // grayscale copies, only used to build the masks
    cv::Mat m1 = cv::imread("blending/i1_2.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat m2 = cv::imread("blending/i2_2.png", CV_LOAD_IMAGE_GRAYSCALE);

    // works too, for a background near white:
    // m1 = m1 < 220;
    // m2 = m2 < 220;

    // edited: using Otsu thresholding. If that does not work for your images,
    // you have to create the masks with a better technique.
    cv::threshold(m1, m1, 255, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
    cv::threshold(m2, m2, 255, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

    cv::Mat out = computeAlphaBlending(i1, m1, i2, m2);
    cv::imshow("blended", out); // show the final result

    cv::waitKey(-1);
    return 0;
}

And the blending function (it needs some comments and optimizations, I guess; I'll add them later):

cv::Mat computeAlphaBlending(cv::Mat image1, cv::Mat mask1, cv::Mat image2, cv::Mat mask2)
{
// edited: find regions where no mask is set
// compute the region where no mask is set at all, to use those color values unblended
cv::Mat bothMasks = mask1 | mask2;
cv::imshow("maskOR",bothMasks);
cv::Mat noMask = 255-bothMasks;
// ------------------------------------------

// create an image with equal alpha values:
cv::Mat rawAlpha = cv::Mat(noMask.rows, noMask.cols, CV_32FC1);
rawAlpha = 1.0f;

// invert the border, so that border values are 0 ... this is needed for the distance transform
cv::Mat border1 = 255-border(mask1);
cv::Mat border2 = 255-border(mask2);

// show the immediate results for debugging and verification, should be an image where the border of the face is black, rest is white
cv::imshow("b1", border1);
cv::imshow("b2", border2);

// compute the distance to the object center
cv::Mat dist1;
cv::distanceTransform(border1,dist1,CV_DIST_L2, 3);

// scale distances to values between 0 and 1
double min, max; cv::Point minLoc, maxLoc;

// find min/max vals
cv::minMaxLoc(dist1,&min,&max, &minLoc, &maxLoc, mask1&(dist1>0));  // edited: find min values > 0
dist1 = dist1 * 1.0/max; // scale to values between 0 and 1, since the min val should always be 0

// same for the 2nd image
cv::Mat dist2;
cv::distanceTransform(border2,dist2,CV_DIST_L2, 3);
cv::minMaxLoc(dist2,&min,&max, &minLoc, &maxLoc, mask2&(dist2>0));  // edited: find min values > 0
dist2 = dist2*1.0/max;  // values between 0 and 1


//TODO: now, the exact border has value 0 too... to fix that, enter very small values wherever border pixel is set...

// mask the distance values to reduce information to masked regions
cv::Mat dist1Masked;
rawAlpha.copyTo(dist1Masked,noMask);    // edited: where no mask is set, blend with equal values
dist1.copyTo(dist1Masked,mask1);
rawAlpha.copyTo(dist1Masked,mask1&(255-mask2)); //edited

cv::Mat dist2Masked;
rawAlpha.copyTo(dist2Masked,noMask);    // edited: where no mask is set, blend with equal values
dist2.copyTo(dist2Masked,mask2);
rawAlpha.copyTo(dist2Masked,mask2&(255-mask1)); //edited

cv::imshow("d1", dist1Masked);
cv::imshow("d2", dist2Masked);

// dist1Masked and dist2Masked now hold the "quality" of the pixel of the image, so the higher the value, the more of that pixels information should be kept after blending
// problem: these quality weights don't build a linear combination yet

// you want a linear combination of both image's pixel values, so at the end you have to divide by the sum of both weights
cv::Mat blendMaskSum = dist1Masked+dist2Masked;
//cv::imshow("blendmask==0",(blendMaskSum==0));

// you have to convert the images to float to multiply with the weight
cv::Mat im1Float;
image1.convertTo(im1Float,dist1Masked.type());
cv::imshow("im1Float", im1Float/255.0);

// TODO: you could replace those splitting and merging if you just duplicate the channel of dist1Masked and dist2Masked
// the splitting is just used here to use .mul later... which needs same number of channels
std::vector<cv::Mat> channels1;
cv::split(im1Float,channels1);
// multiply pixel value with the quality weights for image 1
cv::Mat im1AlphaB = dist1Masked.mul(channels1[0]);
cv::Mat im1AlphaG = dist1Masked.mul(channels1[1]);
cv::Mat im1AlphaR = dist1Masked.mul(channels1[2]);

std::vector<cv::Mat> alpha1;
alpha1.push_back(im1AlphaB);
alpha1.push_back(im1AlphaG);
alpha1.push_back(im1AlphaR);
cv::Mat im1Alpha;
cv::merge(alpha1,im1Alpha);
cv::imshow("alpha1", im1Alpha/255.0);

cv::Mat im2Float;
image2.convertTo(im2Float,dist2Masked.type());

std::vector<cv::Mat> channels2;
cv::split(im2Float,channels2);
// multiply pixel value with the quality weights for image 2
cv::Mat im2AlphaB = dist2Masked.mul(channels2[0]);
cv::Mat im2AlphaG = dist2Masked.mul(channels2[1]);
cv::Mat im2AlphaR = dist2Masked.mul(channels2[2]);

std::vector<cv::Mat> alpha2;
alpha2.push_back(im2AlphaB);
alpha2.push_back(im2AlphaG);
alpha2.push_back(im2AlphaR);
cv::Mat im2Alpha;
cv::merge(alpha2,im2Alpha);
cv::imshow("alpha2", im2Alpha/255.0);

// now sum both weighted images and divide by the sum of the weights (linear combination)
cv::Mat imBlendedB = (im1AlphaB + im2AlphaB)/blendMaskSum;
cv::Mat imBlendedG = (im1AlphaG + im2AlphaG)/blendMaskSum;
cv::Mat imBlendedR = (im1AlphaR + im2AlphaR)/blendMaskSum;
std::vector<cv::Mat> channelsBlended;
channelsBlended.push_back(imBlendedB);
channelsBlended.push_back(imBlendedG);
channelsBlended.push_back(imBlendedR);

// merge back to 3 channel image
cv::Mat merged;
cv::merge(channelsBlended,merged);

// convert to 8UC3
cv::Mat merged8U;
merged.convertTo(merged8U,CV_8UC3);

return merged8U;
}

And the helper function:

cv::Mat border(cv::Mat mask)
{
cv::Mat gx;
cv::Mat gy;

cv::Sobel(mask,gx,CV_32F,1,0,3);
cv::Sobel(mask,gy,CV_32F,0,1,3);

cv::Mat border;
cv::magnitude(gx,gy,border);

return border > 100;
}

Result:

edit: forgot a function ;) edit: now keeping the original background
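(The original background is kept by the noMask handling in computeAlphaBlending: rawAlpha.copyTo(dist1Masked, noMask) and rawAlpha.copyTo(dist2Masked, noMask) give both images a weight of 1 wherever neither mask is set, so those regions end up as a plain average of the two backgrounds.)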
