Question
How would you detect touches only on non-transparent pixels of a UIImageView, efficiently?
Consider an image like the one below, displayed with UIImageView. The goal is to make the gesture recognisers respond only when the touch happens in the non-transparent (black in this case) area of the image.
- Overriding hitTest:withEvent: or pointInside:withEvent: seems like the way to go, although this approach could be terribly inefficient, since these methods get called many times during a touch event.
- Checking whether a single pixel is transparent might give unexpected results, since fingers are bigger than one pixel. Checking a circular area of pixels around the hit point, or trying to find a transparent path towards the edge, might work better (see the sketch after this list).
- Distinguishing between the outer and inner transparent pixels of the image would be nice. In the example, the transparent pixels inside the zero should also be considered valid.
- What happens if the image has a transform applied?
- Can the image processing be hardware accelerated?
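As a rough illustration of the circular-area idea above, here is a minimal Objective-C sketch (not part of the original question): it draws a small square patch of the image around the hit point into an alpha-only bitmap and treats the touch as valid if any pixel in the patch is opaque enough. The method name, the self.image property, and the patch radius are assumptions made for this example.

// Sketch only: assumes this lives in a UIImageView subclass (UIKit imported)
// whose image shares the view's coordinate space.
- (BOOL)isOpaqueNearPoint:(CGPoint)point radius:(NSUInteger)radius {
    size_t side = radius * 2 + 1;
    // One alpha byte per pixel, zero-initialised.
    NSMutableData *alphaData = [NSMutableData dataWithLength:side * side];
    CGContextRef context = CGBitmapContextCreate(alphaData.mutableBytes,
                                                 side, side, 8, side, NULL,
                                                 kCGImageAlphaOnly);
    if (context == NULL) {
        return NO;
    }
    UIGraphicsPushContext(context);
    // Offset the image so the drawn patch is centred on the touch point.
    [self.image drawAtPoint:CGPointMake((CGFloat)radius - point.x,
                                        (CGFloat)radius - point.y)];
    UIGraphicsPopContext();
    CGContextRelease(context);

    const unsigned char *alpha = alphaData.bytes;
    for (size_t i = 0; i < side * side; i++) {
        if (alpha[i] > 2) {   // about a 1% alpha threshold
            return YES;
        }
    }
    return NO;
}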
Recommended answer
Here's my quick implementation (based on Retrieving a pixel alpha value for a UIImage):
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // Using code from https://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
    // Draw the one image pixel under the touch point into a 1x1, 8-bit,
    // alpha-only bitmap, then read the alpha value back out of the buffer.
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel,
                                                 1, 1, 8, 1, NULL,
                                                 kCGImageAlphaOnly);
    UIGraphicsPushContext(context);
    [image drawAtPoint:CGPointMake(-point.x, -point.y)];
    UIGraphicsPopContext();
    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0f;
    BOOL transparent = alpha < 0.01f;
    return !transparent;
}
This assumes that the image is in the same coordinate space as the point. If scaling goes on, you may have to convert the point before checking the pixel data.
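As a hedged example of that conversion (my addition, not part of the original answer): with a plain scaling content mode such as UIViewContentModeScaleToFill, the touch point can be rescaled from view coordinates to image coordinates before sampling. Aspect-fit/fill letterboxing and view transforms would need additional handling.

- (CGPoint)imagePointForViewPoint:(CGPoint)point {
    // Assumes UIViewContentModeScaleToFill and no extra transforms.
    CGSize imageSize = self.image.size;
    CGSize viewSize = self.bounds.size;
    if (viewSize.width <= 0.0 || viewSize.height <= 0.0) {
        return point;
    }
    return CGPointMake(point.x * imageSize.width / viewSize.width,
                       point.y * imageSize.height / viewSize.height);
}

The converted point would then be used in place of point in the drawAtPoint: call above.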
Appears to work pretty quickly for me. I was measuring approximately 0.1-0.4 ms per call to this method. It doesn't handle the interior space, and is probably not optimal.
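For completeness, a hypothetical usage sketch (class and selector names are illustrative, not from the answer): because pointInside:withEvent: gates hit-testing, hosting the override in a UIImageView subclass means any gesture recognizer attached to that view only fires on opaque pixels.

// AlphaHitImageView is a hypothetical UIImageView subclass that contains the
// pointInside:withEvent: override shown above.
AlphaHitImageView *shapeView =
    [[AlphaHitImageView alloc] initWithImage:[UIImage imageNamed:@"shape"]];
shapeView.userInteractionEnabled = YES;   // UIImageView disables this by default
UITapGestureRecognizer *tap =
    [[UITapGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(shapeTapped:)];
[shapeView addGestureRecognizer:tap];
[self.view addSubview:shapeView];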