Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


0 votes
376 views
in Technique by (71.8m points)

iphone - Detect touches only on non-transparent pixels of UIImageView, efficiently

How would you detect touches only on non-transparent pixels of a UIImageView, efficiently?

Consider an image like the one below, displayed with UIImageView. The goal is to make the gesture recognisers respond only when the touch happens in the non-transparent (black in this case) area of the image.

[Image: a black zero-shaped figure on a transparent background]

Ideas

  • Override hitTest:withEvent: or pointInside:withEvent:, although this approach might be terribly inefficient as these methods get called many times during a touch event.
  • Checking if a single pixel is transparent might create unexpected results, as fingers are bigger than one pixel. Checking a circular area of pixels around the hit point, or trying to find a transparent path towards an edge might work better.
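The circular-area idea above can be sketched as plain C over a raw 8-bit alpha buffer. This is only an illustration of the neighbourhood check, not UIKit API; `hit_near` and the row-major buffer layout are assumptions of mine:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Returns true if any pixel within `radius` of (x, y) has alpha above
 * `threshold`. `alpha` is a width*height buffer of 8-bit alpha values,
 * row-major. Hypothetical helper for illustration only. */
static bool hit_near(const uint8_t *alpha, int width, int height,
                     int x, int y, int radius, uint8_t threshold) {
    for (int dy = -radius; dy <= radius; dy++) {
        for (int dx = -radius; dx <= radius; dx++) {
            if (dx * dx + dy * dy > radius * radius)
                continue; /* outside the circular neighbourhood */
            int px = x + dx, py = y + dy;
            if (px < 0 || px >= width || py < 0 || py >= height)
                continue; /* off the image */
            if (alpha[(size_t)py * width + px] > threshold)
                return true; /* found a non-transparent pixel nearby */
        }
    }
    return false;
}
```

A radius of a few points approximates a fingertip better than a single-pixel test and tolerates slightly-off touches near thin strokes.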

Bonus

  • It'd be nice to differentiate between outer and inner transparent pixels of an image. In the example, the transparent pixels inside the zero should also be considered valid.
  • What happens if the image has a transform?
  • Can the image processing be hardware accelerated?


1 Reply

0 votes
by (71.8m points)

Here's my quick implementation (based on Retrieving a pixel alpha value for a UIImage):

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // Using code from https://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage

    // Render the single pixel under the touch into a 1x1 alpha-only bitmap.
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel,
                                                 1, 1, 8, 1, NULL,
                                                 (CGBitmapInfo)kCGImageAlphaOnly);
    UIGraphicsPushContext(context);
    // Offset the drawing so the touched point lands at (0, 0) of the bitmap.
    // (In a UIImageView subclass, self.image is the displayed image.)
    [self.image drawAtPoint:CGPointMake(-point.x, -point.y)];
    UIGraphicsPopContext();
    CGContextRelease(context);

    // Treat anything below 1% alpha as transparent.
    CGFloat alpha = pixel[0] / 255.0f;
    BOOL transparent = alpha < 0.01f;

    return !transparent;
}

This assumes that the image is in the same coordinate space as the point. If scaling goes on, you may have to convert the point before checking the pixel data.

It appears to work quickly: I measured roughly 0.1–0.4 ms per call. It doesn't handle the interior transparent space, and is probably not optimal.
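The interior case could be handled with a flood fill from the image border: transparent pixels reachable from the border are "outside", while enclosed ones (the hole of the zero) count as hits. A sketch in C over a raw alpha buffer; the approach and all names here are my own addition, not part of the answer above:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Mark idx as exterior and queue it, if transparent and unvisited. */
static void mark_push(int idx, const uint8_t *alpha, uint8_t threshold,
                      bool *exterior, int *stack, int *top) {
    if (!exterior[idx] && alpha[idx] <= threshold) {
        exterior[idx] = true;
        stack[(*top)++] = idx;
    }
}

/* Returns true if (x, y) is opaque, OR transparent but fully enclosed
 * by the shape (like the hole of a zero). Transparent pixels connected
 * to the image border are misses. Illustrative sketch only. */
static bool is_inside_shape(const uint8_t *alpha, int w, int h,
                            int x, int y, uint8_t threshold) {
    if (x < 0 || x >= w || y < 0 || y >= h) return false;
    if (alpha[y * w + x] > threshold) return true; /* opaque: direct hit */

    bool *exterior = calloc((size_t)w * h, sizeof(bool));
    int *stack = malloc((size_t)w * h * sizeof(int));
    int top = 0;

    /* Seed the flood fill with every border pixel. */
    for (int i = 0; i < w; i++) {
        mark_push(i, alpha, threshold, exterior, stack, &top);
        mark_push((h - 1) * w + i, alpha, threshold, exterior, stack, &top);
    }
    for (int j = 0; j < h; j++) {
        mark_push(j * w, alpha, threshold, exterior, stack, &top);
        mark_push(j * w + (w - 1), alpha, threshold, exterior, stack, &top);
    }
    /* Spread through 4-connected transparent neighbours. */
    while (top > 0) {
        int idx = stack[--top];
        int px = idx % w, py = idx / w;
        if (px > 0)     mark_push(idx - 1, alpha, threshold, exterior, stack, &top);
        if (px < w - 1) mark_push(idx + 1, alpha, threshold, exterior, stack, &top);
        if (py > 0)     mark_push(idx - w, alpha, threshold, exterior, stack, &top);
        if (py < h - 1) mark_push(idx + w, alpha, threshold, exterior, stack, &top);
    }

    bool inside = !exterior[y * w + x]; /* unreachable => enclosed */
    free(exterior);
    free(stack);
    return inside;
}
```

The fill is O(pixels), so it would be worth computing the exterior mask once per image rather than per touch.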

