
iphone - How to get a rotated, zoomed and panned image from a UIImageView at its full resolution?

I have a UIImageView that can be rotated, panned and scaled with gesture recognisers. As a result it is cropped by its enclosing view. Everything works fine, but I don't know how to save the visible part of the picture at its full resolution. I don't want a screen grab.

I know I can get the UIImage straight from the visible content of the UIImageView, but it is limited to the resolution of the screen.

I assume that I have to apply the same transformations to the UIImage and crop it. Is there an easy way to do that?

Update: For example, I have a UIImageView with a high-resolution image, let's say an 8 MP iPhone 4S camera photo, which is transformed with gestures, so it becomes scaled, rotated and moved around in its enclosing view. Obviously some cropping is going on, so only part of the image is displayed. There is a huge difference between the displayed screen resolution and the underlying image resolution; I need an image at the image's resolution. The UIImageView is in UIViewContentModeScaleAspectFit, but a solution with UIViewContentModeScaleAspectFill is also fine.

This is my code:

- (void)rotatePiece:(UIRotationGestureRecognizer *)gestureRecognizer {

    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        // Apply the incremental rotation, then reset it so the next callback is relative again.
        [gestureRecognizer view].transform = CGAffineTransformRotate([[gestureRecognizer view] transform], [gestureRecognizer rotation]);
        [gestureRecognizer setRotation:0];
    }
}

- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer {

    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        // Apply the incremental scale, then reset it so the next callback is relative again.
        [gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        [gestureRecognizer setScale:1];
    }
}

- (void)panGestureMoveAround:(UIPanGestureRecognizer *)gestureRecognizer
{
    UIView *piece = [gestureRecognizer view];

    // Apply the translation so the pan appears to originate between the fingers
    // rather than from the view's center point.
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {

        CGPoint translation = [gestureRecognizer translationInView:[piece superview]];
        [piece setCenter:CGPointMake([piece center].x + translation.x, [piece center].y+translation.y)];
        [gestureRecognizer setTranslation:CGPointZero inView:[piece superview]];
    } else if([gestureRecognizer state] == UIGestureRecognizerStateEnded) {
        // Here you could, for example, reset the view to its original transform
        // if it has been scaled beyond a certain limit.
    }
}


- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    // if the gesture recognizers are on different views, don't allow simultaneous recognition
    if (gestureRecognizer.view != otherGestureRecognizer.view)
        return NO;

    // if either of the gesture recognizers is the long press, don't allow simultaneous recognition
    if ([gestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]] || [otherGestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]])
        return NO;

    return YES;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view from its nib.
    appDelegate = (AppDelegate *)[[UIApplication sharedApplication] delegate];    
    faceImageView.image = appDelegate.faceImage;

    UIRotationGestureRecognizer *rotationGesture = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotatePiece:)];
    [faceImageView addGestureRecognizer:rotationGesture];
    [rotationGesture setDelegate:self];

    UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(scalePiece:)];
    [pinchGesture setDelegate:self];
    [faceImageView addGestureRecognizer:pinchGesture];

    UIPanGestureRecognizer *panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panGestureMoveAround:)];
    [panRecognizer setMinimumNumberOfTouches:1];
    [panRecognizer setMaximumNumberOfTouches:2];
    [panRecognizer setDelegate:self];
    [faceImageView addGestureRecognizer:panRecognizer];


    [[UIApplication sharedApplication] setStatusBarHidden:YES withAnimation:UIStatusBarAnimationNone];

    [appDelegate fadeObject:moveIcons StartAlpha:0 FinishAlpha:1 Duration:2];
    currentTimer = [NSTimer timerWithTimeInterval:4.0f target:self selector:@selector(fadeoutMoveicons) userInfo:nil repeats:NO];

    [[NSRunLoop mainRunLoop] addTimer: currentTimer forMode: NSDefaultRunLoopMode];

}

1 Reply

The following code creates a snapshot of the enclosing view (the superview of faceImageView, with clipsToBounds set to YES) using a calculated scale factor.

It assumes that the content mode of faceImageView is UIViewContentModeScaleAspectFit and that the frame of faceImageView is set to enclosingView's bounds.
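
For reference, the first line of the method below recovers the view's current zoom from its affine transform. A transform composed of a rotation by an angle θ and a uniform scale s has the matrix entries

    a = s \cos\theta, \qquad c = -s \sin\theta, \qquad \text{so} \qquad s = \sqrt{a^2 + c^2}

which is why the code reads transform.a and transform.c.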

- (UIImage *)captureView {

    // Uniform scale applied by the gesture transform: for a rotation-plus-scale
    // matrix, sqrt(a^2 + c^2) recovers the scale factor.
    float imageScale = sqrtf(powf(faceImageView.transform.a, 2.f) + powf(faceImageView.transform.c, 2.f));
    // Scale introduced by UIViewContentModeScaleAspectFit when fitting the image into the view.
    CGFloat widthScale = faceImageView.bounds.size.width / faceImageView.image.size.width;
    CGFloat heightScale = faceImageView.bounds.size.height / faceImageView.image.size.height;
    float contentScale = MIN(widthScale, heightScale);
    float effectiveScale = imageScale * contentScale;

    // Render large enough that one image pixel maps to one output pixel.
    CGSize captureSize = CGSizeMake(enclosingView.bounds.size.width / effectiveScale, enclosingView.bounds.size.height / effectiveScale);

    NSLog(@"effectiveScale = %0.2f, captureSize = %@", effectiveScale, NSStringFromCGSize(captureSize));

    UIGraphicsBeginImageContextWithOptions(captureSize, YES, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(context, 1 / effectiveScale, 1 / effectiveScale);
    [enclosingView.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return img;
}

Depending on the current transform, the resulting image will have a different size. For example, when you zoom in, the capture size gets smaller. You can also set effectiveScale to a constant value in order to get an image of constant size, as shown below.
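
For instance, a minimal variant with a caller-supplied constant scale might look like this (captureViewWithScale: is a hypothetical helper name, not part of the original code):

- (UIImage *)captureViewWithScale:(CGFloat)effectiveScale {

    // The output size no longer depends on the gesture transform.
    CGSize captureSize = CGSizeMake(enclosingView.bounds.size.width / effectiveScale, enclosingView.bounds.size.height / effectiveScale);

    UIGraphicsBeginImageContextWithOptions(captureSize, YES, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(context, 1 / effectiveScale, 1 / effectiveScale);
    [enclosingView.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return img;
}

Calling [self captureViewWithScale:0.5], for example, always yields an image at twice the enclosing view's point size, regardless of the current zoom.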

Note that your gesture recognizer code does not limit the scale factor, i.e. you can zoom in and out without bounds. That can be dangerous: the capture method can produce huge images when you have zoomed out a long way. One way to clamp the zoom is sketched below.
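
A minimal sketch of one way to clamp the zoom in scalePiece: (the 0.5 and 3.0 bounds are assumptions, adjust them to your needs):

- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer {

    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        UIView *piece = [gestureRecognizer view];
        // Uniform scale currently encoded in the transform (same formula as in captureView).
        CGFloat currentScale = sqrt(piece.transform.a * piece.transform.a + piece.transform.c * piece.transform.c);
        // Clamp the total scale to the assumed bounds [0.5, 3.0], then apply only the allowed increment.
        CGFloat targetScale = MAX(0.5, MIN(currentScale * [gestureRecognizer scale], 3.0));
        CGFloat increment = targetScale / currentScale;
        piece.transform = CGAffineTransformScale(piece.transform, increment, increment);
        [gestureRecognizer setScale:1];
    }
}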

If you have zoomed out, the background of the captured image will be black. If you want it to be transparent, pass NO as the opaque parameter of UIGraphicsBeginImageContextWithOptions.
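
That is a one-line change to the code above:

    // NO = non-opaque context; areas not covered by the image stay transparent.
    UIGraphicsBeginImageContextWithOptions(captureSize, NO, 0.0);

Note that the transparency only survives if you save the result as PNG; JPEG has no alpha channel.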

