How to get a rotated, zoomed and panned image from a UIImageView at its full resolution?


Question

I have a UIImageView that can be rotated, panned and scaled with gesture recognisers. As a result it is cropped by its enclosing view. Everything works fine, but I don't know how to save the visible part of the picture at its full resolution. It's not a screen grab.

I know I can get a UIImage straight from the visible content of the UIImageView, but it is limited to the resolution of the screen.

I assume that I have to apply the same transformations to the UIImage and crop it. Is there an easy way to do that?

Update: For example, I have a UIImageView with a high-resolution image, say an 8 MP iPhone 4S camera photo, which is transformed with gestures, so it becomes scaled, rotated and moved around in its enclosing view. Obviously some cropping occurs, so only a part of the image is displayed. There is a huge difference between the displayed screen resolution and the underlying image resolution, and I need an image at the image's resolution. The UIImageView uses UIViewContentModeScaleAspectFit, but a solution for UIViewContentModeScaleAspectFill is also fine.

This is my code:

- (void)rotatePiece:(UIRotationGestureRecognizer *)gestureRecognizer {

    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        [gestureRecognizer view].transform = CGAffineTransformRotate([[gestureRecognizer view] transform], [gestureRecognizer rotation]);
        [gestureRecognizer setRotation:0];
    }
}

- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer {

    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        [gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        [gestureRecognizer setScale:1];
    }
}

- (void)panGestureMoveAround:(UIPanGestureRecognizer *)gestureRecognizer
{
    UIView *piece = [gestureRecognizer view];

    // Apply the translation in the superview's coordinate space so that the pan
    // appears to originate between the fingers rather than at the view's centre
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {

        CGPoint translation = [gestureRecognizer translationInView:[piece superview]];
        [piece setCenter:CGPointMake([piece center].x + translation.x, [piece center].y+translation.y)];
        [gestureRecognizer setTranslation:CGPointZero inView:[piece superview]];
    } else if([gestureRecognizer state] == UIGestureRecognizerStateEnded) {
        // Code to run when the gesture ends, e.g. to clamp the view's scale
        // past a certain value or to reset it to its original transform
    }
}


- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    // if the gesture recognizers are on different views, don't allow simultaneous recognition
    if (gestureRecognizer.view != otherGestureRecognizer.view)
        return NO;

    // if either of the gesture recognizers is the long press, don't allow simultaneous recognition
    if ([gestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]] || [otherGestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]])
        return NO;

    return YES;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view from its nib.
    appDelegate = (AppDelegate *)[[UIApplication sharedApplication] delegate];    
    faceImageView.image = appDelegate.faceImage;

    UIRotationGestureRecognizer *rotationGesture = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotatePiece:)];
    [faceImageView addGestureRecognizer:rotationGesture];
    [rotationGesture setDelegate:self];

    UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(scalePiece:)];
    [pinchGesture setDelegate:self];
    [faceImageView addGestureRecognizer:pinchGesture];

    UIPanGestureRecognizer *panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panGestureMoveAround:)];
    [panRecognizer setMinimumNumberOfTouches:1];
    [panRecognizer setMaximumNumberOfTouches:2];
    [panRecognizer setDelegate:self];
    [faceImageView addGestureRecognizer:panRecognizer];


    [[UIApplication sharedApplication] setStatusBarHidden:YES withAnimation:UIStatusBarAnimationNone];

    [appDelegate fadeObject:moveIcons StartAlpha:0 FinishAlpha:1 Duration:2];
    currentTimer = [NSTimer timerWithTimeInterval:4.0f target:self selector:@selector(fadeoutMoveicons) userInfo:nil repeats:NO];

    [[NSRunLoop mainRunLoop] addTimer: currentTimer forMode: NSDefaultRunLoopMode];

}

Answer

The following code creates a snapshot of the enclosing view (the superview of faceImageView, with clipsToBounds set to YES) using a calculated scale factor.

It assumes that the content mode of faceImageView is UIViewContentModeScaleAspectFit and that the frame of faceImageView is set to the enclosingView's bounds.

- (UIImage *)captureView {

    // Scale factor applied by the gesture transform; assumes the transform is a
    // rotation plus uniform scale (no shear), so sqrt(a^2 + c^2) recovers the scale
    float imageScale = sqrtf(powf(faceImageView.transform.a, 2.f) + powf(faceImageView.transform.c, 2.f));
    // Scale at which the image is drawn inside the view under aspect-fit
    CGFloat widthScale = faceImageView.bounds.size.width / faceImageView.image.size.width;
    CGFloat heightScale = faceImageView.bounds.size.height / faceImageView.image.size.height;
    float contentScale = MIN(widthScale, heightScale);
    // One point in the enclosing view corresponds to 1/effectiveScale image pixels
    float effectiveScale = imageScale * contentScale;

    CGSize captureSize = CGSizeMake(enclosingView.bounds.size.width / effectiveScale, enclosingView.bounds.size.height / effectiveScale);

    NSLog(@"effectiveScale = %0.2f, captureSize = %@", effectiveScale, NSStringFromCGSize(captureSize));

    UIGraphicsBeginImageContextWithOptions(captureSize, YES, 0.0);        
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(context, 1/effectiveScale, 1/effectiveScale);
    [enclosingView.layer renderInContext:context];   
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return img;
}

Depending on the current transform, the resulting image will have a different size; for example, when you zoom in, the output gets smaller. You can also set effectiveScale to a constant value to get an image of constant size.
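As a minimal sketch of that constant-size variant (the constant `kOutputScale` and the method name are assumptions for illustration, not part of the original answer):

```objc
// Sketch: force a constant output resolution regardless of the current zoom.
// kOutputScale is a hypothetical constant; choose it to match your target size.
static const CGFloat kOutputScale = 0.25f; // e.g. 4 image pixels per view point

- (UIImage *)captureViewWithFixedScale {
    CGFloat effectiveScale = kOutputScale;
    CGSize captureSize = CGSizeMake(enclosingView.bounds.size.width / effectiveScale,
                                    enclosingView.bounds.size.height / effectiveScale);

    UIGraphicsBeginImageContextWithOptions(captureSize, YES, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(context, 1 / effectiveScale, 1 / effectiveScale);
    [enclosingView.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
```

The output size no longer depends on the gesture transform, at the cost of over- or under-sampling the source image when the zoom level differs from the chosen constant.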

Your gesture recognizer code does not limit the scale factor, so the user can zoom in or out without bounds. That can be dangerous: the capture method above can produce very large images when you have zoomed out a lot.
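One way to bound the zoom is to accumulate the total scale in the pinch handler and clamp it. A sketch, where the ivar `currentScale` (initialised to 1.0) and the min/max constants are assumptions, not part of the original code:

```objc
// Sketch: clamp the accumulated pinch scale to a fixed range.
// currentScale is a hypothetical CGFloat ivar starting at 1.0; limits are arbitrary.
static const CGFloat kMinScale = 0.5f;
static const CGFloat kMaxScale = 4.0f;

- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer {
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan ||
        [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        // Clamp the would-be total scale, then apply only the allowed delta
        CGFloat newScale = currentScale * [gestureRecognizer scale];
        newScale = MAX(kMinScale, MIN(newScale, kMaxScale));
        CGFloat delta = newScale / currentScale;
        [gestureRecognizer view].transform =
            CGAffineTransformScale([[gestureRecognizer view] transform], delta, delta);
        currentScale = newScale;
        [gestureRecognizer setScale:1];
    }
}
```

This keeps effectiveScale within a known range, which in turn bounds the size of the captured image.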

If you have zoomed out, the background of the captured image will be black. If you want it to be transparent, set the opaque parameter of UIGraphicsBeginImageContextWithOptions to NO.
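The transparent variant only changes the context setup, but remember that JPEG has no alpha channel, so save the result as PNG. A sketch of the relevant lines (assuming `captureSize` and `effectiveScale` are computed as in captureView above):

```objc
// Sketch: same capture, but with a transparent background.
// opaque = NO keeps the alpha channel; save as PNG, since JPEG discards it.
UIGraphicsBeginImageContextWithOptions(captureSize, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextScaleCTM(context, 1 / effectiveScale, 1 / effectiveScale);
[enclosingView.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *pngData = UIImagePNGRepresentation(img); // preserves transparency
```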
