
ios - Xcode AVCaptureSession scan barcode in specific frame (rectOfInterest is not working)

I am trying to design a barcode scanner for an app I am currently working on. I want the scanner preview to fill the whole screen of the device and provide a smaller frame to point at barcodes. Everything works the way I want, except that I cannot get the rect of interest to work.

Here is the implementation of the barcode scanner:

#import "GEScannerViewController.h"
@import AVFoundation;

@interface GEScannerViewController () <AVCaptureMetadataOutputObjectsDelegate> {
    AVCaptureSession *_session;
    AVCaptureDevice *_device;
    AVCaptureDeviceInput *_input;
    AVCaptureMetadataOutput *_output;
    AVCaptureVideoPreviewLayer *_prevLayer;

    UIView *_greyView;
    UIView *_highlightView;
    UIView *_scopeView;
    UILabel *_label;
}
@end

@implementation GEScannerViewController

- (void)viewDidLoad {
    [super viewDidLoad];

    // Label along the bottom edge that will show the decoded barcode value.
    _label = [[UILabel alloc] init];
    _label.frame = CGRectMake(0, self.view.bounds.size.height - 40, self.view.bounds.size.width, 40);
    _label.autoresizingMask = UIViewAutoresizingFlexibleTopMargin;
    _label.backgroundColor = [UIColor colorWithWhite:0.15 alpha:0.65];
    _label.textColor = [UIColor whiteColor];
    _label.textAlignment = NSTextAlignmentCenter;
    _label.text = @"(none)";
    [self.view addSubview:_label];

    NSError *error = nil;

    // Set up the capture session with the default video capture device (the back camera).
    _session = [[AVCaptureSession alloc] init];
    _device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Lock the device before changing its configuration, and unlock it afterwards.
    if ([_device lockForConfiguration:&error]) {
        _device.focusPointOfInterest = CGPointMake(self.view.frame.size.width / 2, (self.view.frame.size.height / 2) - 80);
        [_device unlockForConfiguration];
    } else {
        NSLog(@"Error: %@", error);
    }

    _input = [AVCaptureDeviceInput deviceInputWithDevice:_device error:&error];
    if (_input) {
        [_session addInput:_input];
    } else {
        NSLog(@"Error: %@", error);
    }

    // Metadata output: detected barcodes are delivered to the delegate on the main queue.
    _output = [[AVCaptureMetadataOutput alloc] init];
    [_output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    // This is the part that is not working: the rect of interest is given in view coordinates here.
    _output.rectOfInterest = CGRectMake((self.view.frame.size.width / 2) - 160, (self.view.frame.size.height / 2) - 160, 320, 160);
    [_session addOutput:_output];

    _output.metadataObjectTypes = [_output availableMetadataObjectTypes];

    // Full-screen preview layer behind the overlay views.
    _prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
    _prevLayer.frame = self.view.bounds;
    _prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:_prevLayer];

    // Overlay views: a dimming layer, a green scope frame, and a highlight box for detected codes.
    _greyView = [[UIView alloc] initWithFrame:self.view.frame];
    _greyView.bounds = self.view.bounds;
    _greyView.backgroundColor = [UIColor colorWithWhite:0.15 alpha:0.65];
    [self.view.layer addSublayer:_greyView.layer];

    _scopeView = [[UIView alloc] initWithFrame:CGRectMake((self.view.frame.size.width / 2) - 160, (self.view.frame.size.height / 2) - 160, 320, 160)];
    _scopeView.backgroundColor = [UIColor clearColor];
    _scopeView.layer.borderColor = [UIColor greenColor].CGColor;
    _scopeView.layer.borderWidth = 1;
    _scopeView.clipsToBounds = YES;
    [self.view addSubview:_scopeView];

    _highlightView = [[UIView alloc] init];
    _highlightView.autoresizingMask = UIViewAutoresizingFlexibleTopMargin|UIViewAutoresizingFlexibleLeftMargin|UIViewAutoresizingFlexibleRightMargin|UIViewAutoresizingFlexibleBottomMargin;
    _highlightView.layer.borderColor = [UIColor greenColor].CGColor;
    _highlightView.layer.borderWidth = 3;
    [_scopeView addSubview:_highlightView];

    [_session startRunning];

    [self.view bringSubviewToFront:_highlightView];
    [self.view bringSubviewToFront:_label];
}

I am using _output.rectOfInterest to restrict detection to the same frame as _scopeView. Unfortunately this is not working: as soon as I set it, no barcodes are recognized at all.


1 Reply


As soon as I got it, it was absolutely clear:

AVCaptureMetadataOutput's rectOfInterest is expressed in the output's own normalized coordinate space rather than in view coordinates, so to map the on-screen rectangle into that space I had to use metadataOutputRectOfInterestForRect:

From AVCaptureOutput.h:


/*!
@method metadataOutputRectOfInterestForRect:
@abstract
Converts a rectangle in the receiver's coordinate space to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput
whose capture device is providing input to the receiver.

@param rectInOutputCoordinates
A CGRect in the receiver's coordinates.

@result
A CGRect in the coordinate space of the metadata output whose capture device is providing input to the receiver.

@discussion
AVCaptureMetadataOutput rectOfInterest is expressed as a CGRect where {0,0} represents the top left of the picture area,
and {1,1} represents the bottom right on an unrotated picture.  This convenience method converts a rectangle in
the coordinate space of the receiver to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput
whose AVCaptureDevice is providing input to the receiver.  The conversion takes orientation, mirroring, and scaling into
consideration.  See -transformedMetadataObjectForMetadataObject:connection: for a full discussion of how orientation and mirroring
are applied to sample buffers passing through the output.
*/

- (CGRect)metadataOutputRectOfInterestForRect:(CGRect)rectInOutputCoordinates NS_AVAILABLE_IOS(7_0);

After using this method to compute the rectOfInterest, it worked.
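A minimal sketch of how the conversion can be wired into the setup from the question (an illustration, not necessarily the exact code used in the answer; it relies on the variant of metadataOutputRectOfInterestForRect: declared on AVCaptureVideoPreviewLayer, which takes a rect in the layer's on-screen coordinates, and on the common observation that the conversion only returns a useful value once the session is running):

    // _session, _prevLayer, _output and _scopeView are the ivars from the question.
    [_session startRunning];

    // The preview layer fills self.view, so the scope view's frame can be used
    // directly as a rect in the layer's coordinate space. The method converts it
    // into the metadata output's normalized space ({0,0} top left, {1,1} bottom right).
    CGRect scopeRect = _scopeView.frame;
    _output.rectOfInterest = [_prevLayer metadataOutputRectOfInterestForRect:scopeRect];

With rectOfInterest expressed in the output's normalized coordinates, only codes inside the green scope frame are reported to the delegate.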

