
iOS - Record square video using AVFoundation and add watermark

[Illustration of what I'm trying to do]

I'm trying to do the following:

  • Play music
  • Record a square video (I have a container in the view that shows what's being recorded)
  • Add a label at the top, and the app's icon & name in the bottom left of the square video.

Up to this point I managed to play the music, show the AVCaptureVideoPreviewLayer in a square container in a different view and save the video to the camera roll.

The trouble is that I can only find a few vague tutorials on AVFoundation, and since this is my first app, that makes things quite hard.

I managed to do these things, but I still don't understand how AVFoundation works. The documentation is vague for a beginner, I haven't found a tutorial for exactly what I want, and piecing together multiple tutorials (most of them written in Objective-C) has gotten me nowhere. My problems are the following:

  1. The video doesn't get saved as square. (Note that the app doesn't support landscape orientation.)
  2. The video has no audio. (I think I need to add an audio input in addition to the video input.)
  3. How do I add the watermarks to the video?
  4. I have a bug: I created a view (messageView; see the code) with text & an image letting the user know that the video was saved to the camera roll. But if I start recording a second time, the view appears WHILE the video is recording, not AFTER it was recorded. I suspect it's related to naming every video the same (see the sketch after this list).
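A likely culprit for point 4: AVCaptureMovieFileOutput won't overwrite an existing file, so the second startRecordingToOutputFileURL call fails and the delegate fires immediately with an error, which the code below currently ignores. Recording to a unique URL each time would rule that out; a minimal sketch using the same APIs as the code below:

let outputPath = NSTemporaryDirectory() + NSUUID().UUIDString + ".mov"
let outputFileURL = NSURL(fileURLWithPath: outputPath)
videoFileOutput?.startRecordingToOutputFileURL(outputFileURL, recordingDelegate: self)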

So I make the preparations:

override func viewDidLoad() {
        super.viewDidLoad()

        // Preset For High Quality
        captureSession.sessionPreset = AVCaptureSessionPresetHigh

        // Get available devices capable of recording video
        let devices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo) as! [AVCaptureDevice]

        // Get back camera
        for device in devices
        {
            if device.position == AVCaptureDevicePosition.Back
            {
                currentDevice = device
            }
        }

        // Set Input
        let captureDeviceInput: AVCaptureDeviceInput
        do
        {
            captureDeviceInput = try AVCaptureDeviceInput(device: currentDevice)
        }
        catch
        {
            print(error)
            return
        }

        // Set Output
        videoFileOutput = AVCaptureMovieFileOutput()

        // Configure Session w/ Input & Output Devices
        captureSession.addInput(captureDeviceInput)
        captureSession.addOutput(videoFileOutput)

        // Show Camera Preview
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(cameraPreviewLayer!)
        cameraPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        let width = view.bounds.width*0.85
        cameraPreviewLayer?.frame = CGRectMake(0, 0, width, width)

        // Bring Record Button To Front
        view.bringSubviewToFront(recordButton)
        captureSession.startRunning()

//        // Bring Message To Front
//        view.bringSubviewToFront(messageView)
//        view.bringSubviewToFront(messageText)
//        view.bringSubviewToFront(messageImage)
    }

Then when I press the record button:

@IBAction func capture(sender: AnyObject) {
    if !isRecording
    {
        isRecording = true

        UIView.animateWithDuration(0.5, delay: 0.0, options: [.Repeat, .Autoreverse, .AllowUserInteraction], animations: { () -> Void in
            self.recordButton.transform = CGAffineTransformMakeScale(0.5, 0.5)
            }, completion: nil)

        let outputPath = NSTemporaryDirectory() + "output.mov"
        let outputFileURL = NSURL(fileURLWithPath: outputPath)
        videoFileOutput?.startRecordingToOutputFileURL(outputFileURL, recordingDelegate: self)
    }
    else
    {
        isRecording = false

        UIView.animateWithDuration(0.5, delay: 0, options: [], animations: { () -> Void in
            self.recordButton.transform = CGAffineTransformMakeScale(1.0, 1.0)
            }, completion: nil)
        recordButton.layer.removeAllAnimations()
        videoFileOutput?.stopRecording()
    }
}

And after the video was recorded:

func captureOutput(captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAtURL outputFileURL: NSURL!, fromConnections connections: [AnyObject]!, error: NSError!) {
    // Bail out if recording failed (e.g. because the output file already existed)
    if error != nil {
        print(error)
        return
    }

    guard let outputPath = outputFileURL.path where UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(outputPath) else {
        return
    }

    UISaveVideoAtPathToSavedPhotosAlbum(outputPath, self, nil, nil)

    // Show Success Message
    UIView.animateWithDuration(0.4, delay: 0, options: [], animations: {
        self.messageView.alpha = 0.8
        self.messageText.alpha = 1.0
        self.messageImage.alpha = 1.0
        }, completion: nil)
    // Hide Message
    UIView.animateWithDuration(0.4, delay: 1, options: [], animations: {
        self.messageView.alpha = 0
        self.messageText.alpha = 0
        self.messageImage.alpha = 0
        }, completion: nil)
}

So what do I need to do to fix all this? I kept searching and looking through tutorials but I can't figure it out. I read about adding watermarks and saw that it has something to do with adding CALayers on top of the video, but obviously I can't try that while I don't even know how to make the video square or add audio.


1 Reply


A few things:

As far as audio goes: you're adding a video (camera) input but no audio input, so add one to get sound.

    let audioInputDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)

    do {
        let input = try AVCaptureDeviceInput(device: audioInputDevice)

        // sourceAVFoundation is this answer's own session wrapper;
        // with the question's code, use captureSession directly
        if sourceAVFoundation.captureSession.canAddInput(input) {
            sourceAVFoundation.captureSession.addInput(input)
        } else {
            NSLog("ERROR: Can't add audio input")
        }
    } catch let error {
        NSLog("ERROR: Getting input device: \(error)")
    }

To make the video square, you're going to have to look at using AVAssetWriter instead of AVCaptureMovieFileOutput. It's more complex, but you get more power. You've already created an AVCaptureSession, which is great; to hook up the AssetWriter, you'll need to do something like this:

    let fileManager = NSFileManager.defaultManager()
    let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
    guard let documentDirectory: NSURL = urls.first else {
        print("Video Controller: getAssetWriter: documentDir Error")
        return nil
    }

    let local_video_name = NSUUID().UUIDString + ".mp4"
    self.videoOutputURL = documentDirectory.URLByAppendingPathComponent(local_video_name)

    guard let url = self.videoOutputURL else {
        return nil
    }


    self.assetWriter = try? AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4)

    guard let writer = self.assetWriter else {
        return nil
    }

    //TODO: Set your desired video size here! For square output, use equal width and height (see the sketch after this block).
    let videoSettings: [String : AnyObject] = [
        AVVideoCodecKey  : AVVideoCodecH264,
        AVVideoWidthKey  : captureSize.width,
        AVVideoHeightKey : captureSize.height,
        AVVideoCompressionPropertiesKey : [
            AVVideoAverageBitRateKey : 200000,
            AVVideoProfileLevelKey : AVVideoProfileLevelH264Baseline41,
            AVVideoMaxKeyFrameIntervalKey : 90,
        ],
    ]

    assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
    assetWriterInputCamera?.expectsMediaDataInRealTime = true
    writer.addInput(assetWriterInputCamera!)

    let audioSettings : [String : AnyObject] = [
        AVFormatIDKey : NSInteger(kAudioFormatMPEG4AAC),
        AVNumberOfChannelsKey : 2,
        AVSampleRateKey : NSNumber(double: 44100.0)
    ]

    assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
    assetWriterInputAudio?.expectsMediaDataInRealTime = true
    writer.addInput(assetWriterInputAudio!)
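
For the square output itself, one option (this part is my suggestion, not something the snippet above does) is to give the video settings equal width and height plus a scaling mode, so the writer crops the camera's 4:3 / 16:9 frames instead of distorting them:

    let videoSettings: [String : AnyObject] = [
        AVVideoCodecKey       : AVVideoCodecH264,
        AVVideoWidthKey       : 480,   // arbitrary example size
        AVVideoHeightKey      : 480,   // equal to the width, i.e. square
        // Aspect-fill scaling makes the writer crop rather than stretch
        AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill,
    ]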

Once you have the AssetWriter set up, hook up outputs for the video and audio:

    let bufferAudioQueue = dispatch_queue_create("audio buffer delegate", DISPATCH_QUEUE_SERIAL)
    let audioOutput = AVCaptureAudioDataOutput()
    audioOutput.setSampleBufferDelegate(self, queue: bufferAudioQueue)
    captureSession.addOutput(audioOutput)

    // Always add video last...
    let bufferVideoQueue = dispatch_queue_create("video buffer delegate", DISPATCH_QUEUE_SERIAL)
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: bufferVideoQueue)
    captureSession.addOutput(videoOutput)
    if let connection = videoOutput.connectionWithMediaType(AVMediaTypeVideo) {
        if connection.supportsVideoOrientation {
            // Force recording to portrait
            connection.videoOrientation = AVCaptureVideoOrientation.Portrait
        }

        self.outputConnection = connection
    }


    captureSession.startRunning()

Finally, you need to capture the buffers and append them to the writer. Make sure your class is a delegate of both AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate:

//MARK: Implementation for AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

    if !self.isRecordingStarted {
        return
    }

    // The outputs call this delegate on the serial queues assigned above,
    // so it's safe to append to the writer inputs directly
    if let audio = self.assetWriterInputAudio where connection.audioChannels.count > 0 && audio.readyForMoreMediaData {
        audio.appendSampleBuffer(sampleBuffer)
        return
    }

    if let camera = self.assetWriterInputCamera where camera.readyForMoreMediaData {
        camera.appendSampleBuffer(sampleBuffer)
    }
}

There are a few missing bits and pieces, but hopefully this, together with the documentation, is enough for you to figure it out.
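
The biggest of those missing pieces is starting and finishing the writer session. A minimal sketch of that part (the method names are mine, and isRecordingStarted is the flag checked in the delegate above):

    // Call once with the first sample buffer, before appending anything
    func startWriterSession(firstBuffer: CMSampleBuffer) {
        let startTime = CMSampleBufferGetPresentationTimeStamp(firstBuffer)
        assetWriter?.startWriting()
        assetWriter?.startSessionAtSourceTime(startTime)
        isRecordingStarted = true
    }

    // Call when the user taps stop
    func stopWriterSession() {
        isRecordingStarted = false
        assetWriterInputCamera?.markAsFinished()
        assetWriterInputAudio?.markAsFinished()
        assetWriter?.finishWritingWithCompletionHandler {
            // videoOutputURL now points at the finished movie,
            // ready to be saved to the camera roll
            print("Finished writing video")
        }
    }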

Finally, if you want to add the watermark, there are many ways it can be done. In real time, one possible way is to modify each sampleBuffer and draw the watermark into the image before appending it. You'll find other questions on Stack Overflow dealing with that.
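
For the CALayer route the question mentions, the usual pattern is a post-processing export through AVVideoCompositionCoreAnimationTool once the file is written; a minimal sketch (the label text, frames, and export settings are placeholders):

    func addWatermark(toVideoAtURL url: NSURL, outputURL: NSURL, completion: () -> Void) {
        let asset = AVAsset(URL: url)
        let composition = AVMutableVideoComposition(propertiesOfAsset: asset)

        // The video frames are rendered into videoLayer; the watermark sits on top
        let videoLayer = CALayer()
        videoLayer.frame = CGRect(origin: CGPointZero, size: composition.renderSize)

        let watermarkLayer = CATextLayer()
        watermarkLayer.string = "My App"                     // placeholder label
        watermarkLayer.fontSize = 24
        watermarkLayer.foregroundColor = UIColor.whiteColor().CGColor
        watermarkLayer.frame = CGRectMake(16, 16, 200, 32)   // layer space has a bottom-left origin

        let parentLayer = CALayer()
        parentLayer.frame = videoLayer.frame
        parentLayer.addSublayer(videoLayer)
        parentLayer.addSublayer(watermarkLayer)

        composition.animationTool = AVVideoCompositionCoreAnimationTool(
            postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

        guard let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality) else {
            return
        }
        export.videoComposition = composition
        export.outputURL = outputURL
        export.outputFileType = AVFileTypeMPEG4
        export.exportAsynchronouslyWithCompletionHandler(completion)
    }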

