
Building Blocks into AVFoundation Movies in Objective-C

Do you like AVFoundation? Do you like blocks? They're two great technologies that taste great together. Learn how to blend these two to create flexible frame compositions for your custom movies.

Although it's simple to create movies using AVFoundation, many developers seem to use this feature solely to combine screenshots or scrape camera feeds. Because you're basically just feeding a pixel buffer to an asset writer, why not have a bit more fun than that? Enable your users to create and share videos that push the boundaries further.

For the last few years, I've used a fairly basic Movie Maker class. This class let me set up a movie file and feed it one image at a time. When the file was finalized, a brand-new movie was ready to share with my users. My class was based on old Apple sample code, and although I've tweaked it a bit for efficiency, it was pretty bare bones.

Two things inspired me to push the class a bit further. First was my work on my iOS Drawing book. In writing this book, I played a lot with UIKit drawing on top of older APIs, like the ones that power the pixel buffers used in movie creation. Second was my work for a recent InformIT article on blocks, Blocks to Basics. In that article I spent a lot of time thinking about how to support development through blocks. I decided to combine these two elements, hoping they'd simplify the way I built movie frames, and in fact they did. Instead of building an image and tossing it to the helper class, I could use UIKit drawing commands directly with the pixel buffer.

Before this modification, I'd perform the same image-creation sequence over and over. The image itself was never anything more than a way to transfer data to the pixel buffer. The following code shows an example: it creates an image context, performs the drawing, retrieves an image, and passes it to the movie.

UIGraphicsBeginImageContextWithOptions(rect.size, YES, 1.0);
CGContextRef context = UIGraphicsGetCurrentContext();
[[UIColor blackColor] set];
CGContextFillRect(context, rect);
[[UIColor whiteColor] set];
[path fill];
UIImage *anImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[myHelper addImageToMovie:anImage];

With blocks, I could draw directly to the pixel buffer. The block contained the drawing commands without using an intermediate image. This was a more appealing and parsimonious approach.

ContextDrawingBlock block = ^(CGContextRef context){
    [[UIColor blackColor] set];
    CGContextFillRect(context, rect);
    [[UIColor whiteColor] set];
    [path fill];
};
[myHelper addDrawingToMovie:block];

Video 1 shows the first video I made using blocks. In it, I tweaked control point inflections along a UIBezierPath and iteratively drew those results by executing the above block. As you see, there's nothing in the product to hint at any flaws in its creation.

Video 1: This video was built by executing drawing blocks.

Creating a Pixel Buffer

The key to a blocks-based approach lies in merging UIKit drawing with a Core Video pixel buffer. A pixel buffer is, as the name suggests, a wrapper for raw image data. You create one by calling CVPixelBufferCreate(). Pass it the width, height, and pixel format of the buffer, along with any options needed for compatibility. Once it's built, you draw into it however you want and then append its contents to your movie.

- (BOOL) createPixelBuffer
{
    // Create the pixel buffer
    NSDictionary *pixelBufferOptions = @{
        (id) kCVPixelBufferCGImageCompatibilityKey : @YES,
        (id) kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
    };
    CVReturn status = CVPixelBufferCreate(
        kCFAllocatorDefault, width, height,
        kCVPixelFormatType_32ARGB,
        (__bridge CFDictionaryRef) pixelBufferOptions,
        &bufferRef);
    if (status != kCVReturnSuccess)
    {
        NSLog(@"Error creating pixel buffer");
        return NO;
    }
    return YES;
}

Drawing into the Pixel Buffer

The block example you saw earlier used a custom type called ContextDrawingBlock. As a rule, it's easier to create block types than to add their raw declarations over and over. The following ContextDrawingBlock typedef provides one argument, the current drawing context. Although you can always grab the current context via UIGraphicsGetCurrentContext(), it's convenient to provide that context for ready use.

typedef void (^ContextDrawingBlock)(CGContextRef context);

The secret to direct block-based drawing lies in a pair of UIKit functions not many developers are familiar with. UIGraphicsPushContext() and UIGraphicsPopContext() enable you to add Quartz 2D contexts to the UIKit context stack and then remove them after drawing.

This approach creates a bridge between the Quartz and UIKit worlds, permitting you to use Objective-C UIKit-style calls (such as [myColor set]) in place of C-language Quartz calls (such as CGContextSetFillColorWithColor(context, myColor.CGColor)). In UIKit calls, the context is inferred from the current stack, so you don't need to pass the context every time you update a setting or perform a drawing operation.
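As a minimal sketch of that bridge, assume you already hold a valid CGContextRef (here named context) and a UIBezierPath named path; both names are hypothetical. The push/pop pairing looks like this:

// Make the Quartz context the current UIKit context
UIGraphicsPushContext(context);

// UIKit-style calls now draw into that context implicitly
[[UIColor redColor] set];
[path fill];

// Restore whatever context was current before the push
UIGraphicsPopContext();

Any drawing performed between the push and the pop lands in the pushed context, so always balance each push with a matching pop.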

The following method builds a Quartz context using the memory stored in the CV pixel buffer. It pushes this context onto the UIKit stack and executes its drawing block. It finishes by popping the stack, releasing the context, and unlocking the pixel buffer. By encapsulating all the pixel-level work in this method, the ContextDrawingBlock that's passed as an argument concerns itself only with actual drawing commands.

- (BOOL) drawToPixelBufferWithBlock: (ContextDrawingBlock) block __attribute__ ((nonnull))
{
    // Lock the buffer and fetch the base address
    CVPixelBufferLockBaseAddress(bufferRef, 0);
    void *pixelData = CVPixelBufferGetBaseAddress(bufferRef);

    CGColorSpaceRef RGBColorSpace = CGColorSpaceCreateDeviceRGB();
    if (RGBColorSpace == NULL)
    {
        CVPixelBufferUnlockBaseAddress(bufferRef, 0);
        return NO;
    }

    // Build a Quartz context backed by the pixel buffer's memory
    CGContextRef context = CGBitmapContextCreate(pixelData, width, height,
        8, 4 * width, RGBColorSpace, (CGBitmapInfo) kCGImageAlphaNoneSkipFirst);
    CGColorSpaceRelease(RGBColorSpace);
    if (!context)
    {
        CVPixelBufferUnlockBaseAddress(bufferRef, 0);
        NSLog(@"Error creating bitmap context");
        return NO;
    }

    // Handle the Quartz coordinate system
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformScale(transform, 1.0f, -1.0f);
    transform = CGAffineTransformTranslate(transform, 0.0f, -height);
    CGContextConcatCTM(context, transform);

    // Push the context onto the UIKit stack and perform the drawing
    UIGraphicsPushContext(context);
    if (block) block(context);
    UIGraphicsPopContext();

    // Clean up
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(bufferRef, 0);
    return YES;
}
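The method above stops once the buffer is drawn. To give a sense of where the buffer goes next, here's a hedged sketch of the append step a helper like this might perform, assuming hypothetical ivars adaptor (an AVAssetWriterInputPixelBufferAdaptor), bufferRef, and a CMTime named frameTime; the actual helper's internals may differ.

// Sketch only: append the drawn buffer to the movie.
// `adaptor`, `bufferRef`, and `frameTime` are assumed ivars.
if (adaptor.assetWriterInput.isReadyForMoreMediaData)
{
    BOOL success = [adaptor appendPixelBuffer:bufferRef
                         withPresentationTime:frameTime];
    if (!success)
        NSLog(@"Error appending pixel buffer");
}

Checking isReadyForMoreMediaData before each append keeps the writer from being overwhelmed when frames arrive faster than it can encode them.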

Other Kinds of Drawing

Video 1 used a block that drew a solid background and then filled a Bezier path. As Video 2 demonstrates, any UIKit or Quartz-compatible drawing API will work. This second sample includes image and string rendering, and here's the block that created it. Again notice the simplicity of the implementation.

ContextDrawingBlock block = ^(CGContextRef context){
    // Fill background
    [[UIColor blackColor] set];
    CGContextFillRect(context, rect);

    // Draw image
    [frame drawInRect:insetRect];    

    // Draw string
    NSAttributedString *s = [[NSAttributedString alloc] 
        initWithString:title attributes:@{
            NSFontAttributeName:[UIFont fontWithName:@"Georgia" size:24], 
            NSForegroundColorAttributeName:[UIColor whiteColor]}];
    [s drawAtPoint:CGPointMake(80, 80)];
};

Video 2: This demo showcases embedded image drawing and string drawing. I don't own Clippy. Animations are courtesy of smore's clippy.js implementation.


Combining blocks with AVFoundation produces recognizable enhancements in clarity and simplicity with a minimum of code. (As Olaf from Frozen would put it, "they're both so intense; put 'em together, it just makes sense.") You'll find a copy of the complete Movie Maker helper class at my Useful Things repository on GitHub. If you find it useful, please drop a comment at the end of this article. And if you find any bugs, please file an issue at the repository.
