
Green-Screen Video Effects for iOS: Video Processing with OpenGL ES

Contents

  1. Introduction / Implementing a Real-Time Green-Screen Video Effect
  2. Blending Video with Another Image / Exploring and Running the Code
Wish you could use the green-screen effects from movies and TV shows on your iOS device? You can! Erik Buck, author of Learning OpenGL ES for iOS: A Hands-on Guide to Modern 3D Graphics Programming, describes how to perform real-time video processing without impacting performance. Green-screen effects are just one of the cool capabilities! Experiment with the provided demo program and learn how to push the boundaries.

Introduction

Video productions ranging from feature films to TV weather forecasts apply green-screen effects. The idea is simple: You shoot video with a solid-colored backdrop. Green or blue backdrops are most common, but theoretically you can use almost any solid color. Video processing replaces the backdrop color (wherever it can be seen) with something else, such as the image of a weather map. When viewing the final result, action in the foreground appears to take place in front of something that was never there in the background.

Recent iOS devices provide graphics processing units (GPUs) that are powerful enough to perform video processing with no impact on video performance. Processing for the green-screen effect provides just one example. It's a cool effect and a good first step to explore the overall concept of real-time video processing.

The green-screen effect can superimpose any image into live video. For example, Figure 1 shows a still taken from a video of my son standing in front of our house. With the effect turned on, an elephant roaming the African savanna replaces my son's green shirt.

Figure 1 Sample green-screen effect.

Videos are composed of a series of still images. A video player displays each image for a fraction of a second before replacing it with the next image in the series. Video effects modify each image in the series after the camera captures the image but before displaying the image. Video effect processing must execute quickly to avoid delaying images and making the video look choppy.

Each image on a computer display is composed of rectangular dots of color called pixels. Figure 2 shows an enlarged portion of an image to show the individual pixels. If you examine your display through a magnifying glass, you'll see that each pixel is composed of three color elements: a red dot, a green dot, and a blue dot.

Figure 2 Images are composed of pixels with red, green, and blue elements.

By varying the intensities of the red, green, and blue (RGB) elements in a pixel, video displays produce every color in the rainbow. If all three elements have zero intensity, the pixel is black. If all three elements have full intensity, the pixel is white. A pixel with a high-intensity green element and dim red and blue elements appears green.

Green-screen effect processing compares the relative intensities of the red, green, and blue elements within a pixel to determine whether the pixel is considered green enough to be replaced. The color of every pixel in every image within the video must be evaluated.

Close examination of Figure 2 reveals the subtlety required to implement a good-looking effect. The pixel colors around the thumb vary considerably. Which pixels are green enough to be replaced and which should be left unmodified? The algorithm for making that determination needs to execute very quickly, because it's applied to every pixel in the image.

Implementing a Real-Time Green-Screen Video Effect

This article's green-screen demonstration program is available as downloadable source code. The implementation is inspired by Apple's RosyWriter sample program, available on the Apple Developer website.

This green-screen demonstration reuses RosyWriter's approach for capturing video via dedicated processing threads. Apple's Grand Central Dispatch (GCD) technology manages the threads and pushes video frame images into a queue for later processing by the application's main thread. RosyWriter executes code via an iOS device's Central Processing Unit (CPU) to modify the color of every pixel in every captured video image. The green-screen effect adopts a much more efficient approach, using the GPU for processing.

Modern GPUs are programmable using the OpenGL ES 2.0 Shading Language. The programming language looks superficially like C and provides many built-in functions. The green-screen calculations are performed in a small program called a fragment shader.

Images generated by a GPU are composed of colored fragments that in turn are mapped to pixels onscreen. In many but not all cases, there is a 1:1 relationship; each fragment determines the color of one pixel. The GPU executes the fragment shader to calculate the color of each generated fragment.

The green-screen fragment shader replaces green fragments with translucent fragments. Calculated translucency varies slightly based on how green the fragment is. The variation produces blending at the boundaries between green and other colors. For example, the boundary in Figure 2 shows no clear edge between green and non-green colors. Varying translucency avoids the appearance of green-tinted halos where fragments are still greenish but not green enough to be fully replaced.

Once the GPU has been configured to use a custom green-screen fragment shader, the next step is to give the GPU some fragments to process. Each video frame image is read from the GCD queue. The image is sent to the GPU in the form of an OpenGL texture. Textures are just images formatted for efficient use by the GPU. The pixels in the image are called texels when used as a texture.

GPUs process OpenGL drawing commands to generate fragments. The final colors of generated fragments determine what's displayed. Drawing a line generates fragments representing a line; drawing a triangle generates fragments corresponding to the filled area of the triangle. This article's demonstration calls OpenGL functions instructing the GPU to draw two triangles that together form a rectangle covering the entire display. While drawing the triangles, the GPU calculates which part of each triangle corresponds to each fragment and provides that information to the custom fragment shader.

The custom fragment shader looks up which texel within the current texture corresponds to the part of the triangle contributing color to the fragment. If the fragment shader uses the texel color unmodified, the resulting fragment colors match the texture colors. However, the custom fragment shader can perform additional processing that changes the generated fragment color.

The following fragment shader program implements the green-screen effect:

//
//  greenScreen.fsh
//  GreenScreen
//

varying highp vec2 vCoordinate;
uniform sampler2D uVideoframe;


void main()
{
   // Look up the color of the texel corresponding to the fragment being
   // generated while rendering a triangle
   lowp vec4 tempColor = texture2D(uVideoframe, vCoordinate);

   // Calculate the average intensity of the texel's red and blue components
   lowp float rbAverage = tempColor.r * 0.5 + tempColor.b * 0.5;

   // Calculate the difference between the green element intensity and the
   // average of red and blue intensities
   lowp float gDelta = tempColor.g - rbAverage;

   // Compute the fragment's alpha (opacity) in the range 0.0 to 1.0: fully
   // opaque when the texel is not green, falling smoothly to fully
   // transparent as the green intensity exceeds the red/blue average by
   // 0.25 or more
   tempColor.a = 1.0 - smoothstep(0.0, 0.25, gDelta);

   // Cube the alpha value so that a fragment that is already partially
   // translucent becomes even more translucent. This sharpens the final
   // result by avoiding almost-but-not-quite-opaque fragments that tend
   // to form halos at color boundaries.
   tempColor.a = tempColor.a * tempColor.a * tempColor.a;

   gl_FragColor = tempColor;
}

Within the fragment shader program, the vCoordinate variable is initialized by the GPU to identify which portion of the triangle being drawn corresponds to the fragment to be generated. The uVideoframe variable identifies which texture to use when calculating fragment colors. The gl_FragColor variable is built into fragment shader programs. Setting the value of gl_FragColor sets the final color of the generated fragment.

The fragment shader reads texel colors from a texture captured by the video camera. If the texel is mostly green, the fragment shader makes its color partially or fully translucent. The translucency of a fragment color is determined by an additional color component called alpha. Each fragment stores red, green, and blue intensities plus an alpha value, using components called R, G, B, and A, respectively.
