Using Modern Mobile Graphics Hardware
- What Is 3D Rendering?
- Supplying the Graphics Processor with Data
- The OpenGL ES Context
- The Geometry of a 3D Scene
Embedded systems encompass a wide range of devices, from aircraft cockpits to vending machines. The vast majority of 3D-capable embedded systems are handheld computers such as Apple’s iPhone, iPod Touch, and iPad or phones based on Google’s Android operating system. Handheld devices from Sony, Nintendo, and others also include powerful built-in 3D graphics capabilities.
OpenGL for Embedded Systems (OpenGL ES) defines the standard for embedded 3D graphics. Apple’s iPhone, iPod Touch, and iPad devices running iOS 5 support OpenGL ES version 2.0; they also support the older OpenGL ES version 1.1. A software framework called GLKit, introduced with iOS 5, simplifies many common programming tasks and partially hides the differences between the two supported OpenGL ES versions. This book focuses on OpenGL ES version 2.0 for iOS 5 with GLKit.
Without diving into specific programming details, this chapter explains the general approach to producing 3D graphics with OpenGL ES and iOS 5. Modern hardware-accelerated 3D graphics underlie all the visual effects produced by advanced mobile products. Reading this chapter is the first step toward squeezing the best possible 3D graphics and visual effects out of mobile hardware.
What Is 3D Rendering?
A graphics processing unit (GPU) is a hardware component that combines data describing geometry, colors, lights, and other information to produce an image on a screen. The screen only has two dimensions, so the trick to displaying 3D data is generating an image that fools the eye into seeing the missing third dimension, as in the example in Figure 1.1.
Figure 1.1 A sample image generated from 3D data.
The generation of a 2D image from 3D data is called rendering. The image on a computer display is composed of rectangular dots of color called pixels. Figure 1.2 shows an enlarged portion of an image so that the individual pixels are visible. If you examine your display through a magnifying glass, you will see that each pixel is composed of three color elements: a red dot, a green dot, and a blue dot. Figure 1.2 also shows a further enlarged single pixel to depict the individual color elements. On a full-color display, pixels always have red, green, and blue color elements, but the elements might be arranged in different patterns than the side-by-side arrangement shown in Figure 1.2.
Figure 1.2 Images are composed of pixels that each have red, green, and blue elements.
Images are stored in computer memory using an array containing at least three values for each pixel. The first value specifies the red color element’s intensity for the pixel. The second value is the green intensity, and the third value is the blue intensity. An image that contains 10,000 pixels can be stored in memory as an array of 30,000 intensity values—one value for each of the three color elements in each pixel. Combinations of red, green, and blue at different intensities are sufficient to produce every color of the rainbow. If all three elements have zero intensity, the resulting color is black. If all three elements have full intensity, the perceived color is white. Yellow is formed by mixing red and green without any blue. The Mac OS X standard Color panel user interface shown in Figure 1.3 contains graphical sliders to adjust relative Red, Green, Blue (RGB) intensities.
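The array layout described above can be sketched in C. This is only an illustration of the storage scheme, not OpenGL ES code; the image dimensions and helper function names are invented for the example:

```c
#include <stddef.h>

/* A 100 x 100 pixel image: 10,000 pixels, each with red, green, and
   blue intensity values, stored as a flat array of 30,000 bytes. */
enum { IMAGE_WIDTH = 100, IMAGE_HEIGHT = 100, COMPONENTS_PER_PIXEL = 3 };

static unsigned char image[IMAGE_WIDTH * IMAGE_HEIGHT * COMPONENTS_PER_PIXEL];

/* Return the array index of the red component of the pixel at (x, y).
   Green and blue follow at the next two indices. */
static size_t pixelIndex(int x, int y)
{
    return ((size_t)y * IMAGE_WIDTH + (size_t)x) * COMPONENTS_PER_PIXEL;
}

/* Set one pixel to full-intensity yellow: red plus green, no blue. */
static void setPixelYellow(int x, int y)
{
    size_t i = pixelIndex(x, y);
    image[i + 0] = 255;  /* red   */
    image[i + 1] = 255;  /* green */
    image[i + 2] = 0;    /* blue  */
}
```

Setting all three components to 0 produces black, and setting all three to 255 produces white, matching the color mixing rules described above.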
Figure 1.3 User interface to adjust Red, Green, and Blue color component intensities.
Rendering 3D data into a 2D image typically occurs in several separate steps involving calculations to set the red, green, and blue intensities of every pixel in the image. Taken as a whole, this book describes how programs best take advantage of OpenGL ES and graphics hardware at each step in the rendering process. The first step is to supply the GPU with 3D data to process.