
Applying Affine Transformation to Images

This sample chapter introduces interpolation, discusses image manipulation requirements, and specifies requirements for performing manipulation functions. On the basis of these specifications, Rodrigues builds a class for an image manipulation canvas, and several operator classes to operate on this canvas. Also learn how to build an image viewer to illustrate all the concepts presented here.

After you display an image, the next logical step is to manipulate it. Although image manipulation may mean different things to different people, it generally means handling an image in such a way that its geometry is changed. Operations such as panning, zooming, and rotation can be considered image manipulations.

AWT imaging offers only minimal support for image manipulation. In JDK 1.1, images are usually manipulated through clever use of the drawImage() methods of the Graphics class. If you need to perform complex manipulation operations such as rotation, you must write your own transformation functions. With the introduction of the AffineTransform class in Java 2D, you can now implement any complex manipulation operation.

Many applications need the ability to apply image manipulations in a random order. With a map, for instance, you might want to pan it and then zoom it to look for a place of interest. Then you might want to rotate the map so that it is oriented in a direction to which you are accustomed. To inspect the map closely, you might zoom it again. To see nearby locations, you might pan it. This scenario illustrates that an application must be capable of performing manipulations in a random order in such a way that at every step operations are concatenated. Such capability would be difficult to implement without affine transformations. Because this chapter requires a thorough knowledge of affine transformations, you may want to read Chapter 4 first.
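The concatenation of manipulations described above maps directly onto Java 2D's AffineTransform class. Here is a minimal sketch, assuming a hypothetical class name and illustrative pan, zoom, and rotation amounts (none of these are taken from the book's image viewer):

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

// A sketch of concatenating manipulations in the order a user performs
// them; each new operation is pre-concatenated so that it is applied
// after everything the user has done so far.
public class ManipSketch {
    public static AffineTransform buildTransform() {
        AffineTransform atx = new AffineTransform();
        atx.preConcatenate(AffineTransform.getTranslateInstance(50, 30));   // pan
        atx.preConcatenate(AffineTransform.getScaleInstance(2, 2));         // zoom
        atx.preConcatenate(AffineTransform.getRotateInstance(Math.PI / 2)); // rotate
        return atx;
    }

    public static void main(String[] args) {
        // One concatenated transform maps any source point in a single step:
        // (0,0) is panned to (50,30), zoomed to (100,60), then rotated.
        Point2D dst = buildTransform().transform(new Point2D.Double(0, 0), null);
        System.out.println(dst);
    }
}
```

Because the operations are concatenated into a single transform, the image is resampled only once no matter how many manipulations the user has performed.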

The quality of the rendered image is an important consideration in many image manipulation applications. The quality of the image often depends on the type of interpolation chosen. But quality comes with a price: The higher the quality, the more time it takes to generate the image.

This chapter will begin with an introduction to interpolation. After the basics of interpolation have been presented, we'll discuss image manipulation requirements. As we did in Chapter 6 for image rendering, we'll specify requirements for performing manipulation functions. On the basis of these specifications, we'll build a class for an image manipulation canvas. We'll then build several operator classes to operate on this canvas. Just as in Chapter 6, we'll also build an image viewer to illustrate all the concepts presented here. All the operator classes are part of this image viewer, which is an extension of the image viewer of Chapter 6.


The source code and classes for this image viewer are available on the book's Web page in the directory src/chapter7/manip. To understand this chapter better, you may want to run the image viewer and perform the relevant transformations as you read.

What Is Interpolation?

As you may know already, the pixels of an image occupy integer coordinates. When an image is rendered or manipulated, the computed destination pixels may fall between integer coordinates. So in order to create an image from these pixels, pixel values must be interpolated at the integer coordinates.

Interpolation is a process of generating a value of a pixel based on its neighbors. Neighboring pixels contribute a certain weight to the value of the pixel being interpolated. This weight is often inversely proportional to the distance at which the neighbor is located. Interpolation can be performed in one-dimensional, two-dimensional, or three-dimensional space. Image manipulation such as zooming and rotation is performed by interpolation of pixels in two-dimensional space. Volume imaging operations perform interpolation in three-dimensional space.

Java 2D supports some widely used interpolation techniques. You can choose among them through the KEY_INTERPOLATION rendering hint of the RenderingHints class. The choice will depend on what is more important for your application: speed or accuracy.
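The hint is set on the Graphics2D object before the transformed image is drawn. A minimal sketch (the class name, scale factor, and image sizes are illustrative):

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

// A sketch of choosing an interpolation type through rendering hints
// when scaling an image with an affine transform.
public class HintSketch {
    public static BufferedImage scale(BufferedImage src, double factor) {
        BufferedImage dst = new BufferedImage(
                (int) (src.getWidth() * factor),
                (int) (src.getHeight() * factor),
                BufferedImage.TYPE_INT_RGB);
        Graphics2D g2d = dst.createGraphics();
        // Trade-off: VALUE_INTERPOLATION_NEAREST_NEIGHBOR is fastest;
        // BILINEAR and BICUBIC are slower but smoother.
        g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g2d.drawImage(src, AffineTransform.getScaleInstance(factor, factor), null);
        g2d.dispose();
        return dst;
    }
}
```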

Next we'll discuss different types of interpolation. Although you may not need to implement any interpolation code, knowledge of the different types of interpolation is helpful in understanding image rendering and manipulation.

Nearest-Neighbor Interpolation

In this simple scheme, the interpolating pixel is assigned the value of the nearest neighbor. This technique is fast, but it may not produce accurate images.
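The scheme amounts to rounding the fractional coordinate. A sketch in one dimension (the class name and sample array are illustrative):

```java
// A sketch of nearest-neighbor interpolation along one row of samples:
// the fractional coordinate is rounded to the closest integer
// coordinate, and that sample's value is used unchanged.
public class NearestNeighbor {
    public static int interpolate(int[] samples, double x) {
        int nearest = (int) Math.round(x);
        return samples[nearest];
    }
}
```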

Linear Interpolation

In linear interpolation, immediate neighbors of the pixel to be interpolated are used to determine the value of the pixel. The distance-to-weight relationship is linear; that is, the relationship is of the form y = ax + b. In linear interpolation, left and right neighbors of the pixel are used to compute the pixel value (see Figure 7.1).

FIGURE 7.1 Linear interpolation

Let Px′ be the pixel that lies between Px and Px+1, the respective pixel values of which are px and px+1. Let d be the distance between Px′ and the left neighbor, Px. The value of the pixel Px′ is given by

px′ = px + [(px+1 – px) × d]
	= [px × (1 – d)] + (px+1 × d)
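The formula can be written directly as code. A sketch, with an illustrative class name:

```java
// A sketch of one-dimensional linear interpolation:
// p(x') = p_x * (1 - d) + p_{x+1} * d, where d is the fractional
// distance from the left neighbor (0 <= d <= 1).
public class LinearInterp {
    public static double lerp(double px, double pxPlus1, double d) {
        return px * (1 - d) + pxPlus1 * d;
    }
}
```

At d = 0 the result is the left neighbor's value, and at d = 1 it is the right neighbor's value, as expected.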

There are two types of linear interpolation: bilinear and trilinear.

Bilinear Interpolation

Bilinear interpolation is the method used for two-dimensional operations—for instance, magnifying an image. The interpolation is performed in a 2 × 2 neighborhood (see Figure 7.2).

FIGURE 7.2 Bilinear interpolation

Linear interpolation is performed in one direction, and the result is applied to the linear interpolation in the other direction. For example, if P(x′, y′) is the pixel at dx and dy from the upper left-hand neighbor, its value is computed by

pu = [p(x,y) × (1 – dx)] + (p(x+1,y) × dx)	(7.1)

which represents the contribution to the upper row, and by

pl = [p(x,y+1) × (1 – dx)] + (p(x+1,y+1) × dx)	(7.2)

which represents the contribution to the lower row.

From equations (7.1) and (7.2), we get p(x′,y′) = [pu × (1 – dy)] + (pl × dy).
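Putting the two row interpolations and the final column interpolation together gives a short routine. A sketch, with an illustrative class name and parameter order:

```java
// A sketch of bilinear interpolation in a 2 x 2 neighborhood:
// interpolate along the upper and lower rows (equations 7.1 and 7.2),
// then between the two results in the y direction.
public class BilinearInterp {
    public static double interpolate(double pxy, double px1y,
                                     double pxy1, double px1y1,
                                     double dx, double dy) {
        double pu = pxy  * (1 - dx) + px1y  * dx; // upper row, (7.1)
        double pl = pxy1 * (1 - dx) + px1y1 * dx; // lower row, (7.2)
        return pu * (1 - dy) + pl * dy;
    }
}
```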

Trilinear Interpolation

Trilinear interpolation is computed in a 2 × 2 × 2 neighborhood (see Figure 7.3). To compute trilinear interpolation, bilinear interpolation is first performed in the xy plane. Then linear interpolation is applied to the resulting values in the z direction.
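The two-step structure can be sketched by reusing the bilinear step on two adjacent slices and then interpolating linearly along z. The class name and the 2 × 2 slice representation are illustrative:

```java
// A sketch of trilinear interpolation: bilinear interpolation in the
// xy plane on two adjacent z slices, then linear interpolation between
// the two results along the z direction.
public class TrilinearInterp {
    public static double interpolate(double[][] slice0, double[][] slice1,
                                     double dx, double dy, double dz) {
        double b0 = bilinear(slice0, dx, dy); // slice at z
        double b1 = bilinear(slice1, dx, dy); // slice at z + 1
        return b0 * (1 - dz) + b1 * dz;
    }

    static double bilinear(double[][] s, double dx, double dy) {
        double pu = s[0][0] * (1 - dx) + s[0][1] * dx; // upper row
        double pl = s[1][0] * (1 - dx) + s[1][1] * dx; // lower row
        return pu * (1 - dy) + pl * dy;
    }
}
```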

FIGURE 7.3 Trilinear interpolation

Cubic Interpolation

Cubic interpolation is performed in a neighborhood of four pixels (see Figure 7.4).

FIGURE 7.4 Cubic interpolation

The cubic equation is of the form Px = ax³ + bx² + cx + d.
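One way to obtain the coefficients a, b, c, and d from the four neighbors is the Catmull-Rom scheme; the book does not prescribe a particular coefficient set, so treat this as one common choice. The class name is illustrative:

```java
// A sketch of one-dimensional cubic interpolation over four neighbors
// p0..p3, using Catmull-Rom coefficients (one common choice).
// d is the fractional distance between the two middle samples p1 and p2.
public class CubicInterp {
    public static double interpolate(double p0, double p1, double p2, double p3,
                                     double d) {
        double a = -0.5 * p0 + 1.5 * p1 - 1.5 * p2 + 0.5 * p3;
        double b =        p0 - 2.5 * p1 + 2.0 * p2 - 0.5 * p3;
        double c = -0.5 * p0             + 0.5 * p2;
        // Evaluate a*d^3 + b*d^2 + c*d + p1 in Horner form.
        return ((a * d + b) * d + c) * d + p1;
    }
}
```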

Just as bilinear interpolation is performed in a 2 × 2 neighborhood, bicubic interpolation is performed in a 4 × 4 neighborhood.
