
Hands-on Intents

As previously mentioned, the sample application snaps a picture before embedding it in a callout bubble over a map.

More technically, the camera is launched via an implicit intent in the first activity we create. After a picture is taken, it’s passed to another activity that plots it as a thumbnail on a map. Figure 4 conceptualizes how the different components interact.

Figure 4 A conceptual sequence diagram showing the component interaction.

Step 1: Take a Picture with the Camera

Create an IntentDemo class that extends android.app.Activity, and launch the camera with the code:

private static final int REQUEST_CAMERA = 0;
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    Intent cameraIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    File tempImageFile = 
        new File(Environment.getExternalStorageDirectory(), "temp.jpg");
    cameraIntent.putExtra(MediaStore.EXTRA_OUTPUT, 
        Uri.fromFile(tempImageFile));
    startActivityForResult(cameraIntent, REQUEST_CAMERA);
}

An action and extra make up the implicit cameraIntent above. The action, ACTION_IMAGE_CAPTURE, signifies the intention to capture a picture by any means the system has available. Extras are bits of data passed around for different purposes. The one here contains the file to which the camera writes its photo.
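Conceptually, extras behave like a typed key/value bag attached to the intent. The sketch below models that idea in plain Java; it is an illustration only, not the real android.os.Bundle API, which offers a type-safe getter per value type (getInt, getParcelable, and so on).

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of intent extras as a key/value bag. The real
// android.os.Bundle is type-safe per getter (getInt, getParcelable, ...).
public class ExtrasSketch {
    private final Map<String, Object> extras = new HashMap<>();

    public ExtrasSketch putExtra(String key, Object value) {
        extras.put(key, value);
        return this; // chainable, like Intent.putExtra
    }

    public Object getExtra(String key) {
        return extras.get(key);
    }
}
```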

Actions and extras are part of an intent’s protocol. Intent protocols are configured in the Android manifest through one or more intent filters. For example, here’s the filter that matches cameraIntent:

<activity android:name="com.android.camera.Camera" ...>
    <intent-filter>
        <action android:name="android.media.action.IMAGE_CAPTURE" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
    <!-- More intent filters... -->
</activity>

Here, the action element corresponds to the MediaStore.ACTION_IMAGE_CAPTURE fed to cameraIntent. Categories are another part of intent protocols; they give the system additional information for classifying activities. For example, android.intent.category.LAUNCHER puts an activity in the application launcher with an icon. The android.intent.category.DEFAULT category is required for an activity to receive implicit intents, because the system treats every implicit intent passed to startActivity as if it carried that category.

Intent filters can contain multiple actions and categories. To match a filter, an intent must name an action the filter declares, and every category the intent carries must appear in the filter (the default category is added automatically on this end). Google defined the intent protocol here, so third-party camera applications must conform if they want to appear as candidates alongside the stock camera. That can make a difference, especially for ad-supported and pay-per-use applications.
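The matching rules can be sketched in plain Java. This is a simplified model for illustration; real intent resolution also considers data types, schemes, and priorities.

```java
import java.util.Set;

// Simplified model of intent-filter matching: the intent's single action
// must be one the filter declares, and every category the intent carries
// must appear in the filter. (Real resolution also checks data/schemes.)
public class FilterMatch {
    public static boolean matches(String intentAction, Set<String> intentCategories,
                                  Set<String> filterActions, Set<String> filterCategories) {
        // The action must be declared by the filter.
        if (!filterActions.contains(intentAction)) {
            return false;
        }
        // Every category on the intent must be declared by the filter.
        return filterCategories.containsAll(intentCategories);
    }
}
```

By this rule, cameraIntent (action IMAGE_CAPTURE, implicit DEFAULT category) matches the filter shown above, while an unrelated action does not.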

Getting back to the code, startActivityForResult launches the camera activity, tagging it with the request code passed in. When the root activity regains control, it uses the request code to distinguish among the activities it might have started. Examine the following callback:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {        
    switch (requestCode) {
    case REQUEST_CAMERA:
        if (resultCode == Activity.RESULT_OK) {                
            new SaveImageTask().execute(); // not yet shown
        } else {
            finish();
        }
        break;
    }
}

First, requestCode is checked to see which activity is returning control. Even though we only defined one, it’s good practice to prepare for more with a switch like this. Next, resultCode is checked to see how the overall operation panned out. SaveImageTask, created next, is fired off on success.

Create a private inner class inside IntentDemo called SaveImageTask that extends android.os.AsyncTask<Void, Void, Void>. AsyncTask prevents screen lockup by running long operations in a background thread. Hooks are supplied to perform work, before and after, on the UI thread (e.g., displaying progress indicators and the like). This simple contract saves us the headache of having to deal with threads directly. SD cards lag enough to lock the UI, so override onPreExecute to display a progress indicator before spawning the background thread.

private ProgressDialog mProgress; // dialog shown while saving

@Override
protected void onPreExecute() {
    mProgress = ProgressDialog.show(IntentDemo.this, 
        "Please wait", "Saving picture");
}

Now spawn it.

private Uri mBitmapUri; // content:// URI of the saved image

@Override
protected Void doInBackground(Void... arg) {
    File fi = new File(Environment.getExternalStorageDirectory(), "temp.jpg");
    try {
        String path = fi.getAbsolutePath();
        // Copy the photo into Android's content model; returns a content:// URI.
        String uri = 
            Images.Media.insertImage(getContentResolver(), path, null, null);
        mBitmapUri = Uri.parse(uri);
        fi.delete(); // the temp file is no longer needed
    } catch (FileNotFoundException e) {
        throw new RuntimeException(e);
    }
    return null;
}

First, the photo the camera wrote to storage is loaded. Next, Images.Media.insertImage duplicates it behind Android’s content model and returns a content://-schemed URI. Local application data is usually kept in private files and SQLite databases; however, applications that share data do so through Android’s content model. As a matter of fact, now the gallery and other applications can see our image.

Behind the scenes, content resolvers and content providers encapsulate data of all sorts behind the content model’s uniform interface. ContentResolvers examine URIs to determine which ContentProviders to forward them to in factory-like fashion. For example, the following URI resolves to the image provider:

content://media/external/images/media/39

The image provider further examines the URI to navigate to the actual resource on disk. URIs of all flavors make up intent protocols, along with the actions and extras mentioned earlier. Read more about them in the online documentation, and make applications richer by reading from and writing to the Android content model whenever possible.
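The structure of such a URI can be picked apart with standard Java. The sketch below uses java.net.URI purely for illustration; on Android itself you would use android.net.Uri, which offers getAuthority(), getPathSegments(), and friends.

```java
import java.net.URI;

// Dissecting a content:// URI with standard Java. The authority selects the
// provider; the trailing path segment is the row ID of the resource.
public class ContentUriParts {
    // Returns the trailing path segment, e.g. "39".
    public static String lastSegment(String uriString) {
        URI uri = URI.create(uriString);
        String path = uri.getPath();                 // "/external/images/media/39"
        return path.substring(path.lastIndexOf('/') + 1);
    }

    // Returns the authority that ContentResolver routes on, e.g. "media".
    public static String authority(String uriString) {
        return URI.create(uriString).getAuthority();
    }
}
```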

With that out of the way, we’re back on the UI-thread. The diagram in Figure 5 illustrates the sequence of events so far.

Figure 5 The sequence of events so far.

Next, stub out a com.google.android.maps.MapActivity called BitmapCalloutMap with this code:

public class BitmapCalloutMap extends MapActivity {
    public static final String EXTRA_LATITUDE_E6 = "lat_e6";
    public static final String EXTRA_LONGITUDE_E6 = "lon_e6";
    public static final String EXTRA_PIXELS = "pixels";

    // MapActivity declares this abstract; return false since no route is shown.
    @Override
    protected boolean isRouteDisplayed() {
        return false;
    }
}

Set up its extras, and launch its soulless shell with the explicit intent:

@Override
protected void onPostExecute(Void arg) {
    mProgress.dismiss();
    Intent i = new Intent(IntentDemo.this, BitmapCalloutMap.class);
    i.putExtra(Intent.EXTRA_STREAM, mBitmapUri);
    i.putExtra(BitmapCalloutMap.EXTRA_PIXELS, 84);
    i.putExtra(BitmapCalloutMap.EXTRA_LATITUDE_E6, (int) (48.8583 * 1E6));
    i.putExtra(BitmapCalloutMap.EXTRA_LONGITUDE_E6, (int) (2.2945 * 1E6));
    startActivity(i);
}

As stated, the intent here is explicit. It’s common practice to launch activities that live in the same application like this.
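The *_E6 extras follow the GeoPoint convention of microdegrees: degrees multiplied by 1E6 and truncated to an int, which keeps map math in integer arithmetic. A minimal sketch of the conversion (plain Java, hypothetical helper names):

```java
// Microdegree (E6) conversion as used by GeoPoint: degrees are scaled by
// 1E6 and stored as ints, avoiding floating point in map calculations.
public class E6 {
    public static int toE6(double degrees) {
        return (int) (degrees * 1E6);
    }

    public static double fromE6(int e6) {
        return e6 / 1E6;
    }
}
```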

In the next step, we implement the rest of BitmapCalloutMap.

Step 2: Display the Picture in a Callout Bubble over a Map

The BitmapCalloutMap activity is made of two parts:

  • The activity itself
  • A layer of map markers called a map overlay

The specialized overlay we build embeds a thumbnail in a callout bubble drawn using the 2D graphics API. But, before we get to the graphics programming, implement basic overlay functionality with the following class:

public class BitmapCalloutOverlay extends ItemizedOverlay<OverlayItem> {
    private static final int   BORDER_PIXELS = 6;
    private static final float TAIL_SCALE_FACTOR = 0.3f;
    private static final float MITRE = 8f;
    private OverlayItem mItem;
    private Drawable mMarker;
    public BitmapCalloutOverlay(BitmapDrawable defaultMarker) {
        super(defaultMarker);
        mMarker = defaultMarker;
    }
    public void setMarker(OverlayItem overlay) {
        mItem = overlay;
        populate();
    }
    @Override
    protected OverlayItem createItem(int i) {
        return mItem;
    }
    @Override
    public int size() {
        return mItem == null ? 0 : 1;
    }
}

The com.google.android.maps.MapView class we use later maintains an ordered collection of overlays like this one. This overlay supports one marker at a time. The marker drawable is supplied through the constructor; the marker is placed thereafter by calling setMarker and passing in an OverlayItem containing geo coordinates. createItem and size are required ItemizedOverlay callbacks that the superclass invokes after populate(); we never call them ourselves.

Currently, calling setMarker places the marker at an undesirable screen position and draws no callout bubble. Let’s correct these problems by stepping through the drawing routine, starting with its signature:

@Override
public void draw(Canvas canvas, MapView mapView, boolean shadow)

The method here takes a Canvas to draw on along with the MapView that’s holding this overlay. The shadow argument determines whether the marker casts a shadow. The super implementation of draw simply renders a marker-less map. Use it to handle the edge case of an OverlayItem not yet set.

if (mItem == null) {
    super.draw(canvas, mapView, shadow);
    return;
}

The callout bubble has a head and a tail. The head is a rounded rectangle, and the tail an upside-down isosceles triangle whose base is scaled to 30 percent of the head’s width. Calculate the callout’s dimensions from the thumbnail’s with this code:

// calculate dimensions
int imgWidth = mMarker.getIntrinsicWidth();
int imgHeight = mMarker.getIntrinsicHeight();
int headWidth = imgWidth + BORDER_PIXELS * 2;
int headHeight = imgHeight + BORDER_PIXELS * 2;
float tailLength = headWidth * TAIL_SCALE_FACTOR;

First, the thumbnail’s width and height are obtained. We soon see how the original photo is scaled down in BitmapCalloutMap before this class gets a hold of it. Back to the previous code, the thumbnail’s dimensions are then used to calculate the callout’s. Draw its path with the following code:

// draw the callout
Path callout = new Path();
// draw head
callout.addRoundRect(new RectF(0, 0, headWidth, headHeight), MITRE, MITRE, 
    Direction.CW);
// draw tail
callout.moveTo(headWidth / 2 - tailLength / 2, headHeight);
callout.rLineTo(tailLength, 0);
callout.rLineTo(-tailLength / 2, tailLength);
callout.close();

addRoundRect draws the head clockwise starting with its top-left corner. Then, the path’s endpoint is repositioned to draw the tail. Actually, these are just instructions for drawing. The actual rendering code starts with the following chunk:

// place callout at top left of screen
ShapeDrawable pathDraw = new ShapeDrawable(new PathShape(callout, headWidth, 
    headHeight));
pathDraw.setBounds(0, 0, headWidth, headHeight);
Paint paint = pathDraw.getPaint();
paint.setColor(Color.GRAY);
paint.setAntiAlias(true);

First, a ShapeDrawable wraps the Path. Next, the top-left corner of the ShapeDrawable’s bounding box, in which the path is rendered, is placed along the top-left corner of the screen. MapView contains a handy Projection object, which translates between geo coordinates and screen pixels. It assists in offsetting the callout to its geo-coordinated position on screen in the code here:

// offset callout to geocoordinates
Point p = new Point();
mapView.getProjection().toPixels(mItem.getPoint(), p);
float offsetX = p.x - headWidth / 2;
float offsetY = p.y - headHeight - tailLength;
callout.offset(offsetX, offsetY);

After the point projection, further offsetting places the tip of the tail directly on the geo point; otherwise, the center point of the callout’s bounding box is used. Figure 6 shows before and after the offset. The callout has been painted red to make it easier to see.

Figure 6 Projecting and offsetting.
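The offset arithmetic is worth checking with a worked example. The sketch below reuses the overlay’s constants with a hypothetical 84×84 thumbnail projected to screen point (200, 300); the class and method names are illustrative only.

```java
// Worked example of the callout offset math, using the constants from
// BitmapCalloutOverlay (BORDER_PIXELS = 6, TAIL_SCALE_FACTOR = 0.3f).
public class CalloutOffsets {
    static final int BORDER_PIXELS = 6;
    static final float TAIL_SCALE_FACTOR = 0.3f;

    // Returns {offsetX, offsetY} so the tail tip lands on screen point (px, py).
    public static float[] offsets(int imgWidth, int imgHeight, int px, int py) {
        int headWidth = imgWidth + BORDER_PIXELS * 2;
        int headHeight = imgHeight + BORDER_PIXELS * 2;
        float tailLength = headWidth * TAIL_SCALE_FACTOR;
        float offsetX = px - headWidth / 2;           // center the head horizontally
        float offsetY = py - headHeight - tailLength; // lift the head above the tail
        return new float[] { offsetX, offsetY };
    }
}
```

For an 84-pixel thumbnail, the head is 96×96 and the tail 28.8 pixels long, so a point at (200, 300) yields offsets of roughly (152, 175.2).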

Next calculate the thumbnail’s bounding box.

// calculate image bounding box
int left = (-imgWidth / 2);
int top = (int) (-imgHeight - tailLength) - BORDER_PIXELS;
int right = (imgWidth / 2);
int bottom = (int) -tailLength - BORDER_PIXELS;

These bounding box specifications center the thumbnail over the callout’s head. Here, everything is finally rendered:

// draw everything
pathDraw.draw(canvas);
mMarker.setBounds(left, top, right, bottom);
super.draw(canvas, mapView, false);

Now that the overlay is out of the way, we can turn our attention back to the MapActivity. The specialized MapActivity class manages the complexities of MapView, which requires a special Maps API key to run; a key can be obtained by registering on the Android Maps API signup page.

Create an XML layout file in res/layout named bitmap_callout_map.xml. Add the following code:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent" android:layout_height="fill_parent">

    <com.google.android.maps.MapView android:id="@+id/mapView"
        android:layout_width="fill_parent" 
        android:layout_height="fill_parent"
        android:apiKey="@string/map_api_key" />		
</RelativeLayout>

The RelativeLayout is required for the drawing code to work. It simply contains a MapView. Get BitmapCalloutMap started by adding the following member variables to the stub created earlier.

private MapView mMapView;
private BitmapCalloutOverlay mOverlay;

Extract the extras passed over from IntentDemo, and initialize these fields with them. Use the following code as a guide:

@Override
protected void onCreate(Bundle icicle) {
    super.onCreate(icicle);
    setContentView(R.layout.bitmap_callout_map);
    // get extras into locals
    Bundle extras = getIntent().getExtras();
    int lat = extras.getInt(EXTRA_LATITUDE_E6);
    int lon = extras.getInt(EXTRA_LONGITUDE_E6);
    int pixels = extras.getInt(EXTRA_PIXELS);
    Uri uri = extras.getParcelable(Intent.EXTRA_STREAM);
        
    // load and scale bitmap
    Bitmap bitmap = getBitmap(uri); // not yet shown        
    bitmap = scaleToPixels(bitmap, pixels); // not yet shown
    // create map overlay and map view
    mOverlay = new BitmapCalloutOverlay(new BitmapDrawable(bitmap));
    mMapView = (MapView) findViewById(R.id.mapView);
    mMapView.setClickable(true);
    mMapView.setBuiltInZoomControls(true);
    mMapView.setSatellite(true);
    mMapView.getOverlays().add(mOverlay);
    placeMarker(lat, lon); // not yet shown
}

Here, the extras are extracted into local variables used to construct the marker, overlay, and map.

Now, let’s step through the three not yet shown helper methods that perform the majority of the work. They’re small, simple to understand, and conclude the construction of the sample application. The first one loads a bitmap from the content model.

private Bitmap getBitmap(Uri uri) {
    try {
        return Images.Media.getBitmap(getContentResolver(), uri);
    } catch (IOException ioe) {
        throw new RuntimeException(ioe);
    }
}    

scaleToPixels uses a standard scaling algorithm.

private Bitmap scaleToPixels(Bitmap src, int pixels) {
    int srcWidth = src.getWidth();
    int srcHeight = src.getHeight();
    int w, h;
    // Scale the longest side down to the requested size, preserving the
    // aspect ratio in either orientation.
    if (srcWidth >= srcHeight) {
        w = pixels;
        h = pixels * srcHeight / srcWidth;
    } else {
        h = pixels;
        w = pixels * srcWidth / srcHeight;
    }
    return Bitmap.createScaledBitmap(src, w, h, true);
}

First, the bitmap’s longest side is determined and set to the value extracted from the EXTRA_PIXELS extra. A standard algebraic proportion then computes the other side before the bitmap is scaled and returned. Finally, instruct the overlay to draw itself with the placeMarker helper method:

private void placeMarker(int lat, int lon) {
    GeoPoint gp = new GeoPoint(lat, lon);
    mOverlay.setMarker(new OverlayItem(gp, "", ""));
    MapController controller = mMapView.getController();
    controller.setZoom(18);
    controller.animateTo(gp);
}

The code here probably doesn’t warrant an explanation, but just in case, it’s basically creating an overlay item, placing the overlay item on the map, and centering and zooming in on it.

Final Thoughts

Of explicit and implicit intents, the implicit variety is arguably the more interesting; explicit intents are almost easier to think of as command-pattern objects. Implicit intents decouple application components, making them easier to reuse. Moreover, the user experience benefits as new applications with better capabilities are installed on the system.
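The command-pattern analogy can be sketched in plain Java (hypothetical types, not Android APIs): like an explicit intent, the command names its receiver directly, and the caller only knows the execute() contract.

```java
// Minimal command-pattern sketch (hypothetical types, not Android APIs).
// An explicit intent similarly binds a request to a concrete target class.
public class CommandDemo {
    interface Command {
        String execute();
    }

    // Concrete command bound to a named target, like an explicit intent.
    static class ShowMapCommand implements Command {
        private final String target;
        ShowMapCommand(String target) { this.target = target; }
        public String execute() { return "launching " + target; }
    }

    // The invoker sees only the Command interface.
    public static String run(Command c) {
        return c.execute();
    }
}
```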
