Designing Mobile Interfaces
Modern handheld computers, including mobile phones, are very powerful machines. A latest-generation smartphone has a faster CPU and GPU, and more memory and storage space, than the desktop computer that I took to university. Something like the N900 has higher specs in nearly every respect, including network speed, except for one: screen size.
Because these machines are so powerful, they typically run something like a modern desktop OS. Android uses the Linux kernel. The iPhone uses the XNU kernel. MeeGo, the platform with the stupid name, uses almost the same software stack as a desktop Linux system, right up to X11 and the toolkits used for GUI programming. Both Symbian and Windows Mobile include all of the features that you'd expect from a modern operating system. Modern handhelds no longer run simple embedded operating systems.
Apple's iPhone uses almost the same stack as an Apple desktop. It differs significantly from desktop OS X in one framework, however: the Application Kit (AppKit) is replaced by UIKit on the iPhone. Apple had two reasons for making this change:
- By breaking backward compatibility, Apple could update some of the design decisions that NeXT had made for AppKit back in 1988, approaches that no longer made sense with modern hardware.
- The more important reason was that switching from AppKit to UIKit prevented developers from simply recompiling their Mac applications for the handheld platform.
A Mac application and an iPhone application can still share most of their code, but they must have different user interfaces. This fact is irritating for developers and ideal for users, because a good user interface on the desktop is often a terrible UI on something like a phone. In this article, we'll take a look at some of the differences.
Touch and Go
One of the common rules that apply when designing a user interface is Fitts' Law, which describes the cost of moving a pointer to a target area. The time taken depends on both the distance to the target and the size of the target, with the former defining the movement time and the latter defining the stopping time.
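To make that trade-off concrete, here's a minimal sketch of the usual Shannon formulation of Fitts' Law, T = a + b log2(D/W + 1). The constants a and b are fitted by regression for a particular input device; the values below are illustrative placeholders of mine, not measurements.

```swift
import Foundation

/// Predicted movement time under Fitts' Law (Shannon formulation):
/// T = a + b * log2(D / W + 1)
/// `a` and `b` are device-specific constants found by regression;
/// the defaults here are made-up numbers, used only to show the shape
/// of the relationship.
func fittsMovementTime(distance: Double,   // D: distance to the target's centre
                       width: Double,      // W: target size along the motion axis
                       a: Double = 0.1,    // intercept (seconds)
                       b: Double = 0.15)   // slope (seconds per bit)
                       -> Double {
    let indexOfDifficulty = log2(distance / width + 1)  // in bits
    return a + b * indexOfDifficulty
}

// A distant, small target costs more time than a near, large one:
print(fittsMovementTime(distance: 800, width: 20))   // ~0.90s
print(fittsMovementTime(distance: 100, width: 80))   // ~0.28s
```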
It's tempting to think that this rule doesn't apply to a touchscreen, but that's misleading. Moving a finger takes time, just as moving a mouse does, but now you need to perform the calculations in three dimensions.
This difference has some obvious effects. For example, with a mouse or a trackpad, a pointer movement and a drag take the same amount of time. With a touchscreen, a drag is faster, because it doesn't require lifting the finger off the screen and then putting it back down.
This fact makes gesture-based interfaces more interesting on mobile devices. On the desktop, making a gesture can be slower than clicking on buttons that are close to each other. On a touchscreen, the gesture is often faster.
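Wiring up a gesture on the iPhone takes only a few lines. Here's a minimal UIKit sketch (the class and action names are my own inventions) that replaces a "next page" button with a swipe:

```swift
import UIKit

class CanvasViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // On a touchscreen, a swipe is one continuous motion, so it can be
        // quicker than lifting the finger to hit a separate "next" button.
        let swipe = UISwipeGestureRecognizer(target: self,
                                             action: #selector(showNextPage))
        swipe.direction = .left
        view.addGestureRecognizer(swipe)
    }

    @objc func showNextPage() {
        // Advance to the next page of content (app-specific; stubbed here).
        print("next page")
    }
}
```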
When applying Fitts' Law on the desktop, it's common to treat objects on the screen edges as being infinitely deep along that axis, because the windowing system clips mouse movements to the edge, meaning that you can overshoot the movement by as much as you like and still hit the target. This is why Mac OS put the menu bar along the top of the screen: it's easier to hit that top edge than to hit something floating in the middle of the screen.
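You can see why overshoot costs nothing with a trivial sketch: the windowing system clamps every pointer position to the screen bounds, so any amount of overshoot along the edge axis still lands inside an edge target.

```swift
import CoreGraphics

/// The windowing system clamps the pointer to the screen bounds, so a
/// wild overshoot along one axis still lands on an edge target.
func clampToScreen(_ p: CGPoint, screen: CGRect) -> CGPoint {
    CGPoint(x: min(max(p.x, screen.minX), screen.maxX),
            y: min(max(p.y, screen.minY), screen.maxY))
}

let screen = CGRect(x: 0, y: 0, width: 1440, height: 900)
// An upward fling overshoots the menu bar by 500 points...
let overshoot = CGPoint(x: 600, y: -500)
// ...but the clamp puts the pointer exactly on the top edge,
// which is where the menu bar lives.
print(clampToScreen(overshoot, screen: screen))  // (600.0, 0.0)
```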
Does this principle apply on touchscreens? Sometimes. The screen itself is a barrier to movement in one dimension, limiting the cost of moving the finger onto the screen. What about the edges? Moving your finger through the air to the edge of the screen is about as easy as moving it to the middle of the screen. Dragging can be a different matter, depending on the design of the screen. The N900 has a small raised bevel around the edge of the screen. You can just slide your finger toward the edge and then get tactile feedback that it's time to stop. If you put drop targets around the edge of the screen, you can get some benefit from this design.
It's worth noting that the first system to use this rule effectively was a PDA from a company named after a fruit. The Newton used the screen edge as a drop target for clippings. When you wanted to copy and paste between apps, you dragged something to the edge of the screen and left it there. Later, you would drag it back. This very clever and simple bit of user interface design was somehow forgotten by the company's next attempt at producing handheld computers.
The final thing to remember about touchscreens is that most people have opaque hands. While it's possible to put a mouse pointer over something without obscuring much detail, putting your finger on the touchscreen obscures a lot of the display. Think about positioning when you're placing user interface elements. A good example is the location of onscreen keyboards. They're typically at the bottom of the screen, for a good reason: If the keyboard were at the top, your hand would hide the interface element in which you're entering the text. It's a good idea to keep controls that modify other views near the bottom of the screen.
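In UIKit terms, that means pinning such controls to the bottom of the view. Here's a minimal sketch using Auto Layout constraints, a more modern idiom than existed when UIKit first shipped, so treat it as illustrative rather than prescriptive:

```swift
import UIKit

class EditorViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Keep controls that modify the view above them near the bottom,
        // so the user's hand doesn't cover the thing it's changing.
        let toolbar = UIToolbar()
        toolbar.items = [UIBarButtonItem(barButtonSystemItem: .action,
                                         target: nil, action: nil)]
        toolbar.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(toolbar)
        NSLayoutConstraint.activate([
            toolbar.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            toolbar.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            toolbar.bottomAnchor.constraint(
                equalTo: view.safeAreaLayoutGuide.bottomAnchor)
        ])
    }
}
```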
It's also important to remember the shape of fingers, an area where Apple failed on the iPhone. The iPhone's onscreen keyboard is designed to be used with two thumbs. If you haven't used an iPhone, hold your hand flat against a tabletop and then touch the surface of the table with your thumb. You'll notice that the place where you actually touch the table surface is slightly offset to the right or left (depending on which thumb you use) from the middle of your thumb. Biasing the hit locations for the buttons on the keyboard toward the edge of the screen would have improved typing accuracy, but Apple missed this trick.
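A hypothetical correction is easy to sketch: nudge each touch point back toward the centre of the screen before hit-testing, by an amount that would have to be found experimentally. Everything below, including the 4-point bias, is my own guess at how such a correction might look, not anything Apple ships.

```swift
import CoreGraphics

/// Hypothetical thumb-typing correction: a thumb's contact point lands
/// slightly toward the nearer screen edge relative to where the user is
/// aiming, so shift the hit-test point back toward the centre.
/// The 4-point bias is an illustrative guess, not a measured value.
func correctedTouchPoint(_ touch: CGPoint,
                         screenWidth: CGFloat,
                         bias: CGFloat = 4) -> CGPoint {
    // Touches on the left half come from the left thumb and land toward
    // the left edge; nudge them right. Mirror for the right thumb.
    let fromLeftThumb = touch.x < screenWidth / 2
    let dx = fromLeftThumb ? bias : -bias
    return CGPoint(x: touch.x + dx, y: touch.y)
}

// A touch near the left edge is nudged right, toward the intended key:
print(correctedTouchPoint(CGPoint(x: 30, y: 700), screenWidth: 375))
// (34.0, 700.0)
```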