When designing any interface, the most important consideration is how it will be used. The mobile nature of a mobile device means that it's likely to be operated very differently from a desktop or even a laptop computer. The user may have only one hand free, for example, or may be holding the screen at an angle.
Often, you can infer what the user wants to do from the state of the device. For example, Google Maps Mobile sends the cell ID to the server with each request and tracks the first location each person looks at, on the assumption that this location is near the cell. (Future users of Google Maps Mobile default to the position where the last user in that cell looked first.) This strategy works very well for mobile devices, because people on the move usually want a map of their immediate surroundings. It doesn't work so well on the desktop, where users are stationary and more likely to look up a destination or some other remote location.
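The per-cell defaulting heuristic described above can be sketched roughly as follows. This is not Google's actual implementation; the class and method names are hypothetical, and a real service would persist this mapping server-side.

```python
class MapLocationDefaults:
    """Remembers, per cell ID, the first location the last user viewed,
    so later requests from the same cell can default there."""

    def __init__(self):
        self._first_view_by_cell = {}  # cell_id -> (lat, lon)

    def default_location(self, cell_id):
        """Return a default map center for this cell, or None if unknown."""
        return self._first_view_by_cell.get(cell_id)

    def record_first_view(self, cell_id, lat, lon):
        """Store the first location a user looked at from this cell."""
        self._first_view_by_cell[cell_id] = (lat, lon)


defaults = MapLocationDefaults()
defaults.record_first_view("cell-42", 51.5074, -0.1278)
print(defaults.default_location("cell-42"))  # -> (51.5074, -0.1278)
```

The key point is that the server needs no explicit positioning hardware on the client: the cell ID alone, plus observed behavior, yields a useful default.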
The position and orientation of the device indicate a lot about what the user is doing. If you put the device down flat on its back, for example, there's a good chance you're showing something to friends; if you hold it upright, you're probably using it yourself. A camera application could take advantage of this positional information by switching to a display of captured images whenever the lens points directly downward. People do sometimes aim a camera straight down, but it's relatively uncommon.
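Detecting the "flat on its back" pose typically comes down to comparing the accelerometer's gravity vector against the screen normal. The sketch below assumes a hypothetical axis convention in which z points out of the screen, so a face-up device at rest reads roughly (0, 0, +9.81); real platforms differ, and production code would also smooth the readings over time.

```python
import math

def is_face_up(ax, ay, az, tolerance_deg=25.0):
    """Return True when the device is lying roughly flat on its back.

    Assumed (hypothetical) convention: z points out of the screen,
    so gravity reads ~(0, 0, +9.81) m/s^2 when face-up on a table.
    """
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    if magnitude == 0:
        return False  # no reading; can't decide
    # Angle between the measured gravity vector and the screen normal.
    cos_angle = max(-1.0, min(1.0, az / magnitude))
    angle = math.degrees(math.acos(cos_angle))
    return angle <= tolerance_deg


print(is_face_up(0.0, 0.0, 9.81))  # flat on its back -> True
print(is_face_up(0.1, 9.7, 0.5))   # held upright -> False
```

A camera app could poll this predicate and flip between viewfinder and gallery modes, with the tolerance angle tuned so that ordinary hand wobble doesn't trigger the switch.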
One of the buzzwords in mobile interface research is multimodal interaction, which means taking input from multiple sources and producing output in several different forms. A trivial example of multimodal output is displaying a "pushed" image for a pressed button while producing an audible click. A more complex example is an app like Google's Street View, which shows an image of the view from the current location onscreen and vibrates when you rotate the device too far away from the correct route.
Modern mobile devices very often have multitouch screens, GPS or similar position sensors, six-axis accelerometers, and frequently other sensors besides, which makes them ideal platforms for multimodal input. It's worth keeping all of these possibilities in mind when you're creating a new user interface: driving a particular interaction from one of these sensors, rather than from the touchscreen alone, may be a much better option.