
Contents

  1. Definition and Scope
  2. A Brief History of Augmented Reality
  3. Examples
  4. Related Fields
  5. Summary

A Brief History of Augmented Reality

While one could easily go further back in time to find examples in which informational overlays were layered on top of the physical world, suffice it to say that the first annotations of the physical world with computer-generated information occurred in the 1960s. Ivan Sutherland can be credited with starting the field that would eventually become both VR and AR. In 1965, he postulated the “ultimate display” in an essay that contains the following famous quote:

  • The ultimate display would, of course, be a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal. With appropriate programming such a display could literally be the Wonderland into which Alice walked.

    Sutherland’s [1965] essay includes more than just an early description of immersive displays, however. It also contains a quote that is less often discussed, but that clearly anticipates AR:

  • The user of one of today’s visual displays can easily make solid objects transparent—he can “see through matter!”

Shortly thereafter, Sutherland constructed the first VR system. In 1968, he finished the first head-mounted display [Sutherland 1968]. Because of its weight, it had to be suspended from the ceiling and was appropriately nicknamed “Sword of Damocles” (Figure 1.2). This display already included head tracking and used see-through optics.

Figure 1.2

Figure 1.2 The Sword of Damocles was the nickname of the world’s first head-mounted display, built in 1968. Courtesy of Ivan Sutherland.

Advances in computing performance of the 1980s and early 1990s were ultimately required for AR to emerge as an independent field of research. Throughout the 1970s and 1980s, Myron Krueger, Dan Sandin, Scott Fisher, and others had experimented with many concepts of mixing human interaction with computer-generated overlays on video for interactive art experiences. Krueger [1991], in particular, demonstrated collaborative interactive overlays of graphical annotations among participant silhouettes in his Videoplace installations around 1974.

The year 1992 marked the birth of the term “augmented reality.” This term first appeared in the work of Caudell and Mizell [1992] at Boeing, which sought to assist workers in an airplane factory by displaying wire bundle assembly schematics in a see-through HMD (Figure 1.3).

Figure 1.3

Figure 1.3 Researchers at Boeing used a see-through HMD to guide the assembly of wire bundles for aircraft. Courtesy of David Mizell.

In 1993, Feiner et al. [1993a] introduced KARMA, a system that incorporated knowledge-based AR. This system was capable of automatically inferring appropriate instruction sequences for repair and maintenance procedures (Figure 1.4).

Figure 1.4

Figure 1.4 (top) KARMA was the first knowledge-driven AR application. (bottom) A user with an HMD could see instructions on printer maintenance. Courtesy of Steve Feiner, Blair MacIntyre, and Doreé Seligmann, Columbia University.

Also in 1993, Fitzmaurice created the first handheld spatially aware display, which served as a precursor to handheld AR. The Chameleon consisted of a tethered handheld liquid-crystal display (LCD) screen. The screen showed the video output of a contemporary SGI graphics workstation and was spatially tracked with a magnetic tracking device. This system was capable of showing contextual information as the user moved the device around—for example, giving detailed information about a location on a wall-mounted map.

In 1994, State et al. at the University of North Carolina at Chapel Hill presented a compelling medical AR application, capable of letting a physician observe a fetus directly within a pregnant patient (Figure 1.5). Even though the accurate registration of computer graphics on top of a deformable object such as a human body remains a challenge today, this seminal work hints at the power of AR for medicine and other delicate tasks.

Figure 1.5

Figure 1.5 View inside the womb of an expectant mother. Courtesy of Andrei State, UNC Chapel Hill.

Around the mid-1990s, Steve Mann at the MIT Media Lab implemented, and experimented with, a “reality mediator”—a waist-bag computer with a video see-through HMD (a modified VR4 by Virtual Research Systems) that enabled the user to augment, alter, or diminish visual reality. Through the WearCam project, Mann [1997] explored wearable computing and mediated reality. His work ultimately helped establish the academic field of wearable computing, which, in those early days, had a lot of synergy with AR [Starner et al. 1997].

In 1995, Rekimoto and Nagao created the first true—albeit tethered—handheld AR display. Their NaviCam was connected to a workstation, but was outfitted with a forward-facing camera. From the video feed, it could detect color-coded markers in the camera image and display information on a video see-through view.

In 1996, Schmalstieg et al. developed Studierstube, the first collaborative AR system. With this system, multiple users could experience virtual objects in the same shared space. Each user had a tracked HMD and could see perspectively correct stereoscopic images from an individual viewpoint. Unlike in multi-user VR, natural communication cues, such as voice, body posture, and gestures, were not affected in Studierstube, because the virtual content was added to a conventional collaborative situation in a minimally obtrusive way. One of the showcase applications was a geometry course [Kaufmann and Schmalstieg 2003], which was successfully tested with actual high school students (Figure 1.6).

Figure 1.6

Figure 1.6 One of the applications of the Studierstube system was teaching geometry in AR to high school students. Courtesy of Hannes Kaufmann.

From 1997 to 2001, the Japanese government and Canon Inc. jointly funded the Mixed Reality Systems Laboratory as a temporary research company. This joint venture was the largest industrial research facility for mixed reality (MR) research up to that point [Tamura 2000] [Tamura et al. 2001]. Among its most notable achievements was the design of the first coaxial stereo video see-through HMD, the COASTAR. Many of the activities undertaken in the lab were also directed toward the digital entertainment market (Figure 1.7), which plays a very prominent role in Japan.

Figure 1.7

Figure 1.7 RV-Border Guards was a multiuser shooting game developed in Canon’s Mixed Reality Systems Laboratory. Courtesy of Hiroyuki Yamamoto.

In 1997, Feiner et al. developed the first outdoor AR system, the Touring Machine (Figure 1.8), at Columbia University. The Touring Machine uses a see-through HMD with GPS and orientation tracking. Delivering mobile 3D graphics via this system required a backpack holding a computer, various sensors, and an early tablet computer for input [Feiner et al. 1997] [Höllerer et al. 1999b].

Figure 1.8

Figure 1.8 The Touring Machine was the first outdoor AR system (left). Image of the Situated Documentaries AR campus tour guide running on a 1999 version of the Touring Machine (right). Courtesy of Columbia University.
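The core registration step in a GPS-plus-orientation system like this can be illustrated with a small sketch. This is not the Touring Machine's actual code; it is a simplified flat-Earth model with hypothetical coordinates and function names, showing how a bearing from the user to a point of interest is mapped to a horizontal screen coordinate when it falls inside the display's field of view.

```python
import math

def bearing_deg(lat, lon, poi_lat, poi_lon):
    """Approximate bearing (degrees clockwise from north) from the user to a
    point of interest, using a flat-Earth approximation valid over short
    distances."""
    d_north = poi_lat - lat
    d_east = (poi_lon - lon) * math.cos(math.radians(lat))
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

def project_label(heading_deg, bearing, fov_deg=60.0, screen_w=640):
    """Map a world bearing to a horizontal pixel coordinate, or None if the
    point of interest lies outside the horizontal field of view."""
    # Signed angle of the POI relative to the view direction, in (-180, 180].
    rel = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    if abs(rel) > fov_deg / 2.0:
        return None  # not visible at this head orientation
    # Linear mapping: -fov/2 -> left edge, +fov/2 -> right edge.
    return round((rel / fov_deg + 0.5) * screen_w)

# Hypothetical user position near a campus, head turned northeast (45 deg).
b = bearing_deg(40.8075, -73.9626, 40.8090, -73.9600)
x = project_label(heading_deg=45.0, bearing=b)
```

A real system would also account for distance (to scale or cull labels) and for pitch and roll from the orientation tracker; this sketch handles only the azimuth.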

Just one year later, in 1998, Thomas et al. published their work on the construction of an outdoor AR navigation system, Map-in-the-Hat. Its successor, Tinmith (few people know that this name is actually an acronym for “This is not map in the hat”), evolved into a well-known experimental platform for outdoor AR. This platform was used for advanced applications, such as 3D surveying, but is most famous for delivering the first outdoor AR game, ARQuake (Figure 1.9). This game, a port of the popular first-person shooter Quake to Tinmith, places the user in the midst of a zombie attack in a real parking lot.

Figure 1.9

Figure 1.9 Screenshot of ARQuake, the first outdoor AR game. Courtesy of Bruce Thomas and Wayne Piekarski.

In the same year, Raskar et al. [1998] at the University of North Carolina at Chapel Hill presented the Office of the Future, a telepresence system built around the idea of structured light-scanning and projector-camera systems. Although the required hardware was not truly practical for everyday use at the time, related technologies, such as depth sensors and camera-projection coupling, play a prominent role in AR and other fields today.

Until 1999, no AR software was available outside specialized research labs. This situation changed when Kato and Billinghurst [1999] released ARToolKit, the first open-source software platform for AR. It featured a 3D tracking library using black-and-white fiducials, which could easily be printed on a laser printer (Figure 1.10). The clever software design, in combination with the increased availability of webcams, made ARToolKit widely popular.

Figure 1.10

Figure 1.10 A person holding a square marker of ARToolKit, the popular open-source software framework for AR. Courtesy of Mark Billinghurst.
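The idea behind such square fiducial tracking can be sketched in a few lines. This is not ARToolKit's actual implementation (which also finds the quad's corners and estimates a camera pose from them); it is a deliberately minimal illustration of the decoding step: binarize the image, verify a solid black border, and read the interior pattern as a marker identity. The image format and threshold value are assumptions for the example.

```python
def threshold(gray, cutoff=128):
    """Binarize a grayscale image (nested lists): 1 = black, 0 = white."""
    return [[1 if px < cutoff else 0 for px in row] for row in gray]

def decode_marker(binary):
    """Decode a small binary patch as a square fiducial.

    Returns the interior pattern as an integer ID if the outer ring is
    entirely black (as with ARToolKit-style markers), else None.
    """
    n = len(binary)
    for i in range(n):
        for j in range(n):
            on_border = i in (0, n - 1) or j in (0, n - 1)
            if on_border and binary[i][j] != 1:
                return None  # border must be solid black
    # Read interior cells row by row into a bit pattern.
    bits = [binary[i][j] for i in range(1, n - 1) for j in range(1, n - 1)]
    return sum(b << k for k, b in enumerate(bits))

# A 4x4 marker: dark border, interior pattern [[black, white], [white, black]].
image = [
    [ 10,  10,  10,  10],
    [ 10,  20, 240,  10],
    [ 10, 240,  20,  10],
    [ 10,  10,  10,  10],
]
marker_id = decode_marker(threshold(image))
```

In practice the tracker must first locate and rectify the marker quad in a cluttered camera frame and disambiguate its four possible rotations; the printed pattern is designed to make that disambiguation unique.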

In the same year, Germany’s Federal Ministry for Education and Research initiated a €21 million program for industrial AR, called ARVIKA (Augmented Reality for Development, Production, and Servicing). More than 20 research groups from industry and academia worked on developing advanced AR systems for industrial application, in particular in the German automotive industry. This program raised the worldwide awareness of AR in professional communities and was followed by several similar programs designed to enhance industrial application of the technology.

Another noteworthy idea also appeared in the late 1990s: IBM researcher Spohrer [1999] published an essay on Worldboard, a scalable networked infrastructure for hyperlinked spatially registered information, which Spohrer had first proposed while he was working with Apple’s Advanced Technology Group. This work can be seen as the first concept for an AR browser.

After 2000, cellular phones and mobile computing began evolving rapidly. In 2003, Wagner and Schmalstieg presented the first handheld AR system running autonomously on a “personal digital assistant”—a precursor to today’s smartphones. One year later, the Invisible Train [Pintaric et al. 2005], a multiplayer handheld AR game (Figure 1.11), was experienced by thousands of visitors at the SIGGRAPH Emerging Technologies show floor.

Figure 1.11

Figure 1.11 The Invisible Train was a handheld AR game featuring virtual trains on real wooden tracks. Courtesy of Daniel Wagner.

It took several years, until 2008, for the first truly usable natural feature tracking system for smartphones to be introduced [Wagner et al. 2008b]. This work became the ancestor of the popular Vuforia toolkit for AR developers. Other noteworthy achievements in recent years in the area of tracking include the parallel tracking and mapping (PTAM) system of Klein and Murray [2007], which can track without preparation in unknown environments, and the KinectFusion system developed by Newcombe et al. [2011a], which builds detailed 3D models from an inexpensive depth sensor. Today, AR developers can choose among many software platforms, but these model systems continue to represent important directions for researchers.
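What distinguishes natural feature tracking from marker tracking is that the system locates patches of the scene's own texture rather than artificial fiducials. The following sketch is not how PTAM or Vuforia work internally (they use keypoint detectors, descriptors, and mapping); it is a brute-force patch search over a tiny synthetic image, illustrating the underlying matching objective in its simplest form.

```python
def ssd(patch_a, patch_b):
    """Sum of squared differences between two equal-sized patches."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

def find_patch(image, patch):
    """Exhaustively search for the position where `patch` best matches `image`.

    Returns the (row, col) of the top-left corner with the lowest SSD score.
    Real trackers replace this exhaustive search with keypoint detection and
    descriptor matching, but the matching objective is the same in spirit.
    """
    ph, pw = len(patch), len(patch[0])
    best, best_pos = None, None
    for r in range(len(image) - ph + 1):
        for c in range(len(image[0]) - pw + 1):
            window = [row[c:c + pw] for row in image[r:r + ph]]
            score = ssd(window, patch)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

# Tiny synthetic frame with a distinctive 2x2 texture at row 1, column 2.
frame = [
    [0, 0, 0, 0, 0],
    [0, 0, 9, 5, 0],
    [0, 0, 5, 9, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 5], [5, 9]]
position = find_patch(frame, template)
```

Systems such as PTAM additionally build a map of such features while tracking, and KinectFusion replaces intensity patches with dense depth measurements; both go far beyond this per-frame search.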
