Game Programming Algorithms and Techniques: Overview
- Evolution of Video Game Programming
- The Game Loop
- Time and Games
- Game Objects
- Summary
- Review Questions
- Additional References
Evolution of Video Game Programming
The first commercial video game, Computer Space, was released in 1971. Created by future Atari founders Nolan Bushnell and Ted Dabney, the game was not powered by a traditional computer. The hardware had no processor or RAM; it was simply a state machine built from several transistors. All of the logic of Computer Space had to be implemented entirely in hardware.
But when the Atari Video Computer System (Atari 2600) exploded onto the scene in 1977, developers were given a standardized platform for games. This is when video game creation became more about programming software than designing complex hardware. Though games have changed a great deal since the early Atari titles, some of the core programming techniques developed during that era are still used today. Unlike most of the book, this section presents no algorithms. But before the programming begins, it’s good to have a bit of context on how the video game industry arrived at its current state.
Although the focus of this section is on home console game development, the transitions described also occurred in computer game development. However, these transitions may have occurred a little earlier, because computer game technology is usually a couple of years ahead of console game technology. This is because when a console is released, its hardware is locked for the five-plus years it remains the “current generation.” Computer hardware, on the other hand, improves continuously at a dizzying pace. This is why when PC-focused titles such as Crysis are released, their graphics technology can best that of many console games. That being said, the advantage of a console’s locked hardware specification is that it allows programmers to become intimately familiar with the system over the course of several years. This leads to late-generation titles such as The Last of Us that present a graphical fidelity rivaling that of even the most impressive PC titles.
In any event, console gaming really did not take off until the original Atari was released in 1977. Prior to that, there were several home gaming systems, but they were very limited. Each came with a couple of games preinstalled, and those were the only titles the system could play. The video game market really opened up once cartridge-based games became possible.
Atari Era (1977–1985)
Though the Atari 2600 was not the first generalized gaming system, it was the first extraordinarily successful one. Unlike games for modern consoles, most games for the Atari were created by a single individual who was responsible for all the art, design, and programming. Development cycles were also substantially shorter—even the most complicated games were finished in a matter of months.
Programmers in this era also needed to have a much greater understanding of the low-level operations of the hardware. The processor ran at 1.1 MHz, and there were only 128 bytes of RAM. With these limitations, using a high-level programming language such as C was impractical for performance reasons, which meant that games had to be written entirely in assembly. To make matters worse, debugging was wholly up to the developer; there were no development tools and no software development kit (SDK).
But in spite of these technical challenges, the Atari was a resounding success. One of the more technically advanced titles, Pitfall!, sold over four million copies. Designed by David Crane and released in 1982, it was one of the first Atari games to feature an animated human running. In a fascinating GDC 2011 postmortem panel, listed in the references, Crane describes the development process and technical challenges that drove Pitfall!.
NES and SNES Era (1985–1995)
In 1983, the North American video game market suffered a dramatic crash. Though there were inarguably several contributing factors, the largest might have been the saturation of the market. There were dozens of gaming systems available and thousands of games, some of which were notoriously poor, such as the Atari port of Pac-Man or the infamous E.T. movie tie-in.
The release of the Nintendo Entertainment System in 1985 is largely credited with bringing the industry back on track. Because the NES was noticeably more powerful than the Atari, it required more man-hours to create games. Many of the titles in the NES era required a handful of programmers; the original Legend of Zelda, for instance, had three credited programmers.
The SNES continued this trend of larger programming teams. One necessity that inevitably pops up as programming teams become larger is some degree of specialization. This helps ensure that programmers are not stepping on each other’s toes by trying to write code for the same part of the game at the same time. For example, 1990’s Super Mario World had six programmers in total. The specializations included one programmer who was solely responsible for Mario, and another solely for the map between the levels. Chrono Trigger (1995), a more complex title, had a total of nine programmers; most of them were also in specialized roles.
Games for the NES and SNES were still written entirely in assembly, because the hardware still had a relatively small amount of memory. However, Nintendo did provide development kits with some debugging functionality, so developers were not completely in the dark as they had been with the Atari.
PlayStation/PlayStation 2 Era (1995–2005)
The release of the PlayStation and N64 in the mid-1990s finally brought high-level programming languages to console development. Games for both platforms were primarily written in C, although assembly subroutines were still used for performance-critical parts of the code.
The productivity gains of using a higher-level programming language may partly explain why team sizes did not grow during the initial years of this era. Most early games still had only eight to ten programmers in total. Even relatively complex games, such as 2001’s Grand Theft Auto III, had engineering teams of roughly that size.
But while the earlier titles may have had roughly the same number of programmers as the later SNES games, by the end of this era teams had become comparatively large. For example, 2004’s Full Spectrum Warrior, an Xbox title, had roughly 15 programmers in total, many of whom were in specialized roles. But this growth was minimal compared to what was to come.
Xbox 360, PS3, and Wii Era (2005–2013)
The first consoles to truly support high definition caused game development to diverge along two paths. AAA titles have become massive operations with equally massive teams and budgets, whereas independent titles have gone back to the much smaller teams of yesteryear.
For AAA titles, the growth has been staggering. For example, 2008’s Grand Theft Auto IV had a core programming team of about 30, with an additional 15 programmers from Rockstar’s technology team. But that team size would be considered tame compared to more recent titles—2011’s Assassin’s Creed: Revelations had a programming team with a headcount well over 75.
But for independent developers, digital distribution platforms have been a big boon. With storefronts such as XBLA, PSN, Steam, and the iOS App Store, it is possible to reach a wide audience of gamers without the backing of a traditional publisher. The scope of these independent titles is typically much smaller than that of AAA ones, and in several ways their development is more similar to earlier eras. Many indie games are made by teams of five or fewer, and at some companies a single individual is responsible for all the programming, art, and design, essentially completing the full circle back to the Atari era.
Another big trend in game programming has been the move toward middleware, or libraries that implement solutions to common game programming problems. Some middleware solutions are full game engines, such as Unreal and Unity; others implement only a specific subsystem, such as Havok Physics. The advantage of middleware is that it can save time and money, because fewer developers need to be allocated to that particular system. However, that advantage can become a disadvantage if the game’s design calls for something outside the middleware’s core competencies.
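As a concrete illustration of that trade-off, here is a minimal C++ sketch (not from this chapter; all names, including PhysicsWorld, StubPhysicsWorld, Vector3, Step, and RayCast, are hypothetical) of one common way teams hedge against middleware lock-in: gameplay code talks only to a thin interface the team owns, and a single backend class knows which middleware sits behind it.

```cpp
// Hypothetical sketch: isolating a physics middleware behind an interface
// the game team owns. Swapping middleware later means writing a new
// subclass, not rewriting gameplay code.
struct Vector3 { float x = 0.0f, y = 0.0f, z = 0.0f; };

class PhysicsWorld
{
public:
    virtual ~PhysicsWorld() = default;
    // Advance the physics simulation by deltaTime seconds.
    virtual void Step(float deltaTime) = 0;
    // Returns true if the segment from start to end hits any collider.
    virtual bool RayCast(const Vector3& start, const Vector3& end) = 0;
};

// One possible backend; only this class would include the middleware's
// headers and forward these calls to its API (omitted here).
class StubPhysicsWorld : public PhysicsWorld
{
public:
    void Step(float deltaTime) override { /* forward to middleware */ }
    bool RayCast(const Vector3& start, const Vector3& end) override
    {
        return false; // placeholder result
    }
};
```

Even with such a layer, the disadvantage noted above still applies: if the design needs something the middleware simply cannot do, no amount of abstraction will hide that gap.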
The Future
Any discussion of the future would be incomplete without acknowledging mobile and web-based platforms as increasingly important for games. Mobile device hardware has improved at a rapid pace, and new tablets have performance characteristics exceeding those of the Xbox 360 and PS3. As a result, more and more 3D games (the primary focus of this book) are being developed for mobile platforms.
But traditional gaming consoles aren’t going anywhere any time soon. At the time of writing, Nintendo has already launched its new console (the Wii U), and by the time you read this, both Microsoft’s Xbox One and Sony’s PlayStation 4 will also have been released. AAA games for these platforms will undoubtedly have ever-larger teams, and video game expertise will become increasingly fractured as more and more game programmers are required to focus on specializations. However, because both the Xbox One and PS4 will allow self-publishing, independent developers will also have a full seat at the table. The future is both exciting and bright for the games industry.
What’s interesting is that although much has changed in game programming over the years, many concepts from the earlier eras still carry over today. In the rest of this chapter, we’ll cover concepts that, on a basic level, have not changed in over 20 years: the game loop, management of time, and game object models.