Secure By Design? Techniques and Frameworks You Need to Know for Secure Application Development
- Dec 19, 2012
The history of software vulnerabilities and exploitation shows a slowly changing landscape of threats. Early attacks focused on the underlying infrastructure of networked systems: the operating system itself and the network stack beneath it.
With the introduction of Windows NT and Windows 95, a slew of attacks against the implementation of the IP stack appeared, escalating through the late '90s and into 2000. Many of these attacks caused the simple Blue Screen of Death (BSOD), which forced users to reboot the system.
One of the most common and best-remembered of these attacks was WinNuke, which appeared in mid-1997. This remote denial-of-service attack could be launched against any IP address, and systems running Windows 95, NT, and even Windows 3.1 were vulnerable.
The attack used a simple out-of-band (OOB) TCP packet with the urgent (URG) flag set. It became a very common attack among online gamers, who would knock their opponents offline in the middle of a game.
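The mechanism is simple enough to sketch. The snippet below (a minimal illustration only, run over a local loopback connection rather than against any victim, with an arbitrary port and payload) shows how sending a byte with MSG_OOB sets the TCP URG flag and urgent pointer, the same "out-of-band" feature that vulnerable Windows stacks mishandled on port 139:

```python
import select
import socket

# Minimal sketch of TCP out-of-band (urgent) data, the mechanism WinNuke
# abused against NetBIOS port 139. Both ends here are local loopback
# sockets; nothing is attacked.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # any free local port (WinNuke targeted 139)
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

# MSG_OOB sets the URG flag and points the TCP urgent pointer at this byte.
cli.send(b"!", socket.MSG_OOB)

# Urgent data is reported as an "exceptional" condition on the socket.
select.select([], [], [conn], 5.0)
urgent = conn.recv(1, socket.MSG_OOB)
print(urgent)

for s in (cli, conn, srv):
    s.close()
```

On a patched, modern stack the urgent byte is simply delivered out of band; the WinNuke-era bug was in how the receiving end's TCP implementation handled it.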
This was also one of the earliest examples of a simple "point-and-click" exploit. You could download a small Windows program that presented you with a prompt for an IP address. You entered the IP address of your target and clicked OK; if the target was vulnerable, it was knocked offline immediately. One particularly nasty version of this was WinGenocide, which could knock out an entire Class C subnet of Windows machines with a single click.
There were also countless attacks against the various built-in Windows and Unix services that shipped as part of the base installation. As Internet access became more mainstream, more and more people with no background in network technology were going online. Windows was the platform of choice for this early Internet generation, and due to its widespread use, the attacks were numerous.
The early implementations of Windows Internet Information Server (IIS) were also a primary target of attackers. A very prominent attack in the latter part of 1999 was a buffer overflow against IIS4. The vulnerability was originally discovered and reported by eEye, which released a proof-of-concept exploit that would crash the IIS service with a buffer overflow.
The attack string was loaded into memory so that the payload would then execute on the target server. A customized version of netcat opened a listener on port 99, providing a telnet session that led to a C:\> prompt with system-level permissions. The script kiddies went wild with this exploit, and web servers were compromised by the thousands.
All these simplified attacks finally led people and companies to realize just how vulnerable they were on the Internet. Sadly, the changes to their security posture didn't come quickly. Systems would remain unpatched and vulnerable for months or even years; many systems weren't patched until they were actually attacked. This eventually gave way to the current practice of patching quickly to outpace attacks. Microsoft even began to roll out patches on a regular schedule (known as Black Tuesday or Patch Tuesday, the second Tuesday of every month) so that users and companies could better plan and schedule their patch cycles.
Now, the threat landscape has shifted dramatically over the past few years. We still race to patch our systems when new vulnerabilities are announced. To be fair, companies such as Microsoft have improved their software development and patch processes to the point where the number of critical vulnerabilities has been diminishing.
A recent survey by the security company Kaspersky indicates that Microsoft doesn't have a single product on its Top 10 Vulnerabilities list. The platform, operating systems, and overall infrastructure have become more mature, stable, and resilient. If these companies are seeing a reduction in vulnerabilities, why does the number of overall attacks continue to multiply?
As operating system vendors have eliminated many of their security problems, hackers have figured out that enterprises don't specialize in writing secure applications. So they've started to pound away at the application layer, including home-grown Web apps.
Shifting Focus of Attacks
As software vendors have improved their craft, and system administrators have fought to strengthen their defenses, the "hacking" community has also matured and changed. There used to be huge numbers of website defacements, where attackers would exploit some simple vulnerability, alter the website, and tag it with their hacker name, or sometimes post a politically motivated rant. Several defacement mirror sites archived these attacks over many years.
Over the years, we've seen a shift from this type of random drive-by attack, to early Distributed Denial of Service (DDoS) attacks, and more recently to highly focused attacks aimed at obtaining large quantities of personal information. This has led to a new trend in data breaches, with sites now tracking breaches in much the same way web defacements were tracked before. While this article was being written, a hacking group posted a list of 1.6 million accounts it says it has compromised, claiming access to "vulnerable" servers at the Pentagon, DHS, and a number of DoD contractors.
These new attacks are far more harmful in terms of the overall cost and scope of the exposure, the financial impact on individual victims, and the potential damage to corporate reputations and trust, which in turn leads to lost business.
Organizations are now so concerned about these data breaches that they track their actual cost. The latest report I could find puts the average cost of a data breach at roughly $194 per record. Imagine the impact on Sony, which has had multiple incidents affecting roughly 100 million records combined.
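Those two figures make for a sobering back-of-envelope calculation (a rough illustration only; the per-record figure is an industry average, not Sony's actual cost):

```python
# Rough breach-cost estimate from the figures quoted above.
cost_per_record = 194          # USD per exposed record (industry-average figure)
records_exposed = 100_000_000  # approximate combined Sony exposure

total_cost = cost_per_record * records_exposed
print(f"${total_cost:,}")      # $19,400,000,000
```

Even if the real number lands well below that nineteen-billion-dollar ceiling, the scale explains why breach cost is now a board-level metric.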
Attackers have shifted their focus to the source of all this information. Attacks now concentrate on database architectures and the home-grown applications used to access and manage that information, and Web-based application attacks have been on the rise for several years. So now that the onus is on contractors and application developers to build better software, exactly how do they go about it?