Four Rules for Surviving an Amazon EC2 Outage
If you followed the news or the Twitterverse on April 21, 2011, you were sure to have heard the troubling story that major Internet sites such as FourSquare, Reddit, Quora, and HootSuite were brought down by the Amazon EC2 outage. The common thread in these failures was an EC2 availability zone where a simple error during a network change triggered the problem. An incorrect traffic shift left the primary and secondary elastic block storage (EBS) nodes isolated, each node believing the other had failed. When they were reconnected, they rapidly searched for free space to re-mirror, which exhausted spare capacity and led to a "re-mirroring storm," in turn causing the services provisioned within this zone to fail or become intermittently available. These failures prompted many to claim that the cloud simply wasn’t “production ready.” The real problem here isn’t the cloud, but rather how systems are designed to work in the cloud. The proof of this statement is that some sites, such as ShareThis, which runs 100% of their infrastructure on Amazon’s cloud services--a fair amount of it in the affected availability zone--didn’t experience any downtime.
ShareThis, according to ComScore data from March 2011, is the largest distributed content media network in the United States, reaching nearly 172 million unique U.S. visitors and over 400 million people per month worldwide. ShareThis offers an innovative sharing platform for social audiences, including publishers, advertisers, agencies, and consumers. They do this all from Amazon’s cloud, using EC2, EBS, and S3. Handling the ShareThis volume, span, and reach is not simple and requires a number of components to be successful. These components include Akamai (as a CDN), LAMP, nginx, Memcached, Membase (Couchbase), MongoDB, Cassandra, Hadoop, and several others.
In the interest of full disclosure, one of the authors is a member of the board of directors for ShareThis. We’re going to use ShareThis to explain how they survived the Amazon EC2 outage and how, by following a few simple rules, you can survive the next outage of one of your vendors. Our newest book, Scalability Rules, uses a rule-based format that we are going to reference, but we’ll fully explain the rules we use in this article.
Rule 1: Design to Clone Things
The first rule is “Design to Clone Things.” This is often called horizontal scaling and is the duplication of services or databases to spread transaction load across multiple physical or virtual servers. Any site with a reasonable amount of traffic has implemented multiple front-end web servers to handle the traffic, which is an example of horizontal scaling. Many sites that are dependent on their database for retrieval and storage of information for web pages use horizontal scaling of their databases as well. If your database is MySQL, this is done through Master-Slave replication. Your application writes to one master database, which replicates the data to one or more slave databases from which the application reads data. ShareThis has hundreds of front-end web servers handling their traffic and makes use of MySQL Master-Slave replication to spread the load of their database across multiple instances.
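As a rough illustration (ours, not ShareThis's actual code), here is a minimal Python sketch of the read/write split that master-slave replication enables. The `Database` class is a hypothetical in-memory stand-in for real MySQL connections:

```python
import itertools

class Database:
    """Stand-in for a MySQL connection; stores rows in a dict."""
    def __init__(self, name):
        self.name = name
        self.rows = {}

class ReplicatedStore:
    def __init__(self, master, slaves):
        self.master = master
        self.slaves = slaves
        self._next_slave = itertools.cycle(slaves)  # round-robin reads

    def write(self, key, value):
        # All writes go to the single master...
        self.master.rows[key] = value
        # ...which replicates to each slave (done synchronously here for
        # brevity; real MySQL replication is asynchronous).
        for slave in self.slaves:
            slave.rows[key] = value

    def read(self, key):
        # Reads are spread across the slave replicas.
        return next(self._next_slave).rows.get(key)

store = ReplicatedStore(Database("master"),
                        [Database("slave-1"), Database("slave-2")])
store.write("user:42", {"name": "alice"})
print(store.read("user:42"))
```

Because each read rotates to the next replica, adding slaves increases read capacity without touching the write path, which is exactly what makes this form of cloning attractive for read-heavy sites.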
Getting rid of any single point of failure (SPOF) in your architecture by scaling horizontally is the first step in surviving outages. Our mantra is “everything fails,” so when cloning, look to keep devices geographically separated whenever possible. ShareThis had their systems spread across both the U.S. East and U.S. West data centers; roughly 15% of their servers were within the 1c availability zone that failed in U.S. East. Because their databases and servers were spread across the availability zones in U.S. East and further cloned to U.S. West, the size of the potential impact was reduced significantly.
Rule 2: Use Databases Appropriately
The second rule to follow is “Use Databases Appropriately.” Relational database management systems (RDBMSs) such as Oracle and MySQL are based on the relational model introduced by Edgar F. Codd in his 1970 paper, “A Relational Model of Data for Large Shared Data Banks.” Most RDBMSs provide two huge benefits for storing data: the guarantee of transactional integrity through ACID properties (Atomicity, Consistency, Isolation, and Durability) and the relational structure within and between tables. Guaranteeing that transactions are written to multiple database nodes, such as in a MySQL NDB or Oracle RAC cluster, requires synchronous replication that is difficult to scale beyond a couple of nodes. The relational structure within and between tables also makes it difficult to split the database through actions such as sharding or partitioning. To split tables across different databases, a simple query that joined two tables in a single database must be converted into two separate queries, with the joining of the data taking place in the application.
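To make that application-side join concrete, here is a hedged Python sketch. Plain dictionaries stand in for the two now-separate databases, and the table and field names are invented for illustration:

```python
# Database A holds the "users" table; database B holds the "orders" table.
users_db = {1: {"name": "alice"}, 2: {"name": "bob"}}
orders_db = [{"user_id": 1, "item": "book"},
             {"user_id": 1, "item": "pen"},
             {"user_id": 2, "item": "mug"}]

def orders_with_names():
    # Query 1: fetch the orders from database B.
    orders = list(orders_db)
    # Query 2: fetch only the referenced users from database A.
    needed_ids = {o["user_id"] for o in orders}
    users = {uid: users_db[uid] for uid in needed_ids}
    # The JOIN that a single database would have done now happens here,
    # in the application.
    return [{"item": o["item"], "name": users[o["user_id"]]["name"]}
            for o in orders]

print(orders_with_names())
```

The extra round trip and the in-application merge are the price of splitting the tables; the payoff is that each database can now scale, fail, and be placed independently.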
Data that requires transactional integrity or relationships with other data is very well suited to an RDBMS, but there is often data within your system that requires neither. Using an RDBMS for this data means incurring the overhead without the benefits. Alternative persistent storage systems include file systems such as Google File System, MogileFS, and Ceph, depending on the nature of the data you are storing. Another alternative to an RDBMS is a NoSQL solution. Technologies that fall into this category are often subdivided into key-value stores, extensible record stores, and document stores. These are, by design, much easier to scale within or across datacenters than a traditional RDBMS. There is no universally agreed classification of these technologies, but in general, key-value stores have a single key-value index for data, extensible record stores use a row and column data model that can be split across nodes, and document stores use a multi-indexed object model that can be aggregated into collections.
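As a toy illustration of those three data models (our own, using plain Python structures rather than any particular product):

```python
# Key-value store: a single key-value index.
kv_store = {"session:9f3a": {"user_id": 42, "expires": 1714000000}}

# Extensible record store: rows and column families that can be split
# (sharded) across nodes -- modeled here as (row_key, column) pairs.
record_store = {("user:42", "profile:name"): "alice",
                ("user:42", "activity:last_share"): "2011-04-21"}

# Document store: multi-indexed documents aggregated into collections.
document_store = {"shares": [  # a "shares" collection
    {"_id": 1, "user": "alice", "channel": "twitter"},
    {"_id": 2, "user": "bob", "channel": "email"},
]}

# Unlike a key-value store, a document store can answer queries on any
# indexed field, not just the primary key:
twitter_shares = [d for d in document_store["shares"]
                  if d["channel"] == "twitter"]
print(twitter_shares)
```

The point is not the syntax but the shape of the data: the simpler the index model, the easier the store is to partition and replicate.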
ShareThis makes very deliberate decisions about where to store each piece of data. Some data is stored in MySQL, other data is cached in key-value stores such as Memcached or Membase, while still other data is placed in extensible record stores such as Cassandra. By taking this approach of using the right tool for the job, ShareThis can more easily deploy parts of their infrastructure in different availability zones. This rule, in combination with the cloning rule, helped them mitigate or eliminate the impact of a failure.
Rule 3: Design Using Fault Isolation Zones
The third rule is to “Design Using Fault Isolation Zones.” We often call fault isolation zones “swim lanes” because of the image it evokes of keeping swimmers isolated in their lanes. Other organizations call them pods, pools, or shards. From our perspective, the most important differentiation among these terms is the notion of design. Whereas a pool, shard, or pod refers to how something is implemented in a production environment, the swim lane is a design concept for creating fault isolation domains in which service failures affect only the users or services within that zone. Swim lanes build on the concepts of shards and pods by extending the failure domain all the way to the users themselves.
Swim lanes can isolate groups of users or services. One example implementation would route all users with usernames A-F to one swim lane and usernames G-Z to a second swim lane. In the event of a failure in swim lane one, only about half of your users would be affected. Another example would be dividing your application into services along fault isolation zones. You could have login, search, checkout, and registration all in separate swim lanes, passing the user between lanes as necessary. This way, in the event of a failure in the search swim lane, your users could continue to log in, register, and check out.
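The username-based example above can be sketched as a small routing function (the names are hypothetical; in practice this logic would live in a load balancer or routing proxy):

```python
def swim_lane(username):
    """Route usernames A-F to lane one and G-Z to lane two, so a
    failure in either lane affects only that slice of users."""
    first = username[0].upper()
    return "lane-1" if "A" <= first <= "F" else "lane-2"

print(swim_lane("alice"))  # routed to lane-1
print(swim_lane("zoe"))    # routed to lane-2
```

Any deterministic partition of the user base works here; the essential property is that the mapping is stable, so each user always lands in the same isolated lane.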
There are four general principles for designing and implementing swim lanes in a system’s architecture:
- There must not be any shared hardware or software between lanes other than possibly highly available network gear such as paired load balancers or border routers.
- No synchronous calls can take place between swim lanes. If cross-swim-lane communication is required, e.g., grabbing search results to display on the same page as login, it must be done asynchronously.
- Limit asynchronous communication. While permitted, more calls lead to a greater chance of failure propagation.
- Use timeouts with asynchronous calls. There is no need to tie up the user’s browser or your server waiting for the eventual TCP timeout.
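The last two principles can be sketched together: a cross-lane call made asynchronously, with a short timeout and graceful degradation when the other lane hangs. This is a minimal Python sketch (our illustration, not ShareThis's code), with a thread standing in for the call to the other swim lane:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def fetch_search_results():
    """Stand-in for a call into the search swim lane."""
    time.sleep(0.5)  # simulate a slow or hung search lane
    return ["result"]

def render_page():
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fetch_search_results)  # asynchronous call
    try:
        # Bounded wait -- don't tie up the user until a TCP timeout.
        results = future.result(timeout=0.05)
    except TimeoutError:
        results = []  # degrade gracefully: render the page without search
    pool.shutdown(wait=False)
    return {"login": "ok", "search": results}

print(render_page())
```

Because the timeout is far shorter than the simulated hang, the page renders with login intact and search empty, which is precisely the failure mode a swim lane design aims for.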
This concept of swim lanes is heavily utilized at ShareThis to distribute and isolate their traffic. Nanda Kishore, the ShareThis CTO, says, “We have spent a lot of blood, sweat and tears making instances that are largely independent and available across zones.” The ShareThis swim lanes are distributed not only within but also across availability zones, so when Amazon has an issue with one zone, ShareThis services continue to run smoothly in other zones. When availability zone 1c in U.S. East failed, some of the ShareThis relational databases could not be reached, and some web servers were not functioning. Traffic managers (load balancers) shifted new requests to functioning web servers within the remaining three availability zones in U.S. East, and replicated “slave” databases took the RDBMS requests as appropriate. ShareThis lost 15% of their virtual servers, yet there was no noticeable degradation in response time or availability for customer requests.
Rule 4: Be Wary of Scaling Through Third Parties
The fourth rule is “Be Wary of Scaling Through Third Parties.” According to Gartner, worldwide IT spending in 2010 was over $3.4 trillion.¹ In this high-dollar, intensely competitive landscape of database, hardware, cloud, telecom, network, and other vendors, there are sophisticated approaches being used by vendors to secure and maintain relationships with clients. Unfortunately, these long-term relationships are often actively managed in a way that leads clients to spend more and more with the vendor. This is all great business, and we don’t fault the vendors for trying, but we do want to caution you as a technologist or business leader to be wary of relying on vendors to help you scale.
Reiterating our mantra of “everything fails”: this includes all vendor products. Regardless of whether something is open source or proprietary, it has bugs in it. Recall the Mars Climate Orbiter, which disintegrated when entering Mars’ atmosphere on September 23, 1999, because one piece of ground software produced results in imperial units while the navigation system expected metric. Take the fate of your systems, and possibly your entire company, into your own hands. No one cares more about your site being available than you do, so don’t rely on someone else to keep your site available or scalable. This doesn’t mean you should write your own database (unless you’re Oracle or IBM), but it does mean you should expect the database to have bugs and design your system to withstand them.
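One way to “design your system to withstand them” is to wrap every read so that a failure of the primary store falls back to a replica or a cached copy rather than taking the page down. A minimal sketch, with hypothetical stand-in stores:

```python
def read_with_fallback(key, stores, cache):
    """Try each store in order; fall back to a (possibly stale) cache."""
    for store in stores:            # primary first, then replicas
        try:
            value = store(key)
            cache[key] = value      # refresh the cache on success
            return value
        except Exception:
            continue                # "everything fails" -- keep going
    return cache.get(key)           # last resort: cached copy, if any

def broken_primary(key):
    raise IOError("primary store unavailable")

def healthy_replica(key):
    return {"key": key, "value": 7}

cache = {}
result = read_with_fallback("k1", [broken_primary, healthy_replica], cache)
print(result)
```

The specific fallback order is a design choice; what matters is that the vendor's failure mode is anticipated in your code instead of propagated to your users.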
ShareThis actively manages their design and implementation to ensure their service is highly available and scalable to handle the hundreds of millions of users sharing articles, videos, recipes, pictures, etc. While they make use of proprietary and open source services and technology, they do so in a manner that allows them final authority and control over their own destiny. If a vendor fails, they have designed their system to continue working from either a different availability zone or from a different data source, depending on the nature of the failure.
The cloud is not a panacea. It does not solve the infrastructure and software problems of availability and scalability for you. View it as an alternative to hosting solutions yourself or using a colocation provider to do so. But always remember that you are still actually hosted in a physical environment that can absolutely fail. You must still design your systems to be both fault isolative and fault tolerant, while continuing to ensure that they can scale appropriately using some of the techniques we’ve described above.
We’ve covered four rules that ShareThis follows in the design and implementation of their system to ensure high availability and scalability. By following these rules, you too can be much better prepared for any outage.