The Defining Features of Big Data
So how is Big Data different from just Big Computing?
First, Big Data is not just about crunching numbers. Big Data is about collecting and utilizing the unprecedented—almost inconceivable—amount of digital data now available, and applying new analytical tools to reveal fresh insight from that data. It is called Big Data because quantity is the key element, and it is premised on the fact that an explosion of digital data is occurring all over the world—every second and nearly everywhere.
Simply to describe this digital explosion enters territory far beyond the meager gigabyte or terabyte thresholds that used to impress; today we talk in terms of petabytes, exabytes, zettabytes, and yottabytes (a zettabyte is a trillion gigabytes). My personal favorite, partly because it sounds like something that the Flintstones might have ordered as a take-out, is the brontobyte (1,000 yottabytes).
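For readers who want the arithmetic behind these prefixes, here is a minimal Python sketch (decimal prefixes, each step 1,000 times the last; note that "bronto" is an informal coinage, not an official SI prefix):

```python
# Decimal byte prefixes, each 1,000 times larger than the previous one.
# "bronto" is an informal coinage, not an official SI prefix.
PREFIXES = ["kilo", "mega", "giga", "tera", "peta",
            "exa", "zetta", "yotta", "bronto"]

def to_bytes(count, prefix):
    """Convert a quantity like (1.8, "zetta") into a raw byte total."""
    exponent = 3 * (PREFIXES.index(prefix) + 1)
    return count * 10 ** exponent

# A zettabyte really is a trillion gigabytes...
assert to_bytes(1, "zetta") == 10 ** 12 * to_bytes(1, "giga")
# ...and a brontobyte is 1,000 yottabytes.
assert to_bytes(1, "bronto") == 1000 * to_bytes(1, "yotta")
```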
For most of us, these numbers don’t reveal much. I’ve worked in data management most of my life, and I have no sense of what it means when IBM estimates that there are an additional 2.5 quintillion bytes of data being generated every day. Figure 1.1 provides a good sense of the explosive nature of this growth of digital data.
Figure 1.1 An Explosion of Digital Data
Data Source: IDC
Comparisons can help. For example, it is estimated that somewhere around 2011, the amount of data being produced around the world exceeded 1.8 zettabytes (1.8 trillion gigabytes), at which point there were as many bytes held electronically as there are stars in the universe. Or consider that the 25 petabytes of new data entering the Internet every day is 70 times larger than the total of all the collections in the Library of Congress.1 IDC estimates that the digital universe will grow by a factor of ten between 2013 and 2020—from 4.4 zettabytes to 44 zettabytes.2
Better still, we can think in terms of transactions that we are familiar with. For example, every minute, 48 hours’ worth of new video is loaded onto YouTube. In that same 60 seconds, 34,722 “likes” are recorded on Facebook, and 571 new web sites are created around the world. In one hour, the point-of-sale systems for Walmart capture more than 1 million customer transactions.3 Each day more than 180 billion e-mails are exchanged around the world, and it was recently announced that the Library of Congress is maintaining a comprehensive collection of the more than 500 million Tweets sent every day, leaving it with an archive of more than 180 billion Twitter messages.4
More unsettling than the idea of a comprehensive Twitter archive is the fact that a single data-aggregating company, Acxiom, now maintains a profile containing some 1,500 data points on each of nearly 190 million people. That database covers nearly 126 million households in the United States, and about 500 million people worldwide. Acxiom processes more than 50 trillion data “transactions” a year,5 and it is only one (albeit one of the largest) of thousands of data aggregators that collect and sell personal data.
This digital torrent is not limited to the United States and Europe; 70% of all digital data is already being generated outside the United States,6 and by 2020, the Asian data market alone will be producing more digital data than the United States and Western Europe combined. And this mass digitization process is only just getting started: 90% of all the world’s digital data has been produced in the past two years, and the rate of data generation is growing steadily at 50% year-on-year.7 That means there will be nearly 800% more digital data being produced and stored by 2020 than there is now.
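That projection is simple compound growth; a quick sketch shows how quickly 50% year-on-year multiplies (the exact multiple depends on how many years of growth you assume, which is why the figure above is hedged as "nearly 800%"):

```python
# Volume after n years of steady year-on-year growth, relative to today.
def growth_factor(rate, years):
    return (1 + rate) ** years

# At 50% annual growth, five years gives roughly 7.6x today's volume
# and six years roughly 11.4x, bracketing the "nearly 800% more" claim.
for years in (5, 6):
    print(years, round(growth_factor(0.5, years), 1))
```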
The second important feature of Big Data is that it comes from a wide variety of sources: online Internet searches, phone recordings, GPS, social media, a car’s diagnostic systems, and thousands of other sensors and self-reporting components that are increasingly a part of our world.
To put this in perspective, consider for a moment the amount of digital data that individuals create every day—not just the web sites visited, or the output from Twitter accounts, or text and e-mail messages. Also include all the data that is generated on the job—through enterprise systems, presentations, and forwarded and group e-mails. Then think about all of the online activity being logged, tracked, and saved in some way. Everything purchased online—music, games, prescriptions—each blog, photo or video, like or dislike. If you glance at a local paper or The New York Times online, you can be sure that hundreds of electronic trackers are instantly recording what pages you read, calculating how long you linger, and determining which advertisements interest you and which ones don’t. If you use an e-reader, you are monitored for what you have selected to read, how long it takes you to read it, and even the notes you take on each page. Telephone calls to customer service agents may be recorded, digitized, and later “scraped” for keywords or sentiments. When we use a loyalty card to shop for groceries on the way home, the retailer—and anyone else the retailer cares to sell that information to—has a record of our purchases down to the item level. In many stores, smartphones identify the user to in-store tracking systems, and closed circuit TV (CCTV) follows them through the aisles.
Streaming video, TV, and films purchased through set-top boxes and subscription-based cable providers are monitored and logged. Wearable technologies can monitor heart rate, temperature, and blood pressure—and calculate how many calories we’ve burned. Smartphones transmit who we are and where we are; the GPS systems in our cars provide a constant digital trail of our location and our speed, and potentially even our driving and braking patterns. At dinner and as the family prepares for bed, the utility companies monitor the amount of water and electricity we use, and when we use it. If, like me, you have Google’s Nest, the company has access to a detailed record of your household temperatures.
Each and every day, data aggregators compile a vast record of our financial and personal lives taken from both online and offline transactions: employment, income, loans, repayment records. They categorize us by our socioeconomic status and our preferences, selling names and contact information to interested retailers, dentists, car dealerships, or charities. Other online data compilers (some reputable, others less so) provide, for around $10, personal information—phone numbers, e-mail addresses, where we live, where we went to school or college, who our relatives are, and even who we associate with—to anyone who has access to the Internet anywhere in the world. Facial recognition software scans photos that we—or others—have posted online, and these are picked up, in turn, by the major search engines. If the photos came from the camera in a mobile phone, there is data showing when and where those photos were initially taken or posted.
And that’s just the consumer side of Big Data. Beyond that is all the data being created and exchanged on the Industrial Internet, in hundreds of millions of supply chain and financial system transactions taking place among companies, suppliers, and their customers all around the globe. And as we move further into the world of smart parts and self-monitoring components, hundreds of millions of measurements—from car, jet, and marine engines; pumps; motors; bearings; refrigerators; air conditioners; and hundreds of thousands of other mechanical devices that rotate or create heat or power—will record performance data every second. Then think about all the government data: the census, employment levels, labor statistics, GNP, retail prices, epidemiological data on disease, and digital sources of statistics on poverty and crime. All of that data is being created, and to an ever-increasing extent stored, in digital format.
Some of that data—names and contact information, credit card and social security numbers, product SKUs, and banking transactions—is structured data, easily converted into ones and zeroes and placed in digital tables for searching and retrieval. That has been the standard for digital data over the past 30 years, and although its ever-increasing volume may strain the capacity of our existing relational database and analytics systems, there is nothing essentially revolutionary about the nature of this type of data.
But much of the data being produced and collected worldwide today (probably as much as 90%) emanates from videos, Internet search tracking, customer service phone calls, and other sources that exist only in semistructured or unstructured formats, which makes search and retrieval using our conventional storage, database, and business intelligence technologies much more difficult. Figure 1.2 reflects the relative growth of structured and unstructured digital data. One of the fundamental contentions of those who see Big Data as a unique new phenomenon is that we need to keep all the data we produce, because it is only when we apply algorithms to the full and complete universe of a single large data set that computers can discern new patterns or correlations that would otherwise remain invisible.
Figure 1.2 The Growth of Unstructured Data
Data Source: IDC
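The difference is easy to illustrate. In the toy Python sketch below (the field names and customer note are invented for illustration), a structured record maps directly onto named columns, while the same fact buried in unstructured text has to be mined out with pattern matching:

```python
import csv
import io
import re

# Structured: a CSV record maps directly onto named fields.
structured = "cust_id,sku,amount\n1001,SKU-42,19.99\n"
row = next(csv.DictReader(io.StringIO(structured)))
print(row["sku"])  # SKU-42

# Unstructured: the same fact must be extracted from free text.
note = "Customer 1001 called to say that item SKU-42 ($19.99) arrived damaged."
match = re.search(r"SKU-\d+", note)
print(match.group())  # SKU-42
```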
This reflects the number-crunching origins of Big Data in science and engineering, and the assumption that the data in that complete universe of a single, large data set is clean, uncorrupted, and relevant. Obviously, if 90% of that data comes from such a wide variety of sources and in such varied formats, ensuring that we have a usable data set for analysis is much more difficult. And if we are going to deal with these large data sets, given the sheer volume of data being produced and made available from the consumer and industrial spheres (most of which is in an unstructured format), we need to change our conventional approach to data management.
This brings us to the third important feature of Big Data: the new tools and technologies that now allow us to store and analyze that data, helping us draw correlations and conclusions about everyday activities—customer preferences, political positions, purchasing patterns, and personal health—in ways that weren’t possible in the past. This is what makes Big Data different from just “more data”: the ability to apply sophisticated algorithms and powerful computers to large data sets to reveal correlations and insight previously inaccessible through conventional data warehousing or Business Intelligence tools.
These Big Data tools consist broadly of new storage systems (mostly cloud based) and new search and analytical tools such as Hadoop and other MapReduce-type technologies that allow storage and analysis of massive amounts of data in many different formats. These technologies had their origins in the enormously powerful search engines—Yahoo! and Google—that revolutionized the way we search the Internet. We look at all these things more carefully throughout the book, but the important thing to note is that for the first time these types of technologies—for collection, storage, search, and analysis—are being democratized: made available to organizations of any size through a wide variety of cloud-based offerings and enterprise software. Part of the reason the Big Data phenomenon has captured the imagination of the business world is that now almost anyone can get a piece of the Big Data action.
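The MapReduce pattern behind Hadoop is simple enough to sketch on a single machine. In the toy word count below, the map step emits a (word, 1) pair for every word, the shuffle groups pairs by key (the work the Hadoop framework distributes across many machines), and the reduce step sums each group:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all values by key (the framework's job in Hadoop).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values into a single count.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data is big", "data about data"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 3, 'is': 1, 'about': 1}
```

Hadoop's contribution is not this logic but running it reliably over petabytes spread across thousands of commodity machines.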
Those are the fundamental features of the Big Data phenomenon in its narrowest sense: huge amounts of digital data being produced and captured from a variety of sources, and new tools to analyze large data sets to extract patterns and correlations that we otherwise could not detect. That puts us fairly close to the IT research and advisory firm Gartner’s long-standing definition, which describes Big Data as “high-volume, high-velocity, and/or high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight, decision making, and process optimization.”8