

1.4 Extracting Useful Information from Big Data

The ever-growing digital universe, along with multimedia social platforms, has flooded today’s business world with an overwhelming amount of data. To make matters worse, approximately 80% of the data held by an organization is unstructured and unrelated to its other data (Godika 2015). These data often come from customer calls, e-mails (including unsolicited ones), blogs, video clips, and social media feeds. The question is, how will we make sense of this abundance of unstructured data and exploit it as a competitive differentiator? To answer this question, there are several steps we can take:

  1. Data screening: Some data were collected and stored by the company not necessarily because they would be useful, but because government or company mandates required that they be captured and kept. For example, healthcare law requires that patient data be retained for 7 years for adults and 25 years for children. Some portions of those data may be useful for tracking a patient’s health history and immunization records, but others, such as a patient’s height or weight at a certain point in life, may offer few clues about current health conditions. As such, data screening should begin with determining the relevancy of stored data to the intended data usage (e.g., medical diagnosis, vaccine immunization against infectious disease outbreaks, market trend forecasts). After relevant data are identified, those data should be cleaned by removing outliers and faulty data (e.g., a 200-year-old human being) from the analysis.
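The screening step above can be sketched in a few lines of Python. The patient records, field names, and the 120-year age cutoff are invented for illustration; the point is simply to separate the two tasks named in the text: relevance filtering and removal of faulty data.

```python
# Hypothetical data-screening sketch: keep only fields relevant to the
# intended use (here, immunization tracking), then drop records with
# faulty values (e.g., an impossible age).
patients = [
    {"id": 1, "age": 34, "immunizations": ["MMR"], "height_cm": 172},
    {"id": 2, "age": 200, "immunizations": [], "height_cm": 180},   # faulty age
    {"id": 3, "age": 7, "immunizations": ["DTaP"], "height_cm": 120},
]

RELEVANT_FIELDS = {"id", "age", "immunizations"}  # chosen for the intended usage

def screen(records, max_age=120):
    cleaned = []
    for r in records:
        if not 0 <= r["age"] <= max_age:  # remove faulty data / outliers
            continue
        # keep only the fields relevant to the analysis at hand
        cleaned.append({k: v for k, v in r.items() if k in RELEVANT_FIELDS})
    return cleaned

screened = screen(patients)
print(len(screened))  # 2 records survive screening
```

Note that relevance (which fields to keep) and validity (which records to drop) are decided separately, mirroring the two stages described above.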
  2. Data standardization: It is difficult to make sense of incompatible data in different formats (e.g., Excel versus SPSS) or measurement units (e.g., dollars versus yuan). The main purpose of data standardization is therefore to make data consistent and clear. By “consistent,” we mean ensuring that the output (the data analysis result) is reliable, so that related data can be identified using a common terminology and format. By “clear,” we mean ensuring that the data can be easily understood by those not involved in the data analysis process (Oracle 2015). Data standardization also ensures that the analyzed data can be shared across the enterprise.
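As a hypothetical illustration of the standardization step, the sketch below converts records that arrive with mixed currencies and inconsistent field names into one common format. The exchange rate, field names, and figures are assumptions made up for the example, not real data.

```python
# Hypothetical standardization sketch: normalize field names and convert
# all monetary amounts to a single unit so records are comparable.
USD_PER_CNY = 0.14  # assumed fixed rate, for illustration only

raw_records = [
    {"Revenue": 1000.0, "currency": "USD"},       # one source's format
    {"revenue_cny": 7000.0, "currency": "CNY"},   # another source's format
]

def standardize(record):
    # pick the amount regardless of which source field name was used
    amount = record.get("Revenue", record.get("revenue_cny"))
    if record["currency"] == "CNY":
        amount *= USD_PER_CNY  # convert yuan to dollars
    return {"revenue_usd": round(amount, 2)}  # one name, one unit

standardized = [standardize(r) for r in raw_records]
print(standardized)
```

After this step, every record uses the same terminology (`revenue_usd`) and unit, which is what makes downstream analysis and enterprise-wide sharing possible.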
  3. Data analysis: With the standardized data, the next step is to figure out what the data means by describing, condensing, inspecting, recapping, and modeling it. Since raw data by itself means nothing to the decision maker, it is important to select the proper data analysis tools to interpret what the data tells us. In a broad sense, there are two types of data analysis tools: the qualitative approach and the quantitative approach. In general, a qualitative approach aims to develop (usually not predefined) concepts and insights useful for explaining natural phenomena from holistic, speculative, and descriptive views. It often deals with data that are not easily converted to numbers, and its analysis relies heavily on field observations, interviews, archives, transcriptions, audio/video recordings, and focus group discussions. For example, this approach has proven useful for segmenting unfamiliar markets, understanding customer responses to new products, differentiating a company brand from its competition, and repositioning a product after its market image has gone stale (Mariampolski 2001). A quantitative approach, on the other hand, aims to make sense of numerical data by evaluating it mathematically and reporting its analysis results in numerical terms. It is primarily concerned with finding clear patterns of evidence to either support or contradict a preconceived notion or hypothesis formulated from an abstract representation of real-world situations. As such, it is helpful for fact finding.

    These two approaches can be further broken down into many categories, as shown in Figure 1.1. Since categories belonging to the quantitative approach are discussed in detail in Chapter 3, “Business Analytics Models,” we briefly introduce well-known categories of the qualitative approach here. Narrative analysis reflects on the subjective accounts of field texts such as stories, folklore, life experiences, letters, conversations, diaries, and journals presented by people in different contexts and from different viewpoints (see, e.g., Riessman 2008); thus, narrative analysis is not concerned with verifying whether field texts are true. Grounded theory develops a set of general but flexible guidelines for collecting and analyzing qualitative data to construct theories “grounded” in the data themselves and to foster seeing the data in fresh ways (Charmaz 2006). It usually starts by examining a single case from a predefined population to formulate a general statement and then proceeds to another case to see whether it fits that statement; this process continues until all cases of the predefined population fit the statement. Content analysis is a systematic and objective way of interpreting message characteristics from a wide range of text data (e.g., words, phrases) obtained from the careful examination of human interactions; character portrayals in TV commercials, films, and novels; the computer-driven investigation of word usage in news releases and political speeches; and so forth (Neuendorf 2002). Discourse analysis analyzes the use of “discourse” (i.e., language beyond the level of a sentence; language behavior linked to social practices; language as a system of thought) in any communicated message. Thus, it can be defined as the analysis of language “beyond the sentence” (Tannen 2015).
Discourse analysis allows us to make sense of what we are hearing or reading based on the analysis of every word a particular individual speaks, the timing of her words, and the general topic she addresses when she utters them. Domain analysis is intended to discover patterns in the cultural behaviors, social situations, and cultural artifacts of the group from whom the data were collected. Conversation analysis uncovers details of conversational interaction among people under the premise that conversation gives us a sense of who we are to one another, that conversational interaction is sequentially organized, and that talk can be analyzed in terms of the process of social interaction rather than motives or social status (Holstein and Gubrium 2000). Conversation analysis allows us to capture shifts in the meaning of the language a person speaks, changes in her nuances, and the conveyance of nonverbal messages.

    Figure 1.1 Types of data analysis tools.
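Of the qualitative categories above, the computer-driven side of content analysis is the easiest to illustrate in code. The sketch below counts word usage across two short text snippets; the snippets are invented stand-ins for news releases or political speeches.

```python
# Minimal content-analysis sketch: tally word frequencies across a small
# corpus of texts. The two sample texts are invented for illustration.
from collections import Counter
import re

speeches = [
    "Jobs and growth. Growth for every family.",
    "Family values and jobs drive growth.",
]

def word_frequencies(texts):
    counts = Counter()
    for text in texts:
        # lowercase and extract word tokens so "Growth" and "growth" match
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

freq = word_frequencies(speeches)
print(freq.most_common(3))  # the most frequently used words in the corpus
```

Real content analysis adds a coding scheme and reliability checks on top of such counts, but frequency tallies of this kind are the starting point for the word-usage investigations the text describes.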

  4. Data reporting: To create actionable insights from the analysis results, the results should be presented to the intended users in such a way that they can be accessed in real time and understood without much technical expertise. Thus, the results should be rendered in tabular, graphical, and other visual formats. Some data visualization tools that support data presentation, such as InstantAtlas, FusionCharts, and Visualize Free, include dynamic interactive features that enable the user to see alternative results under “what-if” scenarios.
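A minimal sketch of the reporting step, using invented quarterly figures: it renders results as a plain-text table and recomputes them under a simple “what-if” growth scenario. (The tools named above do this interactively and graphically; this only shows the idea.)

```python
# Hypothetical reporting sketch: tabular rendering plus a "what-if"
# recomputation. The revenue figures and growth rate are made up.
results = {"Q1": 120.0, "Q2": 135.5, "Q3": 150.25}

def render_table(data, title):
    # simple fixed-width text table a non-technical user can read
    lines = [title, f"{'Period':<8}{'Revenue':>10}"]
    for period, value in data.items():
        lines.append(f"{period:<8}{value:>10.2f}")
    return "\n".join(lines)

def what_if(data, growth):
    # alternative scenario: scale every figure by an assumed growth rate
    return {k: v * (1 + growth) for k, v in data.items()}

print(render_table(results, "Baseline"))
print(render_table(what_if(results, 0.10), "What-if: +10% growth"))
```

The separation matters: the same rendering function serves both the baseline and every what-if scenario, which is how interactive dashboards keep alternative views consistent.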

As discussed, extracting real business value from data overload is an onerous task that should be performed in a systematic, ordered fashion. To maximize the efficiency and effectiveness of data extraction, business executives should develop clear data management policies and procedural guidelines based on the steps outlined above.
