
Modeling Techniques in Predictive Analytics: Being Technically Inclined

To work efficiently in web and network data science, it helps to be technically inclined, with some understanding of at least three languages: Python, R, and JavaScript. Learn the technical requirements to be proficient in predictive analytics in this chapter from Web and Network Data Science: Modeling Techniques in Predictive Analytics.

Being Technically Inclined

  • “Why don’t you come up sometime and see me?”
  • —MAE WEST AS LADY LOU IN She Done Him Wrong (1933)

I began my business career working as a network engineer in Roseville, Minnesota. Just out of graduate training in statistics at the University of Minnesota, I was well schooled in math and models but lacking business understanding. It did not take long to learn that success in my job meant coming up with meaningful answers for management.

In the dial-up and leased-line world of the late 1970s, asynchronous, bisynchronous, and synchronous connections ruled the day. We translated network protocols into polling and message bits and noted the bits per second that each communication line could accommodate. Queuing theory and discrete event simulation guided the analysis.

A bank teller would make a request, hitting the return key at a terminal. The terminal was connected to a control unit, which in turn was connected to a remote concentrator processor. Leased lines went from remote concentrator processors to a front-end processor, providing a channel to the mainframe computer. These were the nodes and links of networks at the time. The queuing problem involved estimating how long the bank teller would have to wait to get a response from the mainframe.
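
A back-of-the-envelope version of that queuing estimate can be sketched with the classic M/M/1 result for expected time in the system; the arrival and service rates below are made-up illustrative values, not figures from the original analysis.

```python
# Illustrative M/M/1 queue: how long does the teller wait for a response?
# The rates below are invented example values.

def mm1_expected_wait(lam, mu):
    """Expected time in system W = 1 / (mu - lam) for a stable M/M/1 queue."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

# Example: 8 requests per second arriving, server handles 10 per second.
wait = mm1_expected_wait(lam=8.0, mu=10.0)
print(round(wait, 2))  # expected seconds in system (waiting plus service)
```

Discrete event simulation would replace these closed-form assumptions with explicit models of polling, line speeds, and message sizes.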

Fast forward forty years. We have moved away from dial-up and leased lines. Protocols are packet-switched and mobile. Users of networks are everywhere, not just at banks, businesses, and research establishments. Most mainframes have been replaced by clusters of microcomputers. We carry the smallest of computers in our pockets. We wear computers if we like. Of course, when making requests of remote systems, we are still waiting for responses, although now we wait wherever we are and whatever we are doing.

With computer hardware looking more like a commodity and software going open-source, established technology firms seek out new opportunities in business intelligence and data science. IBM moves from hardware to software to consulting. HP splits into two firms, one focused on hardware, the other on business services and utilities. Meanwhile, Apple fights battles with Amazon and Google over the distribution of media, while suing Samsung for patent infringement.

The big battles of today concern information and its online distribution. Intellectual property, special knowledge, competitive intelligence, expertise, and art—these add value in an online world that otherwise appears to offer information for free.

It is hard to resist the allure of the web. She is the ultimate seductress, holding the promise of unlimited information and connection to all. The web is a huge data repository, a path to the world’s knowledge, and the research medium through which we develop new knowledge.

Web and network data science is a collection of technologies and modeling techniques, some well understood, others emerging, that help us to understand the web and the networks in our lives. The technologies of the web are many, with current market shares tracked by Alexa Internet (2014) and W3Techs (2014), among others.

To work efficiently in web and network data science, it helps to be technically inclined, with some understanding of at least three languages: Python, R, and JavaScript. Python is the tool of choice for data preparation (or data munging, as it is sometimes called). R provides specialized tools for modeling and data visualization. And JavaScript is the client-side language of the web, available on every major web browser. When working on web and network problems, it also helps to know HTML5, CSS3, XPath, a variety of text and image file formats, Java, Linux, Apache, .Net web services, database systems, and server-side languages such as Perl and PHP. It helps to be technically inclined, but there is a limit to what we can cover in one book. We provide a glossary of terms as the final appendix in the book.

From its humble beginnings as a language that Brendan Eich developed in ten days in 1995 at the former company Netscape, JavaScript has emerged as the client-side language of the web, a browser-based engine for managing user interaction. JavaScript is dominant on the client side, with an estimated 88 percent of websites using the technology and with 11.8 percent of websites being pure/static HTML sites with no client-side programming (W3Techs 2014).

Crockford (2008) tells us what is right and wrong with JavaScript. Others tell us how to use it in practice (Stefanov 2010; Flanagan 2011; Resig and Bibeault 2013). Recently, with the emergence of Node.js, JavaScript has taken on a role on the server side (Hughes-Croucher and Wilson 2012; Wandschneider 2013; Cantelon, Harter, Holowaychuk, and Rajlich 2014). There are those who promote end-to-end JavaScript applications with client- and server-side programs and document databases (Mikowski and Powell 2014). JavaScript Object Notation (JSON), a data interchange format, is more readable than XML and easily integrated into a MongoDB document database (Chodorow 2013; Copeland 2013; Hoberman 2014), for example. JavaScript would certainly rule the web if it had sufficient capabilities as a modeling and analysis language. It does not.
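
To see why JSON is prized for readability and easy integration, here is a minimal round trip using Python's standard json module; the record and its field names are invented for illustration.

```python
import json

# A small illustrative record; the field names are made up for this example.
record = {"site": "example.com", "visits": 1024, "browsers": ["Chrome", "Firefox"]}

text = json.dumps(record, indent=2)  # serialize to human-readable JSON text
parsed = json.loads(text)            # parse back into native Python objects
print(parsed["visits"])
```

The same text maps directly onto documents in a store such as MongoDB, which is one reason JSON has displaced XML for many data interchange tasks.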

Today’s world of data science brings together statisticians fluent in R and information technology professionals fluent in Python. These communities have much to learn from each other. For the practicing data scientist, there are considerable advantages to being multilingual.

Designed by Ross Ihaka and Robert Gentleman, R first appeared in 1993. R represents an extensible, object-oriented, open-source scripting language for programming with data. It is well established in the statistical community and has syntax, data structures, and methods similar to its precursors, S and S-Plus. Contributors to the language have provided more than five thousand packages, most focused on traditional statistics, machine learning, and data visualization. R is the most widely used language in data science, but it is not a general-purpose programming language.

Guido van Rossum, a fan of Monty Python, released version 1.0 of Python in 1994. This general-purpose language has grown in popularity in the ensuing years. Many systems programmers have moved from Perl to Python, and Python has a strong following among mathematicians and scientists. Many universities use Python as a way to introduce basic concepts of object-oriented programming. An active open-source community has contributed more than fifteen thousand Python packages.

Sometimes referred to as a “glue language,” Python provides a rich open-source environment for scientific programming and research. For computer-intensive applications, it gives us the ability to call on compiled routines from C, C++, and Fortran. We can also use Cython to convert Python code into optimized C. For modeling techniques or graphics not currently implemented in Python, we can execute R programs from Python.
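
As a sketch of the "glue language" idea, the snippet below calls a compiled C routine (sqrt from the C math library) directly from Python through the standard ctypes module; the library-name resolution assumes a typical Unix-like system, with "libm.so.6" as a common glibc fallback.

```python
import ctypes
import ctypes.util

# Locate and load the compiled C math library (assumes a Unix-like system).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.sqrt.restype = ctypes.c_double     # declare the C return type
libm.sqrt.argtypes = [ctypes.c_double]  # declare the C argument types

print(libm.sqrt(2.0))  # calls the compiled C routine directly
```

Calling R from Python or compiling hot loops with Cython follows the same spirit: Python coordinates, compiled code does the heavy lifting.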

Some problems are more easily solved with Python, others with R. We benefit from Python’s capabilities as a general-purpose programming language. We draw on R packages for traditional statistics, time series analysis, multivariate methods, statistical graphics, and handling missing data. Accordingly, this book includes Python and R code examples and represents a dual-language guide to web and network data science.

Browser usage has changed dramatically over the years, with the rise of Google Chrome and the decline of Microsoft Internet Explorer (IE). Table 1.1 and figure 1.1 show worldwide browser usage statistics from July 2008 through October 2014. It is good to have some familiarity with browsers and the tools they provide for examining the text elements and structure of web pages.

Table 1.1. Worldwide Web Browser Usage Percentages (2008–2014)

Year      IE   Chrome  Firefox  Safari   Other
2008   67.68     1.02    25.54    2.91    2.85
2009   57.96     4.17    31.82    3.47    2.58
2010   49.21    12.39    31.24    4.56    2.60
2011   40.18    25.00    26.39    5.93    2.50
2012   32.08    34.77    22.32    7.81    3.02
2013   28.96    40.44    18.11    8.54    3.95
2014   19.25    47.57    17.00   10.95    5.23

Data obtained from StatCounter (2014).


Figure 1.1. Worldwide Web Browser Usage (July 2008 through October 2014)

Data obtained from StatCounter (2014).

The challenge of “big data,” as they are sometimes called, is not so much the volume of data. It is that these data arise from sources poorly understood, in particular the web and social media. Data are everywhere on the web. We need to find our way to the relevant data and obtain those data in an efficient manner.

Application programming interfaces (APIs) are one way to gather data from the web, and Russell (2014) provides a useful review of social media APIs. Unfortunately, APIs have syntax, parameters, and authorization codes that can change at the whim of the data providers. We employ a different approach, focusing on general purpose technologies for automated data acquisition from the web.
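
As a taste of such general-purpose acquisition, the sketch below extracts hyperlinks from page text using only the standard library's HTMLParser; the HTML snippet is a made-up stand-in for a fetched page.

```python
from html.parser import HTMLParser

# Minimal link extractor built on the standard library's event-driven parser.
class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Record the href attribute of every anchor tag encountered.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# An invented page fragment standing in for downloaded HTML.
page = '<html><body><a href="/about">About</a> <a href="/contact">Contact</a></body></html>'
parser = LinkParser()
parser.feed(page)
print(parser.links)
```

Unlike an API client, this approach depends only on the page markup, which is why later chapters pair parsing with careful inspection of page structure.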

Figure 1.2 summarizes the online research process. Sampling, data collection, and data preparation consume much of our time, with secondary research dominating primary research. Online secondary research draws from existing web data. We review secondary research methods in chapter three and use them in many subsequent chapters. Primary research online is facilitated by the web. We cover these methods in appendix B.


Figure 1.2. Web and Network Data Science: Online Research Process

The domain of web and network data science is large. There are many questions to address, as shown in the list that follows.

  • Website design and user behavior. Web analytics, as it is understood by many, involves collecting, storing, and analyzing data from users of a particular website. There are many questions to be addressed. How shall we design and implement websites (for ease of use, visibility, marketing communication, good performance in search, and/or conversion of visits to sales)? How can we gather information from the web efficiently? How can we convert semi-structured and unstructured text into data for input to analysis and modeling? What kinds of website and social media measures make the most sense? Who are the users of a website, and how do they use it? How well does a website do in serving user needs? How well does a website do compared with other websites?
  • Network paths and communication. Web and network data science is much more than website analytics. We look at each website in the context of others on the web. We think in terms of networks—information nodes connected to one another, and users communicating with one another. What is the shortest, fastest, or lowest cost path between two locations? What is the fastest way to spread a message across a network? Which activities are on the critical path to completing a project? How long must we wait for a response from the server?
  • Communities and influence. Social media provide a glimpse of electronic social networks in action. Here we have the questions of social network analysis. Are there identifiable groups of people in this community? Who are the key players, the most important people in a group? Who are the people with prestige, influence, or power? Who is best positioned to be the leader of a group?
  • Individual and group behavior. As data scientists, we are often called on to go beyond description and provide predictions about future behavior or performance. So we have more questions to address. Will this person buy the product, given his/her connections with other buyers or non-buyers? Will this person vote for the candidate, given his/her connections with other voters? Given the motives of individuals, what can we predict for the group? Given growth in the network in the past, what can we expect for the future?
  • Information and networks. As an information resource, the web is unparalleled. Additional questions arise about the nature of online information. Which are the best websites for getting information about a particular topic? Who are the most credible sources of information? How shall we characterize a domain of knowledge? How can we use the web to obtain competitive intelligence? How can we utilize web-based information as a database for answering questions (domain-specific and general questions)?
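
The shortest-path question above can be illustrated with a few lines of breadth-first search over a toy network; the nodes and links are invented for the example.

```python
from collections import deque

# Toy undirected network: nodes and links are illustrative only.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search: returns a fewest-hop path from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no path exists

print(shortest_path(graph, "A", "E"))
```

Weighted versions of the same question (lowest cost, fastest route) swap breadth-first search for Dijkstra's algorithm, which later network chapters take up.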

This book is designed to provide an overview of the domain of web and network data science. We illustrate measurement and modeling techniques for answering many questions, and we cite resources for additional learning. Some of the techniques may be regarded as basic, others advanced. All are important to the work of data science.

Some say that data science is the new statistics. And in a world dominated by data, data science is beginning to look like the new business and the new IT as well. Nowhere is this more apparent than when working on web and network problems. With unlimited data mediated and distributed through the web, there is certainly enough to keep us busy for a long time.

To begin the programming portion of the book, exhibit 1.1 lists a Python program for exploring web browser usage statistics. Exhibit 1.2 shows the corresponding R program and draws on graphics software from Wickham and Chang (2014).

Exhibit 1.1. Analysis of Browser Usage (Python)

# Analysis of Browser Usage (Python)

# prepare for Python version 3x features and functions
from __future__ import division, print_function

# import packages for data analysis
import pandas as pd  # data structures for time series analysis
import datetime  # date manipulation
import matplotlib.pyplot as plt

# browser usage data from StatCounter Global Stats
# retrieved from the World Wide Web, October 21, 2014:
# http://gs.statcounter.com/#browser-ww-monthly-200807-201410
# read in comma-delimited text file
browser_usage = pd.read_csv('browser_usage_2008_2014.csv')
# examine the data frame object
print(browser_usage.shape)
print(browser_usage.head())

# identify date fields as dates with apply and lambda function
browser_usage['Date'] = browser_usage['Date']\
    .apply(lambda d: datetime.datetime.strptime(str(d), '%Y-%m'))
# define Other category
browser_usage['Other'] = (100 - browser_usage['IE'] -
    browser_usage['Chrome'] - browser_usage['Firefox'] -
    browser_usage['Safari'])

# examine selected columns of the data frame object
selected_browser_usage = pd.DataFrame(browser_usage,
    columns = ['Date', 'IE', 'Chrome', 'Firefox', 'Safari', 'Other'])
print(selected_browser_usage.shape)
print(selected_browser_usage.head())

# create multiple time series plot
selected_browser_usage.plot(subplots = True,
    sharex = True, sharey = True, style = 'k-')
plt.legend(loc = 'best')
plt.xlabel('')
plt.savefig('fig_browser_mts_Python.pdf',
    bbox_inches = 'tight', dpi=None, facecolor='w', edgecolor='b',
    orientation='portrait', papertype=None, format=None,
    transparent=True, pad_inches=0.25, frameon=None)

# Suggestions for the student:
# Explore alternative visualizations of these data.
# Try the Python package ggplot to reproduce R graphics.
# Explore time series for other software and systems.

Exhibit 1.2. Analysis of Browser Usage (R)

# Analysis of Browser Usage (R)

# begin by installing necessary package ggplot2
# install.packages("ggplot2")  # run once if not already installed

# load package into the workspace for this program
library(ggplot2)  # grammar of graphics plotting

# browser usage data from StatCounter Global Stats
# retrieved from the World Wide Web, October 21, 2014:
# http://gs.statcounter.com/#browser-ww-monthly-200807-201410
# read in comma-delimited text file
browser_usage <- read.csv("browser_usage_2008_2014.csv")
# examine the data frame object
print(str(browser_usage))
# define Other category
browser_usage$Other <- 100 -
    browser_usage$IE - browser_usage$Chrome -
    browser_usage$Firefox - browser_usage$Safari

# define time series data objects
IE_ts <- ts(browser_usage$IE, start = c(2008, 7), frequency = 12)
Chrome_ts <- ts(browser_usage$Chrome, start = c(2008, 7), frequency = 12)
Firefox_ts <- ts(browser_usage$Firefox, start = c(2008, 7), frequency = 12)
Safari_ts <- ts(browser_usage$Safari, start = c(2008, 7), frequency = 12)
Other_ts <- ts(browser_usage$Other, start = c(2008, 7), frequency = 12)

# create a multiple time series object
browser_mts <- cbind(IE_ts, Chrome_ts, Firefox_ts, Safari_ts, Other_ts)
dimnames(browser_mts)[[2]] <- c("IE", "Chrome", "Firefox", "Safari", "Other")
# plot multiple time series object using standard R graphics
pdf(file="fig_browser_mts_R.pdf",width = 11,height = 8.5)
ts.plot(browser_mts, ylab = "Percent Usage", main="",
    plot.type = "single", col = 1:5)
legend("topright", colnames(browser_mts), col = 1:5,
    lty = 1, cex = 1)
dev.off()

# define Year as numeric with fractional values for months
browser_usage$Year <- as.numeric(time(IE_ts))

# build data frame for plotting a stacked area graph
Browser <- rep("IE", length = nrow(browser_usage))
Percent <- browser_usage$IE
Year <- browser_usage$Year
plotting_data_frame <- data.frame(Browser, Percent, Year)

Browser <- rep("Chrome", length = nrow(browser_usage))
Percent <- browser_usage$Chrome
Year <- browser_usage$Year
plotting_data_frame <- rbind(plotting_data_frame,
    data.frame(Browser, Percent, Year))

Browser <- rep("Firefox", length = nrow(browser_usage))
Percent <- browser_usage$Firefox
Year <- browser_usage$Year
plotting_data_frame <- rbind(plotting_data_frame,
    data.frame(Browser, Percent, Year))

Browser <- rep("Safari", length = nrow(browser_usage))
Percent <- browser_usage$Safari
Year <- browser_usage$Year
plotting_data_frame <- rbind(plotting_data_frame,
    data.frame(Browser, Percent, Year))

Browser <- rep("Other", length = nrow(browser_usage))
Percent <- browser_usage$Other
Year <- browser_usage$Year
plotting_data_frame <- rbind(plotting_data_frame,
    data.frame(Browser, Percent, Year))

# create ggplot plotting object and plot to external file
pdf(file = "fig_browser_usage_stacked_area_R.pdf", width = 11, height = 8.5)
area_plot <- ggplot(data = plotting_data_frame,
    aes(x = Year, y = Percent, fill = Browser)) +
    geom_area(colour = "black", size = 1, alpha = 0.4) +
    scale_fill_brewer(palette = "Blues",
        breaks = rev(levels(plotting_data_frame$Browser))) +
    theme(legend.text = element_text(size = 15))  +
    theme(legend.title = element_text(size = 15)) +
    theme(axis.title = element_text(size = 15))
print(area_plot)
dev.off()