Based on the information in the text and the previous lectures, consider how text analysis can be applied.
a. Research completed studies that have used text analysis.
b. Provide a brief summary of one study that includes the type of study, its purpose, and its final conclusions.
c. Based on your research, provide your evaluation of the benefits of text analysis in fulfilling the purpose of the research.
Table of Contents
1. Introduction
1. EMC Academic Alliance
2. EMC Proven Professional Certification
2. Chapter 1: Introduction to Big Data Analytics
1. 1.1 Big Data Overview
2. 1.2 State of the Practice in Analytics
3. 1.3 Key Roles for the New Big Data Ecosystem
4. 1.4 Examples of Big Data Analytics
5. Summary
6. Exercises
7. Bibliography
3. Chapter 2: Data Analytics Lifecycle
1. 2.1 Data Analytics Lifecycle Overview
2. 2.2 Phase 1: Discovery
3. 2.3 Phase 2: Data Preparation
4. 2.4 Phase 3: Model Planning
5. 2.5 Phase 4: Model Building
6. 2.6 Phase 5: Communicate Results
7. 2.7 Phase 6: Operationalize
8. 2.8 Case Study: Global Innovation Network and Analysis (GINA)
9. Summary
10. Exercises
11. Bibliography
4. Chapter 3: Review of Basic Data Analytic Methods Using R
1. 3.1 Introduction to R
2. 3.2 Exploratory Data Analysis
3. 3.3 Statistical Methods for Evaluation
4. Summary
5. Exercises
6. Bibliography
5. Chapter 4: Advanced Analytical Theory and Methods: Clustering
1. 4.1 Overview of Clustering
2. 4.2 K-means
3. 4.3 Additional Algorithms
4. Summary
5. Exercises
6. Bibliography
6. Chapter 5: Advanced Analytical Theory and Methods: Association Rules
1. 5.1 Overview
2. 5.2 Apriori Algorithm
3. 5.3 Evaluation of Candidate Rules
4. 5.4 Applications of Association Rules
5. 5.5 An Example: Transactions in a Grocery Store
6. 5.6 Validation and Testing
7. 5.7 Diagnostics
8. Summary
9. Exercises
10. Bibliography
7. Chapter 6: Advanced Analytical Theory and Methods: Regression
1. 6.1 Linear Regression
2. 6.2 Logistic Regression
3. 6.3 Reasons to Choose and Cautions
4. 6.4 Additional Regression Models
5. Summary
6. Exercises
8. Chapter 7: Advanced Analytical Theory and Methods: Classification
1. 7.1 Decision Trees
2. 7.2 Naïve Bayes
3. 7.3 Diagnostics of Classifiers
4. 7.4 Additional Classification Methods
5. Summary
6. Exercises
7. Bibliography
9. Chapter 8: Advanced Analytical Theory and Methods: Time Series Analysis
1. 8.1 Overview of Time Series Analysis
2. 8.2 ARIMA Model
3. 8.3 Additional Methods
4. Summary
5. Exercises
10. Chapter 9: Advanced Analytical Theory and Methods: Text Analysis
1. 9.1 Text Analysis Steps
2. 9.2 A Text Analysis Example
3. 9.3 Collecting Raw Text
4. 9.4 Representing Text
5. 9.5 Term Frequency—Inverse Document Frequency (TFIDF)
6. 9.6 Categorizing Documents by Topics
7. 9.7 Determining Sentiments
8. 9.8 Gaining Insights
9. Summary
10. Exercises
11. Bibliography
11. Chapter 10: Advanced Analytics—Technology and Tools: MapReduce and Hadoop
1. 10.1 Analytics for Unstructured Data
2. 10.2 The Hadoop Ecosystem
3. 10.3 NoSQL
4. Summary
5. Exercises
6. Bibliography
12. Chapter 11: Advanced Analytics—Technology and Tools: In-Database Analytics
1. 11.1 SQL Essentials
2. 11.2 In-Database Text Analysis
3. 11.3 Advanced SQL
4. Summary
5. Exercises
6. Bibliography
13. Chapter 12: The Endgame, or Putting It All Together
1. 12.1 Communicating and Operationalizing an Analytics Project
2. 12.2 Creating the Final Deliverables
3. 12.3 Data Visualization Basics
4. Summary
5. Exercises
6. References and Further Reading
7. Bibliography
End User License Agreement
List of Illustrations
1. Figure 1.1
2. Figure 1.2
3. Figure 1.3
4. Figure 1.4
5. Figure 1.5
6. Figure 1.6
7. Figure 1.7
8. Figure 1.8
9. Figure 1.9
10. Figure 1.10
11. Figure 1.11
12. Figure 1.12
13. Figure 1.13
14. Figure 1.14
15. Figure 2.1
16. Figure 2.2
17. Figure 2.3
18. Figure 2.4
19. Figure 2.5
20. Figure 2.6
21. Figure 2.7
22. Figure 2.8
23. Figure 2.9
24. Figure 2.10
25. Figure 2.11
26. Figure 3.1
27. Figure 3.2
28. Figure 3.3
29. Figure 3.4
30. Figure 3.5
31. Figure 3.6
32. Figure 3.7
33. Figure 3.8
34. Figure 3.9
35. Figure 3.10
36. Figure 3.11
37. Figure 3.12
38. Figure 3.13
39. Figure 3.14
40. Figure 3.15
41. Figure 3.16
42. Figure 3.17
43. Figure 3.18
44. Figure 3.19
45. Figure 3.20
46. Figure 3.21
47. Figure 3.22
48. Figure 3.23
49. Figure 3.24
50. Figure 3.25
51. Figure 3.26
52. Figure 3.27
53. Figure 4.1
54. Figure 4.2
55. Figure 4.3
56. Figure 4.4
57. Figure 4.5
58. Figure 4.6
59. Figure 4.7
60. Figure 4.8
61. Figure 4.9
62. Figure 4.10
63. Figure 4.11
64. Figure 4.12
65. Figure 4.13
66. Figure 5.1
67. Figure 5.2
68. Figure 5.3
69. Figure 5.4
70. Figure 5.5
71. Figure 5.6
72. Figure 6.1
73. Figure 6.2
74. Figure 6.3
75. Figure 6.4
76. Figure 6.5
77. Figure 6.6
78. Figure 6.7
79. Figure 6.8
80. Figure 6.9
81. Figure 6.10
82. Figure 6.11
83. Figure 6.12
84. Figure 6.13
85. Figure 6.14
86. Figure 6.15
87. Figure 6.16
88. Figure 6.17
89. Figure 7.1
90. Figure 7.2
91. Figure 7.3
92. Figure 7.4
93. Figure 7.5
94. Figure 7.6
95. Figure 7.7
96. Figure 7.8
97. Figure 7.9
98. Figure 7.10
99. Figure 8.1
100. Figure 8.2
101. Figure 8.3
102. Figure 8.4
103. Figure 8.5
104. Figure 8.6
105. Figure 8.7
106. Figure 8.8
107. Figure 8.9
108. Figure 8.10
109. Figure 8.11
110. Figure 8.12
111. Figure 8.13
112. Figure 8.14
113. Figure 8.15
114. Figure 8.16
115. Figure 8.17
116. Figure 8.18
117. Figure 8.19
118. Figure 8.20
119. Figure 8.21
120. Figure 8.22
121. Figure 9.1
122. Figure 9.2
123. Figure 9.3
124. Figure 9.4
125. Figure 9.5
126. Figure 9.6
127. Figure 9.7
128. Figure 9.8
129. Figure 9.9
130. Figure 9.10
131. Figure 9.11
132. Figure 9.12
133. Figure 9.13
134. Figure 9.14
135. Figure 9.15
136. Figure 9.16
137. Figure 10.1
138. Figure 10.2
139. Figure 10.3
140. Figure 10.4
141. Figure 10.5
142. Figure 10.6
143. Figure 10.7
144. Figure 11.1
145. Figure 11.2
146. Figure 11.3
147. Figure 11.4
148. Figure 12.1
149. Figure 12.2
150. Figure 12.3
151. Figure 12.4
152. Figure 12.5
153. Figure 12.6
154. Figure 12.7
155. Figure 12.8
156. Figure 12.9
157. Figure 12.10
158. Figure 12.11
159. Figure 12.12
160. Figure 12.13
161. Figure 12.14
162. Figure 12.15
163. Figure 12.16
164. Figure 12.17
165. Figure 12.18
166. Figure 12.19
167. Figure 12.20
168. Figure 12.21
169. Figure 12.22
170. Figure 12.23
171. Figure 12.24
172. Figure 12.25
173. Figure 12.26
174. Figure 12.27
175. Figure 12.28
176. Figure 12.29
177. Figure 12.30
178. Figure 12.31
179. Figure 12.32
180. Figure 12.33
181. Figure 12.34
182. Figure 12.35
List of Tables
1. Table 1.1
2. Table 1.2
3. Table 2.1
4. Table 2.2
5. Table 2.3
6. Table 3.1
7. Table 3.2
8. Table 3.3
9. Table 3.4
10. Table 3.5
11. Table 3.6
12. Table 6.1
13. Table 7.1
14. Table 7.2
15. Table 7.3
16. Table 7.4
17. Table 7.5
18. Table 7.6
19. Table 7.7
20. Table 7.8
21. Table 8.1
22. Table 9.1
23. Table 9.2
24. Table 9.3
25. Table 9.4
26. Table 9.5
27. Table 9.6
28. Table 9.7
29. Table 10.1
30. Table 10.2
31. Table 11.1
32. Table 11.2
33. Table 11.3
34. Table 11.4
35. Table 12.1
36. Table 12.2
37. Table 12.3
Introduction
Big Data is creating significant new opportunities for organizations to derive new value
and create competitive advantage from their most valuable asset: information. For
businesses, Big Data helps drive efficiency, quality, and personalized products and
services, producing improved levels of customer satisfaction and profit. For scientific
efforts, Big Data analytics enable new avenues of investigation with potentially richer
results and deeper insights than previously available. In many cases, Big Data analytics
integrate structured and unstructured data with real-time feeds and queries, opening new
paths to innovation and insight.
This book provides a practitioner’s approach to some of the key techniques and tools used
in Big Data analytics. Knowledge of these methods will help people become active
contributors to Big Data analytics projects. The book’s content is designed to assist
multiple stakeholders: business and data analysts looking to add Big Data analytics skills
to their portfolio; database professionals and managers of business intelligence, analytics,
or Big Data groups looking to enrich their analytic skills; and college graduates
investigating data science as a career field.
The content is structured in twelve chapters. The first chapter introduces the reader to the
domain of Big Data, the drivers for advanced analytics, and the role of the data scientist.
The second chapter presents an analytic project lifecycle designed for the particular
characteristics and challenges of hypothesis-driven analysis with Big Data.
Chapter 3 examines fundamental statistical techniques in the context of the open source R
analytic software environment. This chapter also highlights the importance of exploratory
data analysis via visualizations and reviews the key notions of hypothesis development
and testing.
Chapters 4 through 9 discuss a range of advanced analytical methods, including clustering,
association rules, regression, classification, time series analysis, and text analysis.
Chapters 10 and 11 focus on specific technologies and tools that support advanced
analytics with Big Data. In particular, the MapReduce paradigm and its instantiation in the
Hadoop ecosystem, as well as advanced topics in SQL and in-database text analytics form
the focus of these chapters.
Chapter 12 provides guidance on operationalizing Big Data analytics projects. This
chapter focuses on creating the final deliverables, converting an analytics project to an
ongoing asset of an organization’s operation, and creating clear, useful visual outputs
based on the data.
EMC Academic Alliance
University and college faculties are invited to join the Academic Alliance program to
access unique “open” curriculum-based education on the following topics:
Data Science and Big Data Analytics
Information Storage and Management
Cloud Infrastructure and Services
Backup Recovery Systems and Architecture
The program provides faculty, at no cost, with course resources to prepare students for
opportunities that exist in today’s evolving IT industry. For more information, visit
http://education.EMC.com/academicalliance.
EMC Proven Professional Certification
EMC Proven Professional is a leading education and certification program in the IT
industry, providing comprehensive coverage of information storage technologies,
virtualization, cloud computing, data science/Big Data analytics, and more.
Being proven means investing in yourself and formally validating your expertise.
This book prepares you for Data Science Associate (EMCDSA) certification. Visit
http://education.EMC.com for details.
Chapter 1
Introduction to Big Data Analytics
Key Concepts
1. Big Data overview
2. State of the practice in analytics
3. Business Intelligence versus Data Science
4. Key roles for the new Big Data ecosystem
5. The Data Scientist
6. Examples of Big Data analytics
Much has been written about Big Data and the need for advanced analytics within
industry, academia, and government. Availability of new data sources and the rise of more
complex analytical opportunities have created a need to rethink existing data architectures
to enable analytics that take advantage of Big Data. In addition, significant debate exists
about what Big Data is and what kinds of skills are required to make best use of it. This
chapter explains several key concepts to clarify what is meant by Big Data, why advanced
analytics are needed, how Data Science differs from Business Intelligence (BI), and what
new roles are needed for the new Big Data ecosystem.
1.1 Big Data Overview
Data is created constantly, and at an ever-increasing rate. Mobile phones, social media,
imaging technologies to determine a medical diagnosis—all these and more create new
data that must be stored somewhere for some purpose. Devices and sensors
automatically generate diagnostic information that needs to be stored and processed in real
time. Merely keeping up with this huge influx of data is difficult, but substantially more
challenging is analyzing vast amounts of it, especially when it does not conform to
traditional notions of data structure, to identify meaningful patterns and extract useful
information. These challenges of the data deluge present the opportunity to transform
business, government, science, and everyday life.
Several industries have led the way in developing their ability to gather and exploit data:
Credit card companies monitor every purchase their customers make and can identify
fraudulent purchases with a high degree of accuracy using rules derived by
processing billions of transactions.
Mobile phone companies analyze subscribers’ calling patterns to determine, for
example, whether a caller’s frequent contacts are on a rival network. If that rival
network is offering an attractive promotion that might cause the subscriber to defect,
the mobile phone company can proactively offer the subscriber an incentive to
remain in her contract.
For companies such as LinkedIn and Facebook, data itself is their primary product.
The valuations of these companies are heavily derived from the data they gather and
host, which contains more and more intrinsic value as the data grows.
Three attributes stand out as defining Big Data characteristics:
Huge volume of data: Rather than thousands or millions of rows, Big Data can be
billions of rows and millions of columns.
Complexity of data types and structures: Big Data reflects the variety of new data
sources, formats, and structures, including digital traces being left on the web and
other digital repositories for subsequent analysis.
Speed of new data creation and growth: Big Data can describe high velocity data,
with rapid data ingestion and near real time analysis.
Although the volume of Big Data tends to attract the most attention, generally the variety
and velocity of the data provide a more apt definition of Big Data. (Big Data is sometimes
described as having 3 Vs: volume, variety, and velocity.) Due to its size or structure, Big
Data cannot be efficiently analyzed using only traditional databases or methods. Big Data
problems require new tools and technologies to store, manage, and realize the business
benefit. These new tools and technologies enable creation, manipulation, and management
of large datasets and the storage environments that house them. Another definition of Big
Data comes from the McKinsey Global report from 2011:
Big Data is data whose scale, distribution, diversity, and/or timeliness require the
use of new technical architectures and analytics to enable insights that unlock new
sources of business value.
McKinsey & Co.; Big Data: The Next Frontier for Innovation, Competition, and Productivity [1]
McKinsey’s definition of Big Data implies that organizations will need new data
architectures and analytic sandboxes, new tools, new analytical methods, and an
integration of multiple skills into the new role of the data scientist, which will be
discussed in Section 1.3. Figure 1.1 highlights several sources of the Big Data deluge.
Figure 1.1 What’s driving the data deluge
The rate of data creation is accelerating, driven by many of the items in Figure 1.1.
Social media and genetic sequencing are among the fastest-growing sources of Big Data
and examples of untraditional sources of data being used for analysis.
For example, in 2012 Facebook users posted 700 status updates per second worldwide,
which can be leveraged to deduce latent interests or political views of users and show
relevant ads. For instance, an update in which a woman changes her relationship status
from “single” to “engaged” would trigger ads on bridal dresses, wedding planning, or
name-changing services.
Facebook can also construct social graphs to analyze which users are connected to each
other as an interconnected network. In March 2013, Facebook released a new feature
called “Graph Search,” enabling users and developers to search social graphs for people
with similar interests, hobbies, and shared locations.
Another example comes from genomics. Genetic sequencing and human genome mapping
provide a detailed understanding of genetic makeup and lineage. The health care industry
is looking toward these advances to help predict which illnesses a person is likely to get in
his lifetime and take steps to avoid these maladies or reduce their impact through the use
of personalized medicine and treatment. Such tests also highlight typical responses to
different medications and pharmaceutical drugs, heightening risk awareness of specific
drug treatments.
While data has grown, the cost to perform this work has fallen dramatically. The cost to
sequence one human genome has fallen from $100 million in 2001 to $10,000 in 2011,
and the cost continues to drop. Now, websites such as 23andme (Figure 1.2) offer
genotyping for less than $100. Although genotyping analyzes only a fraction of a genome
and does not provide as much granularity as genetic sequencing, it does point to the fact
that data and complex analysis are becoming more prevalent and less expensive to deploy.
Figure 1.2 Examples of what can be learned through genotyping, from 23andme.com
As illustrated by the examples of social media and genetic sequencing, individuals and
organizations both derive benefits from analysis of ever-larger and more complex datasets
that require increasingly powerful analytical capabilities.
1.1.1 Data Structures
Big data can come in multiple forms, including structured and non-structured data such as
financial data, text files, multimedia files, and genetic mappings. Contrary to much of the
traditional data analysis performed by organizations, most of the Big Data is unstructured
or semi-structured in nature, which requires different techniques and tools to process and
analyze. [2] Distributed computing environments and massively parallel processing (MPP)
architectures that enable parallelized data ingest and analysis are the preferred approach to
process such complex data.
With this in mind, this section takes a closer look at data structures.
Figure 1.3 shows four types of data structures, with 80–90% of future data growth coming
from non-structured data types. [2] Though different, the four are commonly mixed. For
example, a classic Relational Database Management System (RDBMS) may store call
logs for a software support call center. The RDBMS may store characteristics of the
support calls as typical structured data, with attributes such as time stamps, machine type,
problem type, and operating system. In addition, the system will likely have unstructured,
quasi- or semi-structured data, such as free-form call log information taken from an e-mail
ticket of the problem, customer chat history, or transcript of a phone call describing the
technical problem and the solution, or an audio file of the phone call conversation. Many
insights could be extracted from the unstructured, quasi- or semi-structured data in the call
center data.
Figure 1.3 Big Data Growth is increasingly unstructured
Although analyzing structured data tends to be the most familiar technique, a different
technique is required to meet the challenges to analyze semi-structured data (shown as
XML), quasi-structured (shown as a clickstream), and unstructured data.
Here are examples of how each of the four main types of data structures may look.
Structured data: Data containing a defined data type, format, and structure (that is,
transaction data, online analytical processing [OLAP] data cubes, traditional
RDBMS, CSV files, and even simple spreadsheets). See Figure 1.4.
Semi-structured data: Textual data files with a discernible pattern that enables
parsing (such as Extensible Markup Language [XML] data files that are self-describing and defined by an XML schema). See Figure 1.5.
Quasi-structured data: Textual data with erratic data formats that can be formatted
with effort, tools, and time (for instance, web clickstream data that may contain
inconsistencies in data values and formats). See Figure 1.6.
Unstructured data: Data that has no inherent structure, which may include text
documents, PDFs, images, and video. See Figure 1.7.
Figure 1.4 Example of structured data
Figure 1.5 Example of semi-structured data
Figure 1.6 Example of EMC Data Science search results
Figure 1.7 Example of unstructured data: video about Antarctica expedition [3]
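To make the distinctions concrete, the following is a minimal R sketch (R is introduced in Chapter 3) of how each kind of structure might be loaded for analysis. The file names (sales.csv, tickets.xml, call_notes.txt) and the xml2 package are illustrative assumptions, not part of the original example; quasi-structured clickstream data is taken up separately below.

# Structured data: rows and columns with defined types (hypothetical CSV file)
sales <- read.csv("sales.csv", stringsAsFactors = FALSE)
str(sales)                         # inspect column names and data types

# Semi-structured data: self-describing XML with a discernible pattern
# (assumes the xml2 package is installed; the file is hypothetical)
library(xml2)
tickets <- read_xml("tickets.xml")
problem_types <- xml_text(xml_find_all(tickets, ".//problem_type"))

# Unstructured data: free-form text kept as raw lines for later text analysis
notes <- readLines("call_notes.txt")
length(notes)                      # number of raw text lines collected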
Quasi-structured data is a common phenomenon that bears closer scrutiny. Consider the
following example. A user attends the EMC World conference and subsequently runs a
Google search online to find information related to EMC and Data Science. This would
produce a URL such as https://www.google.com/#q=EMC+data+science and a list of
results, such as in the first graphic of Figure 1.6.
After doing this search, the user may choose the second link, to read more about the
headline “Data Scientist—EMC Education, Training, and Certification.” This brings the
user to an emc.com site focused on this topic and a new URL,
https://education.emc.com/guest/campaign/data_science.aspx, that displays the
page shown as (2) in Figure 1.6. Arriving at this site, the user may decide to click to learn
more about the process of becoming certified in data science. The user chooses a link
toward the top of the page on Certifications, bringing the user to a new URL:
https://education.emc.com/guest/certification/framework/stf/data_science.aspx
which is (3) in Figure 1.6.
Visiting these three websites adds three URLs to the log files monitoring the user’s
computer or network use. These three URLs are:
https://www.google.com/#q=EMC+data+science
https://education.emc.com/guest/campaign/data_science.aspx
https://education.emc.com/guest/certification/framework/stf/data_science.aspx
This set of three URLs reflects the websites and actions taken to find Data Science
information related to EMC. Together, this comprises a clickstream that can be parsed and
mined by data scientists to discover usage patterns and uncover relationships among clicks
and areas of interest on a website or group of sites.
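A minimal R sketch of how such a clickstream fragment could be parsed is shown below. The three URLs come from the example above; the regular expressions and the host/query breakdown are illustrative assumptions rather than a standard log format.

# The three URLs recorded in the log files (from the example above)
clicks <- c(
  "https://www.google.com/#q=EMC+data+science",
  "https://education.emc.com/guest/campaign/data_science.aspx",
  "https://education.emc.com/guest/certification/framework/stf/data_science.aspx"
)

# Pull the host name out of each URL
hosts <- sub("^https?://([^/]+)/.*$", "\\1", clicks)

# Pull out the search query, if present, and replace '+' with spaces
queries <- ifelse(grepl("#q=", clicks),
                  gsub("\\+", " ", sub("^.*#q=", "", clicks)),
                  NA_character_)

# One row per click; ordering by time stamp (not available here) would turn
# these rows into a session that can be mined for usage patterns
data.frame(host = hosts, query = queries, stringsAsFactors = FALSE)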
The four data types described in this chapter are sometimes generalized into two groups:
structured and unstructured data. Big Data describes new kinds of data with which most
organizations may not be used to working. With this in mind, the next section discusses
common technology architectures from the standpoint of someone wanting to analyze Big
Data.
1.1.2 Analyst Perspective on Data Repositories
The introduction of spreadsheets enabled business users to create simple logic on data
structured in rows and columns and create their own analyses of business problems.
Database administrator training is not required to create spreadsheets: They can be set up
to do many things quickly and independently of information technology (IT) groups.
Spreadsheets are easy to share, and end users have control over the logic involved.
However, their proliferation can result in “many versions of the truth.” In other words, it
can be challenging to determine if a particular user has the most relevant version of a
spreadsheet, with the most current data and logic in it. Moreover, if a laptop is lost or a file
becomes corrupted, the data and logic within the spreadsheet could be lost. This is an
ongoing challenge because spreadsheet programs such as Microsoft Excel still run on
many computers worldwide. With the proliferation of data islands (or spreadmarts), the
need to centralize the data is more pressing than ever.
As data needs grew, so did more scalable data warehousing solutions. These technologies
enabled data to be managed centrally, providing benefits of security, failover, and a single
repository where users could rely on getting an “official” source of data for financial
reporting or other mission-critical tasks. This structure also enabled the creation of OLAP
cubes and BI analytical tools, which provided quick access to a set of dimensions within
an RDBMS. More advanced features enabled performance of in-depth analytical
techniques such as regressions and neural networks. Enterprise Data Warehouses (EDWs)
are critical for reporting and BI tasks and solve many of the problems that proliferating
spreadsheets introduce, such as which of multiple versions of a spreadsheet is correct.
EDWs—and a good BI strategy—provide direct data feeds from sources that are centrally
managed, backed up, and secured.
Despite the benefits of EDWs and BI, these systems tend to restrict the flexibility needed
to perform robust or exploratory data analysis. With the EDW model, data is managed and
controlled by IT groups and database administrators (DBAs), and data analysts must
depend on IT for access and changes to the data schemas. This imposes longer lead times
for analysts to get data; most of the time is spent waiting for approvals rather than starting
meaningful work. Additionally, many times the EDW rules restrict analysts from building
datasets. Consequently, it is common for additional systems to emerge containing critical
data for constructing analytic datasets, managed locally by power users. IT groups
generally dislike existence of data sources outside of their control because, unlike an
EDW, these datasets are not managed, secured, or backed up. From an analyst perspective,
EDW and BI solve problems related to data accuracy and availability. However, EDW and
BI introduce new problems related to flexibility and agility, which were less pronounced
when dealing with spreadsheets.
A solution to this problem is the analytic sandbox, which attempts to resolve the conflict
for analysts and data scientists with EDW and more formally managed corporate data. In
this model, the IT group may still manage the analytic sandboxes, but they will be
purposefully designed to enable robust analytics, while being centrally managed and
secured. These sandboxes, often referred to as workspaces, are designed to enable teams
to explore many datasets in a controlled fashion and are not typically used for enterprise-level financial reporting and sales dashboards.
Many times, analytic sandboxes enable high-performance computing using in-database
processing—the analytics occur within the database itself. The idea is that performance of
the analysis will be better if the analytics are run in the database itself, rather than bringing
the data to an analytical tool that resides somewhere else. In-database analytics, discussed
further in Chapter 11, “Advanced Analytics—Technology and Tools: In-Database
Analytics,” creates relationships to multiple data sources within an organization and saves
time spent creating these data feeds on an individual basis. In-database processing for
deep analytics enables faster turnaround time for developing and executing new analytic
models, while reducing, though not eliminating, the cost associated with data stored in
local, “shadow” file systems. In addition, rather than the typical structured data in the
EDW, analytic sandboxes can house a greater variety of data, such as raw data, textual
data, and other kinds of unstructured data, without interfering with critical production
databases. Table 1.1 summarizes the characteristics of the data repositories mentioned in
this section.
Table 1.1 Types of Data Repositories, from an Analyst Perspective

Data Repository: Spreadsheets and data marts (“spreadmarts”)
Characteristics:
  Spreadsheets and low-volume databases for recordkeeping
  Analyst depends on data extracts.

Data Repository: Data Warehouses
Characteristics:
  Centralized data containers in a purpose-built space
  Supports BI and reporting, but restricts robust analyses
  Analyst dependent on IT and DBAs for data access and schema changes
  Analysts must spend significant time to get aggregated and disaggregated data
  extracts from multiple sources.

Data Repository: Analytic Sandbox (workspaces)
Characteristics:
  Data assets gathered from multiple sources and technologies for analysis
  Enables flexible, high-performance analysis in a nonproduction environment; can
  leverage in-database processing
  Reduces costs and risks associated with data replication into “shadow” file systems
  “Analyst owned” rather than “DBA owned”
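To make the in-database processing idea described above concrete, the following R sketch pushes an aggregation into the database rather than pulling every row into memory. It is only a sketch under stated assumptions: the DBI and RPostgres packages, the connection details, and the sales table with order_date and amount columns are all hypothetical, and in-database techniques are covered properly in Chapter 11.

library(DBI)        # generic database interface for R (assumed available)

# Hypothetical connection; the driver and credentials depend on the
# analytic sandbox in use (PostgreSQL is assumed here via RPostgres)
con <- dbConnect(RPostgres::Postgres(), dbname = "sandbox")

# In-database approach: the database computes the monthly totals, and only
# the small summarized result is returned to the analyst's workspace
monthly <- dbGetQuery(con, "
  SELECT date_trunc('month', order_date) AS month,
         SUM(amount) AS revenue
  FROM sales
  GROUP BY 1
  ORDER BY 1")

# Contrast: pulling the full table and aggregating locally would move far
# more data across the network and into the analyst's memory
# all_rows <- dbGetQuery(con, "SELECT order_date, amount FROM sales")

dbDisconnect(con)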
There are several things to consider with Big Data Analytics projects to ensure the
approach fits with the desired goals. Due to the characteristics of Big Data, these projects
lend themselves to decision support for high-value, strategic decision making with high
processing complexity. The analytic techniques used in this context need to be iterative
and flexible, due to the high volume of data and its complexity. Performing rapid and
complex analysis requires high throughput network connections and a consideration for
the acceptable amount of latency. For instance, developing a real-time product
recommender for a website imposes greater system demands than developing a near-real-time recommender, which may still provide acceptable performance, have slightly greater
latency, and may be cheaper to deploy. These considerations require a different approach
to thinking about analytics challenges, which will be explored further in the next section.
1.2 State of the Practice in Analytics
Current business problems provide many opportunities for organizations to become more
analytical and data driven, as shown in Table 1.2.
Table 1.2 Business Drivers for Advanced Analytics

Business Driver: Optimize business operations
Examples: Sales, pricing, profitability, efficiency

Business Driver: Identify business risk
Examples: Customer churn, fraud, default

Business Driver: Predict new business opportunities
Examples: Upsell, cross-sell, best new customer prospects

Business Driver: Comply with laws or regulatory requirements
Examples: Anti-Money Laundering, Fair Lending, Basel II-III, Sarbanes-Oxley (SOX)
Table 1.2 outlines four categories of common business problems that organizations
contend with where they have an opportunity to leverage advanced analytics to create
competitive advantage. Rather than only performing standard reporting on these areas,
organizations can apply advanced analytical techniques to optimize processes and derive
more value from these common tasks. The first three examples do not represent new
problems. Organizations have been trying to reduce customer churn, increase sales, and
cross-sell customers for many years. What is new is the opportunity to fuse advanced
analytical techniques with Big Data to produce more impactful analyses for these
traditional problems. The last example portrays emerging regulatory requirements. Many
compliance and regulatory laws have been in existence for decades, but additional
requirements are added every year, which represent additional complexity and data
requirements for organizations. Laws related to anti-money laundering (AML) and fraud
prevention require advanced analytical techniques to comply with and manage properly.
1.2.1 BI Versus Data Science
The four business drivers shown in Table 1.2 require a variety of analytical techniques to
address them properly. Although much is written generally about analytics, it is important
to distinguish between BI and Data Science. As shown in Figure 1.8, there are several
ways to compare these groups of analytical techniques.
Figure 1.8 Comparing BI with Data Science
One way to evaluate the type of analysis being performed is to examine the time horizon
and the kind of analytical approaches being used. BI tends to provide reports, dashboards,
and queries on business questions for the current period or in the past. BI systems make it
easy to answer questions related to quarter-to-date revenue, progress toward quarterly
targets, and understand how much of a given product was sold in a prior quarter or year.
These questions tend to be closed-ended and explain current or past behavior, typically by
aggregating historical data and grouping it in some way. BI provides hindsight and some
insight and generally answers questions related to “when” and “where” events occurred.
By comparison, Data Science tends to use disaggregated data in a more forward-looking,
exploratory way, focusing on analyzing the present and enabling informed decisions about
the future. Rather than aggregating historical data to look at how many of a given product
sold in the previous quarter, a team may employ Data Science techniques such as time
series analysis, further discussed in Chapter 8, “Advanced Analytical Theory and
Methods: Time Series Analysis,” to forecast future product sales and revenue more
accurately than extending a simple trend line. In addition, Data Science tends to be more
exploratory in nature and may use scenario optimization to deal with more open-ended
questions. This approach provides insight into current activity and foresight into future
events, while generally focusing on questions related to “how” and “why” events occur.
Where BI problems tend to require highly structured data organized in rows and columns
for accurate reporting, Data Science projects tend to use many types of data sources,
including large or unconventional datasets. Depending on an organization’s goals, it may
choose to embark on a BI project if it is doing reporting, creating dashboards, or
performing simple visualizations, or it may choose Data Science projects if it needs to do
a more sophisticated analysis with disaggregated or varied datasets.
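The contrast can be illustrated with a short R sketch: the first step answers a backward-looking BI question by aggregating historical sales into quarters, and the second fits a simple time series model of the kind discussed in Chapter 8 to look forward. The monthly sales figures are simulated purely for illustration.

set.seed(42)
# Simulated monthly unit sales for three years (36 observations)
sales <- ts(round(100 + 2 * (1:36) + rnorm(36, sd = 10)),
            start = c(2012, 1), frequency = 12)

# BI-style question (hindsight): how many units were sold in each quarter?
quarterly_totals <- aggregate(sales, nfrequency = 4, FUN = sum)
quarterly_totals

# Data Science-style question (foresight): what should we expect next quarter?
fit <- arima(sales, order = c(1, 1, 1))
predict(fit, n.ahead = 3)$pred   # point forecasts for the next three months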
1.2.2 Current Analytical Architecture
As described earlier, Data Science projects need workspaces that are purpose-built for
experimenting with data, with flexible and agile data architectures. Most organizations
still have data warehouses that provide excellent support for traditional reporting and
simple data analysis activities but unfortunately have a more difficult time supporting
more robust analyses. This section examines a typical analytical data architecture that may
exist within an organization.
Figure 1.9 shows a typical data architecture and several of the challenges it presents to
data scientists and others trying to do advanced analytics. This section examines the data
flow to the Data Scientist and how this individual fits into the process of getting data to
analyze on projects.
1. For data sources to be loaded into the data warehouse, data needs to be well
understood, structured, and normalized with the appropriate data type definitions.
Although this kind of centralization enables security, backup, and failover of highly
critical data, it also means that data typically must go through significant
preprocessing and checkpoints before it can enter this sort of controlled environment,
which does not lend itself to data exploration and iterative analytics.
2. As a result of this level of control on the EDW, additional local systems may emerge
in the form of departmental warehouses and local data marts that business users
create to accommodate their need for flexible analysis. These local data marts may
not have the same constraints for security and structure as the main EDW and allow
users to do some level of more in-depth analysis. However, these one-off systems
reside in isolation, often are not synchronized or integrated with other data stores, and
may not be backed up.
3. Once in the data warehouse, data is read by additional applications across the
enterprise for BI and reporting purposes. These are high-priority operational
processes getting critical data feeds from the data warehouses and repositories.
4. At the end of this workflow, analysts get data provisioned for their downstream
analytics. Because users generally are not allowed to run custom or intensive
analytics on production databases, analysts create data extracts from the EDW to
analyze data offline in R or other local analytical tools. Many times these tools are
limited to in-memory analytics on desktops analyzing samples of data, rather than the
entire population of a dataset. Because these analyses are based on data extracts, they
reside in a separate location, and the results of the analysis—and any insights on the
quality of the data or anomalies—rarely are fed back into the main data repository.
Figure 1.9 Typical analytic architecture
Because new data sources slowly accumulate in the EDW due to the rigorous validation
and data structuring process, data is slow to move into the EDW, and the data schema is
slow to change. Departmental data warehouses may have been originally designed for a
specific purpose and set of business needs, but over time evolved to house more and more
data, some of which may be forced into existing schemas to enable BI and the creation of
OLAP cubes for analysis and reporting. Although the EDW achieves the objective of
reporting and sometimes the creation of dashboards, EDWs generally limit the ability of
analysts to iterate on the data in a separate nonproduction environment where they can
conduct in-depth analytics or perform analysis on unstructured data.
The typical data architectures just described are designed for storing and processing
mission-critical data, supporting enterprise applications, and enabling corporate reporting
activities. Although reports and dashboards are still important for organizations, most
traditional data architectures inhibit data exploration and more sophisticated analysis.
Moreover, traditional data architectures have several additional implications for data
scientists.
High-value data is hard to reach and leverage, and predictive analytics and data
mining activities are last in line for data. Because the EDWs are designed for central
data management and reporting, those wanting data for analysis are generally
prioritized after operational processes.
Data moves in batches from EDW to local analytical tools. This workflow means that
data scientists are limited to performing in-memory analytics (such as with R, SAS,
SPSS, or Excel), which will restrict the size of the datasets they can use. As such,
analysis may be subject to constraints of sampling, which can skew model accuracy.
Data Science projects will remain isolated and ad hoc, rather than centrally managed.
The implication of this isolation is that the organization can never harness the power
of advanced analytics in a scalable way, and Data Science projects will exist as
nonstandard initiatives, which are frequently not aligned with corporate business
goals or strategy.
All these symptoms of the traditional data architecture result in a slow “time-to-insight”
and lower business impact than could be achieved if the data were more readily accessible
and supported by an environment that promoted advanced analytics. As stated earlier, one
solution to this problem is to introduce analytic sandboxes to enable data scientists to
perform advanced analytics in a controlled and sanctioned way. Meanwhile, the current
Data Warehousing solutions continue offering reporting and BI services to support
management and mission-critical operations.
1.2.3 Drivers of Big Data
To better understand the market drivers related to Big Data, it is helpful to first understand
some past history of data stores and the kinds of repositories and tools to manage these
data stores.
As shown in Figure 1.10, in the 1990s the volume of information was often measured in
terabytes. Most organizations analyzed structured data in rows and columns and used
relational databases and data warehouses to manage large stores of enterprise information.
The following decade saw a proliferation of different kinds of data sources—mainly
productivity and publishing tools such as content management repositories and networked
attached storage systems—to manage this kind of information, and the data began to
increase in size and started to be measured at petabyte scales. In the 2010s, the
information that organizations try to manage has broadened to include many other kinds of
data. In this era, everyone and everything is leaving a digital footprint. Figure 1.10 shows
a summary perspective on sources of Big Data generated by new applications and the
scale and growth rate of the data. These applications, which generate data volumes that
can be measured at exabyte scale, provide opportunities for new analytics and drive new
value for organizations. The data now comes from multiple sources, such as these:
Medical information, such as genomic sequencing and diagnostic imaging
Photos and video footage uploaded to the World Wide Web
Video surveillance, such as the thousands of video cameras spread across a city
Mobile devices, which provide geospatial location data of the users, as well as
metadata about text messages, phone calls, and application usage on smart phones
Smart devices, which provide sensor-based collection of information from smart
electric grids, smart buildings, and many other public and industry infrastructures
Nontraditional IT devices, including the use of radio-frequency identification (RFID)
readers, GPS navigation systems, and seismic processing
Figure 1.10 Data evolution and rise of Big Data sources
The Big Data trend is generating an enormous amount of information from many new
sources. This data deluge requires advanced analytics and new market players to take
advantage of these opportunities and new market dynamics, which will be discussed in the
following section.
1.2.4 Emerging Big Data Ecosystem and a New Approach to Analytics
Organizations and data collectors are realizing that the data they can gather from
individuals contains intrinsic value and, as a result, a new economy is emerging. As this
new digital economy continues to evolve, the market sees the introduction of data vendors
and data cleaners that use crowdsourcing (such as Mechanical Turk and GalaxyZoo) to
test the outcomes of machine learning techniques. Other vendors offer added value by
repackaging open source tools in a simpler way and bringing the tools to market. Vendors
such as Cloudera, Hortonworks, and Pivotal have provided this value-add for the open
source framework Hadoop.
As the new ecosystem takes shape, there are four main groups of players within this
interconnected web. These are shown in Figure 1.11.
Data devices [shown in the (1) section of Figure 1.11] and the “Sensornet” gather
data from multiple locations and continuously generate new data about this data. For
each gigabyte of new data created, an additional petabyte of data is created about that
data. [2]
For example, consider someone playing an online video game through a PC,
game console, or smartphone. In this case, the video game provider captures
data about the skill and levels attained by the player. Intelligent systems monitor
and log how and when the user plays the game. As a consequence, the game
provider can fine-tune the difficulty of the game, suggest other related games
that would most likely interest the user, and offer additional equipment and
enhancements for the character based on the user’s age, gender, and interests.
This information may get stored locally or uploaded to the game provider’s
cloud to analyze the gaming habits and opportunities for upsell and cross-sell,
and identify archetypical profiles of specific kinds of users.
Smartphones provide another rich source of data. In addition to messaging and
basic phone usage, they store and transmit data about Internet usage, SMS
usage, and real-time location. This metadata can be used for analyzing traffic
patterns by scanning the density of smartphones in locations to track the speed
of cars or the relative traffic congestion on busy roads. In this way, GPS devices
in cars can give drivers real-time updates and offer alternative routes to avoid
traffic delays.
Retail shopping loyalty cards record not just the amount an individual spends,
but the locations of stores that person visits, the kinds of products purchased, the
stores where goods are purchased most often, and the combinations of products
purchased together. Collecting this data provides insights into shopping and
travel habits and the likelihood of successful advertisement targeting for certain
types of retail promotions.
Data collectors [the blue ovals, identified as (2) within Figure 1.11] include sample
entities that collect data from the device and users.
Data results from a cable TV provider tracking the shows a person watches,
which TV channels someone will and will not pay to watch on demand, and
the prices someone is willing to pay for premium TV content
Retail stores tracking the path a customer takes through their store while
pushing a shopping cart with an RFID chip so they can gauge which products
get the most foot traffic using geospatial data collected from the RFID chips
Data aggregators (the dark gray ovals in Figure 1.11, marked as (3)) make sense of
the data collected from the various entities from the “SensorNet” or the “Internet of
Things.” These organizations compile data from the devices and usage patterns
collected by government agencies, retail stores, and websites. In turn, they can
choose to transform and package the data as products to sell to list brokers, who may
want to generate marketing lists of people who may be good targets for specific ad
campaigns.
Data users and buyers are denoted by (4) in Figure 1.11. These groups directly
benefit from the data collected and aggregated by others within the data value chain.
Retail banks, acting as a data buyer, may want to know which customers have
the highest likelihood to apply for a second mortgage or a home equity line of
credit. To provide input for this analysis, retail banks may purchase data from a
data aggregator. This kind of data may include demographic information about
people living in specific locations; people who appear to have a specific level of
debt, yet still have solid credit scores (or other characteristics such as paying
bills on time and having savings accounts) that can be used to infer credit
worthiness; and those who are searching the web for information about paying
off debts or doing home remodeling projects. Obtaining data from these various
sources and aggregators will enable a more targeted marketing campaign, which
would have been more challenging before Big Data due to the lack of
information or high-performing technologies.
Using technologies such as Hadoop to perform natural language processing on
unstructured, textual data from social media websites, users can gauge the
reaction to events such as presidential campaigns. People may, for example,
want to determine public sentiments toward a candidate by analyzing related
blogs and online comments. Similarly, data users may want to track and prepare
for natural disasters by identifying which areas a hurricane affects first and how
it moves, based on which geographic areas are tweeting about it or discussing it
via social media.
Figure 1.11 Emerging Big Data ecosystems
As illustrated by this emerging Big Data ecosystem, the kinds of data and the related
market dynamics vary greatly. These datasets can include sensor data, text, structured
datasets, and social media. With this in mind, it is worth recalling that these datasets will
not work well within traditional EDWs, which were architected to streamline reporting
and dashboards and be centrally managed. Instead, Big Data problems and projects require
different approaches to succeed.
Analysts need to partner with IT and DBAs to get the data they need within an analytic
sandbox. A typical analytical sandbox contains raw data, aggregated data, and data with
multiple kinds of structure. The sandbox enables robust exploration of data and requires a
savvy user to leverage and take advantage of data in the sandbox environment.
1.3 Key Roles for the New Big Data Ecosystem
As explained in the context of the Big Data ecosystem in Section 1.2.4, new players have
emerged to curate, store, produce, clean, and transact data. In addition, the need for
applying more advanced analytical techniques to increasingly complex business problems
has driven the emergence of new roles, new technology platforms, and new analytical
methods. This section explores the new roles that address these needs, and subsequent
chapters explore some of the analytical methods and technology platforms.
The Big Data ecosystem demands three categories of roles, as shown in Figure 1.12.
These roles were described in the McKinsey Global study on Big Data, from May 2011
[1].
Figure 1.12 Key roles of the new Big Data ecosystem
The first group—Deep Analytical Talent—is technically savvy, with strong analytical
skills. Members possess a combination of skills to handle raw, unstructured data and to
apply complex analytical techniques at massive scales. This group has advanced training
in quantitative disciplines, such as mathematics, statistics, and machine learning. To do
their jobs, members need access to a robust analytic sandbox or workspace where they can
perform large-scale analytical data experiments. Examples of current professions fitting
into this group include statisticians, economists, mathematicians, and the new role of the
Data Scientist.
The McKinsey study forecasts that by the year 2018, the United States will have a talent
gap of 140,000–190,000 people with deep analytical talent. This does not represent the
number of people needed with deep analytical talent; rather, this range represents the
difference between what will be available in the workforce compared with what will be
needed. In addition, these estimates only reflect forecasted talent shortages in the United
States; the number would be much larger on a global basis.
The second group—Data Savvy Professionals—has less technical depth but has a basic
knowledge of statistics or machine learning and can define key questions that can be
answered using advanced analytics. These people tend to have a base knowledge of
working with data, or an appreciation for some of the work being performed by data
scientists and others with deep analytical talent. Examples of data savvy professionals
include financial analysts, market research analysts, life scientists, operations managers,
and business and functional managers.
The McKinsey study forecasts the projected U.S. talent gap for this group to be 1.5
million people by the year 2018. At a high level, this means for every Data Scientist
profile needed, the gap will be ten times as large for Data Savvy Professionals. Moving
toward becoming a data savvy professional is a critical step in broadening the perspective
of managers, directors, and leaders, as this provides an idea of the kinds of questions that
can be solved with data.
The third category of people mentioned in the study is Technology and Data Enablers.
This group represents people providing technical expertise to support analytical projects,
such as provisioning and administrating analytical sandboxes, and managing large-scale
data architectures that enable widespread analytics within companies and other
organizations. This role requires skills related to computer engineering, programming, and
database administration.
These three groups must work together closely to solve complex Big Data challenges.
Most organizations are familiar with people in the latter two groups mentioned, but the
first group, Deep Analytical Talent, tends to be the newest role for most and the least
understood. For simplicity, this discussion focuses on the emerging role of the Data
Scientist. It describes the kinds of activities that role performs and provides a more
detailed view of the skills needed to fulfill that role.
There are three recurring sets of activities that data scientists perform:
Reframe business challenges as analytics challenges. Specifically, this is a skill to
diagnose business problems, consider the core of a given problem, and determine
which kinds of candidate analytical methods can be applied to solve it. This concept
is explored further in Chapter 2, “Data Analytics Lifecycle.”
Design, implement, and deploy statistical models and data mining techniques on
Big Data. This set of activities is mainly what people think about when they consider
the role of the Data Scientist: namely, applying complex or advanced analytical
methods to a variety of business problems using data. Chapters 3 through 11
of this book introduce the reader to many of the most popular analytical techniques
and tools in this area.
Develop insights that lead to actionable recommendations. It is critical to note that
applying advanced methods to data problems does not necessarily drive new business
value. Instead, it is important to learn how to draw insights out of the data and
communicate them effectively. Chapter 12, “The Endgame, or Putting It All
Together,” has a brief overview of techniques for doing this.
Data scientists are generally thought of as having five main sets of skills and behavioral
characteristics, as shown in Figure 1.13:
Quantitative skill: such as mathematics or statistics
Technical aptitude: namely, software engineering, machine learning, and
programming skills
Skeptical mind-set and critical thinking: It is important that data scientists can
examine their work critically rather than in a one-sided way.
Curious and creative: Data scientists are passionate about data and finding creative
ways to solve problems and portray information.
Communicative and collaborative: Data scientists must be able to articulate the
business value in a clear way and collaboratively work with other groups, including
project sponsors and key stakeholders.
Figure 1.13 Profile of a Data Scientist
Data scientists are generally comfortable using this blend of skills to acquire, manage,
analyze, and visualize data and tell compelling stories about it. The next section includes
examples of what Data Science teams have created to drive new value or innovation with
Big Data.
1.4 Examples of Big Data Analytics
After describing the emerging Big Data ecosystem and new roles needed to support its
growth, this section provides three examples of Big Data Analytics in different areas:
retail, IT infrastructure, and social media.
As mentioned earlier, Big Data presents many opportunities to improve sales and
marketing analytics. An example of this is the U.S. retailer Target. Charles Duhigg’s book
The Power of Habit [4] discusses how Target used Big Data and advanced analytical
methods to drive new revenue. After analyzing consumer-purchasing behavior, Target’s
statisticians determined that the retailer made a great deal of money from three main life-event situations:
Marriage, when people tend to buy many new products
Divorce, when people buy new products and change their spending habits
Pregnancy, when people have many new things to buy and have an urgency to buy
them
Target determined that the most lucrative of these life-events is the third situation:
pregnancy. Using data collected from shoppers, Target was able to identify this fact and
predict which of its shoppers were pregnant. In one case, Target knew a female shopper
was pregnant even before her family knew [5]. This kind of knowledge allowed Target to
offer specific coupons and incentives to their pregnant shoppers. In fact, Target could not
only determine if a shopper was pregnant, but in which month of pregnancy a shopper
may be. This enabled Target to manage its inventory, knowing that there would be demand
for specific products and it would likely vary by month over the coming nine- to ten-month cycles.
Hadoop [6] represents another example of Big Data innovation on the IT infrastructure.
Apache Hadoop is an open source framework that allows companies to process vast
amounts of information in a highly parallelized way. Hadoop represents a specific
implementation of the MapReduce paradigm and was designed by Doug Cutting and Mike
Cafarella in 2005 to use data with varying structures. It is an ideal technical framework for
many Big Data projects, which rely on large or unwieldy datasets with unconventional
data structures. One of the main benefits of Hadoop is that it employs a distributed file
system, meaning it can use a distributed cluster of servers and commodity hardware to
process large amounts of data. Some of the most common examples of Hadoop
implementations are in the social media space, where Hadoop can manage transactions,
give textual updates, and develop social graphs among millions of users. Twitter and
Facebook generate massive amounts of unstructured data and use Hadoop and its
ecosystem of tools to manage this high volume. Hadoop and its ecosystem are covered in
Chapter 10, “Advanced Analytics—Technology and Tools: MapReduce and Hadoop.”
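The MapReduce idea itself can be sketched in a few lines of plain R, independent of Hadoop: a map step emits key-value pairs from each input record, and a reduce step combines all values that share the same key. The word-count example below is a conceptual illustration only; Chapter 10 describes how Hadoop distributes and scales these steps across a cluster.

docs <- c("big data needs new tools",
          "new tools for big data analytics")

# Map step: for each document, emit (word, 1) pairs
mapped <- lapply(docs, function(d) {
  words <- strsplit(tolower(d), "\\s+")[[1]]
  setNames(rep(1, length(words)), words)
})

# Shuffle and reduce step: group the emitted pairs by key and sum the values
pairs  <- unlist(mapped)                    # one long named vector of 1s
counts <- tapply(pairs, names(pairs), sum)  # total count per distinct word
counts

# In Hadoop, many mappers and reducers run these two steps in parallel on
# different machines, with the framework handling the shuffle between them.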
Finally, social media represents a tremendous opportunity to leverage social and
professional interactions to derive new insights. LinkedIn exemplifies a company in which
data itself is the product. Early on, LinkedIn founder Reid Hoffman saw the opportunity to
create a social network for working professionals. As of 2014, LinkedIn has more than
250 million user accounts and has added many additional features and data-related
products, such as recruiting, job seeker tools, advertising, and InMaps, which show a
social graph of a user’s professional network. Figure 1.14 is an example of an InMap
visualization that enables a LinkedIn user to get a broader view of the interconnectedness
of his contacts and understand how he knows most of them.
Figure 1.14 Data visualization of a user’s social network using InMaps
Summary
Big Data comes from myriad sources, including social media, sensors, the Internet of
Things, video surveillance, and many sources of data that may not have been considered
data even a few years ago. As businesses struggle to keep up with changing market
requirements, some companies are finding creative ways to apply Big Data to their
growing business needs and increasingly complex problems. As organizations evolve their
processes and see the opportunities that Big Data can provide, they try to move beyond
traditional BI activities, such as using data to populate reports and dashboards, and move
toward Data Science-driven projects that attempt to answer more open-ended and
complex questions.
However, exploiting the opportunities that Big Data presents requires new data
architectures, including analytic sandboxes, new ways of working, and people with new
skill sets. These drivers are causing organizations to set up analytic sandboxes and build
Data Science teams. Although some organizations are fortunate to have data scientists,
most are not, because there is a growing talent gap that makes finding and hiring data
scientists in a timely manner difficult. Still, organizations such as those in web retail,
health care, genomics, new IT infrastructures, and social media are beginning to take
advantage of Big Data and apply it in creative and novel ways.
Exercises
1. What are the three characteristics of Big Data, and what are the main considerations
in processing Big Data?
2. What is an analytic sandbox, and why is it important?
3. Explain the differences between BI and Data Science.
4. Describe the challenges of the current analytical architecture for data scientists.
5. What are the key skill sets and behavioral characteristics of a data scientist?
Bibliography
1. [1] J. Manyika et al., “Big Data: The Next Frontier for Innovation, Competition, and Productivity,” McKinsey Global Institute, 2011.
2. [2] J. Gantz and D. Reinsel, “The Digital Universe in 2020: Big Data, Bigger Digital Shadows, and Biggest Growth in the Far East,” IDC, 2013.
3. [3] http://www.willisresilience.com/emc-datalab [Online].
4. [4] C. Duhigg, The Power of Habit: Why We Do What We Do in Life and Business,
New York: Random House, 2012.
5. [5] K. Hill, “How Target Figured Out a Teen Girl Was Pregnant Before Her Father
Did,” Forbes, February 2012.
6. [6] http://hadoop.apache.org [Online].
Chapter 2
Data Analytics Lifecycle
Key Concepts
1. Discovery
2. Data preparation
3. Model planning
4. Model execution
5. Communicate results
6. Operationalize
Data science projects differ from most traditional Business Intelligence projects and many
data analysis projects in that data science projects are more exploratory in nature. For this
reason, it is critical to have a process to govern them and ensure that the participants are
thorough and rigorous in their approach, yet not so rigid that the process impedes
exploration.
Many problems that appear huge and daunting at first can be broken down into smaller
pieces or actionable phases that can be more easily addressed. Having a good process
ensures a comprehensive and repeatable method for conducting analysis. In addition, it
helps focus time and energy early in the process to get a clear grasp of the business
problem to be solved.
A common mistake made in data science projects is rushing into data collection and
analysis, which precludes spending sufficient time to plan and scope the amount of work
involved, understanding requirements, or even framing the business problem properly.
Consequently, participants may discover mid-stream that the project sponsors are actually
trying to achieve an objective that may not match the available data, or they are attempting
to address an interest that differs from what has been explicitly communicated. When this
happens, the project may need to revert to the initial phases of the process for a proper
discovery phase, or the project may be canceled.
Creating and documenting a process helps demonstrate rigor, which provides additional
credibility to the project when the data science team shares its findings. A well-defined
process also offers a common framework for others to adopt, so the methods and analysis
can be repeated in the future or as new members join a team.
2.1 Data Analytics Lifecycle Overview
The Data Analytics Lifecycle is designed specifically for Big Data problems and data
science projects. The lifecycle has six phases, and project work can occur in several
phases at once. For most phases in the lifecycle, the movement can be either forward or
backward. This iterative depiction of the lifecycle is intended to more closely portray a
real project, in which aspects of the project move forward and may return to earlier stages
as new information is uncovered and team members learn more about various stages of the
project. This enables participants to move iteratively through the process and drive toward
operationalizing the project work.
2.1.1 Key Roles for a Successful Analytics Project
In recent years, substantial attention has been placed on the emerging role of the data
scientist. In October 2012, Harvard Business Review featured an article titled “Data
Scientist: The Sexiest Job of the 21st Century” [1], in which experts DJ Patil and Tom
Davenport described the new role and how to find and hire data scientists. More and more
conferences are held annually focusing on innovation in the areas of Data Science and
topics dealing with Big Data. Despite this strong focus on the emerging role of the data
scientist specifically, there are actually seven key roles that need to be fulfilled for a high-functioning data science team to execute analytic projects successfully.
Figure 2.1 depicts the various roles and key stakeholders of an analytics project. Each
plays a critical part in a successful analytics project. Although seven roles are listed, fewer
or more people can accomplish the work depending on the scope of the project, the
organizational structure, and the skills of the participants. For example, on a small,
versatile team, these seven roles may be fulfilled by only 3 people, but a very large project
may require 20 or more people. The seven roles follow.
Business User: Someone who understands the domain area and usually benefits from
the results. This person can consult and advise the project team on the context of the
project, the value of the results, and how the outputs will be operationalized. Usually
a business analyst, line manager, or deep subject matter expert in the project domain
fulfills this role.
Project Sponsor: Responsible for the genesis of the project. Provides the impetus
and requirements for the project and defines the core business problem. Generally
provides the funding and gauges the degree of value from the final outputs of the
working team. This person sets the priorities for the project and clarifies the desired
outputs.
Project Manager: Ensures that key milestones and objectives are met on time and at
the expected quality.
Business Intelligence Analyst: Provides business domain expertise based on a deep
understanding of the data, key performance indicators (KPIs), key metrics, and
business intelligence from a reporting perspective. Business Intelligence Analysts
generally create dashboards and reports and have knowledge of the data feeds and
sources.
Database Administrator (DBA): Provisions and configures the database
environment to support the analytics needs of the working team. These
responsibilities may include providing access to key databases or tables and ensuring
the appropriate security levels are in place related to the data repositories.
Data Engineer: Leverages deep technical skills to assist with tuning SQL queries for
data management and data extraction, and provides support for data ingestion into the
analytic sandbox, which was discussed in Chapter 1, “Introduction to Big Data
Analytics.” Whereas the DBA sets up and configures the databases to be used, the
data engineer executes the actual data extractions and performs substantial data
manipulation to facilitate the analytics. The data engineer works closely with the data
scientist to help shape data in the right ways for analyses.
Data Scientist: Provides subject matter expertise for analytical techniques, data
modeling, and applying valid analytical techniques to given business problems.
Ensures overall analytics objectives are met. Designs and executes analytical
methods and approaches with the data available to the project.
Figure 2.1 Key roles for a successful analytics project
Although most of these roles are not new, the last two roles—data engineer and data
scientist—have become popular and in high demand [2] as interest in Big Data has grown.
2.1.2 Background and Overview of Data Analytics Lifecycle
The Data Analytics Lifecycle defines analytics process best practices spanning discovery
to project completion. The lifecycle draws from established methods in the realm of data
analytics and decision science. This synthesis was developed after gathering input from data scientists and consulting established approaches that informed pieces of the process. Several of the approaches that were consulted include these:
Scientific method [3], in use for centuries, still provides a solid framework for
thinking about and deconstructing problems into their principal parts. One of the
most valuable ideas of the scientific method relates to forming hypotheses and
finding ways to test ideas.
CRISP-DM [4] provides useful input on ways to frame analytics problems and is a
popular approach for data mining.
Tom Davenport’s DELTA framework [5]: The DELTA framework offers an approach
for data analytics projects, including the context of the organization’s skills, datasets,
and leadership engagement.
Doug Hubbard’s Applied Information Economics (AIE) approach [6]: AIE
provides a framework for measuring intangibles and provides guidance on
developing decision models, calibrating expert estimates, and deriving the expected
value of information.
“MAD Skills” by Cohen et al. [7] offers input for several of the techniques
mentioned in Phases 2–4 that focus on model planning, execution, and key findings.
Figure 2.2 presents an overview of the Data Analytics Lifecycle that includes six phases.
Teams commonly learn new things in a phase that cause them to go back and refine the
work done in prior phases based on new insights and information that have been
uncovered. For this reason, Figure 2.2 is shown as a cycle. The circular arrows convey
iterative movement between phases until the team members have sufficient information to
move to the next phase. The callouts include sample questions that help gauge whether each team member has enough information and has made enough progress to move to the next phase of the process. Note that these phases do not represent formal
stage gates; rather, they serve as criteria to help test whether it makes sense to stay in the
current phase or move to the next.
Figure 2.2 Overview of Data Analytics Lifecycle
Here is a brief overview of the main phases of the Data Analytics Lifecycle:
Phase 1—Discovery: In Phase 1, the team learns the business domain, including
relevant history such as whether the organization or business unit has attempted
similar projects in the past from which they can learn. The team assesses the
resources available to support the project in terms of people, technology, time, and
data. Important activities in this phase include framing the business problem as an
analytics challenge that can be addressed in subsequent phases and formulating initial
hypotheses (IHs) to test and begin learning the data.
Phase 2—Data preparation: Phase 2 requires the presence of an analytic sandbox,
in which the team can work with data and perform analytics for the duration of the
project. The team needs to execute extract, load, and transform (ELT) or extract, transform, and load (ETL) to get data into the sandbox. The two approaches are sometimes combined and abbreviated as ETLT. Data should be transformed in the ETLT process so
the team can work with it and analyze it. In this phase, the team also needs to
familiarize itself with the data thoroughly and take steps to condition the data
(Section 2.3.4).
Phase 3—Model planning: Phase 3 is model planning, where the team determines
the methods, techniques, and workflow it intends to follow for the subsequent model
building phase. The team explores the data to learn about the relationships between
variables and subsequently selects key variables and the most suitable models.
Phase 4—Model building: In Phase 4, the team develops datasets for testing,
training, and production purposes. In addition, in this phase the team builds and
executes models based on the work done in the model planning phase. The team also
considers whether its existing tools will suffice for running the models, or if it will
need a more robust environment for executing models and workflows (for example,
fast hardware and parallel processing, if applicable).
Phase 5—Communicate results: In Phase 5, the team, in collaboration with major
stakeholders, determines if the results of the project are a success or a failure based
on the criteria developed in Phase 1. The team should identify key findings, quantify
the business value, and develop a narrative to summarize and convey findings to
stakeholders.
Phase 6—Operationalize: In Phase 6, the team delivers final reports, briefings,
code, and technical documents. In addition, the team may run a pilot project to
implement the models in a production environment.
Once team members have run models and produced findings, it is critical to frame these
results in a way that is tailored to the audience that engaged the team. Moreover, it is
critical to frame the results of the work in a manner that demonstrates clear value. If the
team performs a technically accurate analysis but fails to translate the results into a
language that resonates with the audience, people will not see the value, and much of the
time and effort on the project will have been wasted.
The rest of the chapter is organized as follows. Sections 2.2–2.7 discuss in detail how each
of the six phases works, and Section 2.8 shows a case study of incorporating the Data
Analytics Lifecycle in a real-world data science project.
2.2 Phase 1: Discovery
The first phase of the Data Analytics Lifecycle involves discovery (Figure 2.3). In this
phase, the data science team must learn and investigate the problem, develop context and
understanding, and learn about the data sources needed and available for the project. In
addition, the team formulates initial hypotheses that can later be tested with data.
Figure 2.3 Discovery phase
2.2.1 Learning the Business Domain
Understanding the domain area of the problem is essential. In many cases, data scientists
will have deep computational and quantitative knowledge that can be broadly applied
across many disciplines. An example of this role would be someone with an advanced
degree in applied mathematics or statistics.
These data scientists have deep knowledge of the methods, techniques, and ways for
applying heuristics to a variety of business and conceptual problems. Others in this area
may have deep knowledge of a domain area, coupled with quantitative expertise. An
example of this would be someone with a Ph.D. in life sciences. This person would have
deep knowledge of a field of study, such as oceanography, biology, or genetics, with some
depth of quantitative knowledge.
At this early stage in the process, the team needs to determine how much business or
domain knowledge the data scientist needs to develop models in Phases 3 and 4. The
earlier the team can make this assessment, the better, because the decision helps dictate the
resources needed for the project team and ensures the team has the right balance of
domain knowledge and technical expertise.
2.2.2 Resources
As part of the discovery phase, the team needs to assess the resources available to support
the project. In this context, resources include technology, tools, systems, data, and people.
During this scoping, consider the available tools and technology the team will be using
and the types of systems needed for later phases to operationalize the models. In addition,
try to evaluate the level of analytical sophistication within the organization and gaps that
may exist related to tools, technology, and skills. For instance, for the model being
developed to have longevity in an organization, consider what types of skills and roles will
be required that may not exist today. For the project to have long-term success, what types
of skills and roles will be needed for the recipients of the model being developed? Does
the requisite level of expertise exist within the organization today, or will it need to be
cultivated? Answering these questions will influence the techniques the team selects and
the kind of implementation the team chooses to pursue in subsequent phases of the Data
Analytics Lifecycle.
In addition to the skills and computing resources, it is advisable to take inventory of the
types of data available to the team for the project. Consider if the data available is
sufficient to support the project’s goals. The team will need to determine whether it must
collect additional data, purchase it from outside sources, or transform existing data. Often, projects are scoped based only on the data that is readily available; when that data is less than hoped for, the size and scope of the project is reduced to work within the constraints of the existing data.
An alternative approach is to consider the long-term goals of this kind of project, without
being constrained by the current data. The team can then consider what data is needed to
reach the long-term goals and which pieces of this multistep journey can be achieved
today with the existing data. Considering longer-term goals along with short-term goals
enables teams to pursue more ambitious projects and treat a project as the first step of a
more strategic initiative, rather than as a standalone initiative. It is critical to view projects
as part of a longer-term journey, especially if executing projects in an organization that is
new to Data Science and may not have embarked on the optimum datasets to support
robust analyses up to this point.
Ensure the project team has the right mix of domain experts, customers, analytic talent,
and project management to be effective. In addition, evaluate how much time is needed
and if the team has the right breadth and depth of skills.
After taking inventory of the tools, technology, data, and people, consider if the team has
sufficient resources to succeed on this project, or if additional resources are needed.
Negotiating for resources at the outset of the project, while scoping the goals, objectives,
and feasibility, is generally more useful than later in the process and ensures sufficient
time to execute it properly. Project managers and key stakeholders have better success
negotiating for the right resources at this stage rather than later once the project is
underway.
2.2.3 Framing the Problem
Framing the problem well is critical to the success of the project. Framing is the process
of stating the analytics problem to be solved. At this point, it is a best practice to write
down the problem statement and share it with the key stakeholders. Each team member
may hear slightly different things related to the needs and the problem and have somewhat
different ideas of possible solutions. For these reasons, it is crucial to state the analytics
problem, as well as why and to whom it is important. Essentially, the team needs to clearly
articulate the current situation and its main challenges.
As part of this activity, it is important to identify the main objectives of the project,
identify what needs to be achieved in business terms, and identify what needs to be done
to meet the needs. Additionally, consider the objectives and the success criteria for the
project. What is the team attempting to achieve by doing the project, and what will be
considered “good enough” as an outcome of the project? This is critical to document and
share with the project team and key stakeholders. It is best practice to share the statement
of goals and success criteria with the team and confirm alignment with the project
sponsor’s expectations.
Perhaps equally important is to establish failure criteria. Most people doing projects prefer
only to think of the success criteria and what the conditions will look like when the
participants are successful. However, this is almost taking a best-case scenario approach,
assuming that everything will proceed as planned and the project team will reach its goals.
Yet no matter how well a project is planned, it is almost impossible to anticipate everything that will emerge. The failure criteria will guide the team in understanding when it
is best to stop trying or settle for the results that have been gleaned from the data. Many
times people will continue to perform analyses past the point when any meaningful
insights can be drawn from the data. Establishing criteria for both success and failure
helps the participants avoid unproductive effort and remain aligned with the project
sponsors.
2.2.4 Identifying Key Stakeholders
Another important step is to identify the key stakeholders and their interests in the project.
During these discussions, the team can identify the success criteria, key risks, and
stakeholders, which should include anyone who will benefit from the project or will be
significantly impacted by the project. When interviewing stakeholders, learn about the
domain area and any relevant history from similar analytics projects. For example, the
team may identify the results each stakeholder wants from the project and the criteria it
will use to judge the success of the project.
Keep in mind that the analytics project is being initiated for a reason. It is critical to
articulate the pain points as clearly as possible to address them and be aware of areas to
pursue or avoid as the team gets further into the analytical process. Depending on the
number of stakeholders and participants, the team may consider outlining the type of
activity and participation expected from each stakeholder and participant. This will set
clear expectations with the participants and avoid delays later when, for example, the team
may feel it needs to wait for approval from someone who views himself as an adviser
rather than an approver of the work product.
2.2.5 Interviewing the Analytics Sponsor
The team should plan to collaborate with the stakeholders to clarify and frame the
analytics problem. At the outset, project sponsors may have a predetermined solution that
may not necessarily realize the desired outcome. In these cases, the team must use its
knowledge and expertise to identify the true underlying problem and appropriate solution.
For instance, suppose in the early phase of a project, the team is told to create a
recommender system for the business and that the way to do this is by speaking with three
people and integrating the product recommender into a legacy corporate system. Although
this may be a valid approach, it is important to test the assumptions and develop a clear
understanding of the problem. The data science team typically has a more objective understanding of the problem set than the stakeholders, who may be suggesting solutions
to a given problem. Therefore, the team can probe deeper into the context and domain to
clearly define the problem and propose possible paths from the problem to a desired
outcome. In essence, the data science team can take a more objective approach, as the
stakeholders may have developed biases over time, based on their experience. Also, what
may have been true in the past may no longer be a valid working assumption. One
possible way to circumvent this issue is for the project sponsor to focus on clearly defining
the requirements, while the other members of the data science team focus on the methods
needed to achieve the goals.
When interviewing the main stakeholders, the team needs to take time to thoroughly
interview the project sponsor, who tends to be the one funding the project or providing the
high-level requirements. This person understands the problem and usually has an idea of a
potential working solution. It is critical to thoroughly understand the sponsor’s perspective
to guide the team in getting started on the project. Here are some tips for interviewing
project sponsors:
Prepare for the interview; draft questions, and review with colleagues.
Use open-ended questions; avoid asking leading questions.
Probe for details and pose follow-up questions.
Avoid filling every silence in the conversation; give the other person time to think.
Let the sponsors express their ideas and ask clarifying questions, such as “Why? Is
that correct? Is this idea on target? Is there anything else?”
Use active listening techniques; repeat back what was heard to make sure the team
heard it correctly, or reframe what was said.
Try to avoid expressing the team’s opinions, which can introduce bias; instead, focus
on listening.
Be mindful of the body language of the interviewers and stakeholders; use eye
contact where appropriate, and be attentive.
Minimize distractions.
Document what the team heard, and review it with the sponsors.
Following is a brief list of common questions that are helpful to ask during the discovery
phase when interviewing the project sponsor. The responses will begin to shape the scope
of the project and give the team an idea of the goals and objectives of the project.
What business problem is the team trying to solve?
What is the desired outcome of the project?
What data sources are available?
What industry issues may impact the analysis?
What timelines need to be considered?
Who could provide insight into the project?
Who has final decision-making authority on the project?
How will the focus and scope of the problem change if the following dimensions
change:
Time: Analyzing 1 year or 10 years’ worth of data?
People: Assess impact of changes in resources on project timeline.
Risk: Conservative to aggressive
Resources: None to unlimited (tools, technology, systems)
Size and attributes of data: Including internal and external data sources
2.2.6 Developing Initial Hypotheses
Developing a set of IHs is a key facet of the discovery phase. This step involves forming
ideas that the team can test with data. Generally, it is best to come up with a few primary
hypotheses to test and then be creative about developing several more. These IHs form the
basis of the analytical tests the team will use in later phases and serve as the foundation for
the findings in Phase 5. Hypothesis testing from a statistical perspective is covered in
greater detail in Chapter 3, “Review of Basic Data Analytic Methods Using R.”
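As a simple illustration of turning an IH into a testable statement, suppose the team hypothesizes that one shopper segment spends more per visit than another. A minimal R sketch, using fabricated numbers purely for illustration, compares the two groups with a two-sample t-test of the kind covered in Chapter 3.

# Hypothetical per-visit spend (in dollars) for two shopper segments;
# the values are fabricated solely to illustrate testing an initial hypothesis.
segment_a <- c(52, 61, 49, 75, 68, 58, 71)   # shoppers flagged by the IH
segment_b <- c(44, 50, 39, 47, 55, 42, 48)   # all other shoppers

# IH: segment A spends more per visit than segment B.
t.test(segment_a, segment_b, alternative = "greater")

The point is not the particular test but the discipline of stating the hypothesis before the analysis, so that later phases have something concrete to confirm or reject.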
In this way, the team can compare its answers with the outcome of an experiment or test to
generate additional possible solutions to problems. As a result, the team will have a much
richer set of observations to choose from and more choices for agreeing upon the most
impactful conclusions from a project.
Another part of this process involves gathering and assessing hypotheses from
stakeholders and domain experts who may have their own perspective on what the
problem is, what the solution should be, and how to arrive at a solution. These
stakeholders would know the domain area well and can offer suggestions on ideas to test
as the team formulates hypotheses during this phase. The team will likely collect many
ideas that may illuminate the operating assumptions of the stakeholders. These ideas will
also give the team opportunities to expand the project scope into adjacent spaces where it
makes sense or design experiments in a meaningful way to address the most important
interests of the stakeholders. As part of this exercise, it can be useful to obtain and explore
some initial data to inform discussions with stakeholders during the hypothesis-forming
stage.
2.2.7 Identifying Potential Data Sources
As part of the discovery phase, identify the kinds of data the team will need to solve the
problem. Consider the volume, type, and time span of the data needed to test the
hypotheses. Ensure that the team can access more than simply aggregated data. In most
cases, the team will need the raw data to avoid introducing bias for the downstream
analysis. Recalling the characteristics of Big Data from Chapter 1, assess the main
characteristics of the data, with regard to its volume, variety, and velocity of change. A
thorough diagnosis of the data situation will influence the kinds of tools and techniques to
use in Phases 2–4 of the Data Analytics Lifecycle. In addition, performing data exploration
in this phase will help the team determine the amount of data needed, such as the amount
of historical data to pull from existing systems and the data structure. Develop an idea of
the scope of the data needed, and validate that idea with the domain experts on the project.
The team should perform five main activities during this step of the discovery phase:
Identify data sources: Make a list of candidate data sources the team may need to
test the initial hypotheses outlined in this phase. Make an inventory of the datasets
currently available and those that can be purchased or otherwise acquired for the tests
the team wants to perform.
Capture aggregate data sources: This is for previewing the data and providing
high-level understanding. It enables the team to gain a quick overview of the data and
perform further exploration on specific areas. It also points the team to possible areas
of interest within the data.
Review the raw data: Obtain preliminary data from initial data feeds. Begin understanding the interdependencies among the data attributes, and become familiar with the content of the data, its quality, and its limitations (a brief R example follows this list).
Evaluate the data structures and tools needed: The data type and structure dictate
which tools the team can use to analyze the data. This evaluation gets the team
thinking about which technologies may be good candidates for the project and how to
start getting access to these tools.
Scope the sort of data infrastructure needed for this type of problem: In addition
to the tools needed, the data influences the kind of infrastructure that’s required, such
as disk storage and network capacity.
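For the raw data review mentioned above, a few lines of R are often enough for a first look at a preliminary extract. The file name and columns below are hypothetical; the functions are standard base R.

# Load a preliminary extract (hypothetical file name).
raw <- read.csv("initial_feed_sample.csv", stringsAsFactors = FALSE)

str(raw)       # data types and a preview of each attribute
summary(raw)   # ranges, quartiles, and counts of missing values
head(raw, 10)  # the first few raw records, untouched

# Rough sense of completeness: share of missing values per attribute.
colMeans(is.na(raw))

Even this quick pass often reveals data quality issues that shape the tool and infrastructure decisions listed above.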
Unlike many traditional stage-gate processes, in which the team can advance only when
specific criteria are met, the Data Analytics Lifecycle is intended to accommodate more
ambiguity. This more closely reflects how data science projects work in real-life
situations. For each phase of the process, it is recommended to pass certain checkpoints as
a way of gauging whether the team is ready to move to the next phase of the Data
Analytics Lifecycle.
The team can move to the next phase when it has enough information to draft an analytics
plan and share it for peer review. Although a peer review of the plan may not actually be
required by the project, creating the plan is a good test of the team’s grasp of the business
problem and the team’s approach to addressing it. Creating the analytic plan also requires
a clear understanding of the domain area, the problem to be solved, and scoping of the
data sources to be used. Developing success criteria early in the project clarifies the
problem definition and helps the team when it comes time to make choices about the
analytical methods being used in later phases.
2.3 Phase 2: Data Preparation
The second phase of the Data Analytics Lifecycle involves data preparation, which
includes the steps to explore, preprocess, and condition data prior to modeling and
analysis. In this phase, the team needs to create a robust environment in which it can
explore the data that is separate from a production environment. Usually, this is done by
preparing an analytics sandbox. To get the data into the sandbox, the team needs to
perform ETLT, by a combination of extracting, transforming, and loading data into the
sandbox. Once the data is in the sandbox, the team needs to learn about the data and
become familiar with it. Understanding the data in detail is critical to the success of the
project. The team also must decide how to condition and transform data to get it into a
format to facilitate subsequent analysis. The team may perform data visualizations to help
team members understand the data, including its trends, outliers, and relationships among
data variables. Each of these steps of the data preparation phase is discussed throughout
this section.
Data preparation tends to be the most labor-intensive step in the analytics lifecycle. In fact,
it is common for teams to spend at least 50% of a data science project’s time in this critical
phase. If the team cannot obtain enough data of sufficient quality, it may be unable to
perform the subsequent steps in the lifecycle process.
Figure 2.4 shows an overview of the Data Analytics Lifecycle for Phase 2. The data
preparation phase is generally the most iterative and the one that teams tend to
underestimate most often. This is because most teams and leaders are anxious to begin
analyzing the data, testing hypotheses, and getting answers to some of the questions posed
in Phase 1. Many tend to jump into Phase 3 or Phase 4 to begin rapidly developing models
and algorithms without spending the time to prepare the data for modeling. Consequently,
teams come to realize the data they are working with does not allow them to execute the
models they want, and they end up back in Phase 2 anyway.
Figure 2.4 Data preparation phase
2.3.1 Preparing the Analytic Sandbox
The first subphase of data preparation requires the team to obtain an analytic sandbox
(also commonly referred to as a workspace), in which the team can explore the data
without interfering with live production databases. Consider an example in which the team
needs to work with a company’s financial data. The team should access a copy of the
financial data from the analytic sandbox rather than interacting with the production
version of the organization’s main database, because that will be tightly controlled and
needed for financial reporting.
When developing the analytic sandbox, it is a best practice to collect all kinds of data
there, as team members need access to high volumes and varieties of data for a Big Data
analytics project. This can include everything from summary-level aggregated data and structured data to raw data feeds and unstructured text data from call logs or web logs, depending on the kind of analysis the team plans to undertake.
This expansive approach to collecting data of all kinds differs considerably from the
approach advocated by many information technology (IT) organizations. Many IT groups
provide access to only a particular subsegment of the data for a specific purpose. Often,
the mindset of the IT group is to provide the minimum amount of data required to allow
the team to achieve its objectives. Conversely, the data science team wants access to
everything. From its perspective, more data is better, as oftentimes data science projects
are a mixture of purpose-driven analyses and experimental approaches to test a variety of
ideas. In this context, it can be challenging for a data science team if it has to request
access to each and every dataset and attribute one at a time. Because of these differing
views on data access and use, it is critical for the data science team to collaborate with IT,
make clear what it is trying to accomplish, and align goals.
During these discussions, the data science team needs to give IT a justification to develop
an analytics sandbox, which is separate from the traditional IT-governed data warehouses
within an organization. Successfully and amicably balancing the needs of both the data
science team and IT requires a positive working relationship between multiple groups and
data owners. The payoff is great. The analytic sandbox enables organizations to undertake
more ambitious data science projects and move beyond doing traditional data analysis and
Business Intelligence to perform more robust and advanced predictive analytics.
Expect the sandbox to be large. It may contain raw data, aggregated data, and other data
types that are less commonly used in organizations. Sandbox size can vary greatly
depending on the project. A good rule is to plan for the sandbox to be at least 5–10 times
the size of the original datasets, partly because copies of the data may be created that serve
as specific tables or data stores for specific kinds of analysis in the project.
Although the concept of an analytics sandbox is relatively new, companies are making
progress in this area and are finding ways to offer sandboxes and workspaces where teams
can access datasets and work in a way that is acceptable to both the data science teams and
the IT groups.
2.3.2 Performing ETLT
As the team looks to begin data transformations, make sure the analytics sandbox has
ample bandwidth and reliable network connections to the underlying data sources to
enable uninterrupted read and write. In ETL, users perform extract, transform, load
processes to extract data from a datastore, perform data transformations, and load the data
back into the datastore. However, the analytic sandbox approach differs slightly; it
advocates extract, load, and then transform. In this case, the data is extracted in its raw
form and loaded into the datastore, where analysts can choose to transform the data into a
new state or leave it in its original, raw condition. The reason for this approach is that
there is significant value in preserving the raw data and including it in the sandbox before
any transformations take place.
For instance, consider an analysis for fraud detection on credit card usage. Many times,
outliers in this data population can represent higher-risk transactions that may be
indicative of fraudulent credit card activity. Using ETL, these outliers may be
inadvertently filtered out or transformed and cleaned before being loaded into the
datastore. In this case, the very data that would be needed to evaluate instances of
fraudulent activity would be inadvertently cleansed, preventing the kind of analysis that a
team would want to do.
Following the ELT approach gives the team access to clean data to analyze after the data
has been loaded into the database and gives access to the data in its original form for
finding hidden nuances in the data. This approach is part of the reason that the analytic
sandbox can quickly grow large. The team may want clean data and aggregated data and
may need to keep a copy of the original data to compare against or look for hidden
patterns that may have existed in the data before the cleaning stage. This process can be
summarized as ETLT to reflect the fact that a team may choose to perform ETL in one
case and ELT in another.
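The fraud example can be made concrete with a few lines of R. The data frame and threshold below are invented; the point is that keeping the raw, loaded copy lets the team revisit the extreme transactions that a premature cleaning rule would have discarded.

# Hypothetical raw transactions loaded into the sandbox (the "EL" of ELT).
transactions_raw <- data.frame(
  txn_id = 1:6,
  amount = c(25, 40, 18, 5200, 33, 7900)  # two unusually large amounts
)

# A cleaning rule that a naive ETL flow might apply before loading:
# treat transactions above a fixed threshold as bad data and drop them.
upper_limit <- 1000
transactions_clean <- subset(transactions_raw, amount <= upper_limit)

# Under ELT, both versions remain available: the clean copy for routine
# analysis, and the raw copy for investigating potential fraud.
outliers <- subset(transactions_raw, amount > upper_limit)
outliers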
Depending on the size and number of the data sources, the team may need to consider how
to parallelize the movement of the datasets into the sandbox. For this purpose, moving
large amounts of data is sometimes referred to as Big ETL. The data movement can be
parallelized by technologies such as Hadoop or MapReduce, which will be explained in
greater detail in Chapter 10, “Advanced Analytics—Technology and Tools: MapReduce
and Hadoop.” At this point, keep in mind that these technologies can be used to perform
parallel data ingest and introduce a huge number of files or datasets in parallel in a very
short period of time. Hadoop can be useful for data loading as well as for data analysis in
subsequent phases.
Prior to moving the data into the analytic sandbox, determine the transformations that
need to be performed on the data. Part of this phase involves assessing data quality and
structuring the datasets properly so they can be used for robust analysis in subsequent
phases. In addition, it is important to consider which data the team will have access to and
which new data attributes will need to be derived in the data to enable analysis.
As part of the ETLT step, it is advisable to make an inventory of the data and compare the
data currently available with datasets the team needs. Performing this sort of gap analysis
provides a framework for understanding which datasets the team can take advantage of
today and where the team needs to initiate projects for data collection or access to new
datasets currently unavailable. A component of this subphase involves extracting data
from the available sources and determining data connections for raw data, online
transaction processing (OLTP) databases, online analytical processing (OLAP) cubes, or
other data feeds.
An application programming interface (API) is an increasingly popular way to access a data
source [8]. Many websites and social network applications now provide APIs that offer
access to data to support a project or supplement the datasets with which a team is
working. For example, connecting to the Twitter API can enable a team to download
millions of tweets to perform a project for sentiment analysis on a product, a company, or
an idea. Much of the Twitter data is publicly available and can augment other datasets
used on the project.
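As a generic illustration of pulling data through an API from R, the sketch below uses the httr package against a placeholder REST endpoint. The URL, token variable, and query parameters are hypothetical, and a real service such as Twitter has its own authentication flow, endpoints, and rate limits.

library(httr)
library(jsonlite)

# Hypothetical endpoint and bearer token; substitute the real service's values.
endpoint <- "https://api.example.com/v1/posts"
token    <- Sys.getenv("API_TOKEN")

resp <- GET(endpoint,
            add_headers(Authorization = paste("Bearer", token)),
            query = list(q = "product name", limit = 100))

stop_for_status(resp)   # fail loudly on an HTTP error
posts <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))

# The parsed records can now be combined with the project's other datasets.
str(posts)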
2.3.3 Learning About the Data
A critical aspect of a data science project is to become familiar with the data itself.
Spending time to learn the nuances of the datasets provides context to understand what
constitutes a reasonable value and expected output versus what is a surprising finding. In
addition, it is important to catalog the data sources that the team has access to and identify
additional data sources that the team can leverage but perhaps does not have access to
today. Some of the activities in this step may overlap with the initial investigation of the datasets that occurs in the discovery phase. Doing this activity accomplishes several goals.
Clarifies the data that the data science team has access to at the start of the project
Highlights gaps by identifying datasets within an organization that the team may find
useful but may not be accessible to the team today. As a consequence, this activity
can trigger a project to begin building relationships with the data owners and finding appropriate ways to share data. In addition, this activity may provide an impetus to begin collecting new data that benefits the organization or a specific long-term project.
Identifies datasets outside the organization that may be useful to obtain, through open
APIs, data sharing, …