One important aspect of data science is reframing business challenges as analytics challenges. Understanding this concept is essential to applying the data analytics lifecycle.
Review this week’s required reading. Construct an essay that incorporates the following information:
a. Briefly describe an industry that is of interest to you
b. Using your chosen industry as an example, describe a business challenge
c. Describe how the business challenge you described can be reframed as an analytics challenge
Specifications:
Data Science &
Big Data Analytics
Discovering, Analyzing, Visualizing
and Presenting Data
EMC Education Services
WILEY
Data Science & Big Data Analytics: Discovering, Analyzing, Visualizing and Presenting Data
Published by
John Wiley & Sons, Inc.
10475 Crosspoint Boulevard
Indianapolis, IN 46256
www.wiley.com
Copyright © 2015 by John Wiley & Sons, Inc., Indianapolis, Indiana
Published simultaneously in Canada
ISBN: 978-1-118-87613-8
ISBN: 978-1-118-87622-0 (ebk)
ISBN: 978-1-118-87605-3 (ebk)
Manufactured in the United States of America
10 9 8 7 6 5 4 3 2 1
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying,
recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permis-
sion of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA
01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc.,
111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of
the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be
created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with
the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the
services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an
organization or Web site is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher
endorses the information the organization or website may provide or recommendations it may make. Further, readers should be aware that Internet websites
listed in this work may have changed or disappeared between when this work was written and when it is read.
For general information on our other products and services please contact our Customer Care Department within the United States at (877) 762-2974, outside the
United States at (317) 572-3993 or fax (317) 572-4002.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be
included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download
this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Library of Congress Control Number: 2014946681
Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates, in the United States and other coun-
tries, and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated
with any product or vendor mentioned in this book.
Credits
Executive Editor
Carol Long
Project Editor
Kelly Talbot
Production Manager
Kathleen Wisor
Copy Editor
Karen Gill
Manager of Content Development
and Assembly
Mary Beth Wakefield
Marketing Director
David Mayhew
Marketing Manager
Carrie Sherrill
Professional Technology and Strategy Director
Barry Pruett
Business Manager
Amy Knies
Associate Publisher
Jim Minatel
Project Coordinator, Cover
Patrick Redmond
Proofreader
Nancy Carrasco
Indexer
Johnna Van Hoose Dinse
Cover Designer
Mallesh Gurram
About the Key Contributors
David Dietrich heads the data science education team within EMC Education Services, where he leads the
curriculum, strategy and course development related to Big Data Analytics and Data Science. He co-au-
thored the first course in EMC’s Data Science curriculum, two additional EMC courses focused on teaching
leaders and executives about Big Data and data science, and is a contributing author and editor of this
book. He has filed 14 patents in the areas of data science, data privacy, and cloud computing.
David has been an advisor to several universities looking to develop academic programs related to data
analytics, and has been a frequent speaker at conferences and industry events. He also has been a guest lecturer at universi-
ties in the Boston area. His work has been featured in major publications including Forbes, Harvard Business Review, and the
2014 Massachusetts Big Data Report, commissioned by Governor Deval Patrick.
Involved with analytics and technology for nearly 20 years, David has worked with many Fortune 500 companies over his
career, holding multiple roles involving analytics, including managing analytics and operations teams, delivering analytic con-
sulting engagements, managing a line of analytical software products for regulating the US banking industry, and developing
Software-as-a-Service and BI-as-a-Service offerings. Additionally, David collaborated with the U.S. Federal Reserve in develop-
ing predictive models for monitoring mortgage portfolios.
Barry Heller is an advisory technical education consultant at EMC Education Services. Barry is a course developer and cur-
riculum advisor in the emerging technology areas of Big Data and data science. Prior to his current role, Barry was a consul-
tant research scientist leading numerous analytical initiatives within EMC’s Total Customer Experience
organization. Early in his EMC career, he managed the statistical engineering group as well as led the
data warehousing efforts in an Enterprise Resource Planning (ERP) implementation. Prior to joining EMC,
Barry held managerial and analytical roles in reliability engineering functions at medical diagnostic and
technology companies. During his career, he has applied his quantitative skill set to a myriad of business
applications in the Customer Service, Engineering, Manufacturing, Sales/Marketing, Finance, and Legal
arenas. Underscoring the importance of strong executive stakeholder engagement, many of his successes
have resulted from focusing not only on the technical details of an analysis, but also on the decisions that will result from
the analysis. Barry earned a B.S. in Computational Mathematics from the Rochester Institute of Technology and an M.A. in
Mathematics from the State University of New York (SUNY) New Paltz.
Beibei Yang is a Technical Education Consultant at EMC Education Services, responsible for developing several open courses
at EMC related to Data Science and Big Data Analytics. Beibei has seven years of experience in the IT industry. Prior to EMC she
worked as a software engineer, systems manager, and network manager for a Fortune 500 company, where she introduced
new technologies to improve efficiency and encourage collaboration. Beibei has published papers in
prestigious conferences and has filed multiple patents. She received her Ph.D. in computer science from
the University of Massachusetts Lowell. She has a passion for natural language processing and data
mining, especially using various tools and techniques to find hidden patterns and tell stories with data.
Data Science and Big Data Analytics is an exciting domain where the potential of digital information is
maximized for making intelligent business decisions. We believe that this is an area that will attract a lot of
talented students and professionals in the short, mid, and long term.
Acknowledgments
EMC Education Services embarked on learning this subject with the intent to develop an “open” curriculum and
certification. It was a challenging journey at the time as not many understood what it would take to be a true
data scientist. After initial research (and struggle), we were able to define what was needed and attract very
talented professionals to work on the project. The course, “Data Science and Big Data Analytics,” has become
well accepted across academia and the industry.
Led by EMC Education Services, this book is the result of efforts and contributions from a number of key EMC
organizations and supported by the office of the CTO, IT, Global Services, and Engineering. Many sincere
thanks to the key contributors and subject matter experts David Dietrich, Barry Heller, and Beibei Yang
for their work developing content and graphics for the chapters. A special thanks to subject matter experts
John Cardente and Ganesh Rajaratnam for their active involvement reviewing multiple book chapters and
providing valuable feedback throughout the project.
We are also grateful to the following experts from EMC and Pivotal for their support in reviewing and improving
the content in this book:
Aidan O’Brien Joe Kambourakis
Alexander Nunes Joe Milardo
Bryan Miletich John Sopka
Dan Baskette Kathryn Stiles
Daniel Mepham Ken Taylor
Dave Reiner Lanette Wells
Deborah Stokes Michael Hancock
Ellis Kriesberg Michael Vander Donk
Frank Coleman Narayanan Krishnakumar
Hisham Arafat Richard Moore
Ira Schild Ron Glick
Jack Harwood Stephen Maloney
Jim McGroddy Steve Todd
Jody Goncalves Suresh Thankappan
Joe Dery Tom McGowan
We also thank Ira Schild and Shane Goodrich for coordinating this project, Mallesh Gurram for the cover design, Chris Conroy
and Rob Bradley for graphics, and the publisher, John Wiley and Sons, for timely support in bringing this book to the
industry.
Nancy Gessler
Director, Education Services, EMC Corporation
Alok Shrivastava
Sr. Director, Education Services, EMC Corporation
Contents
Introduction
Chapter 1 • Introduction to Big Data Analytics
1.1 Big Data Overview
1.1.1 Data Structures
1.1.2 Analyst Perspective on Data Repositories
1.2 State of the Practice in Analytics
1.2.1 BI Versus Data Science
1.2.2 Current Analytical Architecture
1.2.3 Drivers of Big Data
1.2.4 Emerging Big Data Ecosystem and a New Approach to Analytics
1.3 Key Roles for the New Big Data Ecosystem
1.4 Examples of Big Data Analytics
Summary
Exercises
Bibliography
Chapter 2 • Data Analytics Lifecycle
2.1 Data Analytics Lifecycle Overview
2.1.1 Key Roles for a Successful Analytics Project
2.1.2 Background and Overview of Data Analytics Lifecycle
2.2 Phase 1: Discovery
2.2.1 Learning the Business Domain
2.2.2 Resources
2.2.3 Framing the Problem
2.2.4 Identifying Key Stakeholders
2.2.5 Interviewing the Analytics Sponsor
2.2.6 Developing Initial Hypotheses
2.2.7 Identifying Potential Data Sources
2.3 Phase 2: Data Preparation
2.3.1 Preparing the Analytic Sandbox
2.3.2 Performing ETLT
2.3.3 Learning About the Data
2.3.4 Data Conditioning
2.3.5 Survey and Visualize
2.3.6 Common Tools for the Data Preparation Phase
2.4 Phase 3: Model Planning
2.4.1 Data Exploration and Variable Selection
2.4.2 Model Selection
2.4.3 Common Tools for the Model Planning Phase
2.5 Phase 4: Model Building
2.5.1 Common Tools for the Model Building Phase
2.6 Phase 5: Communicate Results
2.7 Phase 6: Operationalize
2.8 Case Study: Global Innovation Network and Analysis (GINA)
2.8.1 Phase 1: Discovery
2.8.2 Phase 2: Data Preparation
2.8.3 Phase 3: Model Planning
2.8.4 Phase 4: Model Building
2.8.5 Phase 5: Communicate Results
2.8.6 Phase 6: Operationalize
Summary
Exercises
Bibliography
Chapter 3 • Review of Basic Data Analytic Methods Using R
3.1 Introduction to R
3.1.1 R Graphical User Interfaces
3.1.2 Data Import and Export
3.1.3 Attribute and Data Types
3.1.4 Descriptive Statistics
3.2 Exploratory Data Analysis
3.2.1 Visualization Before Analysis
3.2.2 Dirty Data
3.2.3 Visualizing a Single Variable
3.2.4 Examining Multiple Variables
3.2.5 Data Exploration Versus Presentation
3.3 Statistical Methods for Evaluation
3.3.1 Hypothesis Testing
3.3.2 Difference of Means
3.3.3 Wilcoxon Rank-Sum Test
3.3.4 Type I and Type II Errors
3.3.5 Power and Sample Size
3.3.6 ANOVA
Summary
Exercises
Bibliography
Chapter 4 • Advanced Analytical Theory and Methods: Clustering
4.1 Overview of Clustering
4.2 K-means
4.2.1 Use Cases
4.2.2 Overview of the Method
4.2.3 Determining the Number of Clusters
4.2.4 Diagnostics
4.2.5 Reasons to Choose and Cautions
4.3 Additional Algorithms
Summary
Exercises
Bibliography
Chapter 5 • Advanced Analytical Theory and Methods: Association Rules
5.1 Overview
5.2 Apriori Algorithm
5.3 Evaluation of Candidate Rules
5.4 Applications of Association Rules
5.5 An Example: Transactions in a Grocery Store
5.5.1 The Groceries Dataset
5.5.2 Frequent Itemset Generation
5.5.3 Rule Generation and Visualization
5.6 Validation and Testing
5.7 Diagnostics
Summary
Exercises
Bibliography
Chapter 6 • Advanced Analytical Theory and Methods: Regression
6.1 Linear Regression
6.1.1 Use Cases
6.1.2 Model Description
6.1.3 Diagnostics
6.2 Logistic Regression
6.2.1 Use Cases
6.2.2 Model Description
6.2.3 Diagnostics
6.3 Reasons to Choose and Cautions
6.4 Additional Regression Models
Summary
Exercises
Chapter 7 • Advanced Analytical Theory and Methods: Classification
7.1 Decision Trees
7.1.1 Overview of a Decision Tree
7.1.2 The General Algorithm
7.1.3 Decision Tree Algorithms
7.1.4 Evaluating a Decision Tree
7.1.5 Decision Trees in R
7.2 Naïve Bayes
7.2.1 Bayes’ Theorem
7.2.2 Naïve Bayes Classifier
7.2.3 Smoothing
7.2.4 Diagnostics
7.2.5 Naïve Bayes in R
7.3 Diagnostics of Classifiers
7.4 Additional Classification Methods
Summary
Exercises
Bibliography
Chapter 8 • Advanced Analytical Theory and Methods: Time Series Analysis
8.1 Overview of Time Series Analysis
8.1.1 Box-Jenkins Methodology
8.2 ARIMA Model
8.2.1 Autocorrelation Function (ACF)
8.2.2 Autoregressive Models
8.2.3 Moving Average Models
8.2.4 ARMA and ARIMA Models
8.2.5 Building and Evaluating an ARIMA Model
8.2.6 Reasons to Choose and Cautions
8.3 Additional Methods
Summary
Exercises
Chapter 9 • Advanced Analytical Theory and Methods: Text Analysis
9.1 Text Analysis Steps
9.2 A Text Analysis Example
9.3 Collecting Raw Text
9.4 Representing Text
9.5 Term Frequency-Inverse Document Frequency (TFIDF)
9.6 Categorizing Documents by Topics
9.7 Determining Sentiments
9.8 Gaining Insights
Summary
Exercises
Bibliography
Chapter 10 • Advanced Analytics-Technology and Tools: MapReduce and Hadoop
10.1 Analytics for Unstructured Data
10.1.1 Use Cases
10.1.2 MapReduce
10.1.3 Apache Hadoop
10.2 The Hadoop Ecosystem
10.2.1 Pig
10.2.2 Hive
10.2.3 HBase
10.2.4 Mahout
10.3 NoSQL
Summary
Exercises
Bibliography
Chapter 11 • Advanced Analytics-Technology and Tools: In-Database Analytics
11.1 SQL Essentials
11.1.1 Joins
11.1.2 Set Operations
11.1.3 Grouping Extensions
11.2 In-Database Text Analysis
11.3 Advanced SQL
11.3.1 Window Functions
11.3.2 User-Defined Functions and Aggregates
11.3.3 Ordered Aggregates
11.3.4 MADlib
Summary
Exercises
Bibliography
Chapter 12 • The Endgame, or Putting It All Together
12.1 Communicating and Operationalizing an Analytics Project
12.2 Creating the Final Deliverables
12.2.1 Developing Core Material for Multiple Audiences
12.2.2 Project Goals
12.2.3 Main Findings
12.2.4 Approach
12.2.5 Model Description
12.2.6 Key Points Supported with Data
12.2.7 Model Details
12.2.8 Recommendations
12.2.9 Additional Tips on Final Presentation
12.2.10 Providing Technical Specifications and Code
12.3 Data Visualization Basics
12.3.1 Key Points Supported with Data
12.3.2 Evolution of a Graph
12.3.3 Common Representation Methods
12.3.4 How to Clean Up a Graphic
12.3.5 Additional Considerations
Summary
Exercises
References and Further Reading
Bibliography
Index
Foreword
Technological advances and the associated changes in practical daily life have produced a rapidly expanding
“parallel universe” of new content, new data, and new information sources all around us. Regardless of how one
defines it, the phenomenon of Big Data is ever more present, ever more pervasive, and ever more important. There
is enormous value potential in Big Data: innovative insights, improved understanding of problems, and countless
opportunities to predict-and even to shape-the future. Data Science is the principal means to discover and
tap that potential. Data Science provides ways to deal with and benefit from Big Data: to see patterns, to discover
relationships, and to make sense of stunningly varied images and information.
Not everyone has studied statistical analysis at a deep level. People with advanced degrees in applied math-
ematics are not a commodity. Relatively few organizations have committed resources to large collections of data
gathered primarily for the purpose of exploratory analysis. And yet, while applying the practices of Data Science
to Big Data is a valuable differentiating strategy at present, it will be a standard core competency in the not so
distant future.
How does an organization operationalize quickly to take advantage of this trend? We’ve created this book for
that exact purpose.
EMC Education Services has been listening to the industry and organizations, observing the multi-faceted
transformation of the technology landscape, and doing direct research in order to create curriculum and con-
tent to help individuals and organizations transform themselves. For the domain of Data Science and Big Data
Analytics, our educational strategy balances three things: people-especially in the context of data science teams,
processes-such as the analytic lifecycle approach presented in this book, and tools and technologies-in this case
with the emphasis on proven analytic tools.
So let us help you capitalize on this new “parallel universe” that surrounds us. We invite you to learn about
Data Science and Big Data Analytics through this book and hope it significantly accelerates your efforts in the
transformational process.
Introduction
Big Data is creating significant new opportunities for organizations to derive new value and create competitive
advantage from their most valuable asset: information. For businesses, Big Data helps drive efficiency, quality, and
personalized products and services, producing improved levels of customer satisfaction and profit. For scientific
efforts, Big Data analytics enable new avenues of investigation with potentially richer results and deeper insights
than previously available. In many cases, Big Data analytics integrate structured and unstructured data with real-
time feeds and queries, opening new paths to innovation and insight.
This book provides a practitioner’s approach to some of the key techniques and tools used in Big Data analytics.
Knowledge of these methods will help people become active contributors to Big Data analytics projects. The book’s
content is designed to assist multiple stakeholders: business and data analysts looking to add Big Data analytics
skills to their portfolio; database professionals and managers of business intelligence, analytics, or Big Data groups
looking to enrich their analytic skills; and college graduates investigating data science as a career field.
The content is structured in twelve chapters. The first chapter introduces the reader to the domain of Big Data,
the drivers for advanced analytics, and the role of the data scientist. The second chapter presents an analytic project
lifecycle designed for the particular characteristics and challenges of hypothesis-driven analysis with Big Data.
Chapter 3 examines fundamental statistical techniques in the context of the open source R analytic software
environment. This chapter also highlights the importance of exploratory data analysis via visualizations and reviews
the key notions of hypothesis development and testing.
Chapters 4 through 9 discuss a range of advanced analytical methods, including clustering, classification,
regression analysis, time series and text analysis.
Chapters 10 and 11 focus on specific technologies and tools that support advanced analytics with Big Data. In
particular, the MapReduce paradigm and its instantiation in the Hadoop ecosystem, as well as advanced topics
in SQL and in-database text analytics, form the focus of these chapters.
Chapter 12 provides guidance on operationalizing Big Data analytics projects. This chapter focuses on creat-
ing the final deliverables, converting an analytics project to an ongoing asset of an organization’s operation, and
creating clear, useful visual outputs based on the data.
EMC Academic Alliance
University and college faculties are invited to join the Academic Alliance program to access unique “open”
curriculum-based education on the following topics:
• Data Science and Big Data Analytics
• Information Storage and Management
• Cloud Infrastructure and Services
• Backup Recovery Systems and Architecture
The program provides faculty with course resources, at no cost, to prepare students for opportunities that exist in today’s
evolving IT industry. For more information, visit http://education.EMC.com/academicalliance.
EMC Proven Professional Certification
EMC Proven Professional is a leading education and certification program in the IT industry, providing compre-
hensive coverage of information storage technologies, virtualization, cloud computing, data science/ Big Data
analytics, and more.
Being proven means investing in yourself and formally validating your expertise.
This book prepares you for Data Science Associate (EMCDSA) certification. Visit http://education.EMC.com
for details.
INTRODUCTION TO BIG DATA ANALYTICS
Much has been written about Big Data and the need for advanced analytics within industry, academia,
and government. Availability of new data sources and the rise of more complex analytical opportunities
have created a need to rethink existing data architectures to enable analytics that take advantage of Big
Data. In addition, significant debate exists about what Big Data is and what kinds of skills are required to
make best use of it. This chapter explains several key concepts to clarify what is meant by Big Data, why
advanced analytics are needed, how Data Science differs from Business Intelligence (BI), and what new
roles are needed for the new Big Data ecosystem.
1.1 Big Data Overview
Data is created constantly, and at an ever-increasing rate. Mobile phones, social media, imaging technologies
to determine a medical diagnosis-all these and more create new data that must be stored somewhere
for some purpose. Devices and sensors automatically generate diagnostic information that needs to be
stored and processed in real time. Merely keeping up with this huge influx of data is difficult, but substan-
tially more challenging is analyzing vast amounts of it, especially when it does not conform to traditional
notions of data structure, to identify meaningful patterns and extract useful information. These challenges
of the data deluge present the opportunity to transform business, government, science, and everyday life.
Several industries have led the way in developing their ability to gather and exploit data:
• Credit card companies monitor every purchase their customers make and can identify fraudulent
purchases with a high degree of accuracy using rules derived by processing billions of transactions.
• Mobile phone companies analyze subscribers’ calling patterns to determine, for example, whether a
caller’s frequent contacts are on a rival network. If that rival network is offering an attractive promo-
tion that might cause the subscriber to defect, the mobile phone company can proactively offer the
subscriber an incentive to remain in her contract.
• For companies such as LinkedIn and Facebook, data itself is their primary product. The valuations of
these companies are heavily derived from the data they gather and host, which contains more and
more intrinsic value as the data grows.
Three attributes stand out as defining Big Data characteristics:
• Huge volume of data: Rather than thousands or millions of rows, Big Data can be billions of rows and
millions of columns.
• Complexity of data types and structures: Big Data reflects the variety of new data sources, formats,
and structures, including digital traces being left on the web and other digital repositories for subse-
quent analysis.
• Speed of new data creation and growth: Big Data can describe high velocity data, with rapid data
ingestion and near real time analysis.
Although the volume of Big Data tends to attract the most attention, generally the variety and veloc-
ity of the data provide a more apt definition of Big Data. (Big Data is sometimes described as having 3 Vs:
volume, variety, and velocity.) Due to its size or structure, Big Data cannot be efficiently analyzed using only
traditional databases or methods. Big Data problems require new tools and technologies to store, manage,
and realize the business benefit. These new tools and technologies enable creation, manipulation, and
1.1 Big Data Overview
management of large datasets and the storage environments that house them. Another definition of Big
Data comes from the McKinsey Global report from 2011:
Big Data is data whose scale, distribution, diversity, and/or timeliness require the
use of new technical architectures and analytics to enable insights that unlock new
sources of business value.
McKinsey & Co.; Big Data: The Next Frontier for Innovation, Competition, and
Productivity [1]
McKinsey’s definition of Big Data implies that organizations will need new data architectures and ana-
lytic sandboxes, new tools, new analytical methods, and an integration of multiple skills into the new role
of the data scientist, which will be discussed in Section 1.3. Figure 1-1 highlights several sources of the Big
Data deluge.
FIGURE 1-1 What’s driving the data deluge: mobile sensors, smart grids, social media, geophysical exploration, video surveillance, medical imaging, video rendering, and gene sequencing
The rate of data creation is accelerating, driven by many of the items in Figure 1-1.
Social media and genetic sequencing are among the fastest-growing sources of Big Data and examples
of untraditional sources of data being used for analysis.
For example, in 2012 Facebook users posted 700 status updates per second worldwide, which can be
leveraged to deduce latent interests or political views of users and show relevant ads. For instance, an
update in which a woman changes her relationship status from “single” to “engaged” would trigger ads
on bridal dresses, wedding planning, or name-changing services.
Facebook can also construct social graphs to analyze which users are connected to each other as an
interconnected network. In March 2013, Facebook released a new feature called “Graph Search,” enabling
users and developers to search social graphs for people with similar interests, hobbies, and shared locations.
Another example comes from genomics. Genetic sequencing and human genome mapping provide a
detailed understanding of genetic makeup and lineage. The health care industry is looking toward these
advances to help predict which illnesses a person is likely to get in his lifetime and take steps to avoid these
maladies or reduce their impact through the use of personalized medicine and treatment. Such tests also
highlight typical responses to different medications and pharmaceutical drugs, heightening risk awareness
of specific drug treatments.
While data has grown, the cost to perform this work has fallen dramatically. The cost to sequence one
human genome has fallen from $100 million in 2001 to $10,000 in 2011, and the cost continues to drop. Now,
websites such as 23andme (Figure 1-2) offer genotyping for less than $100. Although genotyping analyzes
only a fraction of a genome and does not provide as much granularity as genetic sequencing, it does point
to the fact that data and complex analysis is becoming more prevalent and less expensive to deploy.
FIGURE 1-2 Examples of what can be learned through genotyping, from 23andme.com (ancestry composition, finding relatives, and building a family tree)
As illustrated by the examples of social media and genetic sequencing, individuals and organizations
both derive benefits from analysis of ever-larger and more complex data sets that require increasingly
powerful analytical capabilities.
1.1.1 Data Structures
Big data can come in multiple forms, including structured and non-structured data such as financial
data, text files, multimedia files, and genetic mappings. Contrary to much of the traditional data analysis
performed by organizations, most of the Big Data is unstructured or semi-structured in nature, which
requires different techniques and tools to process and analyze. [2] Distributed computing environments
and massively parallel processing (MPP) architectures that enable parallelized data ingest and analysis are
the preferred approach to process such complex data.
With this in mind, this section takes a closer look at data structures.
Figure 1-3 shows four types of data structures, with 80-90% of future data growth coming from non-
structured data types. [2] Though different, the four are commonly mixed. For example, a classic Relational
Database Management System (RDBMS) may store call logs for a software support call center. The RDBMS
may store characteristics of the support calls as typical structured data, with attributes such as time stamps,
machine type, problem type, and operating system. In addition, the system will likely have unstructured,
quasi- or semi-structured data, such as free-form call log information taken from an e-mail ticket of the
problem, customer chat history, or transcript of a phone call describing the technical problem and the solu-
tion or audio file of the phone call conversation. Many insights could be extracted from the unstructured,
quasi- or semi-structured data in the call center data.
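To make the call center example concrete, the following minimal R sketch (illustrative only; the record, field names, and values are invented, not taken from the text) shows how the structured attributes of a support call lend themselves to direct queries, while the free-form notes column needs the text-oriented handling covered in Chapter 9.

```r
# Illustrative sketch: one support-call record mixing structured attributes
# with an unstructured free-text field (all values are made up).
call_log <- data.frame(
  time_stamp   = as.POSIXct("2014-07-01 09:42:00"),
  machine_type = "server_x86",
  problem_type = "disk_failure",
  os           = "Linux",
  call_notes   = "Customer reports intermittent I/O errors after a firmware update.",
  stringsAsFactors = FALSE
)

# Structured attributes support conventional filtering:
subset(call_log, problem_type == "disk_failure")

# The unstructured column requires different techniques, e.g., keyword search:
grepl("firmware", call_log$call_notes, ignore.case = TRUE)
```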
FIGURE 1-3 Big Data growth is increasingly unstructured (Big Data characteristics: data structures, ranging from structured to quasi-structured, semi-structured, and unstructured)
Although analyzing structured data tends to be the most familiar technique, a different technique is
required to meet the challenges of analyzing semi-structured data (shown as XML), quasi-structured data (shown
as a clickstream), and unstructured data.
Here are examples of how each of the four main types of data structures may look.
• Structured data: Data containing a defined data type, format, and structure (that is, transaction data,
online analytical processing [OLAP] data cubes, traditional RDBMS, CSV files, and even simple spread-
sheets). See Figure 1-4.
FIGURE 1-4 Example of structured data (a Summer Food Service Program table of sites, peak participation, meals served, and total federal expenditures by fiscal year)
• Semi-structured data: Textual data files with a discernible pattern that enables parsing (such
as Extensible Markup Language [XML] data files that are self-describing and defined by an XML
schema); a brief R sketch following this list illustrates parsing such a record. See Figure 1-5.
• Quasi-structured data: Textual data with erratic data formats that can be formatted with effort,
tools, and time (for instance, web clickstream data that may contain inconsistencies in data values
and formats). See Figure 1-6.
• Unstructured data: Data that has no inherent structure, which may include text documents, PDFs,
images, and video. See Figure 1-7.
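As referenced in the list above, here is a minimal R sketch of the semi-structured case. It assumes the xml2 package (not otherwise used in this chapter), and the tags and values are invented for illustration; the point is that self-describing markup lets fields be located by name rather than by position.

```r
# Illustrative sketch: a self-describing XML record parsed with xml2
# (package assumed to be installed; tags and values are made up).
library(xml2)

xml_record <- '<call>
  <time_stamp>2014-07-01T09:42:00</time_stamp>
  <machine_type>server_x86</machine_type>
  <problem_type>disk_failure</problem_type>
</call>'

doc <- read_xml(xml_record)

# The markup describes the data, so fields can be retrieved by name:
xml_text(xml_find_first(doc, "//machine_type"))   # "server_x86"
xml_text(xml_find_first(doc, "//problem_type"))   # "disk_failure"
```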
Quasi-structured data is a common phenomenon that bears closer scrutiny. Consider the following
example. A user attends the EMC World conference and subsequently runs a Google search online to find
information related to EMC and Data Science. This would produce a URL such as https://www.google
.com/#q=EMC+data+science and a list of results, such as in the first graphic of Figure 1-6.
FIGURE 1-5 Example of semi-structured data (the HTML page source behind the search results, viewed through the browser’s View Source option)
After doing this search, the user may choose the second link, to read more about the headline “Data
Scientist - EMC Education, Training, and Certification.” This brings the user to an emc.com site focused on
this topic and a new URL, https://education.emc.com/guest/campaign/data_science
.aspx, that displays the page shown as (2) in Figure 1-6. Arriving at this site, the user may decide to click
to learn more about the process of becoming certified in data science. The user chooses a link toward the
top of the page on Certifications, bringing the user to a new URL: https://education.emc.com/
guest/certification/framework/stf/data_science.aspx, which is (3) in Figure 1-6.
Visiting these three websites adds three URLs to the log files monitoring the user’s computer or network
use. These three URLs are:
https://www.google.com/#q=EMC+data+science
https://education.emc.com/guest/campaign/data_science.aspx
https://education.emc.com/guest/certification/framework/stf/data_science.aspx
FIGURE 1-6 Example of EMC Data Science search results (1: the Google results page, 2: the EMC data science campaign page, 3: the data science certification framework page)
1.1 Big Data Overview
FIGURE 1-7 Example of unstructured data: video about Antarctica expedition [3]
This set of three URLs reflects the websites and actions taken to find Data Science information related
to EMC. Together, this comprises a clickstream that can be parsed and mined by data scientists to discover
usage patterns and uncover relationships among clicks and areas of interest on a website or group of sites.
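As a minimal illustration (not from the text) of how such a clickstream might be parsed, the following R sketch splits each logged URL into a host and a path with regular expressions and tallies visits per site, a first step toward mining usage patterns.

```r
# Illustrative sketch: parse the three logged URLs from the clickstream.
clicks <- c(
  "https://www.google.com/#q=EMC+data+science",
  "https://education.emc.com/guest/campaign/data_science.aspx",
  "https://education.emc.com/guest/certification/framework/stf/data_science.aspx"
)

no_scheme <- sub("^https?://", "", clicks)   # drop the protocol
host <- sub("/.*$", "", no_scheme)           # keep everything before the first "/"
path <- sub("^[^/]+", "", no_scheme)         # keep everything from the first "/"

data.frame(host, path)

# Visit counts per site hint at the areas of interest in the clickstream:
table(host)
```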
The four data types described in this chapter are sometimes generalized into two groups: structured
and unstructured data. Big Data describes new kinds of data with which most organizations may not be
used to working. With this in mind, the next section discusses common technology architectures from the
standpoint of someone wanting to analyze Big Data.
1.1.2 Analyst Perspective on Data Repositories
The introduction of spreadsheets enabled business users to create simple logic on data structured in rows
and columns and create their own analyses of business problems. Database administrator training is not
required to create spreadsheets: They can be set up to do many things quickly and independently of
information technology (IT) groups. Spreadsheets are easy to share, and end users have control over the
logic involved. However, their proliferation can result in “many versions of the truth.” In other words, it
can be challenging to determine if a particular user has the most relevant version of a spreadsheet, with
the most current data and logic in it. Moreover, if a laptop is lost or a file becomes corrupted, the data and
logic within the spreadsheet could be lost. This is an ongoing challenge because spreadsheet programs
such as Microsoft Excel still run on many computers worldwide. With the proliferation of data islands (or
spread marts), the need to centralize the data is more pressing than ever.
As data needs grew, so did more scalable data warehousing solutions. These technologies enabled
data to be managed centrally, providing benefits of security, failover, and a single repository where users
could rely on getting an “official” source of data for financial reporting or other mission-critical tasks. This
structure also enabled the creation of OLAP cubes and BI analytical tools, which provided quick access to a
set of dimensions within an RDBMS. More advanced features enabled performance of in-depth analytical
techniques such as regressions and neural networks. Enterprise Data Warehouses (EDWs) are critical for
reporting and BI tasks and solve many of the problems that proliferating spreadsheets introduce, such as
which of multiple versions of a spreadsheet is correct. EDWs-and a good BI strategy-provide direct data
feeds from sources that are centrally managed, backed up, and secured.
Despite the benefits of EDWs and BI, these systems tend to restrict the flexibility needed to perform
robust or exploratory data analysis. With the EDW model, data is managed and controlled by IT groups
and database administrators (DBAs), and data analysts must depend on IT for access and changes to the
data schemas. This imposes longer lead times for analysts to get data; most of the time is spent waiting for
approvals rather than starting meaningful work. Additionally, many times the EDW rules restrict analysts
from building datasets. Consequently, it is common for additional systems to emerge containing critical
data for constructing analytic data sets, managed locally by power users. IT groups generally dislike exis-
tence of data sources outside of their control because, unlike an EDW, these data sets are not managed,
secured, or backed up. From an analyst perspective, EDW and BI solve problems related to data accuracy
and availability. However, EDW and BI introduce new problems related to flexibility and agility, which were
less pronounced when dealing with spreadsheets.
A solution to this problem is the analytic sandbox, which attempts to resolve the conflict for analysts and
data scientists with EDW and more formally managed corporate data. In this model, the IT group may still
manage the analytic sandboxes, but they will be purposefully designed to enable robust analytics, while
being centrally managed and secured. These sandboxes, often referred to as workspaces, are designed to
enable teams to explore many datasets in a controlled fashion and are not typically used for enterprise-
level financial reporting and sales dashboards.
Many times, analytic sandboxes enable high-performance computing using in-database processing-
the analytics occur within the database itself. The idea is that performance of the analysis will be better if
the analytics are run in the database itself, rather than bringing the data to an analytical tool that resides
somewhere else. In-database analytics, discussed further in Chapter 11, “Advanced Analytics-Technology
and Tools: In-Database Analytics,” creates relationships to multiple data sources within an organization and
saves time spent creating these data feeds on an individual basis. In-database processing for deep analytics
enables faster turnaround time for developing and executing new analytic models, while reducing, though
not eliminating, the cost associated with data stored in local, “shadow” file systems. In addition, rather
than the typical structured data in the EDW, analytic sandboxes can house a greater variety of data, such
as raw data, textual data, and other kinds of unstructured data, without interfering with critical production
databases. Table 1-1 summarizes the characteristics of the data repositories mentioned in this section.
TABLE 1-1 Types of Data Repositories, from an Analyst Perspective

Spreadsheets and data marts (“spreadmarts”)
• Spreadsheets and low-volume databases for record keeping
• Analyst depends on data extracts.

Data Warehouses
• Centralized data containers in a purpose-built space
• Supports BI and reporting, but restricts robust analyses
• Analyst dependent on IT and DBAs for data access and schema changes
• Analysts must spend significant time to get aggregated and disaggregated data extracts from multiple sources.

Analytic Sandbox (workspaces)
• Data assets gathered from multiple sources and technologies for analysis
• Enables flexible, high-performance analysis in a nonproduction environment; can leverage in-database processing
• Reduces costs and risks associated with data replication into “shadow” file systems
• “Analyst owned” rather than “DBA owned”
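To make the idea of in-database processing described above concrete, here is a minimal, self-contained R sketch. It assumes the DBI and RSQLite packages and uses an in-memory SQLite database as a stand-in for an analytic sandbox; the table and column names are invented. The aggregation is pushed to the database so that only a small summary, rather than every row, is returned to the analytical tool.

```r
# Illustrative sketch of in-database processing (toy data, hypothetical table).
library(DBI)
library(RSQLite)

con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(con, "support_calls", data.frame(
  problem_type = c("disk_failure", "disk_failure", "os_crash"),
  duration_min = c(42, 35, 18)
))

# Push the computation to where the data lives; only the summary comes back:
dbGetQuery(con, "
  SELECT problem_type, AVG(duration_min) AS avg_duration
  FROM support_calls
  GROUP BY problem_type")

dbDisconnect(con)
```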
There are several things to consider with Big Data Analytics projects to ensure the approach fits with
the desired goals. Due to the characteristics of Big Data, these projects lend themselves to decision sup-
port for high-value, strategic decision making with high processing complexity. The analytic techniques
used in this context need to be iterative and flexible, due to the high volume of data and its complexity.
Performing rapid and complex analysis requires high throughput network connections and a consideration
for the acceptable amount of latency. For instance, developing a real-time product recommender for a
website imposes greater system demands than developing a near-real-time recommender, which may
still provide acceptable performance, have slightly greater latency, and may be cheaper to deploy. These
considerations require a different approach to thinking about analytics challenges, which will be explored
further in the next section.
1.2 State of the Practice in Analytics
Current business problems provide many opportunities for organizations to become more analytical and
data driven, as shown in Table 1-2.
TABLE 1-2 Business Drivers for Advanced Analytics

• Optimize business operations: Sales, pricing, profitability, efficiency
• Identify business risk: Customer churn, fraud, default
• Predict new business opportunities: Upsell, cross-sell, best new customer prospects
• Comply with laws or regulatory requirements: Anti-Money Laundering, Fair Lending, Basel II-III, Sarbanes-Oxley (SOX)
Table 1-2 outlines four categories of common business problems that organizations contend with where
they have an opportunity to leverage advanced analytics to create competitive advantage. Rather than only
performing standard reporting on these areas, organizations can apply advanced analytical techniques
to optimize processes and derive more value from these common tasks. The first three examples do not
represent new problems. Organizations have been trying to reduce customer churn, increase sales, and
cross-sell customers for many years. What is new is the opportunity to fuse advanced analytical techniques
with Big Data to produce more impactful analyses for these traditional problems. The last example por-
trays emerging regulatory requirements. Many compliance and regulatory laws have been in existence for
decades, but additional requirements are added every year, which represent additional complexity and
data requirements for organizations. Laws related to anti-money laundering (AML) and fraud prevention
require advanced analytical techniques to comply with and manage properly.
1.2.1 BI Versus Data Science
The four business drivers shown in Table 1-2 require a variety of analytical techniques to address them prop-
erly. Although much is written generally about analytics, it is important to distinguish between Bland Data
Science. As shown in Figure 1-8, there are several ways to compare these groups of analytical techniques.
One way to evaluate the type of analysis being performed is to examine the time horizon and the kind
of analytical approaches being used. BI tends to provide reports, dashboards, and queries on business
questions for the current period or in the past. BI systems make it easy to answer questions related to
quarter-to-date revenue, progress toward quarterly targets, and how much of a given product
was sold in a prior quarter or year. These questions tend to be closed-ended and explain current or past
behavior, typically by aggregating historical data and grouping it in some way. BI provides hindsight and
some insight and generally answers questions related to "when" and "where" events occurred.
By comparison, Data Science tends to use disaggregated data in a more forward-looking, exploratory
way, focusing on analyzing the present and enabling informed decisions about the future. Rather than
aggregating historical data to look at how many of a given product sold in the previous quarter, a team
may employ Data Science techniques such as time series analysis, further discussed in Chapter 8, “Advanced
Analytical Theory and Methods: Time Series Analysis,” to forecast future product sales and revenue more
accurately than extending a simple trend line. In addition, Data Science tends to be more exploratory in
nature and may use scenario optimization to deal with more open-ended questions. This approach provides
insight into current activity and foresight into future events, while generally focusing on questions related
to “how” and “why” events occur.
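As a hedged sketch of that contrast, the short R example below uses only base R and synthetic quarterly sales, so the data and model choice (Holt-Winters) are illustrative assumptions rather than anything prescribed by the text:

# Synthetic quarterly sales with trend and seasonality
set.seed(1)
sales <- ts(100 + 2 * (1:24) + 15 * sin(2 * pi * (1:24) / 4) + rnorm(24, sd = 5),
            frequency = 4, start = c(2010, 1))

# BI-style extrapolation: fit a straight trend line and extend it four quarters
t_idx <- as.numeric(time(sales))
trend_fit <- lm(as.numeric(sales) ~ t_idx)
trend_next <- predict(trend_fit, newdata = data.frame(t_idx = max(t_idx) + (1:4) / 4))

# Time-series approach: a seasonal model that also captures the quarterly cycle
hw_fit <- HoltWinters(sales)
hw_next <- predict(hw_fit, n.ahead = 4)

cbind(trend_next, hw_next)   # the seasonal forecast follows the cycle the trend line flattens out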
Where BI problems tend to require highly structured data organized in rows and columns for accurate
reporting, Data Science projects tend to use many types of data sources, including large or unconventional
datasets. Depending on an organization's goals, it may choose to embark on a BI project if it is doing reporting,
creating dashboards, or performing simple visualizations, or it may choose Data Science projects if it needs
to do a more sophisticated analysis with disaggregated or varied datasets.
FIGURE 1-8 Comparing BI with Data Science. The figure positions the two along two axes: Analytical Approach (Explanatory to Exploratory) and Time (Past to Future), with Business Intelligence toward the explanatory/past corner and Data Science toward the exploratory/future corner.

Business Intelligence
• Typical techniques and data types: standard and ad hoc reporting, dashboards, alerts, queries, details on demand; structured data, traditional sources, manageable datasets
• Common questions: What happened last quarter? How many units sold? Where is the problem? In which situations?

Predictive Analytics and Data Mining (Data Science)
• Typical techniques and data types: optimization, predictive modeling, forecasting, statistical analysis; structured and unstructured data, many types of sources, very large datasets
• Common questions: What if...? What's the optimal scenario for our business? What will happen next? What if these trends continue? Why is this happening?

1.2.2 Current Analytical Architecture
As described earlier, Data Science projects need workspaces that are purpose-built for experimenting with
data, with flexible and agile data architectures. Most organizations still have data warehouses that provide
excellent support for traditional reporting and simple data analysis activities but unfortunately have a more
difficult time supporting more robust analyses. This section examines a typical analytical data architecture
that may exist within an organization.
Figure 1-9 shows a typical data architecture and several of the challenges it presents to data scientists
and others trying to do advanced analytics. This section examines the data flow to the Data Scientist and
how this individual fits into the process of getting data to analyze on projects.
FIGURE 1-9 Typical analytic architecture (downstream consumers include analysts, dashboards, reports, and alerts)
1. For data sources to be loaded into the data warehouse, data needs to be well understood,
structured, and normalized with the appropriate data type definitions. Although this kind of
centralization enables security, backup, and failover of highly critical data, it also means that data
typically must go through significant preprocessing and checkpoints before it can enter this sort
of controlled environment, which does not lend itself to data exploration and iterative analytics.
2. As a result of this level of control on the EDW, additional local systems may emerge in the form of
departmental warehouses and local data marts that business users create to accommodate their
need for flexible analysis. These local data marts may not have the same constraints for secu-
rity and structure as the main EDW and allow users to do some level of more in-depth analysis.
However, these one-off systems reside in isolation, often are not synchronized or integrated with
other data stores, and may not be backed up.
3. Once in the data warehouse, data is read by additional applications across the enterprise for BI
and reporting purposes. These are high-priority operational processes getting critical data feeds
from the data warehouses and repositories.
4. At the end of this workflow, analysts get data provisioned for their downstream analytics.
Because users generally are not allowed to run custom or intensive analytics on production
databases, analysts create data extracts from the EDW to analyze data offline in R or other local
analytical tools. Many times these tools are limited to in-memory analytics on desktops analyz-
ing samples of data, rather than the entire population of a dataset. Because these analyses are
based on data extracts, they reside in a separate location, and the results of the analysis, along with
any insights on the quality of the data or anomalies, rarely are fed back into the main data
repository. (A minimal sketch of this extract-and-sample pattern appears below.)
Because new data sources slowly accumulate in the EDW due to the rigorous validation and
data structuring process, data is slow to move into the EDW, and the data schema is slow to change.
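The following minimal R sketch illustrates the extract-and-sample pattern from step 4, with an in-memory SQLite table standing in for the EDW; the table, columns, and query are hypothetical, and a real warehouse would be reached through whatever SQL interface IT provides:

library(DBI)
library(RSQLite)

# Stand-in for the EDW: an in-memory SQLite database with a hypothetical sales table
edw <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(edw, "sales",
             data.frame(order_id = 1:100000,
                        region   = sample(c("east", "west"), 100000, replace = TRUE),
                        amount   = rlnorm(100000, meanlog = 4)))

# Analysts typically pull a limited extract rather than the full table ...
extract <- dbGetQuery(edw, "SELECT region, amount FROM sales ORDER BY RANDOM() LIMIT 10000")

# ... and analyze it offline, in memory; results and data-quality findings rarely flow back
aggregate(amount ~ region, data = extract, FUN = mean)

dbDisconnect(edw)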
Departmental data warehouses may have been originally designed for a specific purpose and set of business
needs, but over time evolved to house more and more data, some of which may be forced into existing
schemas to enable BI and the creation of OLAP cubes for analysis and reporting. Although the EDW achieves
the objective of reporting and sometimes the creation of dashboards, EDWs generally limit the ability of
analysts to iterate on the data in a separate nonproduction environment where they can conduct in-depth
analytics or perform analysis on unstructured data.
The typical data architectures just described are designed for storing and processing mission-critical
data, supporting enterprise applications, and enabling corporate reporting activities. Although reports and
dashboards are still important for organizations, most traditional data architectures inhibit data exploration
and more sophisticated analysis. Moreover, traditional data architectures have several additional implica-
tions for data scientists.
o High-value data is hard to reach and leverage, and predictive analytics and data mining activities
are last in line for data. Because the EDWs are designed for central data management and reporting,
those wanting data for analysis are generally prioritized after operational processes.
o Data moves in batches from EDW to local analytical tools. This workflow means that data scientists
are limited to performing in-memory analytics (such as with R, SAS, SPSS, or Excel), which will restrict
the size of the data sets they can use. As such, analysis may be subject to constraints of sampling,
which can skew model accuracy (a small simulation after this list illustrates the effect).
o Data Science projects will remain isolated and ad hoc, rather than centrally managed. The implica-
tion of this isolation is that the organization can never harness the power of advanced analytics in a
scalable way, and Data Science projects will exist as nonstandard initiatives, which are frequently not
aligned with corporate business goals or strategy.
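The sampling constraint noted above can be made concrete with a small simulation; the fraud rate and sample size below are invented purely for illustration:

# Rare-event population versus a small in-memory sample
set.seed(7)
population <- data.frame(fraud = rbinom(2e6, 1, 0.002))             # ~0.2% fraud across 2 million records
desktop_sample <- population[sample(nrow(population), 5000), , drop = FALSE]

sum(population$fraud)       # thousands of fraud cases exist in the full data
sum(desktop_sample$fraud)   # only around ten survive a 5,000-row sample, starving any model of positives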
All these symptoms of the traditional data architecture result in a slow “time-to-insight” and lower
business impact than could be achieved if the data were more readily accessible and supported by an envi-
ronment that promoted advanced analytics. As stated earlier, one solution to this problem is to introduce
analytic sandboxes to enable data scientists to perform advanced analytics in a controlled and sanctioned
way. Meanwhile, the current Data Warehousing solutions continue offering reporting and BI services to
support management and mission-critical operations.
1.2.3 Drivers of Big Data
To better understand the market drivers related to Big Data, it is helpful to first understand some past
history of data stores and the kinds of repositories and tools to manage these data stores.
As shown in Figure 1-10, in the 1990s the volume of information was often measured in terabytes.
Most organizations analyzed structured data in rows and columns and used relational databases and data
warehouses to manage large stores of enterprise information. The following decade saw a proliferation of
different kinds of data sources, mainly productivity and publishing tools such as content management
repositories and network-attached storage systems, to manage this kind of information, and the data
began to increase in size and started to be measured at petabyte scales. In the 2010s, the information that
organizations try to manage has broadened to include many other kinds of data. In this era, everyone
and everything is leaving a digital footprint. Figure 1-10 shows a summary perspective on sources of Big
Data generated by new applications and the scale and growth rate of the data. These applications, which
generate data volumes that can be measured in exabyte scale, provide opportunities for new analytics and
for driving new value for organizations. The data now comes from multiple sources, such as these:
• Medical information, such as genomic sequencing and diagnostic imaging
• Photos and video footage uploaded to the World Wide Web
• Video surveillance, such as the thousands of video cameras spread across a city
• Mobile devices, which provide geospatial location data of the users, as well as metadata about text
messages, phone calls, and application usage on smartphones
• Smart devices, which provide sensor-based collection of information from smart electric grids, smart
buildings, and many other public and industry infrastructures
• Nontraditional IT devices, including the use of radio-frequency identification (RFID) readers, GPS
navigation systems, and seismic processing
FIGURE 1-10 Data evolution and the rise of Big Data sources. In the 1990s, data was measured in terabytes (1 TB = 1,000 GB) and managed with RDBMSs and data warehouses. In the 2000s, data was measured in petabytes (1 PB = 1,000 TB) and managed with content and digital asset management systems. In the 2010s, data will be measured in exabytes (1 EB = 1,000 PB) and managed with NoSQL and key-value stores.
The Big Data trend is generating an enormous amount of information from many new sources. This
data deluge requires advanced analytics and new market players to take advantage of these opportunities
and new market dynamics, which will be discussed in the following section.
1.2.4 Emerging Big Data Ecosystem and a New Approach to Analytics
Organizations and data collectors are realizing that the data they can gather from individuals contains
intrinsic value and, as a result, a new economy is emerging. As this new digital economy continues to
evolve, the market sees the introduction of data vendors and data cleaners that use crowdsourcing (such
as Mechanical Turk and GalaxyZoo) to test the outcomes of machine learning techniques. Other vendors
offer added value by repackaging open source tools in a simpler way and bringing the tools to market.
Vendors such as Cloudera, Hortonworks, and Pivotal have provided this value-add for the open source
framework Hadoop.
As the new ecosystem takes shape, there are four main groups of players within this interconnected
web. These are shown in Figure 1-11.
• Data devices [shown in the (1) section of Figure 1-11] and the "Sensornet" gather data from multiple
locations and continuously generate new data about this data. For each gigabyte of new data cre-
ated, an additional petabyte of data is created about that data. [2]
• For example, consider someone playing an online video game through a PC, game console,
or smartphone. In this case, the video game provider captures data about the skill and levels
attained by the player. Intelligent systems monitor and log how and when the user plays the
game. As a consequence, the game provider can fine-tune the difficulty of the game,
suggest other related games that would most likely interest the user, and offer additional
equipment and enhancements for the character based on the user's age, gender, and
interests. This information may get stored locally or uploaded to the game provider's cloud
to analyze the gaming habits and opportunities for upsell and cross-sell, and identify
archetypical profiles of specific kinds of users.
• Smartphones provide another rich source of data. In addition to messaging and basic phone
usage, they store and transmit data about Internet usage, SMS usage, and real-time location.
This metadata can be used for analyzing traffic patterns by scanning the density of smart-
phones in locations to track the speed of cars or the relative traffic congestion on busy
roads. In this way, GPS devices in cars can give drivers real-time updates and offer alternative
routes to avoid traffic delays.
• Retail shopping loyalty cards record not just the amount an individual spends, but the loca-
tions of stores that person visits, the kinds of products purchased, the stores where goods
are purchased most often, and the combinations of products purchased together. Collecting
this data provides insights into shopping and travel habits and the likelihood of successful
advertisement targeting for certain types of retail promotions.
• Data collectors [the blue ovals, identified as (2) within Figure 1-11] include sample entities that
collect data from the device and users.
• Data results from a cable TV provider tracking the shows a person watches, which TV
channels someone will and will not pay for to watch on demand, and the prices someone is
willing to pay for premium TV content
• Retail stores tracking the path a customer takes through their store while pushing a shop-
ping cart with an RFID chip so they can gauge which products get the most foot traffic using
geospatial data collected from the RFID chips
• Data aggregators [the dark gray ovals in Figure 1-11, marked as (3)] make sense of the data collected
from the various entities from the "SensorNet" or the "Internet of Things." These organizations
compile data from the devices and usage patterns collected by government agencies, retail stores,
and websites. In turn, they can choose to transform and package the data as products to sell to list
brokers, who may want to generate marketing lists of people who may be good targets for specific ad
campaigns.
• Data users and buyers are denoted by (4) in Figure 1-11. These groups directly benefit from the data
collected and aggregated by others within the data value chain.
• Retail banks, acting as a data buyer, may want to know which customers have the highest
likelihood to apply for a second mortgage or a home equity line of credit. To provide input
for this analysis, retail banks may purchase data from a data aggregator. This kind of data
may include demographic information about people living in specific locations; people who
appear to have a specific level of debt, yet still have solid credit scores (or other characteris-
tics such as paying bills on time and having savings accounts) that can be used to infer credit
worthiness; and those who are searching the web for information about paying off debts or
doing home remodeling projects. Obtaining data from these various sources and aggrega-
tors will enable a more targeted marketing campaign, which would have been more chal-
lenging before Big Data due to the lack of information or high-performing technologies.
• Using technologies such as Hadoop to perform natural language processing on
unstructured, textual data from social media websites, users can gauge the reaction to
events such as presidential campaigns. People may, for example, want to determine public
sentiments toward a candidate by analyzing related blogs and online comments. Similarly,
data users may want to track and prepare for natural disasters by identifying which areas
a hurricane affects first and how it moves, based on which geographic areas are tweeting
about it or discussing it via social media.
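As a toy-scale, hedged illustration of the sentiment idea in the last bullet (the word lists and posts are invented; a production system would run an equivalent scoring step in parallel across a Hadoop cluster rather than in one R session):

# Naive lexicon-based sentiment scoring over a few made-up posts
positive <- c("great", "support", "win", "love")
negative <- c("bad", "oppose", "lose", "angry")

posts <- c("great debate, love the plan",
           "bad ideas, angry voters will oppose this")

score_post <- function(text) {
  words <- strsplit(tolower(text), "[^a-z]+")[[1]]
  sum(words %in% positive) - sum(words %in% negative)   # positive minus negative word hits
}

sapply(posts, score_post)   # scores above zero lean positive, below zero lean negative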
FIGURE 1-11 Emerging Big Data ecosystem, depicting (1) data devices (such as game consoles, credit card readers, computers, RFID readers, video, and medical imaging), (2) data collectors, (3) data aggregators, and (4) data users and buyers (such as law enforcement, media, delivery services, and private investigators and lawyers)
As illustrated by this emerging Big Data ecosystem, the kinds of data and the related market dynamics
vary greatly. These data sets can include sensor data, text, structured datasets, and social media. With this
in mind, it is worth recalling that these data sets will not work well within traditional EDWs, which were
architected to streamline reporting and dashboards and be centrally managed. Instead, Big Data problems
and projects require different approaches to succeed.
Analysts need to partner with IT and DBAs to get the data they need within an analytic sandbox. A
typical analytical sandbox contains raw data, aggregated data, and data with multiple kinds of structure.
The sandbox enables robust exploration of data and requires a savvy user to leverage and take advantage
of data in the sandbox environment.
1.3 Key Roles for the New Big Data Ecosystem
As explained in the context of the Big Data ecosystem in Section 1.2.4, new players have emerged to curate,
store, produce, clean, and transact data. In addition, the need for applying more advanced analytical tech-
niques to increasingly complex business problems has driven the emergence of new roles, new technology
platforms, and new analytical methods. This section explores the new roles that address these needs, and
subsequent chapters explore some of the analytical methods and technology platforms.
The Big Data ecosystem demands three categories of roles, as shown in Figure 1-12. These roles were
described in the McKinsey Global study on Big Data, from May 2011 [1].
FIGURE 1-12 Key roles of the new Big Data ecosystem: Deep Analytical Talent (for example, data scientists), with a projected U.S. talent gap of 140,000 to 190,000; Data Savvy Professionals, with a projected U.S. talent gap of 1.5 million; and Technology and Data Enablers. Note: Figures above represent the projected talent gap in the U.S. in 2018, as shown in the McKinsey May 2011 article "Big Data: The Next Frontier for Innovation, Competition, and Productivity."
The first group, Deep Analytical Talent, is technically savvy, with strong analytical skills. Members pos-
sess a combination of skills to handle raw, unstructured data and to apply complex analytical techniques at
massive scales. This group has advanced training in quantitative disciplines, such as mathematics, statistics,
and machine learning. To do their jobs, members need access to a robust analytic sandbox or workspace
where they can perform large-scale analytical data experiments. Examples of current professions fitting
into this group include statisticians, economists, mathematicians, and the new role of the Data Scientist.
The McKinsey study forecasts that by the year 2018, the United States will have a talent gap of 140,000-
190,000 people with deep analytical talent. This does not represent the number of people needed with
deep analytical talent; rather, this range represents the difference between what will be available in the
workforce compared with what will be needed. In addition, these estimates only reflect forecasted talent
shortages in the United States; the number would be much larger on a global basis.
The second group-Data Savvy Professionals-has less technical depth but has a basic knowledge of
statistics or machine learning and can define key questions that can be answered using advanced analytics.
These people tend to have a base knowledge of working with data, or an appreciation for some of the work
being performed by data scientists and others with deep analytical talent. Examples of data savvy profes-
sionals include financial analysts, market research analysts, life scientists, operations managers, and business
and functional managers.
The McKinsey study forecasts the projected U.S. talent gap for this group to be 1.5 million people by
the year 2018. At a high level, this means for every Data Scientist profile needed, the gap will be ten times
as large for Data Savvy Professionals. Moving toward becoming a data savvy professional is a critical step
in broadening the perspective of managers, directors, and leaders, as this provides an idea of the kinds of
questions that can be solved with data.
The third category of people mentioned in the study is Technology and Data Enablers. This group
represents people providing technical expertise to support analytical projects, such as provisioning and
administrating analytical sandboxes, and managing large-scale data architectures that enable widespread
analytics within companies and other organizations. This role requires skills related to computer engineering,
programming, and database administration.
These three groups must work together closely to solve complex Big Data challenges. Most organizations
are familiar with people in the latter two groups mentioned, but the first group, Deep Analytical Talent,
tends to be the newest role for most and the least understood. For simplicity, this discussion focuses on
the emerging role of the Data Scientist. It describes the kinds of activities that role performs and provides
a more detailed view of the skills needed to fulfill that role.
There are three recurring sets of activities that data scientists perform:
o Reframe business challenges as analytics challenges. Specifically, this is a skill to diagnose busi-
ness problems, consider the core of a given problem, and determine which kinds of candidate analyt-
ical methods can be applied to solve it. This concept is explored further in Chapter 2, “Data Analytics
lifecycle.”
o Design, implement, and deploy statistical models and data mining techniques on Big Data. This
set of activities is mainly what people think about when they consider the role of the Data Scientist:
namely, applying complex or advanced analytical methods to a variety of business problems using
data. Chapter 3 through Chapter 11 of this book introduces the reader to many of the most popular
analytical techniques and tools in this area.
• Develop insights that lead to actionable recommendations. It is critical to note that applying
advanced methods to data problems does not necessarily drive new business value. Instead, it is
important to learn how to draw insights out of the data and communicate them effectively. Chapter 12,
"The Endgame, or Putting It All Together," has a brief overview of techniques for doing this.
Data scientists are generally thought of as having five main sets of skills and behavioral characteristics,
as shown in Figure 1-13:
• Quantitative skill: such as mathematics or statistics
• Technical aptitude: namely, software engineering, machine learning, and programming skills
• Skeptical mind-set and critical thinking: It is important that data scientists can examine their work
critically rather than in a one-sided way.
• Curious and creative: Data scientists are passionate about data and finding creative ways to solve
problems and portray information.
• Communicative and collaborative: Data scientists must be able to articulate the business value
in a clear way and collaboratively work with other groups, including project sponsors and key
stakeholders.
FIGURE 1-13 Profile of a Data Scientist: quantitative, technical, skeptical, curious and creative, communicative and collaborative
Data scientists are generally comfortable using this blend of skills to acquire, manage, analyze, and
visualize data and tell compelling stories about it. The next section includes examples of what Data Science
teams have created to drive new value or innovation with Big Data.
1.4 Examples of Big Data Analytics
After describing the emerging Big Data ecosystem and new roles needed to support its growth, this section
provides three examples of Big Data Analytics in different areas: retail, IT infrastructure, and social media.
As mentioned earlier, Big Data presents many opportunities to improve sales and marketing analytics.
An example of this is the U.S. retailer Target. Charles Duhigg's book The Power of Habit [4] discusses how
Target used Big Data and advanced analytical methods to drive new revenue. After analyzing consumer-
purchasing behavior, Target's statisticians determined that the retailer made a great deal of money from
three main life-event situations.
• Marriage, when people tend to buy many new products
• Divorce, when people buy new products and change their spending habits
• Pregnancy, when people have many new things to buy and have an urgency to buy them
Target determined that the most lucrative of these life-events is the third situation: pregnancy. Using
data collected from shoppers, Target was able to identify this fact and predict which of its shoppers were
pregnant. In one case, Target knew a female shopper was pregnant even before her family knew [5]. This
kind of knowledge allowed Target to offer specific coupons and incentives to their pregnant shoppers. In
fact, Target could not only determine if a shopper was pregnant, but in which month of pregnancy a shop-
per may be. This enabled Target to manage its inventory, knowing that there would be demand for specific
products and it would likely vary by month over the coming nine- to ten-month cycles.
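Target's actual models are not described in the text, so the following is only a hypothetical sketch of the general approach: a retailer with labeled historical data could score shoppers with a simple classifier built on purchase-derived features (all features, rates, and data below are invented):

# Hypothetical life-event scoring from purchase-history indicators
set.seed(3)
n <- 5000
shoppers <- data.frame(
  unscented_lotion  = rbinom(n, 1, 0.20),   # invented indicator features
  prenatal_vitamins = rbinom(n, 1, 0.10),
  cotton_balls      = rbinom(n, 1, 0.30)
)
# Simulated label whose odds rise with the indicators (for illustration only)
log_odds <- -3 + 1.5 * shoppers$unscented_lotion + 2.0 * shoppers$prenatal_vitamins +
            0.5 * shoppers$cotton_balls
shoppers$pregnant <- rbinom(n, 1, plogis(log_odds))

model <- glm(pregnant ~ ., data = shoppers, family = binomial)
head(predict(model, type = "response"))   # per-shopper scores that could drive targeted offers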
Hadoop [6] represents another example of Big Data innovation on the IT infrastructure. Apache Hadoop
is an open source framework that allows companies to process vast amounts of information in a highly paral-
lelized way. Hadoop represents a specific implementation of the MapReduce paradigm and was designed
by Doug Cutting and Mike Cafarella in 2005 to use data with varying structures. It is an ideal technical
framework for many Big Data projects, which rely on large or unwieldy data sets with unconventional data
structures. One of the main benefits of Hadoop is that it employs a distributed file system, meaning it can
use a distributed cluster of servers and commodity hardware to process large amounts of data. Some of
the most common examples of Hadoop implementations are in the social media space, where Hadoop
can manage transactions, give textual updates, and develop social graphs among millions of users. Twitter
and Facebook generate massive amounts of unstructured data and use Hadoop and its ecosystem of tools
to manage this high volume. Hadoop and its ecosystem are covered in Chapter 10, "Advanced Analytics-
Technology and Tools: MapReduce and Hadoop."
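The MapReduce paradigm itself can be sketched in a few lines of plain R; this is a single-machine illustration of the classic word count, not how Hadoop is actually invoked:

# Map: emit one key (word) per occurrence; Reduce: sum the counts by key
docs <- c("big data needs new tools", "new data new value")

mapped  <- unlist(lapply(docs, function(d) strsplit(d, " ")[[1]]))   # map step
reduced <- tapply(rep(1L, length(mapped)), mapped, sum)              # reduce step
reduced

# On Hadoop, the same map and reduce logic runs in parallel across a cluster,
# with the framework handling data distribution, shuffling, and fault tolerance.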
Finally, social media represents a tremendous opportunity to leverage social and professional interac-
tions to derive new insights. LinkedIn exemplifies a company in which data itself is the product. Early on,
LinkedIn founder Reid Hoffman saw the opportunity to create a social network for working professionals.
As of 2014, LinkedIn has more than 250 million user accounts and has added many additional features and
data-related products, such as recruiting, job seeker tools, advertising, and InMaps, which show a social
graph of a user's professional network. Figure 1-14 is an example of an InMap visualization that enables
a LinkedIn user to get a broader view of the interconnectedness of his contacts and understand how he
knows most of them.
FIGURE 1-14 Data visualization of a user's social network using InMaps
Summary
Big Data comes from myriad sources, including social media, sensors, the Internet of Things, video surveil-
lance, and many sources of data that may not have been considered data even a few years ago. As businesses
struggle to keep up with changing market requirements, some companies are finding creative ways to apply
Big Data to their growing business needs and increasingly complex problems. As organizations evolve
their processes and see the opportunities that Big Data can provide, they try to move beyond traditional BI
activities, such as using data to populate reports and dashboards, and move toward Data Science-driven
projects that attempt to answer more open-ended and complex questions.
However, exploiting the opportunities that Big Data presents requires new data architectures, includ-
ing analytic sandboxes, new ways of working, and people with new skill sets. These drivers are causing
organizations to set up analytic sandboxes and build Data Science teams. Although some organizations are
fortunate to have data scientists, most are not, because there is a growing talent gap that makes finding
and hiring data scientists in a timely manner difficult. Still, organizations such as those in web retail, health
care, genomics, new IT infrastructures, and social media are beginning to take advantage of Big Data and
apply it in creative and novel ways.
Exercises
1. What are the three characteristics of Big Data, and what are the main considerations in processing Big
Data?
2. What is an analytic sandbox, and why is it important?
3. Explain the differences between BI and Data Science.
4. Describe the challenges of the current analytical architecture for data scientists.
5. What are the key skill sets and behavioral characteristics of a data scientist?
Bibliography
[1] C. B. B. D. Manyika, "Big Data: The Next Frontier for Innovation, Competition, and Productivity,"
McKinsey Global Institute, 2011.
[2] D. R. John Gantz, "The Digital Universe in 2020: Big Data, Bigger Digital Shadows, and Biggest
Growth in the Far East," IDC, 2013.
[3] http://www.willisresilience.com/emc-datalab [Online].
[4] C. Duhigg, The Power of Habit: Why We Do What We Do in Life and Business, New York: Random
House, 2012.
[5] K. Hill, "How Target Figured Out a Teen Girl Was Pregnant Before Her Father Did," Forbes, February
2012.
[6] http://hadoop.apache.org [Online].
DATA ANALYTICS LIFECYCLE
Data science projects differ from most traditional Business Intelligence projects and many data analysis
projects in that data science projects are more exploratory in nature. For this reason, it is critical to have a
process to govern them and ensure that the participants are thorough and rigorous in their approach, yet
not so rigid that the process impedes exploration.
Many problems that appear huge and daunting at first can be broken down into smaller pieces or
actionable phases that can be more easily addressed. Having a good process ensures a comprehensive and
repeatable method for conducting analysis. In addition, it helps focus time and energy early in the process
to get a clear grasp of the business problem to be solved.
A common mistake made in data science projects is rushing into data collection and analysis, which
precludes spending sufficient time to plan and scope the amount of work involved, understanding require-
ments, or even framing the business problem properly. Consequently, participants may discover mid-stream
that the project sponsors are actually trying to achieve an objective that may not match the available data,
or they are attempting to address an interest that differs from what has been explicitly communicated.
When this happens, the project may need to revert to the initial phases of the process for a proper discovery
phase, or the project may be canceled.
Creating and documenting a process helps demonstrate rigor, which provides additional credibility
to the project when the data science team shares its findings. A well-defined process also offers a com-
mon framework for others to adopt, so the methods and analysis can be repeated in the future or as new
members join a team.
2.1 Data Analytics Lifecycle Overview
The Data Analytics Lifecycle is designed specifically for Big Data problems and data science projects. The
lifecycle has six phases, and project work can occur in several phases at once. For most phases in the life-
cycle, the movement can be either forward or backward. This iterative depiction of the lifecycle is intended
to more closely portray a real project, in which aspects of the project move forward and may return to
earlier stages as new information is uncovered and team members learn more about various stages of the
project. This enables participants to move iteratively through the process and drive toward operational-
izing the project work.
2.1.1 Key Roles for a Successful Analytics Project
In recent years, substantial attention has been placed on the emerging role of the data scientist. In October
2012, Harvard Business Review featured an article titled "Data Scientist: The Sexiest Job of the 21st Century"
[1], in which experts D.J. Patil and Tom Davenport described the new role and how to find and hire data
scientists. More and more conferences are held annually focusing on innovation in the areas of Data Science
and topics dealing with Big Data. Despite this strong focus on the emerging role of the data scientist specifi-
cally, there are actually seven key roles that need to be fulfilled for a high-functioning data science team
to execute analytic projects successfully.
Figure 2-1 depicts the various roles and key stakeholders of an analytics project. Each plays a critical part
in a successful analytics project. Although seven roles are listed, fewer or more people can accomplish the
work depending on the scope of the project, the organizational structure, and the skills of the participants.
For example, on a small, versatile team, these seven roles may be fulfilled by only 3 people, but a very large
project may require 20 or more people. The seven roles follow.
FIGURE 2-1 Key roles for a successful analytics project
• Business User: Someone who understands the domain area and usually benefits from the results.
This person can consult and advise the project team on the context of the project, the value of the
results, and how the outputs will be operationalized. Usually a business analyst, line manager, or
deep subject matter expert in the project domain fulfills this role.
• Project Sponsor: Responsible for the genesis of the project. Provides the impetus and requirements
for the project and defines the core business problem. Generally provides the funding and gauges
the degree of value from the final outputs of the working team. This person sets the priorities for the
project and clarifies the desired outputs.
• Project Manager: Ensures that key milestones and objectives are met on time and at the expected
quality.
• Business Intelligence Analyst: Provides business domain expertise based on a deep understanding
of the data, key performance indicators (KPIs), key metrics, and business intelligence from a reporting
perspective. Business Intelligence Analysts generally create dashboards and reports and have knowl-
edge of the data feeds and sources.
• Database Administrator (DBA): Provisions and configures the database environment to support
the analytics needs of the working team. These responsibilities may include providing access to
key databases or tables and ensuring the appropriate security levels are in place related to the data
repositories.
• Data Engineer: Leverages deep technical skills to assist with tuning SQL queries for data manage-
ment and data extraction, and provides support for data ingestion into the analytic sandbox, which
was discussed in Chapter 1, “Introduction to Big Data Analytics.” Whereas the DBA sets up and config-
ures the databases to be used, the data engineer executes the actual data extractions and performs
substantial data manipulation to facilitate the analytics. The data engineer works closely with the
data scientist to help shape data in the right ways for analyses.
o Data Scientist: Provides subject matter expertise for analytical techniques, data modeling, and
applying valid analytical techniques to given business problems. Ensures overall analytics objectives
are met. Designs and executes analytical methods and approaches with the data available to the
project.
Although most of these roles are not new, the last two roles-data engineer and data scientist-have
become popular and in high demand [2] as interest in Big Data has grown.
2.1.2 Background and Overview of Data Analytics Lifecycle
The Data Analytics Lifecycle defines analytics process best practices spanning discovery to project
completion. The lifecycle draws from established methods in the realm of data analytics and decision
science. This synthesis was developed after gathering input from data scientists and consulting estab-
lished approaches that provided input on pieces of the process. Several of the processes that were
consulted include these:
o Scientific method [3], in use for centuries, still provides a solid framework for thinking about and
deconstructing problems into their principal parts. One of the most valuable ideas of the scientific
method relates to forming hypotheses and finding ways to test ideas.
o CRISP-DM [4] provides useful input on ways to frame analytics problems and is a popular approach
for data mining.
o Tom Davenport’s DELTA framework [5]: The DELTA framework offers an approach for data analytics
projects, including the context of the organization’s skills, datasets, and leadership engagement.
o Doug Hubbard’s Applied Information Economics (AlE) approach [6]: AlE provides a framework for
measuring intangibles and provides guidance on developing decision models, calibrating expert
estimates, and deriving the expected value of information.
o “MAD Skills” by Cohen et al. [7] offers input for several of the techniques mentioned in Phases 2-4
that focus on model planning, execution, and key findings.
Figure 2-2 presents an overview of the Data Analytics Lifecycle that includes six phases. Teams commonly
learn new things in a phase that cause them to go back and refine the work done in prior phases based
on new insights and information that have been uncovered. For this reason, Figure 2-2 is shown as a cycle.
The circular arrows convey iterative movement between phases until the team members have sufficient
information to move to the next phase. The callouts include sample questions to ask to help guide whether
each of the team members has enough information and has made enough progress to move to the next
phase of the process. Note that these phases do not represent formal stage gates; rather, they serve as
criteria to help test whether it makes sense to stay in the current phase or move to the next.
FIGURE 2-2 Overview of Data Analytics Lifecycle. The callout questions that guide movement between phases include: Do I have enough information to draft an analytic plan and share it for peer review? Do I have enough good-quality data to start building the model? Do I have a good idea about the type of model to try? Can I refine the analytic plan? Is the model robust enough? Have we failed for sure?
Here is a brief overview of the main phases of the Data Analytics Lifecycle:
• Phase 1-Discovery: In Phase 1, the team learns the business domain, including relevant history
such as whether the organization or business unit has attempted similar projects in the past from
which they can learn. The team assesses the resources available to support the project in terms of
people, technology, time, and data. Important activities in this phase include framing the business
problem as an analytics challenge that can be addressed in subsequent phases and formulating ini-
tial hypotheses (IHs) to test and begin learning the data.
• Phase 2-Data preparation: Phase 2 requires the presence of an analytic sandbox, in which the
team can work with data and perform analytics for the duration of the project. The team needs to
execute extract, load, and transform (ELT) or extract, transform and load (ETL) to get data into the
sandbox. The ELT and ETL are sometimes abbreviated as ETLT. Data should be transformed in the
ETLT process so the team can work with it and analyze it. In this phase, the team also needs to famil-
iarize itself with the data thoroughly and take steps to condition the data (Section 2.3.4). (A minimal
sketch of the ELT idea appears after this list.)
• Phase 3-Model planning: Phase 3 is model planning, where the team determines the methods,
techniques, and workflow it intends to follow for the subsequent model building phase. The team
explores the data to learn about the relationships between variables and subsequently selects key
variables and the most suitable models.
• Phase 4-Model building: In Phase 4, the team develops data sets for testing, training, and produc-
tion purposes. In addition, in this phase the team builds and executes models based on the work
done in the model planning phase. The team also considers whether its existing tools will suffice for
running the models, or if it will need a more robust environment for executing models and workflows
(for example, fast hardware and parallel processing, if applicable).
• Phase 5-Communicate results: In Phase 5, the team, in collaboration with major stakeholders,
determines if the results of the project are a success or a failure based on the criteria developed in
Phase 1. The team should identify key findings, quantify the business value, and develop a narrative
to summarize and convey findings to stakeholders.
• Phase 6-Operationalize: In Phase 6, the team delivers final reports, briefings, code, and technical
documents. In addition, the team may run a pilot project to implement the models in a production
environment.
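To make the ELT idea in Phase 2 concrete, here is a hedged R sketch; the raw extract, column names, and cleaning rules are invented, and in practice the load and transform steps would more often be done with SQL or data-integration tooling inside the sandbox:

# Hypothetical raw extract, landed in the sandbox untouched (the "L" before the "T")
raw <- data.frame(order_date = c("2014-01-15", "2014-02-03", "not a date"),
                  amount     = c("$1,200.50", "$86.00", "$450.25"),
                  stringsAsFactors = FALSE)
sandbox_raw <- raw

# Transform inside the sandbox into an analysis-ready copy, keeping the raw version intact
sandbox_clean <- sandbox_raw
sandbox_clean$order_date <- as.Date(sandbox_clean$order_date, format = "%Y-%m-%d")
sandbox_clean$amount     <- as.numeric(gsub("[$,]", "", sandbox_clean$amount))
sandbox_clean <- subset(sandbox_clean, !is.na(order_date))   # conditioning: drop unparseable dates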
Once team members have run models and produced findings, it is critical to frame these results in a
way that is tailored to the audience that engaged the team . Moreover, it is critical to frame the results of
the work in a manner that demonstrates clear value. If the team performs a technically accurate analysis
but fails to translate the results into a language that resonates with the audience, people will not see the
value, and much of the time and effort on the project will have been wasted.
The rest of the chapter is organized as follows. Sections 2.2-2.7 discuss in detail how each of the six
phases works, and Section 2.8 shows a case study of incorporating the Data Analytics Lifecycle in a real-
world data science project.
2.2 Phase 1: Discovery
The first phase of the Data Analytics Lifecycle involves discovery (Figure 2-3). In this phase, the data science
team must learn and investigate the problem, develop context and understanding, and learn about the
data sources needed and available for the project. In addition, the team formulates initial hypotheses that
can later be tested with data.
2.2.1 Learning the Business Domain
Understanding the domain area of the problem is essential. In many cases, data scientists will have deep
computational and quantitative knowledge that can be broadly applied across many disciplines. An example
of this role would be someone with an advanced degree in applied mathematics or statistics.
These data scientists have deep knowledge of the methods, techniques, and ways for applying heuris-
tics to a variety of business and conceptual problems. Others in this area may have deep knowledge of a
domain area, coupled with quantitative expertise. An example of this would be someone with a Ph.D. in
life sciences. This person would have deep knowledge of a field of study, such as oceanography, biology,
or genetics, with some depth of quantitative knowledge.
At this early stage in the process, the team needs to determine how much business or domain knowledge
the data scientist needs to develop models in Phases 3 and 4. The earlier the team can make this assessment
the better, because the decision helps dictate the resources needed for the project team and ensures the
team has the right balance of domain knowledge and technical expertise.
FIGURE 2-3 Discovery phase, with the callout question: Do I have enough information to draft an analytic plan and share it for peer review?

2.2.2 Resources
As part of the discovery phase, the team needs to assess the resources available to support the project. In
this context, resources include technology, tools, systems, data, and people.
During this scoping, consider the available tools and technology the team will be using and the types
of systems needed for later phases to operationalize the models. In addition, try to evaluate the level of
analytical sophistication within the organization and gaps that may exist related to tools, technology, and
skills. For instance, for the model being developed to have longevity in an organization, consider what
types of skills and roles will be required that may not exist today. For the project to have long-term success,
what types of skills and roles will be needed for the recipients of the model being developed? Does the
requisite level of expertise exist within the organization today, or will it need to be cultivated? Answering
these questions will influence the techniques the team selects and the kind of implementation the team
chooses to pursue in subsequent phases of the Data Analytics lifecycle.
In addition to the skills and computing resources, it is advisable to take inventory of the types of data
available to the team for the project. Consider if the data available is sufficient to support the project’s
goals. The team will need to determine whether it must collect additional data, purchase it from outside
sources, or transform existing data. Often, projects are started looking only at the data available. When
the data is less than hoped for, the size and scope of the project is reduced to work within the constraints
of the existing data.
An alternative approach is to consider the long-term goals of this kind of project, without being con-
strained by the current data. The team can then consider what data is needed to reach the long-term goals
and which pieces of this multistep journey can be achieved today with the existing data. Considering
longer-term goals along with short-term goals enables teams to pursue more ambitious projects and treat
a project as the first step of a more strategic initiative, rather than as a standalone initiative. It is critical
to view projects as part of a longer-term journey, especially if executing projects in an organization that
is new to Data Science and may not have embarked on the optimum datasets to support robust analyses
up to this point.
Ensure the project team has the right mix of domain experts, customers, analytic talent, and project
management to be effective. In addition, evaluate how much time is needed and if the team has the right
breadth and depth of skills.
After taking inventory of the tools, technology, data, and people, consider if the team has sufficient
resources to succeed on this project, or if additional resources are needed. Negotiating for resources at the
outset of the project, while scoping the goals, objectives, and feasibility, is generally more useful than later
in the process and ensures sufficient time to execute it properly. Project managers and key stakeholders have
better success negotiating for the right resources at this stage rather than later once the project is underway.
2.2.3 Framing the Problem
Framing the problem well is critical to the success of the project. Framing is the process of stating the
analytics problem to be solved. At this point, it is a best practice to write down the problem statement
and share it with the key stakeholders. Each team member may hear slightly different things related to
the needs and the problem and have somewhat different ideas of possible solutions. For these reasons, it
is crucial to state the analytics problem, as well as why and to whom it is important. Essentially, the team
needs to clearly articulate the current situation and its main challenges.
As part of this activity, it is important to identify the main objectives of the project, identify what needs
to be achieved in business terms, and identify what needs to be done to meet the needs. Additionally,
consider the objectives and the success criteria for the project. What is the team attempting to achieve by
doing the project, and what will be considered “good enough” as an outcome of the project? This is critical
to document and share with the project team and key stakeholders. It is best practice to share the statement
of goals and success criteria with the team and confirm alignment with the project sponsor’s expectations.
Perhaps equally important is to establish failure criteria. Most people doing projects prefer only to think
of the success criteria and what the conditions will look like when the participants are successful. However,
this is almost taking a best-case scenario approach, assuming that everything will proceed as planned
and the project team will reach its goals. However, no matter how well planned, it is almost impossible to
plan for everything that will emerge in a project. The failure criteria will guide the team in understanding
when it is best to stop trying or settle for the results that have been gleaned from the data. Many times
people will continue to perform analyses past the point when any meaningful insights can be drawn from
the data. Establishing criteria for both success and failure helps the participants avoid unproductive effort
and remain aligned with the project sponsors.
2.2.4 Identifying Key Stakeholders
Another important step is to identify the key stakeholders and their interests in the project. During
these discussions, the team can identify the success criteria, key risks, and stakeholders, which should
include anyone who will benefit from the project or will be significantly impacted by the project. When
interviewing stakeholders, learn about the domain area and any relevant history from similar analytics
projects. For example, the team may identify the results each stakeholder wants from the project and the
criteria it will use to judge the success of the project.
Keep in mind that the analytics project is being initiated for a reason. It is critical to articulate the pain
points as clearly as possible to address them and be aware of areas to pursue or avoid as the team gets
further into the analytical process. Depending on the number of stakeholders and participants, the team
may consider outlining the type of activity and participation expected from each stakeholder and partici-
pant. This will set clear expectations with the participants and avoid delays later when, for example, the
team may feel it needs to wait for approval from someone who views himself as an adviser rather than an
approver of the work product.
2.2.5 Interviewing the Analytics Sponsor
The team should plan to collaborate with the stakeholders to clarify and frame the analytics problem. At the
outset, project sponsors may have a predetermined solution that may not necessarily realize the desired
outcome. In these cases, the team must use its knowledge and expertise to identify the true underlying
problem and appropriate solution.
For instance, suppose in the early phase of a project, the team is told to create a recommender system
for the business and that the way to do this is by speaking with three people and integrating the product
recommender into a legacy corporate system. Although this may be a valid approach, it is important to test
the assumptions and develop a clear understanding of the problem. The data science team typically may
have a more objective understanding of the problem set than the stakeholders, who may be suggesting
solutions to a given problem. Therefore, the team can probe deeper into the context and domain to clearly
define the problem and propose possible paths from the problem to a desired outcome. In essence, the
data science team can take a more objective approach, as the stakeholders may have developed biases
over time, based on their experience. Also, what may have been true in the past may no longer be a valid
working assumption. One possible way to circumvent this issue is for the project sponsor to focus on clearly
defining the requirements, while the other members of the data science team focus on the methods needed
to achieve the goals.
When interviewing the main stakeholders, the team needs to take time to thoroughly interview the
project sponsor, who tends to be the one funding the project or providing the high-level requirements.
This person understands the problem and usually has an idea of a potential working solution. It is critical
to thoroughly understand the sponsor's perspective to guide the team in getting started on the project.
Here are some tips for interviewing project sponsors:
• Prepare for the interview; draft questions, and review with colleagues.
• Use open-ended questions; avoid asking leading questions.
• Probe for details and pose follow-up questions.
• Avoid filling every silence in the conversation; give the other person time to think.
• Let the sponsors express their ideas and ask clarifying questions, such as "Why? Is that correct? Is this
idea on target? Is there anything else?"
• Use active listening techniques; repeat back what was heard to make sure the team heard it correctly,
or reframe what was said.
• Try to avoid expressing the team's opinions, which can introduce bias; instead, focus on listening.
• Be mindful of the body language of the interviewers and stakeholders; use eye contact where appro-
priate, and be attentive.
• Minimize distractions.
• Document what the team heard, and review it with the sponsors.
Following is a brief list of common questions that are helpful to ask during the discovery phase when
interviewing the project sponsor. The responses will begin to shape the scope of the project and give the
team an idea of the goals and objectives of the project.
• What business problem is the team trying to solve?
• What is the desired outcome of the project?
• What data sources are available?
• What industry issues may impact the analysis?
• What timelines need to be considered?
• Who could provide insight into the project?
• Who has final decision-making authority on the project?
• How will the focus and scope of the problem change if the following dimensions change:
• Time: Analyzing 1 year or 10 years' worth of data?
• People: Assess impact of changes in resources on project timeline.
• Risk: Conservative to aggressive
• Resources: None to unlimited (tools, technology, systems)
• Size and attributes of data: Including internal and external data sources
2.2.6 Developing Initial Hypotheses
Developing a set of IHs is a key facet of the discovery phase. This step involves forming ideas that the team
can test with data. Generally, it is best to come up with a few primary hypotheses to test and then be
creative about developing several more. These IHs form the basis of the analytical tests the team will use
in later phases and serve as the foundation for the findings in Phase 5. Hypothesis testing from a statisti-
cal perspective is covered in greater detail in Chapter 3, “Review of Basic Data Analytic Methods Using R.”
In this way, the team can compare its answers with the outcome of an experiment or test to generate
additional possible solutions to problems. As a result, the team will have a much richer set of observations
to choose from and more choices for agreeing upon the most impactful conclusions from a project.
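To make this concrete, here is a minimal sketch, in R, of what testing a single IH against data might look like later in the lifecycle. The IH, the data frame, and the column names are hypothetical, and the rows are simulated purely for illustration; the statistical details of such tests are covered in Chapter 3.

# Minimal sketch: testing one initial hypothesis (IH) with data.
# Hypothetical IH: "Customers who received the new onboarding flow
# spend more per month than customers who did not."
# The data frame and column names below are illustrative only.
set.seed(42)
customers <- data.frame(
  group = rep(c("new_flow", "old_flow"), each = 200),
  monthly_spend = c(rnorm(200, mean = 55, sd = 12),   # simulated data standing in
                    rnorm(200, mean = 50, sd = 12))   # for a real extract
)

# Two-sample t-test: does mean spend differ between the groups?
result <- t.test(monthly_spend ~ group, data = customers)
print(result)

# A small p-value would support the IH; a large one means the data, as
# collected, do not support it. Either outcome is a useful finding.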
Another part of this process involves gathering and assessing hypotheses from stakeholders and domain
experts who may have their own perspective on what the problem is, what the solution should be, and how
to arrive at a solution. These stakeholders would know the domain area well and can offer suggestions on
ideas to test as the team formulates hypotheses during this phase. The team will likely collect many ideas
that may illuminate the operating assumptions of the stakeholders. These ideas will also give the team
opportunities to expand the project scope into adjacent spaces where it makes sense or design experiments
in a meaningful way to address the most important interests of the stakeholders. As part of this exercise,
it can be useful to obtain and explore some initial data to inform discussions with stakeholders during the
hypothesis-forming stage.
2.2.7 Identifying Potential Data Sources
As part of the discovery phase, identify the kinds of data the team will need to solve the problem. Consider
the volume, type, and time span of the data needed to test the hypotheses. Ensure that the team can access
more than simply aggregated data. In most cases, the team will need the raw data to avoid introducing
bias for the downstream analysis. Recalling the characteristics of Big Data from Chapter 1, assess the main
characteristics of the data, with regard to its volume, variety, and velocity of change. A thorough diagno-
sis of the data situation will influence the kinds of tools and techniques to use in Phases 2-4 of the Data
Analytics Lifecycle. In addition, performing data exploration in this phase will help the team determine
the amount of data needed, such as the amount of historical data to pull from existing systems and the
data structure. Develop an idea of the scope of the data needed, and validate that idea with the domain
experts on the project.
The team should perform five main activities during this step of the discovery phase:
• Identify data sources: Make a list of candidate data sources the team may need to test the initial hypotheses outlined in this phase. Make an inventory of the datasets currently available and those that can be purchased or otherwise acquired for the tests the team wants to perform.
• Capture aggregate data sources: This is for previewing the data and providing high-level understanding. It enables the team to gain a quick overview of the data and perform further exploration on specific areas. It also points the team to possible areas of interest within the data.
• Review the raw data: Obtain preliminary data from initial data feeds. Begin understanding the interdependencies among the data attributes, and become familiar with the content of the data, its quality, and its limitations.
• Evaluate the data structures and tools needed: The data type and structure dictate which tools the
team can use to analyze the data. This evaluation gets the team thinking about which technologies
may be good candidates for the project and how to start getting access to these tools.
• Scope the sort of data infrastructure needed for this type of problem: In addition to the tools needed, the data influences the kind of infrastructure that's required, such as disk storage and network capacity.
Unlike many traditional stage-gate processes, in which the team can advance only when specific criteria are met, the Data Analytics Lifecycle is intended to accommodate more ambiguity. This more closely reflects how data science projects work in real-life situations. For each phase of the process, it is recommended to pass certain checkpoints as a way of gauging whether the team is ready to move to the next phase of the Data Analytics Lifecycle.
The team can move to the next phase when it has enough information to draft an analytics plan and share it for peer review. Although a peer review of the plan may not actually be required by the project, creating the plan is a good test of the team's grasp of the business problem and the team's approach to addressing it. Creating the analytic plan also requires a clear understanding of the domain area, the problem to be solved, and scoping of the data sources to be used. Developing success criteria early in the project clarifies the problem definition and helps the team when it comes time to make choices about the analytical methods being used in later phases.
2.3 Phase 2: Data Preparation
The second phase of the Data Analytics Lifecycle involves data preparation, which includes the steps to explore, preprocess, and condition data prior to modeling and analysis. In this phase, the team needs to create a robust environment in which it can explore the data that is separate from a production environment. Usually, this is done by preparing an analytics sandbox. To get the data into the sandbox, the team needs to perform ETLT, by a combination of extracting, transforming, and loading data into the sandbox. Once the data is in the sandbox, the team needs to learn about the data and become familiar with it. Understanding the data in detail is critical to the success of the project. The team also must decide how to condition and transform data to get it into a format to facilitate subsequent analysis. The team may perform data visualizations to help team members understand the data, including its trends, outliers, and relationships among data variables. Each of these steps of the data preparation phase is discussed throughout this section.
Data preparation tends to be the most labor-intensive step in the analytics lifecycle. In fact, it is common for teams to spend at least 50% of a data science project's time in this critical phase. If the team cannot obtain enough data of sufficient quality, it may be unable to perform the subsequent steps in the lifecycle process.
Figure 2-4 shows an overview of the Data Analytics Lifecycle for Phase 2. The data preparation phase is generally the most iterative and the one that teams tend to underestimate most often. This is because most teams and leaders are anxious to begin analyzing the data, testing hypotheses, and getting answers to some of the questions posed in Phase 1. Many tend to jump into Phase 3 or Phase 4 to begin rapidly developing models and algorithms without spending the time to prepare the data for modeling. Consequently, teams come to realize the data they are working with does not allow them to execute the models they want, and they end up back in Phase 2 anyway.
FIGURE 2-4 Data preparation phase (key question: Do I have enough good-quality data to start building the model?)
2.3.1 Preparing the Analytic Sandbox
The first subphase of data preparation requires the team to obtain an analytic sandbox (also commonly referred to as a workspace), in which the team can explore the data without interfering with live production databases. Consider an example in which the team needs to work with a company's financial data. The team should access a copy of the financial data from the analytic sandbox rather than interacting with the production version of the organization's main database, because that will be tightly controlled and needed for financial reporting.
When developing the analytic sandbox, it is a best practice to collect all kinds of data there, as team members need access to high volumes and varieties of data for a Big Data analytics project. This can include
everything from summary-level aggregated data and structured data to raw data feeds and unstructured text data from call logs or web logs, depending on the kind of analysis the team plans to undertake.
This expansive approach to collecting data of all kinds differs considerably from the approach advocated
by many information technology (IT) organizations. Many IT groups provide access to only a particular sub-
segment of the data for a specific purpose. Often, the mindset of the IT group is to provide the minimum
amount of data required to allow the team to achieve its objectives. Conversely, the data science team
wants access to everything. From its perspective, more data is better, as oftentimes data science projects
are a mixture of purpose-driven analyses and experimental approaches to test a variety of ideas. In this
context, it can be challenging for a data science team if it has to request access to each and every dataset
and attribute one at a time. Because of these differing views on data access and use, it is critical for the data
science team to collaborate with IT, make clear what it is trying to accomplish, and align goals.
During these discussions, the data science team needs to give IT a justification to develop an analyt-
ics sandbox, which is separate from the traditional IT-governed data warehouses within an organization.
Successfully and amicably balancing the needs of both the data science team and IT requires a positive
working relationship between multiple groups and data owners. The payoff is great. The analytic sandbox
enables organizations to undertake more ambitious data science projects and move beyond doing tradi-
tional data analysis and Business Intelligence to perform more robust and advanced predictive analytics.
Expect the sandbox to be large. It may contain raw data, aggregated data, and other data types that are
less commonly used in organizations. Sandbox size can vary greatly depending on the project. A good rule
is to plan for the sandbox to be at least 5-10 times the size of the original data sets, partly because copies of
the data may be created that serve as specific tables or data stores for specific kinds of analysis in the project.
Although the concept of an analytics sandbox is relatively new, companies are making progress in this
area and are finding ways to offer sandboxes and workspaces where teams can access data sets and work
in a way that is acceptable to both the data science teams and the IT groups.
2.3.2 Performing ETLT
As the team looks to begin data transformations, make sure the analytics sandbox has ample bandwidth
and reliable network connections to the underlying data sources to enable uninterrupted read and write.
In ETL, users perform extract, transform, load processes to extract data from a datastore, perform data
transformations, and load the data back into the datastore. However, the analytic sandbox approach differs
slightly; it advocates extract, load, and then transform. In this case, the data is extracted in its raw form and
loaded into the datastore, where analysts can choose to transform the data into a new state or leave it in
its original, raw condition. The reason for this approach is that there is significant value in preserving the
raw data and including it in the sandbox before any transformations take place.
For instance, consider an analysis for fraud detection on credit card usage. Many times, outliers in this
data population can represent higher-risk transactions that may be indicative of fraudulent credit card
activity. Using ETL, these outliers may be inadvertently filtered out or transformed and cleaned before being
loaded into the datastore. In this case, the very data that would be needed to evaluate instances of fraudu-
lent activity would be inadvertently cleansed, preventing the kind of analysis that a team would want to do.
Following the ELT approach gives the team access to clean data to analyze after the data has been loaded
into the database and gives access to the data in its original form for finding hidden nuances in the data.
This approach is part of the reason that the analytic sandbox can quickly grow large. The team may want
clean data and aggregated data and may need to keep a copy of the original data to compare against or
2.3 Phase 2: Data Preparation
look for hidden patterns that may have existed in the data before the cleaning stage. This process can be
summarized as ETLT to reflect the fact that a team may choose to perform ETL in one case and ELT in another.
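The following is a minimal sketch of the ELT pattern described above, assuming a hypothetical credit card transaction feed. SQLite (via the DBI and RSQLite packages) stands in for the analytic sandbox datastore, and the table and column names are illustrative; the point is simply that the raw table is landed unchanged and any cleaning produces a separate table.

library(DBI)
library(RSQLite)

# Simulated stand-in for a raw transaction feed; in practice this would be
# an extract from the source system (column names are illustrative).
raw_txns <- data.frame(
  txn_id = 1:6,
  amount = c(25.10, 9999.99, -1, NA, 42.00, 87.50)  # includes an extreme value,
)                                                   # a negative, and a missing amount

sandbox <- dbConnect(SQLite(), "analytic_sandbox.db")

# Extract + Load: land the raw data in the sandbox unchanged.
dbWriteTable(sandbox, "txns_raw", raw_txns, overwrite = TRUE)

# Transform later, and only into a separate table: a cleaned copy that drops
# malformed rows but leaves txns_raw, including the extreme value, untouched.
clean_txns <- subset(raw_txns, !is.na(amount) & amount >= 0)
dbWriteTable(sandbox, "txns_clean", clean_txns, overwrite = TRUE)

dbGetQuery(sandbox, "SELECT COUNT(*) AS n_raw FROM txns_raw")
dbGetQuery(sandbox, "SELECT COUNT(*) AS n_clean FROM txns_clean")
dbDisconnect(sandbox)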
Depending on the size and number of the data sources, the team may need to consider how to paral-
lelize the movement of the datasets into the sandbox. For this purpose, moving large amounts of data is
sometimes referred to as Big ETL. The data movement can be parallelized by technologies such as Hadoop
or MapReduce, which will be explained in greater detail in Chapter 10, “Advanced Analytics-Technology
and Tools: MapReduce and Hadoop.” At this point, keep in mind that these technologies can be used to
perform parallel data ingest and introduce a huge number of files or datasets in parallel in a very short
period of time. Hadoop can be useful for data loading as well as for data analysis in subsequent phases.
Prior to moving the data into the analytic sandbox, determine the transformations that need to be
performed on the data. Part of this phase involves assessing data quality and structuring the data sets
properly so they can be used for robust analysis in subsequent phases. In addition, it is important to con-
sider which data the team will have access to and which new data attributes will need to be derived in the
data to enable analysis.
As part of the ETLT step, it is advisable to make an inventory of the data and compare the data currently
available with datasets the team needs. Performing this sort of gap analysis provides a framework for
understanding which datasets the team can take advantage of today and where the team needs to initiate
projects for data collection or access to new datasets currently unavailable. A component of this subphase
involves extracting data from the available sources and determining data connections for raw data, online
transaction processing (OLTP) databases, online analytical processing (OLAP) cubes, or other data feeds.
Application programming interface (API) is an increasingly popular way to access a data source [8]. Many
websites and social network applications now provide APIs that offer access to data to support a project
or supplement the datasets with which a team is working. For example, connecting to the Twitter API can
enable a team to download millions of tweets to perform a project for sentiment analysis on a product, a
company, or an idea. Much of the Twitter data is publicly available and can augment other data sets used
on the project.
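As an illustration of the general pattern, the sketch below pulls records from a REST API using the httr and jsonlite packages. The endpoint URL, query parameters, token, and response fields are entirely hypothetical; a real service such as the Twitter API defines its own authentication, rate limits, and response schema.

# Illustrative sketch only: pulling supplemental data from a REST API.
# The endpoint URL, token, and response fields below are hypothetical.
library(httr)
library(jsonlite)

api_url   <- "https://api.example.com/v1/posts"   # hypothetical endpoint
api_token <- Sys.getenv("EXAMPLE_API_TOKEN")      # keep credentials out of code

resp <- GET(
  api_url,
  query = list(q = "productX", limit = 100),
  add_headers(Authorization = paste("Bearer", api_token))
)
stop_for_status(resp)                             # fail loudly on HTTP errors

posts <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))

# Inspect the result and stage it alongside the project's other datasets
# in the analytic sandbox for later sentiment analysis.
str(posts)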
2.3.3 Learning About the Data
A critical aspect of a data science project is to become familiar with the data itself. Spending time to learn the
nuances of the datasets provides context to understand what constitutes a reasonable value and expected
output versus what is a surprising finding. In addition, it is important to catalog the data sources that the
team has access to and identify additional data sources that the team can leverage but perhaps does not
have access to today. Some of the activities in this step may overlap with the initial investigation of the
datasets that occurs in the discovery phase. Doing this activity accomplishes several goals.
• Clarifies the data that the data science team has access to at the start of the project
• Highlights gaps by identifying datasets within an organization that the team may find useful but may not be accessible to the team today. As a consequence, this activity can trigger a project to begin building relationships with the data owners and finding ways to share data in appropriate ways. In addition, this activity may provide an impetus to begin collecting new data that benefits the organization or a specific long-term project.
• Identifies datasets outside the organization that may be useful to obtain, through open APIs, data sharing, or purchasing data to supplement already existing datasets
Table 2-1 demonstrates one way to organize this type of data inventory.
TABLE 2-1 Sample Dataset Inventory
Dataset                               Status
Products shipped                      Data available and accessible
Product Financials                    Data available, but not accessible
Product Call Center Data              Data available, but not accessible
Live Product Feedback Surveys         Data to collect
Product Sentiment from Social Media   Data to obtain from third-party sources
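One lightweight way to keep such an inventory current is to maintain it as a small table in the team's analysis environment. The R sketch below mirrors the sample rows of Table 2-1; the dataset names and status labels are illustrative.

# A lightweight, scriptable version of the Table 2-1 inventory.
# Dataset names and statuses mirror the sample table and are illustrative.
inventory <- data.frame(
  dataset = c("Products shipped",
              "Product Financials",
              "Product Call Center Data",
              "Live Product Feedback Surveys",
              "Product Sentiment from Social Media"),
  status  = c("available_and_accessible",
              "available_not_accessible",
              "available_not_accessible",
              "to_collect",
              "third_party"),
  stringsAsFactors = FALSE
)

# Quickly see where access requests or collection projects are needed.
table(inventory$status)
subset(inventory, status != "available_and_accessible")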
2.3.4 Data Conditioning
Data conditioning refers to the process of cleaning data, normalizing datasets, and performing transformations on the data. A critical step within the Data Analytics Lifecycle, data conditioning can involve many complex steps to join or merge data sets or otherwise get datasets into a state that enables analysis in further phases. Data conditioning is often viewed as a preprocessing step for the data analysis because it involves many operations on the dataset before developing models to process or analyze the data. This implies that the data-conditioning step is performed only by IT, the data owners, a DBA, or a data engineer. However, it is also important to involve the data scientist in this step because many decisions are made in the data conditioning phase that affect subsequent analysis. Part of this phase involves deciding which aspects of particular datasets will be useful to analyze in later steps. Because teams begin forming ideas in this phase about which data to keep and which data to transform or discard, it is important to involve multiple team members in these decisions. Leaving such decisions to a single person may cause teams to return to this phase to retrieve data that may have been discarded.
As in the previous example of fraud detection on credit card usage, it is critical to be thoughtful about which data the team chooses to keep and which data will be discarded. This can have far-reaching consequences that will cause the team to retrace previous steps if the team discards too much of the data at too early a point in this process. Typically, data science teams would rather keep too much data than too little for the analysis. Additional questions and considerations for the data conditioning step include the following (a short sketch of automating a few of these checks appears after the list):
• What are the data sources? What are the target fields (for example, columns of the tables)?
• How clean is the data?
• How consistent are the contents and files? Determine to what degree the data contains missing or inconsistent values and if the data contains values deviating from normal.
• Assess the consistency of the data types. For instance, if the team expects certain data to be numeric, confirm it is numeric or if it is a mixture of alphanumeric strings and text.
• Review the content of data columns or other inputs, and check to ensure they make sense. For instance, if the project involves analyzing income levels, preview the data to confirm that the income values are positive or if it is acceptable to have zeros or negative values.
• Look for any evidence of systematic error. Examples include data feeds from sensors or other data sources breaking without anyone noticing, which causes invalid, incorrect, or missing data values. In addition, review the data to gauge if the definition of the data is the same over all measurements. In some cases, a data column is repurposed, or the column stops being populated, without this change being annotated or without others being notified.
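Below is a minimal R sketch of automating a few of the checks above. The data frame and its columns (income, record_date) are hypothetical, and the simulated rows exist only to show the checks firing; adapt the logic to the project's actual fields.

# Minimal sketch of a few data conditioning checks from the list above.
# Column names (income, record_date) are hypothetical.
check_conditioning <- function(df) {
  # 1. How much is missing, column by column?
  missing_by_col <- colSums(is.na(df))

  # 2. Is a supposedly numeric field actually numeric?
  income_numeric <- is.numeric(df$income)

  # 3. Do values make sense? Flag zero or negative incomes.
  suspect_income <- sum(df$income <= 0, na.rm = TRUE)

  # 4. Evidence of a feed breaking: days with no records at all.
  days_observed <- unique(as.Date(df$record_date))
  all_days      <- seq(min(days_observed), max(days_observed), by = "day")
  missing_days  <- setdiff(as.character(all_days), as.character(days_observed))

  list(missing_by_col = missing_by_col,
       income_numeric = income_numeric,
       suspect_income = suspect_income,
       missing_days   = missing_days)
}

# Example with simulated rows standing in for a real extract:
sample_df <- data.frame(
  income      = c(52000, 61000, NA, -10, 48000),
  record_date = c("2014-01-01", "2014-01-02", "2014-01-02", "2014-01-05", "2014-01-05"),
  stringsAsFactors = FALSE
)
check_conditioning(sample_df)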
2.3.5 Survey and Visualize
After the team has collected and obtained at least some of the datasets needed for the subsequent
analysis, a useful step is to leverage data visualization tools to gain an overview of the data. Seeing high-level
patterns in the data enables one to understand characteristics about the data very quickly. One example
is using data visualization to examine data quality, such as whether the data contains many unexpected
values or other indicators of dirty data. (Dirty data will be discussed further in Chapter 3.) Another example
is skewness, such as if the majority of the data is heavily shifted toward one value or end of a continuum.
Shneiderman [9] is well known for his mantra for visual data analysis of “overview first, zoom and filter,
then details-on-demand.” This is a pragmatic approach to visual data analysis. It enables the user to find
areas of interest, zoom and filter to find more detailed information about a particular area of the data, and
then find the detailed data behind a particular area. This approach provides a high-level view of the data
and a great deal of information about a given dataset in a relatively short period of time.
When pursuing this approach with a data visualization tool or statistical package, the following guide-
lines and considerations are recommended.
• Review data to ensure that calculations remained consistent within columns or across tables for a given data field. For instance, did customer lifetime value change at some point in the middle of data collection? Or if working with financials, did the interest calculation change from simple to compound at the end of the year?
• Does the data distribution stay consistent over all the data? If not, what kinds of actions should be taken to address this problem?
• Assess the granularity of the data, the range of values, and the level of aggregation of the data.
• Does the data represent the population of interest? For marketing data, if the project is focused on targeting customers of child-rearing age, does the data represent that, or is it full of senior citizens and teenagers?
• For time-related variables, are the measurements daily, weekly, monthly? Is that good enough? Is time measured in seconds everywhere? Or is it in milliseconds in some places? Determine the level of granularity of the data needed for the analysis, and assess whether the current level of timestamps on the data meets that need.
• Is the data standardized/normalized? Are the scales consistent? If not, how consistent or irregular is the data?
• For geospatial datasets, are state or country abbreviations consistent across the data? Are personal names normalized? English units? Metric units?
These are typical considerations that should be part of the thought process as the team evaluates the data sets that are obtained for the project. Becoming deeply knowledgeable about the data will be critical when it comes time to construct and run models later in the process.
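As a rough illustration of the "overview first" approach, the following R sketch produces a quick summary and a few base graphics for a dataset. The columns referenced (income, age, purchase_date) are hypothetical placeholders for whatever fields matter to the project.

# Quick "overview first" pass on a dataset, in the spirit of the guidelines
# above. Base R graphics only; the referenced columns are hypothetical.
overview <- function(df) {
  print(summary(df))                      # ranges, quartiles, NA counts

  hist(df$income, breaks = 50,
       main = "Income distribution",      # skewness, unexpected spikes
       xlab = "Income")

  boxplot(df$age,
          main = "Age of customers",      # does it match the target population?
          ylab = "Age")

  # Granularity of time: how many records per month?
  dates <- as.Date(df$purchase_date)
  plot(table(format(dates, "%Y-%m")),
       main = "Records per month", ylab = "Count")
}

# overview(customer_df)   # then zoom and filter on anything that looks off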
2.3.6 Common Tools for the Data Preparation Phase
Several tools are commonly used for this phase:
• Hadoop [10] can perform massively parallel ingest and custom analysis for web traffic parsing, GPS location analytics, genomic analysis, and combining of massive unstructured data feeds from multiple sources.
• Alpine Miner [11] provides a graphical user interface (GUI) for creating analytic workflows, including data manipulations and a series of analytic events such as staged data-mining techniques (for example, first select the top 100 customers, and then run descriptive statistics and clustering) on PostgreSQL and other Big Data sources.
• OpenRefine (formerly called Google Refine) [12] is "a free, open source, powerful tool for working with messy data." It is a popular GUI-based tool for performing data transformations, and it's one of the most robust free tools currently available.
• Similar to OpenRefine, Data Wrangler [13] is an interactive tool for data cleaning and transformation. Wrangler was developed at Stanford University and can be used to perform many transformations on a given dataset. In addition, data transformation outputs can be put into Java or Python. The advantage of this feature is that a subset of the data can be manipulated in Wrangler via its GUI, and then the same operations can be written out as Java or Python code to be executed against the full, larger dataset offline in a local analytic sandbox.
For Phase 2, the team needs assistance from IT, DBAs, or whoever controls the Enterprise Data Warehouse
(EDW) for data sources the data science team would like to use.
2.4 Phase 3: Model Planning
In Phase 3, the data science team identifies candidate models to apply to the data for clustering, classifying, or finding relationships in the data depending on the goal of the project, as shown in Figure 2-5. It is during this phase that the team refers to the hypotheses developed in Phase 1, when the team first became acquainted with the data and began understanding the business problems or domain area. These hypotheses help the team frame the analytics to execute in Phase 4 and select the right methods to achieve its objectives.
Some of the activities to consider in this phase include the following:
• Assess the structure of the datasets. The structure of the data sets is one factor that dictates the tools and analytical techniques for the next phase. Depending on whether the team plans to analyze textual data or transactional data, for example, different tools and approaches are required.
• Ensure that the analytical techniques enable the team to meet the business objectives and accept or reject the working hypotheses.
• Determine if the situation warrants a single model or a series of techniques as part of a larger analytic workflow. A few example models include association rules (Chapter 5, "Advanced Analytical Theory and Methods: Association Rules") and logistic regression (Chapter 6, "Advanced Analytical Theory and Methods: Regression"). Other tools, such as Alpine Miner, enable users to set up a series of steps and analyses and can serve as a front-end user interface (UI) for manipulating Big Data sources in PostgreSQL.
FIGURE 2-5 Model planning phase (key questions: Do I have a good idea about the type of model to try? Can I refine the analytic plan?)
In addition to the considerations just listed, it is useful to research and understand how other analysts generally approach a specific kind of problem. Given the kind of data and resources that are available, evaluate whether similar, existing approaches will work or if the team will need to create something new. Many times teams can get ideas from analogous problems that other people have solved in different industry verticals or domain areas. Table 2-2 summarizes the results of an exercise of this type, involving several domain areas and the types of models previously used in a classification type of problem after conducting research on churn models in multiple industry verticals. Performing this sort of diligence gives the team
ideas of how others have solved similar problems and presents the team with a list of candidate models to
try as part of the model planning phase.
TABLE 2-2 Research on Model Planning in Industry Verticals
Market Sector              Analytic Techniques/Methods Used
Consumer Packaged Goods    Multiple linear regression, automatic relevance determination (ARD), and decision tree
Retail Banking             Multiple regression
Retail Business            Logistic regression, ARD, decision tree
Wireless Telecom           Neural network, decision tree, hierarchical neurofuzzy systems, rule evolver, logistic regression
2.4.1 Data Exploration and Variable Selection
Although some data exploration takes place in the data preparation phase, those activities focus mainly on data hygiene and on assessing the quality of the data itself. In Phase 3, the objective of the data exploration is to understand the relationships among the variables to inform selection of the variables and methods and to understand the problem domain. As with earlier phases of the Data Analytics Lifecycle, it is important to spend time and focus attention on this preparatory work to make the subsequent phases of model selection and execution easier and more efficient. A common way to conduct this step involves using tools to perform data visualizations. Approaching the data exploration in this way aids the team in previewing the data and assessing relationships between variables at a high level.
In many cases, stakeholders and subject matter experts have instincts and hunches about what the data science team should be considering and analyzing. Likely, this group had some hypothesis that led to the genesis of the project. Often, stakeholders have a good grasp of the problem and domain, although they may not be aware of the subtleties within the data or the model needed to accept or reject a hypothesis. Other times, stakeholders may be correct, but for the wrong reasons (for instance, they may be correct about a correlation that exists but infer an incorrect reason for the correlation). Meanwhile, data scientists have to approach problems with an unbiased mind-set and be ready to question all assumptions.
As the team begins to question the incoming assumptions and test initial ideas of the project sponsors and stakeholders, it needs to consider the inputs and data that will be needed, and then it must examine whether these inputs are actually correlated with the outcomes that the team plans to predict or analyze. Some methods and types of models will handle correlated variables better than others. Depending on what the team is attempting to solve, it may need to consider an alternate method, reduce the number of data inputs, or transform the inputs to allow the team to use the best method for a given business problem. Some of these techniques will be explored further in Chapter 3 and Chapter 6.
The key to this approach is to aim for capturing the most essential predictors and variables rather than considering every possible variable that people think may influence the outcome. Approaching the problem in this manner requires iterations and testing to identify the most essential variables for the intended analyses. The team should plan to test a range of variables to include in the model and then focus on the most important and influential variables.
If the team plans to run regression analyses, identify the candidate predictors and outcome variables of the model. Plan to identify or create candidate predictors that demonstrate a strong relationship to the outcome variable rather than to the other input variables. This includes remaining vigilant for problems such
as serial correlation, multicollinearity, and other typical data modeling challenges that interfere with the
validity of these models. Sometimes these issues can be avoided simply by looking at ways to reframe a
given problem. In addition, sometimes determining correlation is all that is needed (“black box prediction”),
and in other cases, the objective of the project is to understand the causal relationship better. In the latter
case, the team wants the model to have explanatory power and needs to forecast or stress test the model
under a variety of situations and with different datasets.
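A simple way to start this examination is to look at correlations between the candidate inputs and the outcome, and among the inputs themselves. The R sketch below uses simulated data with a deliberately redundant predictor; the variable names are illustrative, and correlation is only a first screen, not a substitute for the fuller diagnostics discussed in Chapter 3 and Chapter 6.

# Sketch of a pre-modeling correlation check: keep predictors that relate
# to the outcome, and flag pairs of inputs that are highly correlated with
# each other (a multicollinearity warning sign). Data are simulated.
set.seed(7)
n  <- 500
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.2)        # nearly a copy of x1 (problematic)
x3 <- rnorm(n)
y  <- 2 * x1 + 0.5 * x3 + rnorm(n)   # outcome driven by x1 and x3

candidates <- data.frame(y = y, x1 = x1, x2 = x2, x3 = x3)

# Correlation of each candidate predictor with the outcome
round(cor(candidates)["y", ], 2)

# Pairwise correlations among the inputs; values near 1 or -1 suggest
# dropping or combining one of the pair before regression.
round(cor(candidates[, c("x1", "x2", "x3")]), 2)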
2.4.2 Model Selection
In the model selection subphase, the team’s main goal is to choose an analytical technique, or a short list
of candidate techniques, based on the end goal of the project. For the context of this book, a model is
discussed in general terms. In this case, a model simply refers to an abstraction from reality. One observes
events happening in a real-world situation or with live data and attempts to construct models that emulate
this behavior with a set of rules and conditions. In the case of machine learning and data mining, these
rules and conditions are grouped into several general sets of techniques, such as classification, association
rules, and clustering. When reviewing this list of types of potential models, the team can winnow down the
list to several viable models to try to address a given problem. More details on matching the right models
to common types of business problems are provided in Chapter 3 and Chapter 4, “Advanced Analytical
Theory and Methods: Clustering.”
An additional consideration in this area for dealing with Big Data involves determining if the team will
be using techniques that are best suited for structured data, unstructured data, or a hybrid approach. For
instance, the team can leverage MapReduce to analyze unstructured data, as highlighted in Chapter 10.
Lastly, the team should take care to identify and document the modeling assumptions it is making as it
chooses and constructs preliminary models.
Typically, teams create the initial models using a statistical software package such as R, SAS, or Matlab.
Although these tools are designed for data mining and machine learning algorithms, they may have limi-
tations when applying the models to very large datasets, as is common with Big Data. As such, the team
may consider redesigning these algorithms to run in the database itself during the pilot phase mentioned
in Phase 6.
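As a small illustration of screening a shortlist, the following R sketch fits two candidate techniques from Table 2-2, a logistic regression and a decision tree, on a simulated churn-style extract. The data, variable names, and choice of candidates are assumptions made for illustration; the rpart package is assumed to be available (it ships with standard R distributions).

# Sketch of screening two candidate techniques on a small extract before
# committing. Data are simulated; in practice this would be a sample
# pulled from the analytic sandbox.
library(rpart)   # decision trees

set.seed(11)
n  <- 1000
df <- data.frame(tenure = runif(n, 1, 60),
                 spend  = rnorm(n, 100, 30))
p_churn    <- plogis(-1 + 0.04 * (60 - df$tenure) - 0.01 * (df$spend - 100))
df$churned <- factor(ifelse(runif(n) < p_churn, "yes", "no"))

# Candidate 1: logistic regression
fit_glm <- glm(churned ~ tenure + spend, data = df, family = binomial)

# Candidate 2: decision tree
fit_tree <- rpart(churned ~ tenure + spend, data = df, method = "class")

summary(fit_glm)   # do coefficient signs and magnitudes make domain sense?
printcp(fit_tree)  # is the tree splitting on sensible variables?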
The team can move to the model building phase once it has a good idea about the type of model to try
and the team has gained enough knowledge to refine the analytics plan. Advancing from this phase requires
a general methodology for the analytical model, a solid understanding of the variables and techniques to
use, and a description or diagram of the analytic workflow.
2.4.3 Common Tools for the Model Planning Phase
Many tools are available to assist in this phase. Here are several of the more common ones:
• R [14] has a complete set of modeling capabilities and provides a good environment for building interpretive models with high-quality code. In addition, it has the ability to interface with databases via an ODBC connection and execute statistical tests and analyses against Big Data via an open source connection. These two factors make R well suited to performing statistical tests and analytics on Big Data. As of this writing, R contains nearly 5,000 packages for data analysis and graphical representation. New packages are posted frequently, and many companies are providing value-add
services for R (such as training, instruction, and best practices), as well as packaging it in ways to make it easier to use and more robust. This phenomenon is similar to what happened with Linux in the late 1980s and early 1990s, when companies appeared to package and make Linux easier for companies to consume and deploy. Use R with file extracts for offline analysis and optimal performance, and use RODBC connections for dynamic queries and faster development.
• SQL Analysis services [15] can perform in-database analytics of common data mining functions, involved aggregations, and basic predictive models.
• SAS/ACCESS [16] provides integration between SAS and the analytics sandbox via multiple data connectors such as ODBC, JDBC, and OLE DB. SAS itself is generally used on file extracts, but with SAS/ACCESS, users can connect to relational databases (such as Oracle or Teradata) and data warehouse appliances (such as Greenplum or Aster), files, and enterprise applications (such as SAP and Salesforce.com).
2.5 Phase 4: Model Building
In Phase 4, the data science team needs to develop data sets for training, testing, and production purposes. These data sets enable the data scientist to develop the analytical model and train it ("training data"), while holding aside some of the data ("hold-out data" or "test data") for testing the model. (These topics are addressed in more detail in Chapter 3.) During this process, it is critical to ensure that the training and test datasets are sufficiently robust for the model and analytical techniques. A simple way to think of these datasets is to view the training dataset for conducting the initial experiments and the test sets for validating an approach once the initial experiments and models have been run.
In the model building phase, shown in Figure 2-6, an analytical model is developed and fit on the training data and evaluated (scored) against the test data. The phases of model planning and model building can overlap quite a bit, and in practice one can iterate back and forth between the two phases for a while before settling on a final model.
Although the modeling techniques and logic required to develop models can be highly complex, the actual duration of this phase can be short compared to the time spent preparing the data and defining the approaches. In general, plan to spend more time preparing and learning the data (Phases 1-2) and crafting a presentation of the findings (Phase 5). Phases 3 and 4 tend to move more quickly, although they are more complex from a conceptual standpoint.
As part of this phase, the data science team needs to execute the models defined in Phase 3.
During this phase, users run models from analytical software packages, such as R or SAS, on file extracts and small data sets for testing purposes. On a small scale, assess the validity of the model and its results. For instance, determine if the model accounts for most of the data and has robust predictive power. At this point, refine the models to optimize the results, such as by modifying variable inputs or reducing correlated variables where appropriate. In Phase 3, the team may have had some knowledge of correlated variables or problematic data attributes, which will be confirmed or denied once the models are actually executed. When immersed in the details of constructing models and transforming data, many small decisions are often made about the data and the approach for the modeling. These details can be easily forgotten once the project is completed. Therefore, it is vital to record the results and logic of the model during this phase. In addition, one must take care to record any operating assumptions that were made in the modeling process regarding the data or the context.
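The following R sketch illustrates the training/hold-out discipline described above on simulated data: fit on the training rows, score the hold-out rows, and keep a note of the assumptions behind the chosen cutoff. All names, the split proportion, and the 0.5 cutoff are illustrative.

# Sketch of the train/test discipline: fit on training data, score on the
# hold-out set, and record the result. Data are simulated; in practice
# these would be extracts from the analytic sandbox.
set.seed(21)
n  <- 2000
df <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
df$y <- rbinom(n, 1, plogis(1.5 * df$x1 - 0.8 * df$x2))

# Hold out 30% of the rows for testing.
test_idx <- sample(seq_len(n), size = 0.3 * n)
train    <- df[-test_idx, ]
test     <- df[test_idx, ]

model <- glm(y ~ x1 + x2, data = train, family = binomial)

# Score the hold-out data and check simple accuracy against a 0.5 cutoff.
pred_prob  <- predict(model, newdata = test, type = "response")
pred_class <- ifelse(pred_prob > 0.5, 1, 0)
accuracy   <- mean(pred_class == test$y)
accuracy

# Record operating assumptions alongside the result, for example:
# "Assumes the 0.5 cutoff reflects the business cost of false positives vs. false negatives."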
FIGURE 2-6 Model building phase (key questions: Is the model robust enough? Have we failed for sure?)
Creating robust models that are suitable to a specific situation requires thoughtful consideration to ensure the models being developed ultimately meet the objectives outlined in Phase 1. Questions to consider include these:
• Does the model appear valid and accurate on the test data?
• Does the model output/behavior make sense to the domain experts? That is, does it appear as if the model is giving answers that make sense in this context?
• Do the parameter values of the fitted model make sense in the context of the domain?
• Is the model sufficiently accurate to meet the goal?
• Does the model avoid intolerable mistakes? Depending on context, false positives may be more seri-
ous or less serious than false negatives, for instance. (False positives and false negatives are discussed
further in Chapter 3 and Chapter 7, “Advanced Analytical Theory and Methods: Classification.”)
• Are more data or more inputs needed? Do any of the inputs need to be transformed or eliminated?
• Will the kind of model chosen support the runtime requirements?
• Is a different form of the model required to address the business problem? If so, go back to the model
planning phase and revise the modeling approach.
Once the data science team can determine either that the model is sufficiently robust to solve the problem or that the team has failed, it can move to the next phase in the Data Analytics Lifecycle.
2.5.1 Common Tools for the Model Building Phase
There are many tools available to assist in this phase, focused primarily on statistical analysis or data mining software. Common tools in this space include, but are not limited to, the following:
• Commercial Tools:
• SAS Enterprise Miner [17] allows users to run predictive and descriptive models based on large volumes of data from across the enterprise. It interoperates with other large data stores, has many partnerships, and is built for enterprise-level computing and analytics.
• SPSS Modeler [18] (provided by IBM and now called IBM SPSS Modeler) offers methods to explore and analyze data through a GUI.
• Matlab [19] provides a high-level language for performing a variety of data analytics, algorithms, and data exploration.
• Alpine Miner [11] provides a GUI front end for users to develop analytic workflows and interact with Big Data tools and platforms on the back end.
• STATISTICA [20] and Mathematica [21] are also popular and well-regarded data mining and analytics tools.
• Free or Open Source Tools:
• R and PL/R [14]: R was described earlier in the model planning phase, and PL/R is a procedural language for PostgreSQL with R. Using this approach means that R commands can be executed in-database. This technique provides higher performance and is more scalable than running R in memory.
• Octave [22], a free software programming language for computational modeling, has some of the functionality of Matlab. Because it is freely available, Octave is used in major universities when teaching machine learning.
• WEKA [23] is a free data mining software package with an analytic workbench. The functions created in WEKA can be executed within Java code.
• Python is a programming language that provides toolkits for machine learning and analysis, such as scikit-learn, numpy, scipy, and pandas, and related data visualization using matplotlib.
• SQL in-database implementations, such as MADlib [24], provide an alternative to in-memory desktop analytical tools. MADlib provides an open-source machine learning library of algorithms that can be executed in-database, for PostgreSQL or Greenplum.
2.6 Phase 5: Communicate Results
After executing the model, the team needs to compare the outcomes of the modeling to the criteria established for success and failure. In Phase 5, shown in Figure 2-7, the team considers how best to articulate the findings and outcomes to the various team members and stakeholders, taking into account caveats, assumptions, and any limitations of the results. Because the presentation is often circulated within an organization, it is critical to articulate the results properly and position the findings in a way that is appropriate for the audience.
FIGURE 2-7 Communicate results phase
As part of Phase 5, the team needs to determine if it succeeded or failed in its objectives. Many times people do not want to admit to failing, but in this instance failure should not be considered a true failure, but rather a failure of the data to adequately accept or reject a given hypothesis. This concept can be counterintuitive for those who have been told their whole careers not to fail. However, the key is to remember that the team must be rigorous enough with the data to determine whether it will prove or disprove the hypotheses outlined in Phase 1 (discovery). Sometimes teams have only done a superficial analysis, which is not robust enough to accept or reject a hypothesis. Other times, teams perform very robust analysis and are searching for ways to show results, even when results may not be there. It is important to strike a balance between these two extremes when it comes to analyzing data and being pragmatic in terms of showing real-world results.
When conducting this assessment, determine if the results are statistically significant and valid. If they are, identify the aspects of the results that stand out and may provide salient findings when it comes time to communicate them. If the results are not valid, think about adjustments that can be made to refine and iterate on the model to make it valid. During this step, assess the results and identify which data points may have been surprising and which were in line with the hypotheses that were developed in Phase 1. Comparing the actual results to the ideas formulated early on produces additional ideas and insights that would have been missed if the team had not taken time to formulate initial hypotheses early in the process.
By this time, the team should have determined which model or models address the analytical challenge
in the most appropriate way. In addition, the team should have ideas of some of the findings as a result of the
project. The best practice in this phase is to record all the findings and then select the three most significant
ones that can be shared with the stakeholders. In addition, the team needs to reflect on the implications
of these findings and measure the business value. Depending on what emerged as a result of the model,
the team may need to spend time quantifying the business impact of the results to help prepare for the
presentation and demonstrate the value of the findings. Doug Hubbard's work [6] offers insights on how to assess intangibles in business and quantify the value of seemingly unmeasurable things.
Now that the team has run the model, completed a thorough discovery phase, and learned a great deal
about the datasets, reflect on the project and consider what obstacles were in the project and what can be
improved in the future. Make recommendations for future work or improvements to existing processes, and
consider what each of the team members and stakeholders needs to fulfill her responsibilities. For instance, sponsors must champion the project. Stakeholders must understand how the model affects their processes. (For example, if the team has created a model to predict customer churn, the Marketing team must understand how to use the churn model predictions in planning their interventions.) Production engineers need to operationalize the work that has been done. In addition, this is the phase to underscore the business benefits of the work and begin making the case to implement the logic into a live production environment.
As a result of this phase, the team will have documented the key findings and major insights derived
from the analysis. The deliverable of this phase will be the most visible portion of the process to the outside
stakeholders and sponsors, so take care to clearly articulate the results, methodology, and business value
of the findings. More details will be provided about data visualization tools and references in Chapter 12,
“The Endgame, or Putting It All Together.”
2.7 Phase 6: Operationalize
In the final phase, the team communicates the benefits of the project more broadly and sets up a pilot
project to deploy the work in a controlled way before broadening the work to a full enterprise or ecosystem
of users. In Phase 4, the team scored the model in the analytics sandbox. Phase 6, shown in Figure 2-8,
represents the first time that most analytics teams approach deploying the new analytical methods or
models in a production environment. Rather than deploying these models immediately on a wide-scale
basis, the risk can be managed more effectively and the team can learn by undertaking a small-scope pilot
deployment before a wide-scale rollout. This approach enables the team to learn about the performance
and related constraints of the model in a production environment on a small scale and make adjustments
before a full deployment. During the pilot project, the team may need to consider executing the algorithm
in the database rather than with in-memory tools such as R because the run time is significantly faster and
more efficient than running in-memory, especially on larger datasets.
FIGURE 2-8 Model operationalize phase
While scoping the effort involved in conducting a pilot project, consider running the model in a production environment for a discrete set of products or a single line of business, which tests the model in a live setting. This allows the team to learn from the deployment and make any needed adjustments before launching the model across the enterprise. Be aware that this phase can bring in a new set of team members, usually the engineers responsible for the production environment, who have a new set of issues and concerns beyond those of the core project team. This technical group needs to ensure that running the model fits smoothly into the production environment and that the model can be integrated into related business processes.
Part of the operationalizing phase includes creating a mechanism for performing ongoing monitoring of model accuracy and, if accuracy degrades, finding ways to retrain the model. If feasible, design alerts for when the model is operating "out-of-bounds." This includes situations when the inputs are beyond the range that the model was trained on, which may cause the outputs of the model to be inaccurate or invalid. If this begins to happen regularly, the model needs to be retrained on new data.
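One simple, hedged way to implement such an alert is to record the observed range of each numeric input at training time and flag incoming records that fall outside those ranges. The R sketch below is illustrative only, with simulated data and hypothetical column names; production monitoring would normally live in the scoring pipeline itself.

# One simple version of the "out-of-bounds" alert described above: store the
# observed range of each numeric input at training time, then flag incoming
# records that fall outside those ranges. Illustrative only.
training_ranges <- function(train_df) {
  sapply(train_df, function(col) range(col, na.rm = TRUE))
}

out_of_bounds <- function(new_df, ranges) {
  flags <- sapply(names(new_df), function(nm) {
    new_df[[nm]] < ranges[1, nm] | new_df[[nm]] > ranges[2, nm]
  })
  rowSums(flags) > 0    # TRUE for any record with at least one input outside its training range
}

# Example with simulated data and hypothetical columns:
train  <- data.frame(age = rnorm(1000, 40, 10), spend = rnorm(1000, 100, 20))
ranges <- training_ranges(train)

incoming <- data.frame(age = c(35, 95), spend = c(110, 400))
out_of_bounds(incoming, ranges)   # the second record should be flagged; consider retraining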
Often, analytical projects yield new insights about a business, a problem, or an idea that people may
have taken at face value or thought was impossible to explore. Four main deliverables can be created to
meet the needs of most stakeholders. This approach for developing the four deliverables is discussed in
greater detail in Chapter 12.
Figure 2-9 portrays the key outputs for each of the main stakeholders of an analytics project and what
they usually expect at the conclusion of a project.
• Business User typically tries to determine the benefits and implications of the findings to the business.
• Project Sponsor typically asks questions related to the business impact of the project, the risks and return on investment (ROI), and the way the project can be evangelized within the organization (and beyond).
• Project Manager needs to determine if the project was completed on time and within budget and how well the goals were met.
• Business Intelligence Analyst needs to know if the reports and dashboards he manages will be impacted and need to change.
• Data Engineer and Database Administrator (DBA) typically need to share their code from the analytics project and create a technical document on how to implement it.
• Data Scientist needs to share the code and explain the model to her peers, managers, and other stakeholders.
Although these seven roles represent many interests within a project, these interests usually overlap,
and most of them can be met with four main deliverables.
• Presentation for project sponsors: This contains high-level takeaways for executive-level stakeholders, with a few key messages to aid their decision-making process. Focus on clean, easy visuals for the presenter to explain and for the viewer to grasp.
• Presentation for analysts, which describes business process changes and reporting changes. Fellow data scientists will want the details and are comfortable with technical graphs (such as Receiver Operating Characteristic [ROC] curves, density plots, and histograms shown in Chapter 3 and Chapter 7).
• Code for technical people.
• Technical specifications of implementing the code.
As a general rule, the more executive the audience, the more succinct the presentation needs to be. Most executive sponsors attend many briefings in the course of a day or a week. Ensure that the presentation gets to the point quickly and frames the results in terms of value to the sponsor's organization. For instance, if the team is working with a bank to analyze cases of credit card fraud, highlight the frequency of fraud, the number of cases in the past month or year, and the cost or revenue impact to the bank (or focus on the reverse: how much more revenue the bank could gain if it addresses the fraud problem). This demonstrates the business impact better than deep dives on the methodology. The presentation needs to include supporting information about analytical methodology and data sources, but generally only as supporting detail or to ensure the audience has confidence in the approach that was taken to analyze the data.
FIGURE 2-9 Key outputs from a successful analytics project (code, technical specs, presentation for analysts, and presentation for project sponsors)
When presenting to other audiences with more quantitative backgrounds, focus more time on the methodology and findings. In these instances, the team can be more expansive in describing the outcomes, methodology, and analytical experiment with a peer group. This audience will be more interested in the techniques, especially if the team developed a new way of processing or analyzing data that can be reused in the future or applied to similar problems. In addition, use imagery or data visualization when possible. Although it may take more time to develop imagery, people tend to remember mental pictures that demonstrate a point more than long lists of bullets [25]. Data visualization and presentations are discussed further in Chapter 12.
2.8 Case Study: Global Innovation Network
and Analysis (GINA)
EMC’s Global Innovation Network and Analytics (GINA) team is a group of senior technologists located in
centers of excellence (COEs) around the world. This team’s charter is to engage employees across global
COEs to drive innovation, research, and university partnerships. In 2012, a newly hired director wanted to
improve these activities and provide a mechanism to track and analyze the related information. In addition,
this team wanted to create more robust mechanisms for capturing the results of its informal conversations
with other thought leaders within EMC, in academia, or in other organizations, which could later be mined
for insights.
The GINA team thought its approach would provide a means to share ideas globally and increase
knowledge sharing among GINA members who may be separated geographically. It planned to create a
data repository containing both structured and unstructured data to accomplish three main goals.
o Store formal and informal data.
o Track research from global technologists.
o Mine the data for patterns and insights to improve the team’s operations and strategy.
The GINA case study provides an example of how a team applied the Data Analytics Lifecycle to analyze innovation data at EMC. Innovation is typically a difficult concept to measure, and this team wanted to look for ways to use advanced analytical methods to identify key innovators within the company.
2.8.1 Phase 1: Discovery
In the GINA project’s discovery phase, the team began identifying data sources. Although GINA was a
group of technologists skilled in many different aspects of engineering, it had some data and ideas about
what it wanted to explore but lacked a formal team that could perform these analytics. After consulting
with various experts including Tom Davenport, a noted expert in analytics at Babson College, and Peter
Gloor, an expert in collective intelligence and creator of COINs (Collaborative Innovation Networks) at MIT,
the team decided to crowdsource the work by seeking volunteers within EMC.
Here is a list of how the various roles on the working team were fulfilled.
o Business User, Project Sponsor, Project Manager: Vice President from Office of the CTO
o Business Intelligence Analyst: Representatives from IT
o Data Engineer and Database Administrator (DBA): Representatives from IT
o Data Scientist: Distinguished Engineer, who also developed the social graphs shown in the GINA
case study
The project sponsor’s approach was to leverage social media and blogging [26] to accelerate the col-
lection of innovation and research data worldwide and to motivate teams of “volunteer” data scientists
at worldwide locations. Given that he lacked a formal team, he needed to be resourceful about finding
people who were both capable and willing to volunteer their time to work on interesting problems. Data
scientists tend to be passionate about data, and the project sponsor was able to tap into this passion of
highly talented people to accomplish challenging work in a creative way.
The data for the project fell into two main categories. The first category represented five years of idea submissions from EMC's internal innovation contests, known as the Innovation Roadmap (formerly called the Innovation Showcase). The Innovation Roadmap is a formal, organic innovation process whereby employees from around the globe submit ideas that are then vetted and judged. The best ideas are selected for further incubation. As a result, the data is a mix of structured data, such as idea counts, submission dates, inventor names, and unstructured content, such as the textual descriptions of the ideas themselves.
The second category of data encompassed minutes and notes representing innovation and research
activity from around the world. This also represented a mix of structured and unstructured data. The
structured data included attributes such as dates, names, and geographic locations. The unstructured
documents contained the “who, what, when, and where” information that represents rich data about
knowledge growth and transfer within the company. This type of information is often stored in business
silos that have little to no visibility across disparate research teams.
The 10 main innovation hypotheses (IHs) that the GINA team developed were as follows:
o IH1: Innovation activity in different geographic regions can be mapped to corporate strategic
directions.
o IH2: The length of time it takes to deliver ideas decreases when global knowledge transfer occurs as
part of the idea delivery process.
o IH3: Innovators who participate in global knowledge transfer deliver ideas more quickly than those
who do not.
o IH4: An idea submission can be analyzed and evaluated for the likelihood of receiving funding.
o IH5: Knowledge discovery and growth for a particular topic can be measured and compared across
geographic regions.
o IH6: Knowledge transfer activity can identify research-specific boundary spanners in disparate
regions.
o IH7: Strategic corporate themes can be mapped to geographic regions.
o IH8: Frequent knowledge expansion and transfer events reduce the time it takes to generate a corpo-
rate asset from an idea.
o IH9: Lineage maps can reveal when knowledge expansion and transfer did not (or has not) resulted in
a corporate asset.
o IH10: Emerging research topics can be classified and mapped to specific ideators, innovators, bound-
ary spanners, and assets.
The GINA IHs can be grouped into two categories:
o Descriptive analytics of what is currently happening to spark further creativity, collaboration, and
asset generation
o Predictive analytics to advise executive management of where it should be investing in the future
2.8.2 Phase 2: Data Preparation
The team partnered with its IT department to set up a new analytics sandbox to store and experiment on
the data. During the data exploration exercise, the data scientists and data engineers began to notice that
certain data needed conditioning and normalization. In addition, the team realized that several missing
data sets were critical to testing some of the analytic hypotheses.
As the team explored the data, it quickly realized that if it did not have data of sufficient quality or could
not get good quality data, it would not be able to perform the subsequent steps in the lifecycle process.
As a result, it was important to determine what level of data quality and cleanliness was sufficient for the
project being undertaken. In the case of the GINA, the team discovered that many of the names of the
researchers and people interacting with the universities were misspelled or had leading and trailing spaces
in the datastore. Seemingly small problems such as these in the data had to be addressed in this phase to
enable better analysis and data aggregation in subsequent phases.
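As a small illustration of this kind of conditioning, the following R sketch shows how stray whitespace and inconsistent capitalization in name fields can be normalized before aggregation; the gina data frame and researcher_name column are hypothetical stand-ins, not the actual GINA data.

# hypothetical data frame with messy researcher names
gina <- data.frame(researcher_name = c(" John Smith", "john smith ", "JOHN SMITH"),
                   stringsAsFactors = FALSE)

# strip leading and trailing spaces, then normalize case
gina$researcher_name <- gsub("^\\s+|\\s+$", "", gina$researcher_name)
gina$researcher_name <- tolower(gina$researcher_name)

table(gina$researcher_name)   # the three variants now collapse into a single name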
2.8.3 Phase 3: Model Planning
In the GINA project, for much of the dataset, it seemed feasible to use social network analysis techniques to
look at the networks of innovators within EMC. In other cases, it was difficult to come up with appropriate
ways to test hypotheses due to the lack of data. In one case (IH9), the team made a decision to initiate a
longitudinal study to begin tracking data points over time regarding people developing new intellectual
property. This data collection would enable the team to test the following two ideas in the future:
o IH8: Frequent knowledge expansion and transfer events reduce the amount of time it takes to
generate a corporate asset from an idea.
o IH9: Lineage maps can reveal when knowledge expansion and transfer did not (or has not) resulted
in a corporate asset.
For the longitudinal study being proposed, the team needed to establish goal criteria for the study.
Specifically, it needed to determine the end goal of a successful idea that had traversed the entire journey.
The parameters related to the scope of the study included the following considerations:
o Identify the right milestones to achieve this goal.
o Trace how people move ideas from each milestone toward the goal.
o Once this is done, trace ideas that die, and trace others that reach the goal. Compare the journeys of
ideas that make it and those that do not.
o Compare the times and the outcomes using a few different methods (depending on how the data is
collected and assembled). These could be as simple as t-tests or perhaps involve different types of
classification algorithms.
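At the simpler end of that spectrum, a comparison of delivery times could be sketched with a two-sample t-test, as below; the delivery_days values and the split by knowledge transfer are invented for illustration and are not GINA data.

# hypothetical idea delivery times (in days), split by whether global
# knowledge transfer occurred during delivery (in the spirit of IH2/IH3)
with_transfer    <- c(120, 95, 150, 110, 130, 100)
without_transfer <- c(180, 160, 200, 170, 190, 155)

# test whether the mean delivery times differ
t.test(with_transfer, without_transfer)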
2.8.4 Phase 4: Model Building
In Phase 4, the GINA team employed several analytical methods. This included work by the data scientist
using Natural Language Processing (NLP) techniques on the textual descriptions of the Innovation Roadmap
ideas. In addition, he conducted social network analysis using R and RStudio, and then he developed social
graphs and visualizations of the network of communications related to innovation using R's ggplot2
package. Examples of this work are shown in Figures 2-10 and 2-11.
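The exact code behind those graphs is not reproduced here. As a rough sketch of the general approach, a package such as igraph can turn a table of who-communicated-with-whom into a social graph and compute betweenness scores; the edge list below is invented for illustration.

library(igraph)

# hypothetical edge list of innovation-related communications
edges <- data.frame(from = c("A", "A", "B", "C", "D"),
                    to   = c("B", "C", "C", "D", "E"))

g <- graph_from_data_frame(edges, directed = FALSE)
betweenness(g)   # higher scores suggest potential boundary spanners
plot(g)          # draw the social graph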
FIGURE 2-10 Social graph [27] visualization of idea submitters and finalists

FIGURE 2-11 Social graph visualization (betweenness ranks: 1. 578, 2. 511, 3. 341, 4. 171)
3.1 Introduction to R

# import a CSV file of the total annual sales for each customer
sales <- read.csv("c:/data/yearly_sales.csv")

# examine the imported dataset
head(sales)
summary(sales)

# plot num_of_orders vs. sales
plot(sales$num_of_orders,sales$sales_total,
     main="Number of Orders vs. Sales")

# perform a statistical analysis (fit a linear regression model)
results <- lm(sales$sales_total ~ sales$num_of_orders)
summary(results)

# perform some diagnostics on the fitted model
# plot histogram of the residuals
hist(results$residuals, breaks = 800)
In this example, the data file is imported using the read.csv() function. Once the file has been
imported, it is useful to examine the contents to ensure that the data was loaded properly as well as to become
familiar with the data. In the example, the head() function, by default, displays the first six records of sales.
# examine the imported dataset
head(sales)
  cust_id sales_total num_of_orders gender
1  100001      800.64             3      F
2  100002      217.53             3      F
3  100003       74.58             2      M
4  100004      498.60             3      M
5  100005      723.11             4      F
6  100006       69.43             2      F

The summary() function provides some descriptive statistics, such as the mean and median, for
each data column. Additionally, the minimum and maximum values as well as the 1st and 3rd quartiles are
provided. Because the gender column contains two possible characters, an "F" (female) or "M" (male),
the summary() function provides the count of each character's occurrence.

summary(sales)
    cust_id        sales_total      num_of_orders    gender
 Min.   :100001   Min.   :  30.02   Min.   : 1.000   F:5035
 1st Qu.:102501   1st Qu.:  80.29   1st Qu.: 2.000   M:4965
 Median :105001   Median : 151.65   Median : 2.000
 Mean   :105001   Mean   : 249.46   Mean   : 2.428
 3rd Qu.:107500   3rd Qu.: 295.50   3rd Qu.: 3.000
 Max.   :110000   Max.   :7606.09   Max.   :22.000
Plotting a dataset's contents can provide information about the relationships between the various
columns. In this example, the plot() function generates a scatterplot of the number of orders
(sales$num_of_orders) against the annual sales (sales$sales_total). The $ is used to reference
a specific column in the dataset sales. The resulting plot is shown in Figure 3-1.

# plot num_of_orders vs. sales
plot(sales$num_of_orders,sales$sales_total,
     main="Number of Orders vs. Sales")
FIGURE 3-1 Graphically examining the data (scatterplot titled "Number of Orders vs. Total Sales", with sales$num_of_orders on the horizontal axis and sales$sales_total on the vertical axis)
Each point corresponds to the number of orders and the total sales for each customer. The plot indicates
that the annual sales are proportional to the number of orders placed. Although the observed relationship
between these two variables is not purely linear, the analyst decided to apply linear regression using the
lm() function as a first step in the modeling process.

results <- lm(sales$sales_total ~ sales$num_of_orders)
results

Call:
lm(formula = sales$sales_total ~ sales$num_of_orders)

Coefficients:
        (Intercept)  sales$num_of_orders
             -154.1                166.2

The resulting intercept and slope values are -154.1 and 166.2, respectively, for the fitted linear equation.
However, results stores considerably more information that can be examined with the summary()
function. Details on the contents of results are examined by applying the attributes() function.
Because regression analysis is presented in more detail later in the book, the reader should not overly focus
on interpreting the following output.
summary(results)
Call:
lm(formula = sales$sales_total ~ sales$num_of_orders)

Residuals:
    Min      1Q  Median      3Q     Max
 -666.5  -125.5   -26.7    86.6  4103.4

Coefficients:
                     Estimate Std. Error t value Pr(>|t|)
(Intercept)          -154.128      4.129  -37.33   <2e-16 ***
sales$num_of_orders   166.221      1.462  113.66   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 210.8 on 9998 degrees of freedom
Multiple R-squared: 0.5617,    Adjusted R-squared: 0.5617

The summary() function is an example of a generic function. A generic function is a group of functions
sharing the same name but behaving differently depending on the number and the type of arguments
they receive. Utilized previously, plot() is another example of a generic function; the plot is determined
by the passed variables. Generic functions are used throughout this chapter and the book. In the final
portion of the example, the following R code uses the generic function hist() to generate a histogram
(Figure 3-2) of the residuals stored in results. The function call illustrates that optional parameter values
can be passed. In this case, the number of breaks is specified to observe the large residuals.

# perform some diagnostics on the fitted model
# plot histogram of the residuals
hist(results$residuals, breaks = 800)
FIGURE 3-2 Evidence of large residuals (histogram titled "Histogram of results$residuals", with the residual values on the horizontal axis and frequency on the vertical axis)
This simple example illustrates a few of the basic model planning and building tasks that may occur
in Phases 3 and 4 of the Data Analytics Lifecycle. Throughout this chapter, it is useful to envision how the
presented R functionality will be used in a more comprehensive analysis.
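To make the idea of a generic function more concrete, here is a minimal sketch; the describe() function and its methods are invented for illustration and are not part of base R.

# an S3 generic: the method that runs depends on the class of the argument
describe <- function(x) UseMethod("describe")
describe.numeric   <- function(x) cat("numeric vector with mean", mean(x), "\n")
describe.character <- function(x) cat("character vector of length", length(x), "\n")
describe.default   <- function(x) cat("object of class", class(x), "\n")

describe(c(1, 2, 3))    # dispatches to describe.numeric
describe(c("a", "b"))   # dispatches to describe.character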
3.1.1 R Graphical User Interfaces
R software uses a command-line interface (CLI) that is similar to the BASH shell in Linux or the interactive
versions of scripting languages such as Python. UNIX and Linux users can enter the command R at the terminal
prompt to use the CLI. For Windows installations, R comes with RGui.exe, which provides a basic graphical
user interface (GUI). However, to improve the ease of writing, executing, and debugging R code, several
additional GUIs have been written for R. Popular GUIs include the R Commander [3], Rattle [4], and RStudio
[5]. This section presents a brief overview of RStudio, which was used to build the R examples in this book.
Figure 3-3 provides a screenshot of the previous R code example executed in RStudio.
FIGURE 3-3 RStudio GUI (screenshot showing the Scripts, Workspace, Plots, and Console panes)

The four highlighted window panes follow.
• Scripts: Serves as an area to write and save R code
• Workspace: Lists the datasets and variables in the R environment
• Plots: Displays the plots generated by the R code and provides a straightforward mechanism to
export the plots
• Console: Provides a history of the executed R code and the output
Additionally, the console pane can be used to obtain help information on R. Figure 3-4 illustrates that
by entering ?lm at the console prompt, the help details of the lm() function are provided on the right.
Alternatively, help(lm) could have been entered at the console prompt.
Functions such as edit() and fix() allow the user to update the contents of an R variable.
Alternatively, such changes can be implemented with RStudio by selecting the appropriate variable from
the workspace pane.
R allows one to save the workspace environment, including variables and loaded libraries, into an
.Rdata file using the save.image() function. An existing .Rdata file can be loaded using the
load() function. Tools such as RStudio prompt the user for whether the developer wants to
save the workspace contents prior to exiting the GUI.
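A minimal sketch of that workflow, assuming a writable c:/data directory as in the earlier examples:

# save all objects in the current workspace to an .Rdata file
save.image("c:/data/workspace.Rdata")

# in a later session, restore the saved objects into the environment
load("c:/data/workspace.Rdata")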
The reader is encouraged to install R and a preferred GUI to try out the R examples provided in the book
and utilize the help functionality to access more details about the discussed topics.
FIGURE 3-4 RStudio GUI with the help page for the lm() function displayed in the right-hand pane
Although plots can be saved using the RStudio GUI, plots can also be saved using R code by specifying
the appropriate graphic devices. Using the jpeg() function, the following R code creates a new JPEG
file, adds a histogram plot to the file, and then closes the file. Such techniques are useful when automating
standard reports. Other functions, such as png(), bmp(), pdf(), and postscript(), are available
in R to save plots in the desired format.

jpeg(file="c:/data/sales_hist.jpeg")   # create a new jpeg file
hist(sales$num_of_orders)              # export histogram to jpeg
dev.off()                              # shut off the graphic device

More information on data imports and exports can be found at http://cran.r-project.org/
doc/manuals/r-release/R-data.html, such as how to import datasets from statistical software
packages including Minitab, SAS, and SPSS.
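For example, a round trip of the sales data frame through a CSV file can be sketched as follows; the output path is an assumption.

# export the sales data frame to a new CSV file
write.csv(sales, file = "c:/data/yearly_sales_copy.csv", row.names = FALSE)

# re-import it and confirm the dimensions match the original
sales_copy <- read.csv("c:/data/yearly_sales_copy.csv")
dim(sales_copy)    # should match dim(sales)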
3.1.3 Attribute and Data Types
In the earlier example, the sales variable contained a record for each customer. Several characteristics,
such as total annual sales, number of orders, and gender, were provided for each customer. In general,
these characteristics or attributes provide the qualitative and quantitative measures for each item or subject
of interest. Attributes can be categorized into four types: nominal, ordinal, interval, and ratio (NOIR) [8].
Table 3-2 distinguishes these four attribute types and shows the operations they support. Nominal and
ordinal attributes are considered categorical attributes, whereas interval and ratio attributes are considered
numeric attributes.
TABLE 3-2 NOIR Attribute Types

Nominal (Categorical/Qualitative)
  Definition: The values represent labels that distinguish one from another.
  Examples: ZIP codes, nationality, street names, gender, employee ID numbers, TRUE or FALSE
  Operations: =, ≠

Ordinal (Categorical/Qualitative)
  Definition: Attributes imply a sequence.
  Examples: Quality of diamonds, academic grades, magnitude of earthquakes
  Operations: =, ≠, <, ≤, >, ≥

Interval (Numeric/Quantitative)
  Definition: The difference between two values is meaningful.
  Examples: Temperature in Celsius or Fahrenheit, calendar dates, latitudes
  Operations: =, ≠, <, ≤, >, ≥, +, -

Ratio (Numeric/Quantitative)
  Definition: Both the difference and the ratio of two values are meaningful.
  Examples: Age, temperature in Kelvin, counts, length, weight
  Operations: =, ≠, <, ≤, >, ≥, +, -, ×, ÷
Data of one attribute type may be converted to another. For example, the quality of diamonds {Fair,
Good, Very Good, Premium, Ideal} is considered ordinal but can be converted to nominal {Good, Excellent}
with a defined mapping. Similarly, a ratio attribute like Age can be converted into an ordinal attribute such
as {Infant, Adolescent, Adult, Senior}. Understanding the attribute types in a given dataset is important
to ensure that the appropriate descriptive statistics and analytic methods are applied and properly inter-
preted. For example, the mean and standard deviation of U.S. postal ZIP codes are not very meaningful or
appropriate. Proper handling of categorical variables will be addressed in subsequent chapters. Also, it is
useful to consider these attribute types during the following discussion on R data types.
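As a small sketch of such a conversion, the cut() function can map a ratio attribute like age onto ordered categories; the breakpoints below are illustrative assumptions.

# convert ages (a ratio attribute) into an ordinal attribute
age <- c(2, 15, 38, 72)
age_group <- cut(age, breaks = c(0, 12, 19, 64, Inf),
                 labels = c("Infant", "Adolescent", "Adult", "Senior"),
                 ordered_result = TRUE)
age_group   # returns Infant Adolescent Adult Senior, with ordered levels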
Numeric, Character, and Logical Data Types
Like other programming languages, R supports the use of numeric, character, and logical (Boolean) values.
Examples of such variables are given in the following R code.
i <- 1                  # create a numeric variable
sport <- "football"     # create a character variable
flag <- TRUE            # create a logical variable

R provides several functions, such as class() and typeof(), to examine the characteristics of a
given variable. The class() function represents the abstract class of an object. The typeof() function
determines the way an object is stored in memory. Although i appears to be an integer, i is internally
stored using double precision. To improve the readability of the code segments in this section, the inline
R comments are used to explain the code or to provide the returned values.

class(i)        # returns "numeric"
typeof(i)       # returns "double"
class(sport)    # returns "character"
typeof(sport)   # returns "character"
class(flag)     # returns "logical"
typeof(flag)    # returns "logical"
Additional R functions exist that can test the variables and coerce a variable into a specific type. The
following R code illustrates how to test if i is an integer using the is.integer() function and to coerce
i into a new integer variable, j, using the as.integer() function. Similar functions can be applied
for double, character, and logical types.

is.integer(i)        # returns FALSE
j <- as.integer(i)   # coerces contents of i into an integer
is.integer(j)        # returns TRUE

The application of the length() function reveals that the created variables each have a length of 1.
One might have expected the returned length of sport to have been 8 for each of the characters in the
string "football". However, these three variables are actually one-element vectors.

length(i)       # returns 1
length(flag)    # returns 1
length(sport)   # returns 1 (not 8 for "football")
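If the number of characters in a string is what is needed, the nchar() function provides it.

nchar(sport)    # returns 8, the number of characters in "football"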
Vectors
Vectors are a basic building block for data in R. As seen previously, simple R variables are actually vectors.
A vector can only consist of values in the same class. The tests for vectors can be conducted using the
is.vector() function.

is.vector(i)        # returns TRUE
is.vector(flag)     # returns TRUE
is.vector(sport)    # returns TRUE
R provides functionality that enables the easy creation and manipulation of vectors. The following R
code illustrates how a vector can be created using the combine function, c(), or the colon operator, :,
to build a vector from the sequence of integers from 1 to 5. Furthermore, the code shows how the values
of an existing vector can be easily modified or accessed. The code, related to the z vector, indicates how
logical comparisons can be built to extract certain elements of a given vector.
u <- c("red", "yellow", "blue")   # create a vector "red" "yellow" "blue"
u                                 # returns "red" "yellow" "blue"
u[1]                              # returns "red" (1st element in u)

v <- 1:5                          # create a vector 1 2 3 4 5
v                                 # returns 1 2 3 4 5
sum(v)                            # returns 15

w <- v * 2                        # create a vector 2 4 6 8 10
w                                 # returns 2 4 6 8 10
w[3]                              # returns 6 (the 3rd element of w)

z <- v + w                        # sums two vectors element by element
z                                 # returns 3 6 9 12 15
z > 8                             # returns FALSE FALSE TRUE TRUE TRUE
z[z > 8]                          # returns 9 12 15
z[z > 8 | z < 5]                  # returns 3 9 12 15 ("|" denotes "or")
Sometimes it is necessary to initialize a vector of a specific length and then populate the content of
the vector later. The vector() function, by default, creates a logical vector. A vector of a different type
can be specified by using the mode parameter. The vector c, an integer vector of length 0, may be useful
when the number of elements is not initially known and the new elements will later be added to the end
of the vector as the values become available.

a <- vector(length=3)             # create a logical vector of length 3
a                                 # returns FALSE FALSE FALSE
b <- vector(mode="numeric", 3)    # create a numeric vector of length 3
typeof(b)                         # returns "double"
b[2] <- 3.1                       # assign 3.1 to the 2nd element
b                                 # returns 0.0 3.1 0.0
c <- vector(mode="integer", 0)    # create an integer vector of length 0
c                                 # returns integer(0)
length(c)                         # returns 0
Although vectors may appear to be analogous to arrays of one dimension, they are technically dimen-
sionless, as seen in the following R code. The concept of arrays and matrices is addressed in the following
discussion.
length(b)    # returns 3
dim(b)       # returns NULL (an undefined value)
Arrays and Matrices
The array () function can be used to restructure a vector as an array. For example, the following R code
builds a three-dimensional array to hold the quarterly sales for three regions over a two-year period and
then assigns the sales amount of $158,000 to the second region for the first quarter of the first year.
# the dimensions are 3 regions, 4 quarters, and 2 years
quarterly_sales <- array(0, dim=c(3,4,2))
quarterly_sales[2,1,1] <- 158000
quarterly_sales

, , 1

       [,1] [,2] [,3] [,4]
[1,]      0    0    0    0
[2,] 158000    0    0    0
[3,]      0    0    0    0

, , 2

     [,1] [,2] [,3] [,4]
[1,]    0    0    0    0
[2,]    0    0    0    0
[3,]    0    0    0    0
A two-dimensional array is known as a matrix. The following code initializes a matrix to hold the quar-
terly sales for the three regions. The parameters nrow and ncol define the number of rows and columns,
respectively, for the sales_matrix.
sales_matrix <- matrix(0, nrow = 3, ncol = 4)
sales_matrix

     [,1] [,2] [,3] [,4]
[1,]    0    0    0    0
[2,]    0    0    0    0
[3,]    0    0    0    0

R provides the standard matrix operations such as addition, subtraction, and multiplication, as well
as the transpose function t() and the inverse matrix function matrix.inverse() included in the
matrixcalc package. The following R code builds a 3 x 3 matrix, M, and multiplies it by its inverse to
obtain the identity matrix.

library(matrixcalc)
M <- matrix(c(1,3,3,5,0,4,3,3,3), nrow = 3, ncol = 3)   # build a 3x3 matrix
M %*% matrix.inverse(M)    # multiply M by inverse(M)

     [,1] [,2] [,3]
[1,]    1    0    0
[2,]    0    1    0
[3,]    0    0    1
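For comparison, base R can produce the same inverse without an additional package by using solve(); a minimal sketch:

t(M)             # transpose of M
M %*% solve(M)   # also returns the 3x3 identity matrix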
Data Frames
Similar to the concept of matrices, data frames provide a structure for storing and accessing several variables
of possibly different data types. In fact, as the is.data.frame() function indicates, a data frame was
created by the read.csv() function at the beginning of the chapter.

# import a CSV file of the total annual sales for each customer
sales <- read.csv("c:/data/yearly_sales.csv")
is.data.frame(sales)             # returns TRUE

As seen earlier, the variables stored in the data frame can be easily accessed using the $ notation. The
following R code illustrates that in this example, each variable is a vector with the exception of gender,
which was, by a read.csv() default, imported as a factor. Discussed in detail later in this section, a factor
denotes a categorical variable, typically with a few finite levels such as "F" and "M" in the case of gender.

length(sales$num_of_orders)      # returns 10000 (number of customers)
is.vector(sales$cust_id)         # returns TRUE
is.vector(sales$sales_total)     # returns TRUE
is.vector(sales$num_of_orders)   # returns TRUE
is.vector(sales$gender)          # returns FALSE
is.factor(sales$gender)          # returns TRUE

Because of their flexibility to handle many data types, data frames are the preferred input format for
many of the modeling functions available in R. The following use of the str() function provides the
structure of the sales data frame. This function identifies the integer and numeric (double) data types,
the factor variables and levels, as well as the first few values for each variable.

str(sales)   # display structure of the data frame object

'data.frame':   10000 obs. of  4 variables:
 $ cust_id      : int  100001 100002 100003 100004 100005 100006 ...
 $ sales_total  : num  800.6 217.5 74.6 498.6 723.1 ...
 $ num_of_orders: int  3 3 2 3 4 2 2 2 2 2 ...
 $ gender       : Factor w/ 2 levels "F","M": 1 1 2 2 1 1 2 2 1 2 ...
In the simplest sense, data frames are lists of variables of the same length. A subset of the data frame
can be retrieved through subsetting operators. R's subsetting operators are powerful in that they allow
one to express complex operations in a succinct fashion and easily retrieve a subset of the dataset.

# extract the fourth column of the sales data frame
sales[, 4]
# extract the gender column of the sales data frame
sales$gender
# retrieve the first two rows of the data frame
sales[1:2,]
# retrieve the first, third, and fourth columns
sales[,c(1,3,4)]
# retrieve both the cust_id and the sales_total columns
sales[,c("cust_id", "sales_total")]
# retrieve all the records whose gender is female
sales[sales$gender=="F",]
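Row and column conditions can also be combined in a single subsetting expression, for example:

# female customers with more than $500 in annual sales,
# keeping only the cust_id and sales_total columns
sales[sales$gender=="F" & sales$sales_total>500, c("cust_id", "sales_total")]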
The following R code shows that the class of the sales variable is a data frame. However, the type of
the sales variable is a list. A list is a collection of objects that can be of various types, including other lists.
class(sales)
[1] "data.frame"
typeof(sales)
[1] "list"
Lists
Lists can contain any type of objects, including other lists. Using the vector v and the matrix M created in
earlier examples, the following R code creates assortment, a list of different object types.
# build an assorted list of a string, a numeric, a list, a vector,
# and a matrix
housing <- list("own", "rent")
assortment <- list("football", 7.5, housing, v, M)
assortment

[[1]]
[1] "football"

[[2]]
[1] 7.5

[[3]]
[[3]][[1]]
[1] "own"

[[3]][[2]]
[1] "rent"

[[4]]
[1] 1 2 3 4 5

[[5]]
     [,1] [,2] [,3]
[1,]    1    5    3
[2,]    3    0    3
[3,]    3    4    3
In displaying the contents of assortment, the use of the double brackets, [[]], is of particular
importance. As the following R code illustrates, the use of the single set of brackets only accesses an item
in the list, not its content.

# examine the fifth object, M, in the list
class(assortment[5])        # returns "list"
length(assortment[5])       # returns 1
class(assortment[[5]])      # returns "matrix"
length(assortment[[5]])     # returns 9 (for the 3x3 matrix)

As presented earlier in the data frame discussion, the str() function offers details about the structure
of a list.

str(assortment)
List of 5
 $ : chr "football"
 $ : num 7.5
 $ :List of 2
  ..$ : chr "own"
  ..$ : chr "rent"
 $ : int [1:5] 1 2 3 4 5
 $ : num [1:3, 1:3] 1 3 3 5 0 4 3 3 3
Factors
Factors were briefly introduced during the discussion of the gender variable in the data frame sales.
In this case, gender could assume one of two levels: F or M. Factors can be ordered or not ordered. In the
case of gender, the levels are not ordered.
class(sales$gender)         # returns "factor"
is.ordered(sales$gender)    # returns FALSE
Included with the ggplot2 package, the diamonds data frame contains three ordered factors.
Examining the cut factor, there are five levels in order of improving cut: Fair, Good, Very Good, Premium,
and Ideal. Thus, sales$gender contains nominal data, and diamonds$cut contains ordinal data.
head(sales$gender)    # display first six values and the levels
F F M M F F
Levels: F M

library(ggplot2)
data(diamonds)        # load the data frame into the R workspace
str(diamonds)
'data.frame':   53940 obs. of  10 variables:
 $ carat  : num  0.23 0.21 0.23 0.29 0.31 0.24 0.24 0.26 0.22 ...
 $ cut    : Ord.factor w/ 5 levels "Fair"<"Good"<..: 5 4 2 4 2 3 ...
 $ color  : Ord.factor w/ 7 levels "D"<"E"<"F"<"G"<..: 2 2 2 6 7 7 ...
 $ clarity: Ord.factor w/ 8 levels "I1"<"SI2"<"SI1"<..: 2 3 5 4 2 ...
 $ depth  : num  61.5 59.8 56.9 62.4 63.3 62.8 62.3 61.9 65.1 59.4 ...
 $ table  : num  55 61 65 58 58 57 57 55 61 61 ...
 $ price  : int  326 326 327 334 335 336 336 337 337 338 ...
 $ x      : num  3.95 3.89 4.05 4.2 4.34 3.94 3.95 4.07 3.87 4 ...
 $ y      : num  3.98 3.84 4.07 4.23 4.35 3.96 3.98 4.11 3.78 4.05 ...
 $ z      : num  2.43 2.31 2.31 2.63 2.75 2.48 2.47 2.53 2.49 2.39 ...
head(diamonds$cut) # display first six values and the levels
[1] Ideal     Premium   Good      Premium   Good      Very Good
Levels: Fair < Good < Very Good < Premium < Ideal
Suppose it is decided to categorize sales$sales_total into three groups (small, medium,
and big) according to the amount of the sales with the following code. These groupings are the basis for
the new ordinal factor, spender, with levels {small, medium, big}.

# build an empty character vector of the same length as sales
sales_group <- vector(mode="character",
                      length=length(sales$sales_total))

# group the customers according to the sales amount
sales_group[sales$sales_total<100] <- "small"
sales_group[sales$sales_total>=100 & sales$sales_total<500] <- "medium"
sales_group[sales$sales_total>=500] <- "big"

# create and add the ordered factor to the sales data frame
spender <- factor(sales_group, levels=c("small", "medium", "big"),
                  ordered = TRUE)
sales <- cbind(sales,spender)

str(sales$spender)
 Ord.factor w/ 3 levels "small"<"medium"<..: 3 2 1 2 3 1 1 1 2 1 ...

head(sales$spender)
[1] big    medium small  medium big    small
Levels: small < medium < big
The cbind() function is used to combine variables column-wise. The rbind() function is used
to combine datasets row-wise. The use of factors is important in several R statistical modeling functions,
such as analysis of variance, aov(), presented later in this chapter, and the use of contingency tables,
discussed next.
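A minimal sketch of rbind(), stacking a few rows of the sales data frame back together:

first_two <- sales[1:2, ]                          # first two customers
last_two  <- sales[(nrow(sales)-1):nrow(sales), ]  # last two customers
four_rows <- rbind(first_two, last_two)            # combine them row-wise
nrow(four_rows)                                    # returns 4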
Contingency Tables
In R, table refers to a class of objects used to store the observed counts across the factors for a given dataset.
Such a table is commonly referred to as a contingency table and is the basis for performing a statistical
test on the independence of the factors used to build the table. The following R code builds a contingency
table based on the sales$gender and sales$ spender factors.
# build a contingency table based on the gender and spender factors
sales_table <- table(sales$gender,sales$spender)
sales_table

    small medium  big
  F  1726   2746  563
  M  1656   2723  586

class(sales_table)     # returns "table"
typeof(sales_table)    # returns "integer"
dim(sales_table)       # returns 2 3

# performs a chi-squared test
summary(sales_table)
Number of cases in table: 10000
Number of factors: 2
Test for independence of all factors:
        Chisq = 1.516, df = 2, p-value = 0.4686
Based on the observed counts in the table, the summary() function performs a chi-squared test
on the independence of the two factors. Because the reported p-value is greater than 0.05, the assumed
independence of the two factors is not rejected. Hypothesis testing and p-values are covered in more detail
later in this chapter. Next, applying descriptive statistics in R is examined.
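The same test can also be requested directly with the chisq.test() function; a minimal sketch:

# chi-squared test of independence on the gender-by-spender table;
# it should report the same statistic and p-value as summary(sales_table)
chisq.test(sales_table)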
3.1.4 Descriptive Statistics
It has already been shown that the summary() function provides several descriptive statistics, such as
the mean and median, about a variable such as the sales data frame. The results now include the counts
for the three levels of the spender variable based on the earlier examples involving factors.
summary(sales)
    cust_id        sales_total      num_of_orders    gender     spender
 Min.   :100001   Min.   :  30.02   Min.   : 1.000   F:5035   small :3382
 1st Qu.:102501   1st Qu.:  80.29   1st Qu.: 2.000   M:4965   medium:5469
 Median :105001   Median : 151.65   Median : 2.000            big   :1149
 Mean   :105001   Mean   : 249.46   Mean   : 2.428
 3rd Qu.:107500   3rd Qu.: 295.50   3rd Qu.: 3.000
 Max.   :110000   Max.   :7606.09   Max.   :22.000
FIGURE 3-8 Age distribution of bank account holders (histogram with Age on the horizontal axis and Frequency on the vertical axis)
If the age data is in a vector called age, the graph can be created with the following R script:

hist(age, breaks=100, main="Age Distribution of Account Holders",
     xlab="Age", ylab="Frequency", col="gray")

The figure shows that the median age of the account holders is around 40. A few accounts with account
holder age less than 10 are unusual but plausible. These could be custodial accounts or college savings
accounts set up by the parents of young children. These accounts should be retained for future analyses.
However, the left side of the graph shows a huge spike of customers who are zero years old or have
negative ages. This is likely to be evidence of missing data. One possible explanation is that the null age
values could have been replaced by 0 or negative values during the data input. Such an occurrence may
be caused by entering age in a text box that only allows numbers and does not accept empty values. Or it
might be caused by transferring data among several systems that have different definitions for null values
(such as NULL, NA, 0, -1, or -2). Therefore, data cleansing needs to be performed over the accounts with
abnormal age values. Analysts should take a closer look at the records to decide if the missing data should
be eliminated or if an appropriate age value can be determined using other available information for each
of the accounts.
In R, the is.na() function provides tests for missing values. The following example creates a vector
x where the fourth value is not available (NA). The is.na() function returns TRUE at each NA value
and FALSE otherwise.

x <- c(1, 2, 3, NA, 4)
is.na(x)
[1] FALSE FALSE FALSE  TRUE FALSE

Some arithmetic functions, such as mean(), applied to data containing missing values can yield an
NA result. To prevent this, set the na.rm parameter to TRUE to remove the missing value during the
function's execution.

mean(x)
[1] NA
mean(x, na.rm=TRUE)
[1] 2.5

The na.exclude() function returns the object with incomplete cases removed.

DF <- data.frame(x = c(1, 2, 3), y = c(10, 20, NA))
DF
  x  y
1 1 10
2 2 20
3 3 NA

DF1 <- na.exclude(DF)
DF1
  x  y
1 1 10
2 2 20
Account holders older than 100 may be due to bad data caused by typos. Another possibility is that these
accounts may have been passed down to the heirs of the original account holders without being updated.
In this case, one needs to further examine the data and conduct data cleansing if necessary. The dirty data
could be simply removed or filtered out with an age threshold for future analyses. If removing records is
not an option, the analysts can look for patterns within the data and develop a set of heuristics to attack
the problem of dirty data. For example, wrong age values could be replaced with approximation based
on the nearest neighbor-the record that is the most similar to the record in question based on analyzing
the differences in all the other variables besides age.
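A minimal sketch of the first step of such cleansing, assuming the account holder data sits in a data frame called account with an age column (both names are assumptions):

# treat zero and negative ages as missing rather than as real values
account$age[account$age <= 0] <- NA

# count how many records now require imputation or removal
sum(is.na(account$age))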
Figure 3-9 presents another example of dirty data. The distribution shown here corresponds to the age
of mortgages in a bank's home loan portfolio. The mortgage age is calculated by subtracting the origina-
tion date of the loan from the current date. The vertical axis corresponds to the number of mortgages at
each mortgage age.
FIGURE 3-9 Portfolio Distribution, Years Since Origination (histogram of the number of mortgages at each mortgage age)

FIGURE 3-11 (a) Histogram and (b) Density plot of household income
Figure 3-11(b) shows a density plot of the logarithm of household income values, which emphasizes
the distribution. The income distribution is concentrated in the center portion of the graph. The code to
generate the two plots in Figure 3-11 is provided next. The rug() function creates a one-dimensional
density plot on the bottom of the graph to emphasize the distribution of the observations.

# randomly generate 4000 observations from the log normal distribution
income <- rlnorm(4000, meanlog = 4, sdlog = 0.7)
summary(income)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  4.301  33.720  54.970  70.320  88.800 659.800

income <- 1000*income
summary(income)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   4301   33720   54970   70320   88800  659800

# plot the histogram
hist(income, breaks=500, xlab="Income", main="Histogram of Income")

# density plot
plot(density(log10(income), adjust=0.5),
     main="Distribution of Income (log10 scale)")
# add rug to the density plot
rug(log10(income))
In the data preparation phase of the Data Analytics Lifecycle, the data range and distribution can be
obtained. If the data is skewed, viewing the logarithm of the data (if it's all positive) can help detect struc-
tures that might otherwise be overlooked in a graph with a regular, nonlogarithmic scale.
When preparing the data, one should look for signs of dirty data, as explained in the previous section.
Examining if the data is unimodal or multimodal will give an idea of how many distinct populations with
different behavior patterns might be mixed into the overall population. Many modeling techniques assume
that the data follows a normal distribution. Therefore, it is important to know if the available dataset can
match that assumption before applying any of those modeling techniques.
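One quick way to eyeball that assumption is a normal quantile-quantile plot; a minimal sketch on the log-transformed income generated earlier:

# points close to the reference line suggest approximate normality
qqnorm(log10(income), main = "Q-Q Plot of log10(income)")
qqline(log10(income))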
Consider a density plot of diamond prices (in USD). Figure 3-12(a) contains two density plots for pre-
mium and ideal cuts of diamonds. The group of premium cuts is shown in red, and the group of ideal cuts
is shown in blue. The range of diamond prices is wide-in this case ranging from around $300 to almost
$20,000. Extreme values are typical of monetary data such as income, customer value, tax liabilities, and
bank account sizes.
Figure 3-12(b) shows more detail of the diamond prices than Figure 3-12(a) by taking the logarithm. The
two humps in the premium cut represent two distinct groups of diamond prices: One group centers around
log10(price) = 2.9 (where the price is about $794), and the other centers around log10(price) = 3.7 (where the
price is about $5,012). The ideal cut contains three humps, centering around 2.9, 3.3, and 3.7 respectively.
The R script to generate the plots in Figure 3-12 is shown next. The diamonds dataset comes with
the ggplot2 package.
library("ggplot2")
data(diamonds) # load the diamonds dataset from ggplot2
# Only keep the premium and ideal cuts of diamonds
niceDiamonds <- diamonds[diamonds$cut=="Premium" |
                         diamonds$cut=="Ideal",]

summary(niceDiamonds$cut)
     Fair      Good Very Good   Premium     Ideal
        0         0         0     13791     21551

FIGURE 3-18 Scatterplot matrix of the iris dataset attributes, including Petal.Width (pairs plot titled "Fisher's Iris Dataset")
Consider the scatterplot from the first row and third column of Figure 3-18, where sepal length is com-
pared against petal length. The horizontal axis is the petal length, and the vertical axis is the sepal length.
The scatterplot shows that versicolor and virginica share similar sepal and petal lengths, although the latter
has longer petals. The petal lengths of all setosa are about the same, and the petal lengths are remarkably
shorter than the other two species. The scatterplot shows that for versicolor and virginica, sepal length
grows linearly with the petal length.
The R code for generating the scatterplot matrix is provided next.

# define the colors
colors <- c("red", "green", "blue")

# draw the plot matrix
pairs(iris[1:4], main = "Fisher's Iris Dataset",
      pch = 21, bg = colors[unclass(iris$Species)])

# set graphical parameter to clip plotting to the figure region
par(xpd = TRUE)

# add legend
legend(0.2, 0.02, horiz = TRUE, as.vector(unique(iris$Species)),
       fill = colors, bty = "n")
The vector colors defines the color scheme for the plot. It could be changed to something like
colors <- c("gray50", "white", "black") to make the scatterplots grayscale.
Analyzing a Variable over Time
Visualizing a variable over time is the same as visualizing any pair of variables, but in this case the goal is
to identify time-specific patterns.
Figure 3-19 plots the monthly total numbers of international airline passengers (in thousands) from
January 1949 to December 1960. Enter plot(AirPassengers) in the R console to obtain a similar
graph. The plot shows that, for each year, a large peak occurs mid-year around July and August, and a small
peak happens around the end of the year, possibly due to the holidays. Such a phenomenon is referred to
as a seasonality effect.
FIGURE 3-19 Airline passenger counts from 1949 to 1960
Additionally, the overall trend is that the number of air passengers steadily increased from 1949 to
1960. Chapter 8, "Advanced Analytical Theory and Methods: Time Series Analysis," discusses the analysis
of such datasets in greater detail.
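As a preview of that kind of analysis, base R can split the series into trend, seasonal, and random components; a minimal sketch:

# classical decomposition of the monthly airline passenger series
plot(decompose(AirPassengers))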
3.2.5 Data Exploration Versus Presentation
Using visualization for data exploration is different from presenting results to stakeholders. Not every type
of plot is suitable for all audiences. Most of the plots presented earlier try to detail the data as clearly as pos-
sible for data scientists to identify structures and relationships. These graphs are more technical in nature
and are better suited to technical audiences such as data scientists. Nontechnical stakeholders, however,
generally prefer simple, clear graphics that focus on the message rather than the data.
Figure 3-20 shows the density plot on the distribution of account values from a bank. The data has been
converted to the log10 scale. The plot includes a rug on the bottom to show the distribution of the variable.
This graph is more suitable for data scientists and business analysts because it provides information that
can be relevant to the downstream analysis. The graph shows that the transformed account values follow
an approximate normal distribution, in the range from $100 to $10,000,000. The median account value is
approximately $30,000 (10^4.5), with the majority of the accounts between $1,000 (10^3) and $1,000,000 (10^6).

FIGURE 3-20 Density plots are better to show to data scientists (density plot titled "Distribution of Account Values (log10 scale)", N = 5000, bandwidth = 0.05759)
Density plots are fairly technical, and they contain so much information that they would be difficult to
explain to less technical stakeholders. For example, it would be challenging to explain why the account
values are in the log10 scale, and such information is not relevant to stakeholders. The same message can
be conveyed by partitioning the data into log-like bins and presenting it as a histogram. As can be seen in
Figure 3-21, the bulk of the accounts are in the $1,000-1,000,000 range, with the peak concentration in the
$10-50K range, extending to $500K. This portrayal gives the stakeholders a better sense of the customer
base than the density plot shown in Figure 3-20.

Note that the bin sizes should be carefully chosen to avoid distortion of the data. In this example, the bins
in Figure 3-21 are chosen based on observations from the density plot in Figure 3-20. Without the density
plot, the peak concentration might be just due to the somewhat arbitrary appearing choices for the bin sizes.

This simple example addresses the different needs of two groups of audiences: analysts and stakehold-
ers. Chapter 12, "The Endgame, or Putting It All Together," further discusses the best practices of delivering
presentations to these two groups.
Following is the R code to generate the plots in Figure 3-20 and Figure 3-21.
# Generate random log normal income data
income = rlnorm(5000, meanlog=log(40000), sdlog=log(5))

# Part I: Create the density plot
plot(density(log10(income), adjust=0.5),
     main="Distribution of Account Values (log10 scale)")
# Add rug to the density plot
rug(log10(income))

# Part II: Create the histogram with "log-like" bins
breaks = c(0, 1000, 5000, 10000, 50000, 100000, 5e5, 1e6, 2e7)

# assign each income to a bin
bins = cut(income, breaks, include.lowest=T,
           labels = c("< 1K", "1-5K", "5-10K", "10-50K",
                      "50-100K", "100-500K", "500K-1M", "> 1M"))

# plot the binned counts as a histogram
plot(bins, main = "Distribution of Account Values",
     xlab = "Account value ($ USD)",
     ylab = "Number of Accounts", col="blue")

FIGURE 3-21 Histograms are better to show to stakeholders (bar chart titled "Distribution of Account Values" with bins from < 1K to > 1M on the horizontal axis)
3.3 Statistical Methods for Evaluation
Visualization is useful for data exploration and presentation, but statistics is crucial because it may exist
throughout the entire Data Analytics Lifecycle. Statistical techniques are used during the initial data explo-
ration and data preparation, model building, evaluation of the final models, and assessment of how the
new models improve the situation when deployed in the field. In particular, statistics can help answer the
following questions for data analytics:
• Model Building and Planning
• What are the best input variables for the model?
• Can the model predict the outcome given the input?
• Model Evaluation
• Is the model accurate?
• Does the model perform better than an obvious guess?
• Does the model perform better than another candidate model?
• Model Deployment
• Is the prediction sound?
• Does the model have the desired effect (such as reducing the cost)?
This section discusses some useful statistical tools that may answer these questions.
3.3.1 Hypothesis Testing
When comparing populations, such as testing or evaluating the difference of the means from two samples
of data (Figure 3-22), a common technique to assess the difference or the significance of the difference is
hypothesis testing.

FIGURE 3-22 Distributions of two samples of data

The basic concept of hypothesis testing is to form an assertion and test it with data. When perform-
ing hypothesis tests, the common assumption is that there is no difference between two samples. This
assumption is used as the default position for building the test or conducting a scientific experiment.
Statisticians refer to this as the null hypothesis (H0). The alternative hypothesis (HA) is that there is a
difference between two samples. For example, if the task is to identify the effect of drug A compared to
drug B on patients, the null hypothesis and alternative hypothesis would be this.
• H0: Drug A and drug B have the same effect on patients.
• HA: Drug A has a greater effect than drug B on patients.
If the task is to identify whether advertising Campaign C is effective on reducing customer churn, the
null hypothesis and alternative hypothesis would be as follows.
• H0: Campaign C does not reduce customer churn better than the current campaign method.
• HA: Campaign C does reduce customer churn better than the current campaign.
It is important to state the null hypothesis and alternative hypothesis, because misstating them is likely
to undermine the subsequent steps of the hypothesis testing process. A hypothesis test leads to either
rejecting the null hypothesis in favor of the alternative or not rejecting the null hypothesis.
Table 3-5 includes some examples of null and alternative hypotheses that should be answered during
the analytic lifecycle.
TABLE 3-5 Example Null Hypotheses and Alternative Hypotheses

Accuracy Forecast
  Null Hypothesis: Model X does not predict better than the existing model.
  Alternative Hypothesis: Model X predicts better than the existing model.

Recommendation Engine
  Null Hypothesis: Algorithm Y does not produce better recommendations than the current algorithm being used.
  Alternative Hypothesis: Algorithm Y produces better recommendations than the current algorithm being used.

Regression Modeling
  Null Hypothesis: This variable does not affect the outcome because its coefficient is zero.
  Alternative Hypothesis: This variable affects the outcome because its coefficient is not zero.
Once a model is built over the training data, it needs to be evaluated over the testing data to see if the
proposed model predicts better than the existing model currently being used. The null hypothesis is that
the proposed model does not predict better than the existing model. The alternative hypothesis is that
the proposed model indeed predicts better than the existing model. In accuracy forecast, the null model
could be that the sales of the next month are the same as the prior month. The hypothesis test needs to
evaluate if the proposed model provides a better prediction. Take a recommendation engine as an example.
The null hypothesis could be that the new algorithm does not produce better recommendations than the
current algorithm being deployed. The alternative hypothesis is that the new algorithm produces better
recommendations than the old algorithm.
When evaluating a model, sometimes it needs to be determined if a given input variable improves the
model. In regression analysis (Chapter 6), for example, this is the same as asking if the regression coefficient
for a variable is zero. The null hypothesis is that the coefficient is zero, which means the variable does not
have an impact on the outcome. The alternative hypothesis is that the coefficient is nonzero, which means
the variable does have an impact on the outcome.
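This is exactly the test reported in the coefficient table of the earlier linear regression output on the sales data; a minimal sketch of retrieving it:

# each row tests the null hypothesis that the corresponding coefficient is zero;
# a small Pr(>|t|) value argues for rejecting that null
summary(results)$coefficients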
A common hypothesis test is to compare the means of two populations. Two such hypothesis tests are
discussed in Section 3.3.2.
3.3.2 Difference of Means
Hypothesis testing is a common approach to draw inferences on whether or not the two populations,
denoted pop1 and pop2, are different from each other. This section provides two hypothesis tests to com-
pare the means of the respective populations based on samples randomly drawn from each population.
Specifically, the two hypothesis tests in this section consider the following null and alternative hypotheses.
• H0: μ1 = μ2
• HA: μ1 ≠ μ2
The μ1 and μ2 denote the population means of pop1 and pop2, respectively.

The basic testing approach is to compare the observed sample means, X̄1 and X̄2, corresponding to each
population. If the values of X̄1 and X̄2 are approximately equal to each other, the distributions of X̄1 and
X̄2 overlap substantially (Figure 3-23), and the null hypothesis is supported. A large observed difference
between the sample means indicates that the null hypothesis should be rejected. Formally, the difference
in means can be tested using Student's t-test or Welch's t-test.

FIGURE 3-23 Overlap of the two distributions is large if X̄1 ≈ X̄2
Student’s t-test
Student's t-test assumes that distributions of the two populations have equal but unknown
variances. Suppose n1 and n2 samples are randomly and independently selected from two populations,
pop1 and pop2, respectively. If each population is normally distributed with the same mean (μ1 = μ2) and
with the same variance, then T (the t-statistic), given in Equation 3-1, follows a t-distribution with
n1 + n2 - 2 degrees of freedom (df).

T = \frac{\bar{X}_1 - \bar{X}_2}{S_p \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}
\quad \text{where} \quad
S_p^2 = \frac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{n_1 + n_2 - 2}    (3-1)
The shape of the t-distribution is similar to the normal distribution. In fact, as the degrees of freedom
approaches 30 or more, the t-distribution is nearly identical to the normal distribution. Because the numera-
tor of T is the difference of the sample means, if the observed value of T is far enough from zero such that
the probability of observing such a value of T is unlikely, one would reject the null hypothesis that the
population means are equal. Thus, for a small probability, say α = 0.05, T* is determined such that
P(|T| ≥ T*) = 0.05. After the samples are collected and the observed value of T is calculated according to
Equation 3-1, the null hypothesis (μ1 = μ2) is rejected if |T| ≥ T*.

In hypothesis testing, in general, the small probability, α, is known as the significance level of the test.
The significance level of the test is the probability of rejecting the null hypothesis, when the null hypothesis
is actually TRUE. In other words, for α = 0.05, if the means from the two populations are truly equal, then
in repeated random sampling, the observed magnitude of T would only exceed T* 5% of the time.

In the following R code example, 10 and 20 observations are randomly selected from two normally distributed
populations and assigned to the variables x and y, respectively. The two populations have a mean of 100 and 105,
respectively, and a standard deviation equal to 5. Student's t-test is then conducted to determine if the
obtained random samples support the rejection of the null hypothesis.
# generate random observations from the two populations
x <- rnorm(10, mean=100, sd=5)    # normal distribution centered at 100
y <- rnorm(20, mean=105, sd=5)    # normal distribution centered at 105

t.test(x, y, var.equal=TRUE)      # run the Student's t-test

        Two Sample t-test

data:  x and y
t = -1.7828, df = 28, p-value = 0.08547
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -6.1611557  0.4271393
sample estimates:
mean of x mean of y
 102.2136  105.0806
From the R output, the observed value of T is t = -1.7828. The negative sign is due to the fact that the
sample mean of x is less than the sample mean of y. Using the qt() function in R, a T value of 2.0484
corresponds to a 0.05 significance level.

# obtain t value for a two-sided test at a 0.05 significance level
qt(p=0.05/2, df=28, lower.tail= FALSE)
[1] 2.048407

Because the magnitude of the observed T statistic is less than the T value corresponding to the 0.05
significance level (|-1.7828| < 2.0484), the null hypothesis is not rejected. Because the alternative hypothesis
is that the means are not equal (μ1 ≠ μ2), the possibilities of both μ1 > μ2 and μ1 < μ2 need to be considered.
This form of Student's t-test is known as a two-sided hypothesis test, and it is necessary for the sum of the
probabilities under both tails of the t-distribution to equal the significance level. It is customary to evenly
divide the significance level between both tails. So, p = 0.05/2 = 0.025 was used in the qt() function to
obtain the appropriate t-value.

To simplify the comparison of the t-test results to the significance level, the R output includes a quantity
known as the p-value. In the preceding example, the p-value is 0.08547, which is the sum of P(T ≤ -1.7828)
and P(T ≥ 1.7828). Figure 3-24 illustrates the t-statistic for the area under the tail of a t-distribution. The -t
and t are the observed values of the t-statistic. In the R output, t = 1.7828. The left shaded area corresponds
to the P(T ≤ -1.7828), and the right shaded area corresponds to the P(T ≥ 1.7828).
FIGURE 3-24 Area under the tails (shaded) of a Student's t-distribution

In the R output, for a significance level of 0.05, the null hypothesis would not be rejected because the
likelihood of a T value of magnitude 1.7828 or greater would occur at higher probability than 0.05. However,
based on the p-value, if the significance level was chosen to be 0.10, instead of 0.05, the null hypothesis
would be rejected. In general, the p-value offers the probability of observing such a sample result given
the null hypothesis is TRUE.
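The reported p-value can be reproduced from the t-statistic and the degrees of freedom using the t-distribution function pt(); a minimal sketch:

# two-sided p-value for t = -1.7828 with 28 degrees of freedom
2 * pt(-abs(-1.7828), df = 28)    # approximately 0.08547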
A key assumption in using Student's t-test is that the population variances are equal. In the previous
example, the t.test() function call includes var.equal=TRUE to specify that equality of the vari-
ances should be assumed. If that assumption is not appropriate, then Welch's t-test should be used.
Welch's t-test
When the equal population variance assumption is not justified in performing Student's t-test for the difference
of means, Welch's t-test [14] can be used based on T expressed in Equation 3-2.

T = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{S_1^2/n_1 + S_2^2/n_2}}    (3-2)

where \bar{X}_i, S_i^2, and n_i correspond to the i-th sample mean, sample variance, and sample size. Notice that
Welch's t-test uses the sample variance (S_i^2) for each population instead of the pooled sample variance.
In Welch's test, under the remaining assumptions of random samples from two normal populations with
the same mean, the distribution of T is approximated by the t-distribution. The following R code performs
Welch's t-test on the same set of data analyzed in the earlier Student's t-test example.
t.test(x, y, var.equal=FALSE) # run the Welch's t-test
        Welch Two Sample t-test

data:  x and y
t = -1.6596, df = 15.118, p-value = 0.1176
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-6.546629 0.812663
sample estimates:
mean of x mean of y
102.2136 105.0806
In this particular example of using Welch's t-test, the p-value is 0.1176, which is greater than the p-value
of 0.08547 observed in the Student's t-test example. In this case, the null hypothesis would not be rejected
at a 0.10 or 0.05 significance level.
It should be noted that the degrees of freedom calculation is not as straightforward as in the Student's
t-test. In fact, the degrees of freedom calculation often results in a non-integer value, as in this example.
The degrees of freedom for Welch's t-test is defined in Equation 3-3.
df = \frac{\left(\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}\right)^2}{\frac{(S_1^2/n_1)^2}{n_1 - 1} + \frac{(S_2^2/n_2)^2}{n_2 - 1}}    (3-3)
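As a minimal sketch, Equation 3-3 can be evaluated directly from the two samples; the code below simply computes the pieces of the formula with var() and length(), and the result can be compared with the df value reported by t.test(x, y, var.equal=FALSE).

# Welch degrees of freedom (Equation 3-3) computed from the samples x and y
v1 <- var(x)/length(x)
v2 <- var(y)/length(y)
(v1 + v2)^2 / ( v1^2/(length(x)-1) + v2^2/(length(y)-1) )   # about 15.1 for this data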
In both the Student's and Welch's t-test examples, the R output provides 95% confidence intervals on
the difference of the means. In both examples, the confidence intervals straddle zero. Regardless of the
result of the hypothesis test, the confidence interval provides an interval estimate of the difference of the
population means, not just a point estimate.
A confidence interval is an interval estimate of a population parameter or characteristic based on
sample data. A confidence interval is used to indicate the uncertainty of a point estimate. If x̄ is the estimate
of some unknown population mean μ, the confidence interval provides an idea of how close x̄ is to the
unknown μ. For example, a 95% confidence interval for a population mean straddles the TRUE, but
unknown, mean 95% of the time. Consider Figure 3-25 as an example. Assume the confidence level is 95%.
If the task is to estimate the mean of an unknown value μ in a normal distribution with known standard
deviation σ and the estimate based on n observations is x̄, then the interval x̄ ± 2σ/√n straddles the unknown
value of μ with about a 95% chance. If one takes 100 different samples and computes the 95% confidence
interval for the mean, 95 of the 100 confidence intervals will be expected to straddle the population
mean μ.
FIGURE 3-25 A 95% confidence interval straddling the unknown population mean μ
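A short simulation can make the 95% interpretation concrete. This sketch is not from the text: it draws 100 samples of size n = 25 from a normal population with an assumed μ = 100 and σ = 5 and counts how many of the resulting intervals straddle μ.

# count how many of 100 confidence intervals straddle the true mean
mu <- 100; sigma <- 5; n <- 25
covered <- replicate(100, {
    s <- rnorm(n, mean=mu, sd=sigma)
    lower <- mean(s) - 2*sigma/sqrt(n)   # interval x-bar +/- 2*sigma/sqrt(n) from the text
    upper <- mean(s) + 2*sigma/sqrt(n)
    lower <= mu & mu <= upper
})
sum(covered)    # typically close to 95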
Confidence intervals appear again in Section 3.3.6 on ANOVA. Returning to the discussion of hypothesis
testing, a key assumption in both the Student's and Welch's t-test is that the relevant population
attribute is normally distributed. For non-normally distributed data, it is sometimes possible to transform
the collected data to approximate a normal distribution. For example, taking the logarithm of a dataset
can often transform skewed data to a dataset that is at least symmetric around its mean. However, if such
transformations are ineffective, there are tests like the Wilcoxon rank-sum test that can be applied to see
if two population distributions are different.
3.3.3 Wilcoxon Rank-Sum Test
A t-test represents a parametric test in that it makes assumptions about the population distributions from
which the samples are drawn. If the populations cannot be assumed or transformed to follow a normal
distribution, a nonparametric test can be used. The Wilcoxon rank-sum test [15] is a nonparametric
hypothesis test that checks whether two populations are identically distributed. Assuming the two populations
are identically distributed, one would expect that the ordering of any sampled observations would
be evenly intermixed among themselves. For example, in ordering the observations, one would not expect
to see a large number of observations from one population grouped together, especially at the beginning
or the end of the ordering.
Let the two populations again be pop1 and pop2, with independently drawn random samples of size n1 and
n2, respectively. The total number of observations is then N = n1 + n2. The first step of the Wilcoxon test is
to rank the set of observations from the two groups as if they came from one large group. The smallest
observation receives a rank of 1, the second smallest observation receives a rank of 2, and so on with the
largest observation being assigned the rank of N. Ties among the observations receive a rank equal to
the average of the ranks they span. The test uses ranks instead of numerical outcomes to avoid specific
assumptions about the shape of the distribution.
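As a small illustration (the values are chosen here purely for illustration), R's rank() function performs exactly this ranking and assigns tied observations the average of the ranks they span.

# tied values receive the average of the ranks they span
rank(c(7, 2, 5, 5, 9))    # returns 4.0 1.0 2.5 2.5 5.0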
After ranking all the observations, the assigned ranks are summed for at least one population's sample.
If the distribution of pop1 is shifted to the right of the other distribution, the rank-sum corresponding to
pop1's sample should be larger than the rank-sum of pop2. The Wilcoxon rank-sum test determines the
significance of the observed rank-sums. The following R code performs the test on the same dataset used
for the previous t-test.
wilcox.test(x, y, conf.int = TRUE)

        Wilcoxon rank sum test

data:  x and y
p-value = 0.04903
The wilcox.test() function ranks the observations, determines the respective rank-sums corresponding
to each population's sample, and then determines the probability of rank-sums of such magnitude
being observed assuming that the population distributions are identical. In this example, the
probability is given by the p-value of 0.04903. Thus, the null hypothesis would be rejected at a 0.05
significance level. The reader is cautioned against interpreting that one hypothesis test is clearly better than
another test based solely on the examples given in this section.
Because the Wilcoxon test does not assume anything about the population distribution, it is generally
considered more robust than the t-test. In other words, there are fewer assumptions to violate. However,
when it is reasonable to assume that the data is normally distributed, Student's or Welch's t-test is an
appropriate hypothesis test to consider.
3.3.4 Type I and Type II Errors
A hypothesis test may result in two types of errors, depending on whether the test accepts or rejects the
null hypothesis. These two errors are known as type I and type II errors.
• A type I error is the rejection of the null hypothesis when the null hypothesis is TRUE. The probability of the type I error is denoted by the Greek letter α.
• A type II error is the acceptance of a null hypothesis when the null hypothesis is FALSE. The probability of the type II error is denoted by the Greek letter β.
Table 3-6 lists the four possible states of a hypothesis test, including the two types of errors.

TABLE 3-6 Type I and Type II Error

                     H0 is true          H0 is false
H0 is accepted       Correct outcome     Type II error
H0 is rejected       Type I error        Correct outcome
The significance level, as mentioned in the Student's t-test discussion, is equivalent to the type I error.
For a significance level such as α = 0.05, if the null hypothesis (μ1 = μ2) is TRUE, there is a 5% chance that
the observed T value based on the sample data will be large enough to reject the null hypothesis. By selecting
an appropriate significance level, the probability of committing a type I error can be defined before
any data is collected or analyzed.
The probability of committing a Type II error is somewhat more difficult to determine. If two population
means are truly not equal, the probability of committing a type II error will depend on how far apart the
means truly are. To reduce the probability of a type II error to a reasonable level, it is often necessary to
increase the sample size. This topic is addressed in the next section.
3.3.5 Power and Sample Size
The power of a test is the probability of correctly rejecting the null hypothesis. It is denoted by 1 - β, where
β is the probability of a type II error. Because the power of a test improves as the sample size increases,
power is used to determine the necessary sample size. In the difference of means, the power of a hypothesis
test depends on the true difference of the population means. In other words, for a fixed significance level,
a larger sample size is required to detect a smaller difference in the means. In general, the magnitude of
the difference is known as the effect size. As the sample size becomes larger, it is easier to detect a given
effect size, δ, as illustrated in Figure 3-26.
FIGURE 3-26 A larger sample size better identifies a fixed effect size
With a large enough sample size, almost any effect size can appear statistically significant. However, a
very small effect size may be useless in a practical sense. It is important to consider an appropriate effect
size for the problem at hand.
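The base R function power.t.test() makes this trade-off concrete. The values below are illustrative assumptions rather than numbers taken from the earlier examples; each call solves for the per-group sample size needed to detect a given effect size at a 0.05 significance level with 80% power.

# sample size needed to detect a difference of 5 when sd = 5 (illustrative values)
power.t.test(delta=5, sd=5, sig.level=0.05, power=0.80)

# a smaller effect size, with everything else fixed, requires a larger sample size
power.t.test(delta=2, sd=5, sig.level=0.05, power=0.80)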
3.3.6 ANOVA
The hypothesis tests presented in the previous sections are good for analyzing means between two popu-
lations. But what if there are more than two populations? Consider an example of testing the impact of
nutrition and exercise on 60 candidates between age 18 and 50. The candidates are randomly split into six
groups, each assigned with a different weight loss strategy, and the goal is to determine which strategy
is the most effective.
o Group 1 only eats junk food.
o Group 2 only eats healthy food.
o Group 3 eats junk food and does cardio exercise every other day.
o Group 4 eats healthy food and does cardio exercise every other day.
o Group 5 eats junk food and does both cardio and strength training every other day.
o Group 6 eats healthy food and does both cardio and strength training every other day.
Multiple t-tests could be applied to each pair of weight loss strategies. In this example, the weight loss
of Group 1 is compared with the weight loss of Group 2, 3, 4, 5, or 6. Similarly, the weight loss of Group 2 is
compared with that of the next 4 groups. Therefore, a total of 15 t-tests would be performed.
However, multiple t-tests may not perform well on several populations for two reasons. First, because the
number of t-tests increases as the number of groups increases, analysis using multiple t-tests becomes
cognitively more difficult. Second, by doing a greater number of analyses, the probability of committing
at least one type I error somewhere in the analysis greatly increases.
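To see why the second concern matters, consider a rough sketch in which each of the 15 t-tests is conducted at a 0.05 significance level and, for simplicity, the tests are treated as independent. The chance of at least one type I error is then far larger than 0.05.

# probability of at least one type I error across 15 independent tests at alpha = 0.05
1 - (1 - 0.05)^15    # approximately 0.54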
Analysis of Variance (ANOVA) is designed to address these issues. ANOVA is a generalization of the
hypothesis testing of the difference of two population means. ANOVA tests if any of the population means
differ from the other population means. The null hypothesis of ANOVA is that all the population means are
equal. The alternative hypothesis is that at least one pair of the population means is not equal. In other
words,
o H0: μ1 = μ2 = ... = μn
o HA: μi ≠ μj for at least one pair of i, j
As seen in Section 3.3.2, "Difference of Means," each population is assumed to be normally distributed
with the same variance.
The first thing to calculate for the ANOVA is the test statistic. Essentially, the goal is to test whether the
clusters formed by each population are more tightly grouped than the spread across all the populations.
Let the total number of populations be k. The total number of samples N is randomly split into the k
groups. The number of samples in the i-th group is denoted as n_i, and the mean of the group is \bar{X}_i, where
i ∈ [1, k]. The mean of all the samples is denoted as \bar{X}_0.
The between-groups mean sum of squares, S_B^2, is an estimate of the between-groups variance. It
measures how the population means vary with respect to the grand mean, or the mean spread across all
the populations. Formally, this is presented as shown in Equation 3-4.

S_B^2 = \frac{1}{k-1} \sum_{i=1}^{k} n_i \left(\bar{X}_i - \bar{X}_0\right)^2    (3-4)
The within-group mean sum of squares, S_W^2, is an estimate of the within-group variance. It quantifies
the spread of values within groups. Formally, this is presented as shown in Equation 3-5.

S_W^2 = \frac{1}{N-k} \sum_{i=1}^{k} \sum_{j=1}^{n_i} \left(x_{ij} - \bar{X}_i\right)^2    (3-5)
If S_B^2 is much larger than S_W^2, then some of the population means are different from each other.
The F-test statistic is defined as the ratio of the between-groups mean sum of squares and the within-group
mean sum of squares. Formally, this is presented as shown in Equation 3-6.

F = \frac{S_B^2}{S_W^2}    (3-6)

The F-test statistic in ANOVA can be thought of as a measure of how different the means are relative to
the variability within each group. The larger the observed F-test statistic, the greater the likelihood that
the differences between the means are due to something other than chance alone. The F-test statistic
is used to test the hypothesis that the observed effects are not due to chance, that is, whether the means are
significantly different from one another.
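As a minimal sketch of Equations 3-4 through 3-6 (using a small made-up dataset rather than the retail example that follows), the two mean sums of squares and the F-test statistic can be computed directly; the result can be checked against the output of R's aov() function.

# three small illustrative groups
g1 <- c(5, 6, 7); g2 <- c(8, 9, 10); g3 <- c(12, 13, 14)
groups <- list(g1, g2, g3)
k <- length(groups); N <- sum(sapply(groups, length))
grand <- mean(unlist(groups))

# Equation 3-4: between-groups mean sum of squares
SB2 <- sum(sapply(groups, function(g) length(g) * (mean(g) - grand)^2)) / (k - 1)
# Equation 3-5: within-group mean sum of squares
SW2 <- sum(sapply(groups, function(g) sum((g - mean(g))^2))) / (N - k)
# Equation 3-6: F-test statistic
SB2 / SW2    # equals 37 for these made-up values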
Consider an example in which every customer who visits a retail website gets one of two promotional offers
or gets no promotion at all. The goal is to see if making the promotional offers makes a difference. ANOVA
could be used, and the null hypothesis is that neither promotion makes a difference. The code that follows
randomly generates a total of 500 observations of purchase sizes on three different offer options.
offers <- sample(c("offer1", "offer2", "nopromo"), size=500, replace=T)
# Simulated 500 observations of purchase sizes on the 3 offer options
purchasesize <- ifelse(offers=="offer1", rnorm(500, mean=50, sd=30),
                  ifelse(offers=="offer2", rnorm(500, mean=55, sd=30),
                         rnorm(500, mean=40, sd=30)))
# create a data frame of offer option and purchase size
offertest <- data.frame(offer=as.factor(offers),
                        purchase_amt=purchasesize)
The summary of the offertest data frame shows that 170 offer1, 161 offer2, and 169
nopromo (no promotion) offers have been made. It also shows the range of purchase size (purchase_amt)
for each of the three offer options.

# display a summary of offertest where offer="offer1"
summary(offertest[offertest$offer=="offer1",])
      offer       purchase_amt
 nopromo:  0     Min.   :  4.521
 offer1 :170     1st Qu.: 58.158
 offer2 :  0     Median : 76.944
                 Mean   : 81.936
                 3rd Qu.:104.959
                 Max.   :130.507
# display a summary of offertest where offer="offer2"
summary(offertest[offertest$offer=="offer2",])
      offer       purchase_amt
 nopromo:  0     Min.   : 14.04
 offer1 :  0     1st Qu.: 69.46
 offer2 :161     Median : 90.20
                 Mean   : 89.09
                 3rd Qu.:107.48
                 Max.   :154.33
# display a summary of offertest where offer="nopromo"
summary(offertest[offertest$offer=="nopromo",])
      offer       purchase_amt
 nopromo:169     Min.   :-27.00
 offer1 :  0     1st Qu.: 20.22
 offer2 :  0
# fit ANOVA test
model <- aov(purchase_amt ~ offers, data=offertest)
summary(model)
             Df Sum Sq Mean Sq F value Pr(>F)
offers        2 225222  112611   130.6 <2e-16 ***
Residuals   497 428470     862
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The output also includes S_B^2 (112,611), S_W^2 (862), the F-test statistic (130.6), and the p-value (< 2e-16).
The F-test statistic is much greater than 1, and the p-value is much smaller than the 0.05 significance level.
Thus, the null hypothesis that the means are equal should be rejected.
However, the result does not show whether offer1 is different from offer2, which requires additional
tests. The TukeyHSD() function implements Tukey's Honest Significant Difference (HSD) on all
pair-wise tests for difference of means.
TukeyHSD(model)
  Tukey multiple comparisons of means
    95% family-wise confidence level

Fit: aov(formula = purchase_amt ~ offers, data = offertest)

$offers
                     diff        lwr      upr     p adj
offer1-nopromo  40.961437 33.4638483 48.45903 0.0000000
offer2-nopromo  48.120286 40.5189446 55.72163 0.0000000
offer2-offer1    7.158849 -0.4315769 14.74928 0.0692895
The result includes p-values of pair-wise comparisons of the three offer options. The p-values for
offer1-nopromo and offer2-nopromo are equal to 0, smaller than the significance level 0.05.
This suggests that both offer1 and offer2 are significantly different from nopromo. A p-value of
0.0692895 for offer2 against offer1 is greater than the significance level 0.05. This suggests that
offer2 is not significantly different from offer1.
Because only the influence of one factor (offers) was examined, the presented ANOVA is known as one-way
ANOVA. If the goal is to analyze two factors, such as offers and day of week, that would be a two-way
ANOVA [16]. If the goal is to model more than one outcome variable, then multivariate ANOVA (or MANOVA)
could be used.
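As a minimal sketch of the two-way case (the day_of_week factor below is a hypothetical addition and is not part of the earlier offer example), a second factor can simply be added to the aov() formula.

# hypothetical second factor: day of the week on which the purchase occurred
offertest$day_of_week <- factor(sample(c("Mon", "Tue", "Wed", "Thu", "Fri"),
                                       size=nrow(offertest), replace=TRUE))
# two-way ANOVA on offer option and day of week
model2 <- aov(purchase_amt ~ offers + day_of_week, data=offertest)
summary(model2)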
Summary
R is a popular package and programming language for data exploration, analytics, and visualization. As an
introduction to R, this chapter covers the R GUI, data I/O, attribute and data types, and descriptive statistics.
This chapter also discusses how to use R to perform exploratory data analysis, including the discovery of
dirty data, visualization of one or more variables, and customization of visualization for different audiences.
Finally, the chapter introduces some basic statistical methods. The first statistical method presented in the
chapter is hypothesis testing. The Student's t-test and Welch's t-test are included as two example hypothesis
tests designed for testing the difference of means. Other statistical methods and tools presented in this
chapter include confidence intervals, Wilcoxon rank-sum test, type I and II errors, effect size, and ANOVA.
Exercises
1. How many levels does fdata contain in the following R code?
data = c(1,2,2,3,1,2,3,3,1,2,3,3,1)
fdata = factor(data)
2. Two vectors, v1 and v2, are created with the following R code:
v1 <- 1:5
v2 <- 6:2
What are the results of cbind(v1, v2) and rbind(v1, v2)?
3. What R command(s) would you use to remove null values from a dataset?
4. What R command can be used to install an additional R package?
5. What R function is used to encode a vector as a category?
6. What is a rug plot used for in a density plot?
7. An online retailer wants to study the purchase behaviors of its customers. Figure 3-27 shows the density plot of the purchase sizes (in dollars). What would be your recommendation to enhance the plot to detect more structures that otherwise might be missed?
FIGURE 3-27 Density plot of purchase size (dollars)
8. How many sections does a box-and-whisker plot divide the data into? What are these sections?
9. What attributes are correlated according to Figure 3-18? How would you describe their relationships?
10. What function can be used to fit a nonlinear line to the data?
11. If a graph of data is skewed and all the data is positive, what mathematical technique may be used to help detect structures that might otherwise be overlooked?
12. What is a type I error? What is a type II error? Is one always more serious than the other? Why?
13. Suppose everyone who visits a retail website gets one promotional offer or no promotion at all. We want to see if making a promotional offer makes a difference. What statistical method would you recommend for this analysis?
14. You are analyzing two normally distributed populations, and your null hypothesis is that the mean μ1 of the first population is equal to the mean μ2 of the second. Assume the significance level is set at 0.05. If the observed p-value is 4.33e-05, what will be your decision regarding the null hypothesis?
Bibliography
[1] The R Project for Statistical Computing, "R Licenses." [Online]. Available: http://www.r-project.org/Licenses/. [Accessed 10 December 2013].
[2] The R Project for Statistical Computing, "The Comprehensive R Archive Network." [Online]. Available: http://cran.r-project.org/. [Accessed 10 December 2013].
[3] J. Fox and M. Bouchet-Valat, "The R Commander: A Basic-Statistics GUI for R," CRAN. [Online]. Available: http://socserv.mcmaster.ca/jfox/Misc/Rcmdr/. [Accessed 11 December 2013].
[4] G. Williams, M. V. Culp, E. Cox, A. Nolan, D. White, D. Medri, and A. Waljee, "Rattle: Graphical User Interface for Data Mining in R," CRAN. [Online]. Available: http://cran.r-project.org/web/packages/rattle/index.html. [Accessed 12 December 2013].
[5] RStudio, "RStudio IDE." [Online]. Available: http://www.rstudio.com/ide/. [Accessed 11 December 2013].
[6] R Special Interest Group on Databases (R-SIG-DB), "DBI: R Database Interface," CRAN. [Online]. Available: http://cran.r-project.org/web/packages/DBI/index.html. [Accessed 13 December 2013].
[7] B. Ripley, "RODBC: ODBC Database Access," CRAN. [Online]. Available: http://cran.r-project.org/web/packages/RODBC/index.html. [Accessed 13 December 2013].
[8] S. S. Stevens, "On the Theory of Scales of Measurement," Science, vol. 103, no. 2684, pp. 677-680, 1946.
[9] D. C. Hoaglin, F. Mosteller, and J. W. Tukey, Understanding Robust and Exploratory Data Analysis, New York: Wiley, 1983.
[10] F. J. Anscombe, "Graphs in Statistical Analysis," The American Statistician, vol. 27, no. 1, pp. 17-21, 1973.
[11] H. Wickham, "ggplot2," 2013. [Online]. Available: http://ggplot2.org/. [Accessed 8 January 2014].
[12] W. S. Cleveland, Visualizing Data, Lafayette, IN: Hobart Press, 1993.
[13] R. A. Fisher, "The Use of Multiple Measurements in Taxonomic Problems," Annals of Eugenics, vol. 7, no. 2, pp. 179-188, 1936.
[14] B. L. Welch, "The Generalization of 'Student's' Problem When Several Different Population Variances Are Involved," Biometrika, vol. 34, no. 1-2, pp. 28-35, 1947.
[15] F. Wilcoxon, "Individual Comparisons by Ranking Methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80-83, 1945.
[16] J. J. Faraway, "Practical Regression and Anova Using R," July 2002. [Online]. Available: http://cran.r-project.org/doc/contrib/Faraway-PRA.pdf. [Accessed 22 January 2014].
ADVANCED ANALYTICAL THEORY AND METHODS: CLUSTERING
Building upon the introduction to R presented in Chapter 3, "Review of Basic Data Analytic Methods Using R,"
Chapter 4, "Advanced Analytical Theory and Methods: Clustering" through Chapter 9, "Advanced Analytical
Theory and Methods: Text Analysis" describe several commonly used analytical methods that may be
considered for the Model Planning and Execution phases (Phases 3 and 4) of the Data Analytics Lifecycle.
This chapter considers clustering techniques and algorithms.
4.1 Overview of Clustering
In general, clustering is the use of unsupervised techniques for grouping similar objects. In machine
learning, unsupervised refers to the problem of finding hidden structure within unlabeled data. Clustering
techniques are unsupervised in the sense that the data scientist does not determine, in advance, the labels
to apply to the clusters. The structure of the data describes the objects of interest and determines how best
to group the objects. For example, based on customers' personal income, it is straightforward to divide
the customers into three groups depending on arbitrarily selected values. The customers could be divided
into three groups as follows:
• Earn less than $10,000
• Earn between $10,000 and $99,999
• Earn $100,000 or more
In this case, the income levels were chosen somewhat subjectively based on easy-to-communicate
points of delineation. However, such groupings do not indicate a natural affinity of the customers within
each group. In other words, there is no inherent reason to believe that the customer making $90,000 will
behave any differently than the customer making $110,000. As additional dimensions are introduced by
adding more variables about the customers, the task of finding meaningful groupings becomes more
complex. For instance, suppose variables such as age, years of education, household size, and annual
purchase expenditures were considered along with the personal income variable. What are the naturally
occurring groupings of customers? This is the type of question that clustering analysis can help answer.
Clustering is a method often used for exploratory analysis of the data. In clustering, there are no predictions
made. Rather, clustering methods find the similarities between objects according to the object
attributes and group the similar objects into clusters. Clustering techniques are utilized in marketing,
economics, and various branches of science. A popular clustering method is k-means.
4.2 K-means
Given a collection of objects each with n measurable attributes, k-means [1] is an analytical technique that,
for a chosen value of k, identifies k clusters of objects based on the objects' proximity to the center of the k
groups. The center is determined as the arithmetic average (mean) of each cluster's n-dimensional vector of
attributes. This section describes the algorithm to determine the k means as well as how best to apply this
technique to several use cases. Figure 4-1 illustrates three clusters of objects with two attributes. Each object
in the dataset is represented by a small dot color-coded to the closest large dot, the mean of the cluster.
FIGURE 4-1 Possible k-means clusters for k=3
4.2.1 Use Cases
Clustering is often used as a lead-in to classification. Once the clusters are identified, labels can be applied
to each cluster to classify each group based on its characteristics. Classification is covered in more detail in
Chapter 7, "Advanced Analytical Theory and Methods: Classification." Clustering is primarily an exploratory
technique to discover hidden structures of the data, possibly as a prelude to more focused analysis or
decision processes. Some specific applications of k-means are image processing, medical, and customer
segmentation.
Image Processing
Video is one example of the growing volumes of unstructured data being collected. Within each frame of
a video, k-means analysis can be used to identify objects in the video. For each frame, the task is to determine
which pixels are most similar to each other. The attributes of each pixel can include brightness, color,
and location, the x and y coordinates in the frame. With security video images, for example, successive
frames are examined to identify any changes to the clusters. These newly identified clusters may indicate
unauthorized access to a facility.
Medical
Patient attributes such as age, height, weight, systolic and diastolic blood pressures, cholesterol level, and
other attributes can identify naturally occurring clusters. These clusters could be used to target individuals
for specific preventive measures or clinical trial participation. Clustering, in general, is useful in biology for
the classification of plants and animals as well as in the field of human genetics.
Customer Segmentation
Marketing and sales groups use k-means to better identify customers who have similar behaviors and
spending patterns. For example, a wireless provider may look at the following customer attributes: monthly
bill, number of text messages, data volume consumed, minutes used during various daily periods, and
years as a customer. The wireless company could then look at the naturally occurring clusters and consider
tactics to increase sales or reduce the customer churn rate, the proportion of customers who end their
relationship with a particular company.
4.2.2 Overview of the Method
To illustrate the method to find k clusters from a collection of M objects with n attributes, the two-dimensional
case (n = 2) is examined. It is much easier to visualize the k-means method in two dimensions.
Later in the chapter, the two-dimension scenario is generalized to handle any number of attributes.
Because each object in this example has two attributes, it is useful to consider each object corresponding
to the point (x_i, y_i), where x_i and y_i denote the two attributes and i = 1, 2, ..., M. For a given cluster of
m points (m ≤ M), the point that corresponds to the cluster's mean is called a centroid. In mathematics, a
centroid refers to a point that corresponds to the center of mass for an object.
The k-means algorithm to find k clusters can be described in the following four steps.
1. Choose the value of k and the k initial guesses for the centroids.
In this example, k = 3, and the initial centroids are indicated by the points shaded in red, green,
and blue in Figure 4-2.
The k-means analysis will be conducted for k = 3. The process of identifying the appropriate value of k is
referred to as finding the "elbow" of the WSS curve.
km = kmeans(kmdata, 3, nstart=25)
km
K-means clustering with 3 clusters of sizes 158, 218, 244
Cluster means:
English Math Science
1 97.21519 93.37342 94.86076
2 73.22018 64.62844 65.84862
3 85.84426 79.68033 81.50820
Clustering vector:
  [1] 1 1 1 1 1 1 ...
[601] 3 2 2 3 1 1 3 3 3 2 2 3 2
Within cluster sum of squares by cluster:
[1] 6692.589 34806.339 22984.131
(between_SS / total_SS =  76.5 %)

Available components:
[1] "cluster"      "centers"      "totss"        "withinss"     "tot.withinss"
[6] "betweenss"    "size"         "iter"         "ifault"
The displayed contents of the variable km include the following:
o The location of the cluster means
o A clustering vector that defines the membership of each student to a corresponding cluster 1, 2, or 3
o The WSS of each cluster
o A list of all the available k-means components
The reader can find details on these components and using k-means in R by employing the help facility.
The reader may have wondered whether the k-means results stored in km are equivalent to the WSS
results obtained earlier in generating the plot in Figure 4-5. The following check verifies that the results
are indeed equivalent.
c(wss[3], sum(km$withinss))
[1] 64483.06 64483.06
In determining the value of k, the data scientist should visualize the data and assigned clusters. In the
following code, the ggplot2 package is used to visualize the identified student clusters and centroids.
# prepare the student data and clustering results for plotting
library(ggplot2)     # for ggplot()
library(gridExtra)   # for grid.arrange() and arrangeGrob()

df = as.data.frame(kmdata_orig[,2:4])
df$cluster = factor(km$cluster)
centers = as.data.frame(km$centers)

g1 = ggplot(data=df, aes(x=English, y=Math, color=cluster)) +
       geom_point() + theme(legend.position="right") +
       geom_point(data=centers,
                  aes(x=English, y=Math, color=as.factor(c(1,2,3))),
                  size=10, alpha=.3, show_guide=FALSE)

g2 = ggplot(data=df, aes(x=English, y=Science, color=cluster)) +
       geom_point() +
       geom_point(data=centers,
                  aes(x=English, y=Science, color=as.factor(c(1,2,3))),
                  size=10, alpha=.3, show_guide=FALSE)

g3 = ggplot(data=df, aes(x=Math, y=Science, color=cluster)) +
       geom_point() +
       geom_point(data=centers,
                  aes(x=Math, y=Science, color=as.factor(c(1,2,3))),
                  size=10, alpha=.3, show_guide=FALSE)

tmp = ggplot_gtable(ggplot_build(g1))
grid.arrange(arrangeGrob(g1 + theme(legend.position="none"),
                         g2 + theme(legend.position="none"),
                         g3 + theme(legend.position="none"),
                         main="High School Student Cluster Analysis",
                         ncol=1))
The resulting plots are provided in Figure 4-6. The large circles represent the location of the cluster
means provided earlier in the display of the km contents. The small dots represent the students corresponding
to the appropriate cluster by assigned color: red, blue, or green. In general, the plots indicate the three
clusters of students: the top academic students (red), the academically challenged students (green), and
the other students (blue) who fall somewhere between those two groups. The plots also highlight which
students may excel in one or two subject areas but struggle in other areas.
FIGURE 4-6 Plots of the identified student clusters ("High School Student Cluster Analysis": English, Math, and Science scores)
Assigning labels to the identified clusters is useful to communicate the results of an analysis. In a marketing
context, it is common to label a group of customers as frequent shoppers or big spenders. Such
designations are especially useful when communicating the clustering results to business users or executives.
It is better to describe the marketing plan for big spenders rather than Cluster #1.
4.2.4 Diagnostics
The heuristic using WSS can provide at least several possible k values to consider. When the number of
attributes is relatively small, a common approach to further refine the choice of k is to plot the data to
determine how distinct the identified clusters are from each other. In general, the following questions
should be considered.
• Are the clusters well separated from each other?
• Do any of the clusters have only a few points?
• Do any of the centroids appear to be too close to each other?
In the first case, ideally the plot would look like the one shown in Figure 4-7, when n = 2. The clusters
are well defined, with considerable space between the four identified clusters. However, in other cases,
such as Figure 4-8, the clusters may be close to each other, and the distinction may not be so obvious.
FIGURE 4-7 Example of distinct clusters
In such cases, it is important to apply some judgment on whether anything different will result by using
more clusters. For example, Figure 4-9 uses six clusters to describe the same dataset as used in Figure 4-8.
If using more clusters does not better distinguish the groups, it is almost certainly better to go with fewer
clusters.
FIGURE 4-8 Example of less obvious clusters
FIGURE 4-9 Six clusters applied to the points from Figure 4-8
4.2.5 Reasons to Choose and Cautions
K-means is a simple and straightforward method for defining clusters. Once clusters and their associated
centroids are identified, it is easy to assign new objects (for example, new customers) to a cluster based on
the object’s distance from the closest centroid. Because the method is unsupervised, using k-means helps
to eliminate subjectivity from the analysis.
Although k-means is considered an unsupervised method, there are still several decisions that the
practitioner must make:
o What object attributes should be included in the analysis?
o What unit of measure (for example, miles or kilometers) should be used for each attribute?
o Do the attributes need to be rescaled so that one attribute does not have a disproportionate effect on
the results?
o What other considerations might apply?
Object Attributes
Regarding which object attributes (for example, age and income) to use in the analysis, it is important
to understand what attributes will be known at the time a new object will be assigned to a cluster. For
example, information on existing customers’ satisfaction or purchase frequency may be available, but such
information may not be available for potential customers.
The Data Scientist may have a choice of a dozen or more attributes to use in the clustering analysis.
Whenever possible and based on the data, it is best to reduce the number of attributes to the extent pos-
sible. Too many attributes can minimize the impact of the most important variables. Also, the use of several
similar attributes can place too much importance on one type of attribute. For example, if five attributes
related to personal wealth are included in a clustering analysis, the wealth attributes dominate the analysis
and possibly mask the importance of other attributes, such as age.
When dealing with the problem of too many attributes, one useful approach is to identify any highly
correlated attributes and use only one or two of the correlated attributes in the clustering analysis. As
illustrated in Figure 4-10, a scatterplot matrix, as introduced in Chapter 3, is a useful tool to visualize the
pair-wise relationships between the attributes.
The strongest relationship is observed to be between Attribute3 and Attribute7. If the value
of one of these two attributes is known, it appears that the value of the other attribute is known with
near certainty. Other linear relationships are also identified in the plot. For example, consider the plot of
Attribute2 against Attribute3. If the value of Attribute2 is known, there is still a wide range of
possible values for Attribute3. Thus, greater consideration must be given prior to dropping one of these
attributes from the clustering analysis.
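A quick way to screen for such highly correlated attributes before clustering is to compute the pair-wise correlation matrix and plot a scatterplot matrix. The data below is simulated purely to mimic the pattern just described, with Attribute7 constructed as a near copy of Attribute3; the attribute names are reused only for illustration.

# simulated attribute table with two nearly redundant attributes
set.seed(1)
attr_df <- data.frame(Attribute2 = rnorm(100),
                      Attribute3 = rnorm(100))
attr_df$Attribute7 <- attr_df$Attribute3 + rnorm(100, sd=0.05)   # nearly a copy of Attribute3

round(cor(attr_df), 2)   # large absolute correlations flag nearly redundant attributes
pairs(attr_df)           # scatterplot matrix similar in spirit to Figure 4-10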
Another option to reduce the number of attributes is to combine several attributes into one measure.
For example, instead of using two attribute variables, one for Debt and one for Assets, a Debt to Asset ratio
could be used. This option also addresses the problem when the magnitude of an attribute is not of real
interest, but the relative magnitude is a more important measure.
FIGURE 4-10 Scatterplot matrix for seven attributes
Units of Measure
From a computational perspective, the k-means algorithm is somewhat indifferent to the units of measure
for a given attribute (for example, meters or centimeters for a patient’s height). However, the algorithm
will identify different clusters depending on the choice of the units of measure. For example, suppose that
k-means is used to cluster patients based on age in years and height in centimeters. For k=2, Figure 4-11
illustrates the two clusters that would be determined for a given dataset.
FIGURE 4-11 Clusters with height expressed in centimeters
But if the height was rescaled from centimeters to meters by dividing by 100, the resulting clusters
would be slightly different, as illustrated in Figure 4-12.
FIGURE 4-12 Clusters with height expressed in meters
When the height is expressed in meters, the magnitude of the ages dominates the distance calculation
between two points. The height attribute contributes at most the square of the difference between the
maximum height and the minimum height, or (2.0 - 0)^2 = 4, to the radicand, the number under the square
root symbol in the distance formula given in Equation 4-3. Age can contribute as much as (80 - 0)^2 = 6,400
to the radicand when measuring the distance.
Rescaling
Attributes that are expressed in dollars are common in clustering analyses and can differ in magnitude
from the other attributes. For example, if personal income is expressed in dollars and age is expressed in
years, the income attribute, often exceeding $10,000, can easily dominate the distance calculation with
ages typically less than 100 years.
Although some adjustments could be made by expressing the income in thousands of dollars (for
example, 10 for $10,000), a more straightforward method is to divide each attribute by the attribute's
standard deviation. The resulting attributes will each have a standard deviation equal to 1 and will be
without units. Returning to the age and height example, the standard deviations are 23.1 years and 36.4
cm, respectively. Dividing each attribute value by the appropriate standard deviation and performing the
k-means analysis yields the result shown in Figure 4-13.
FIGURE 4-13 Clusters with rescaled attributes
With the rescaled attributes for age and height, the borders of the resulting clusters now fall somewhere
between the two earlier clustering analyses. Such an occurrence is not surprising based on the magnitudes
of the attributes of the previous clustering attempts. Some practitioners also subtract the means of the
attributes to center the attributes around zero. However, this step is unnecessary because the distance
formula is only sensitive to the scale of the attribute, not its location.
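A minimal sketch of this rescaling in R follows; the patient data is simulated here purely for illustration, and scale() with a per-column standard deviation is just one convenient way to divide each attribute by its standard deviation before calling kmeans().

# simulated patient data (ages in years, heights in centimeters)
set.seed(1)
patients <- data.frame(age    = runif(200, min=0, max=80),
                       height = rnorm(200, mean=100, sd=36))

# divide each attribute by its standard deviation so neither dominates the distance
rescaled <- scale(patients, center=FALSE, scale=apply(patients, 2, sd))
km <- kmeans(rescaled, centers=2, nstart=25)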
In many statistical analyses, it is common to transform typically skewed data, such as income, with long
tails by taking the logarithm of the data. Such a transformation can also be applied in k-means, but the Data
Scientist needs to be aware of what effect this transformation will have. For example, if log10 of income
expressed in dollars is used, the practitioner is essentially stating that, from a clustering perspective, $1,000 is
as close to $10,000 as $10,000 is to $100,000 (because log10 1,000 = 3, log10 10,000 = 4, and log10 100,000 = 5).
In many cases, the skewness of the data may be the reason to perform the clustering analysis in the first place.
Additional Considerations
The k-means algorithm is sensitive to the starting positions of the initial centroids. Thus, it is important to
rerun the k-means analysis several times for a particular value of k to ensure the cluster results provide the
overall minimum WSS. As seen earlier, this task is accomplished in R by using the nstart option in the
kmeans() function call.
This chapter presented the use of the Euclidean distance function to assign the points to the closest centroids.
Other possible function choices include the cosine similarity and the Manhattan distance functions.
The cosine similarity function is often chosen to compare two documents based on the frequency of each
word that appears in each of the documents [2]. For two points, p and q, at (p_1, p_2, ..., p_n) and (q_1, q_2, ..., q_n),
respectively, the Manhattan distance, d_1, between p and q is expressed as shown in Equation 4-6.

d_1(p, q) = \sum_{i=1}^{n} \left| p_i - q_i \right|    (4-6)
The Manhattan distance function is analogous to the distance traveled by a car in a city, where the
streets are laid out in a rectangular grid (such as city blocks). In Euclidean distance, the measurement is
made in a straight line. Using Equation 4-6, the distance from (1, 1) to (4, 5) would be |1 - 4| + |1 - 5| = 7.
From an optimization perspective, if there is a need to use the Manhattan distance for a clustering analysis,
the median is a better choice for the centroid than use of the mean [2].
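A small sketch verifying the worked example above; the helper function manhattan() is introduced here only for illustration and is not part of any package.

# Manhattan distance between two points (Equation 4-6)
manhattan <- function(p, q) sum(abs(p - q))
manhattan(c(1, 1), c(4, 5))    # |1-4| + |1-5| = 7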
K-means clustering is applicable to objects that can be described by attributes that are numerical with
a meaningful distance measure. From Chapter 3, interval and ratio attribute types can certainly be used.
However, k-means does not handle categorical variables well. For example, suppose a clustering analysis
is to be conducted on new car sales. Among other attributes, such as the sale price, the color of the car is
considered important. Although one could assign numerical values to the color, such as red = 1, yellow
= 2, and green = 3, it is not useful to consider that yellow is as close to red as yellow is to green from a
clustering perspective. In such cases, it may be necessary to use an alternative clustering methodology.
Such methods are described in the next section.
4.3 Additional Algorithms
The k-means clustering method is easily applied to numeric data where the concept of distance can naturally
be applied. However, it may be necessary or desirable to use an alternative clustering algorithm. As discussed
at the end of the previous section, k-means does not handle categorical data. In such cases, k-modes [3] is a
commonly used method for clustering categorical data based on the number of differences in the respective
components of the attributes. For example, if each object has four attributes, the distance from (a, b, e, d)
to (d, d, d, d) is 3. In R, the function kmodes() is implemented in the klaR package.
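A minimal sketch of k-modes on made-up categorical data follows, assuming the klaR package is installed; the data layout mirrors the pattern used in the package documentation, and the choice of two modes is illustrative only.

# k-modes on small, made-up categorical data (requires the klaR package)
library(klaR)
set.seed(1)
toy <- rbind(matrix(rbinom(250, 2, 0.25), ncol=5),   # categories coded 0, 1, 2
             matrix(rbinom(250, 2, 0.75), ncol=5))
colnames(toy) <- c("a", "b", "c", "d", "e")
kmodes(toy, modes=2)    # cluster the rows into 2 groups by matching categories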
Because k-means and k-modes divide the entire dataset into distinct groups, both approaches are
considered partitioning methods. A third partitioning method is known as Partitioning around Medoids
(PAM) [4]. In general, a medoid is a representative object in a set of objects. In clustering, the medoids are
the objects in each cluster that minimize the sum of the distances from the medoid to the other objects
in the cluster. The advantage of using PAM is that the "center" of each cluster is an actual object in the
dataset. PAM is implemented in R by the pam() function included in the cluster R package. The fpc
R package includes a function pamk(), which uses the pam() function to find the optimal value for k.
Other clustering methods include hierarchical agglomerative clustering and density clustering methods.
In hierarchical agglomerative clustering, each object is initially placed in its own cluster. The clusters are
then combined with the most similar cluster. This process is repeated until one cluster, which includes all
the objects, exists. The R stats package includes the hclust() function for performing hierarchical
agglomerative clustering. In density-based clustering methods, the clusters are identified by the concentration
of points. The fpc R package includes a function, dbscan(), to perform density-based clustering
analysis. Density-based clustering can be useful to identify irregularly shaped clusters.
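The sketch below shows how these alternatives are typically invoked on a small numeric data frame, assuming the cluster and fpc packages are installed; the two-column dat object is simulated purely for illustration, and the eps and MinPts values are arbitrary.

library(cluster)   # pam()
library(fpc)       # pamk(), dbscan()

set.seed(1)
dat <- data.frame(x = c(rnorm(50, 0), rnorm(50, 5)),
                  y = c(rnorm(50, 0), rnorm(50, 5)))

pam(dat, k=2)                    # partitioning around medoids with k = 2
pamk(dat)                        # lets fpc search for a reasonable value of k
hc <- hclust(dist(dat))          # hierarchical agglomerative clustering
plot(hc)                         # dendrogram of the merge sequence
dbscan(dat, eps=1, MinPts=5)     # density-based clustering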
Summary
Clustering analysis groups similar objects based on the objects' attributes. Clustering is applied in areas
such as marketing, economics, biology, and medicine. This chapter presented a detailed explanation of the
k-means algorithm and its implementation in R. To use k-means properly, it is important to do the following:
• Properly scale the attribute values to prevent certain attributes from dominating the other attributes.
• Ensure that the concept of distance between the assigned values within an attribute is meaningful.
• Choose the number of clusters, k, such that the sum of the Within Sum of Squares (WSS) of the distances is reasonably minimized. A plot such as the example in Figure 4-5 can be helpful in this respect.
If k-means does not appear to be an appropriate clustering technique for a given dataset, then alternative
techniques such as k-modes or PAM should be considered.
Once the clusters are identified, it is often useful to label these clusters in some descriptive way. Especially
when dealing with upper management, these labels are useful to easily communicate the findings of the
clustering analysis. In clustering, the labels are not preassigned to each object. The labels are subjectively
assigned after the clusters have been identified. Chapter 7 considers several methods to perform the classification
of objects with predetermined labels. Clustering can be used with other analytical techniques,
such as regression. Linear regression and logistic regression are covered in Chapter 6, "Advanced Analytical
Theory and Methods: Regression."
Exercises
1. Using the age and height clustering example in section 4.2.5, algebraically illustrate the impact on the
measured distance when the height is expressed in meters rather than centimeters. Explain why different
clusters will result depending on the choice of units for the patient’s height.
2. Compare and contrast five clustering algorithms, assigned by the instructor or selected by the student.
3. Using the ruspini dataset provided with the cluster package in R, perform a k-means analysis.
Document the findings and justify the choice of k. Hint: use data(ruspini) to load the dataset into
the R workspace.
Bibliography
[1] J. MacQueen, "Some Methods for Classification and Analysis of Multivariate Observations," in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, 1967.
[2] P.-N. Tan, V. Kumar, and M. Steinbach, Introduction to Data Mining, Upper Saddle River, NJ: Pearson, 2013.
[3] Z. Huang, "A Fast Clustering Algorithm to Cluster Very Large Categorical Data Sets in Data Mining," 1997. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.134.83&rep=rep1&type=pdf. [Accessed 13 March 2014].
[4] L. Kaufman and P. J. Rousseeuw, "Partitioning Around Medoids (Program PAM)," in Finding Groups in Data: An Introduction to Cluster Analysis, Hoboken, NJ: John Wiley & Sons, Inc., 2008, pp. 68-125, Chapter 2.
ADVANCED ANALYTICAL THEORY AND METHODS: ASSOCIATION RULES
This chapter discusses an unsupervised learning method called association rules. This is a descriptive, not
predictive, method often used to discover interesting relationships hidden in a large dataset. The disclosed
relationships can be represented as rules or frequent itemsets. Association rules are commonly used for
mining transactions in databases.
Here are some possible questions that association rules can answer:
• Which products tend to be purchased together?
• Of those customers who are similar to this person, what products do they tend to buy?
• Of those customers who have purchased this product, what other similar products do they tend to
view or purchase?
5.1 Overview
Figure 5-1 shows the general logic behind association rules. Given a large collection of transactions (depicted
as three stacks of receipts in the figure), in which each transaction consists of one or more items, association
rules go through the items being purchased to see what items are frequently bought together and to
discover a list of rules that describe the purchasing behavior. The goal with association rules is to discover
interesting relationships among the items. (The relationship occurs too frequently to be random and is
meaningful from a business perspective, which may or may not be obvious.) The relationships that are interesting
depend both on the business context and the nature of the algorithm being used for the discovery.
FIGURE 5-1 The general logic behind association rules. Example rules shown: Cereal → Milk (90%), Bread → Milk (40%), Milk → Cereal (23%), Milk → Apples (10%), Wine → Diapers (2%)
Each of the uncovered rules is in the form X → Y, meaning that when item X is observed, item Y is
also observed. In this case, the left-hand side (LHS) of the rule is X, and the right-hand side (RHS) of the
rule is Y.
Using association rules, patterns can be discovered from the data that allow the association rule algorithms
to disclose rules of related product purchases. The uncovered rules are listed on the right side of
Figure 5-1. The first three rules suggest that when cereal is purchased, 90% of the time milk is purchased
also. When bread is purchased, 40% of the time milk is purchased also. When milk is purchased, 23% of
the time cereal is also purchased.
In the example of a retail store, association rules are used over transactions that consist of one or
more items. In fact, because of their popularity in mining customer transactions, association rules are
sometimes referred to as market basket analysis. Each transaction can be viewed as the shopping
basket of a customer that contains one or more items. This is also known as an item set. The term itemset
refers to a collection of items or individual entities that contain some kind of relationship. This could be
a set of retail items purchased together in one transaction, a set of hyperlinks clicked on by one user in
a single session, or a set of tasks done in one day. An item set containing k items is called a k-itemset.
This chapter uses curly braces like {item 1, item 2, ..., item k} to denote a k-itemset.
Computation of the association rules is typically based on itemsets.
The research of association rules started as early as the 1960s. Early research by Hajek et al. [1] intro-
duced many of the key concepts and approaches of association rule learning, but it focused on the
mathematical representation rather than the algorithm. The framework of association rule learning was
brought into the database community by Agrawal et al. [2] in the early 1990s for discovering regularities
between products in a large database of customer transactions recorded by point-of-sale systems in
supermarkets. In later years, it expanded to web contexts, such as mining path traversal patterns [3] and
usage patterns [4] to facilitate organization of web pages.
This chapter chooses Apriori as the main focus of the discussion of association rules. Apriori [5] is
one of the earliest and the most fundamental algorithms for generating association rules. It pioneered
the use of support for pruning the itemsets and controlling the exponential growth of candidate item-
sets. Shorter candidate item sets, which are known to be frequent item sets, are combined and pruned
to generate longer frequent itemsets. This approach eliminates the need for all possible item sets to be
enumerated within the algorithm, since the number of all possible itemsets can become exponentially
large.
One major component of Apriori is support. Given an itemset L, the support [2] of L is the
percentage of transactions that contain L. For example, if 80% of all transactions contain itemset
{bread}, then the support of {bread} is 0.8. Similarly, if 60% of all transactions contain itemset
{bread, butter}, then the support of {bread, butter} is 0.6.
A frequent itemset has items that appear together often enough. The term “often enough” is for-
mally defined with a minimum support criterion. If the minimum support is set at 0.5, any itemset can
be considered a frequent item set if at least 50% of the transactions contain this itemset. In other words,
the support of a frequent itemset should be greater than or equal to the minimum support. For the
previous example, both {bread} and {bread, butter} are considered frequent item sets at the
minimum support 0.5. If the minimum support is 0.7, only {bread} is considered a frequent itemset.
If an item set is considered frequent, then any subset of the frequent item set must also be frequent.
This is referred to as the Apriori property (or downward closure property). For example, if 60% of the
transactions contain {bread, jam}, then at least 60% of all the transactions will contain {bread} or
{jam}. In other words, when the support of {bread, jam} is 0.6, the support of {bread} or { jam}
is at least 0.6. Figure 5-2 illustrates how the A priori property works. If item set { B, c, D} is frequent, then
all the subsets of this itemset, shaded, must also be frequent itemsets. The Apriori property provides the
basis for the Apriori algorithm.
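To make the support calculation and the downward closure property concrete, the following short R sketch computes itemset support over a small hand-built transaction list; the transactions and the support() helper are illustrative assumptions, not part of the chapter's grocery example.

# Illustrative sketch: itemset support over a hand-built list of transactions
transactions <- list(c("bread", "butter", "jam"),
                     c("bread", "butter"),
                     c("bread", "milk"),
                     c("bread", "butter", "milk"),
                     c("milk", "jam"))
# support of an itemset is the fraction of transactions containing all of its items
support <- function(itemset, trans) {
  mean(sapply(trans, function(t) all(itemset %in% t)))
}
support(c("bread"), transactions)            # 0.8
support(c("bread", "butter"), transactions)  # 0.6
# Apriori property: a subset of a frequent itemset is at least as frequent
support(c("butter"), transactions)           # 0.6, never less than 0.6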
FIGURE 5-2 Itemset {A, B, C, D} and its subsets
5.2 Apriori Algorithm
The Apriori algorithm takes a bottom-up iterative approach to uncovering the frequent itemsets by first
determining all the possible items (or 1-itemsets, for example {bread}, {eggs}, {milk}, ...) and
then identifying which among them are frequent.
Assuming the minimum support threshold (or the minimum support criterion) is set at 0.5, the algorithm
identifies and retains those itemsets that appear in at least 50% of all transactions and discards (or
"prunes away") the itemsets that have a support less than 0.5 or appear in fewer than 50% of the
transactions. The word prune is used as it would be in gardening, where unwanted branches of a bush are
clipped away.
In the next iteration of the Apriori algorithm, the identified frequent 1-itemsets are paired into
2-itemsets (for example, {bread, eggs}, {bread, milk}, {eggs, milk}, ...) and again evaluated
to identify the frequent 2-itemsets among them.
At each iteration, the algorithm checks whether the support criterion can be met; if it can, the algorithm
grows the itemset, repeating the process until it runs out of support or until the itemsets reach a predefined
length. The Apriori algorithm [5] is given next. Let variable C_k be the set of candidate k-itemsets and variable
L_k be the set of k-itemsets that satisfy the minimum support. Given a transaction database D, a minimum
support threshold δ, and an optional parameter N indicating the maximum length an itemset could reach,
Apriori iteratively computes the frequent itemsets L_{k+1} based on L_k.
1   Apriori (D, δ, N)
2   k ← 1
3   L_k ← {1-itemsets that satisfy minimum support δ}
4   while L_k ≠ ∅
5     if ∄N ∨ (∃N ∧ k < N)
6       C_{k+1} ← candidate itemsets generated from L_k
7       for each transaction t in database D do
8         increment the counts of C_{k+1} contained in t
9       L_{k+1} ← candidates in C_{k+1} that satisfy minimum support δ
10      k ← k + 1
11  return ∪_k L_k
The first step of the Apriori algorithm is to identify the frequent itemsets by starting with each item in the
transactions that meets the predefined minimum support threshold δ. These itemsets are 1-itemsets
denoted as L_1, as each 1-itemset contains only one item. Next, the algorithm grows the itemsets by joining
L_1 onto itself to form new, grown 2-itemsets denoted as L_2 and determines the support of each 2-itemset
in L_2. Those itemsets that do not meet the minimum support threshold δ are pruned away. The growing
and pruning process is repeated until no itemsets meet the minimum support threshold. Optionally, a
threshold N can be set up to specify the maximum number of items the itemset can reach or the maximum
number of iterations of the algorithm. Once completed, the output of the Apriori algorithm is the collection
of all the frequent k-itemsets.
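The join-and-prune loop described above can be sketched in a few lines of base R. The snippet below is only a conceptual illustration (the arules implementation used later in this chapter is far more efficient); the toy transaction list and the 0.5 threshold are assumptions made for the example.

# Conceptual sketch of one Apriori iteration: join frequent 1-itemsets into
# candidate 2-itemsets with combn(), then prune by minimum support
trans <- list(c("bread", "butter"), c("bread", "milk"),
              c("bread", "butter", "milk"), c("milk"))
min_sup <- 0.5
sup <- function(items) mean(sapply(trans, function(t) all(items %in% t)))

items <- unique(unlist(trans))
L1 <- items[sapply(items, sup) >= min_sup]       # frequent 1-itemsets
C2 <- combn(L1, 2, simplify = FALSE)             # candidate 2-itemsets (join step)
L2 <- Filter(function(c) sup(c) >= min_sup, C2)  # prune step
L2                                               # {bread, butter} and {bread, milk}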
Next, a collection of candidate rules is formed based on the frequent itemsets uncovered in the itera-
tive process described earlier. For example, a frequent itemset {milk, eggs} may suggest candidate
rules {milk} → {eggs} and {eggs} → {milk}.
5.3 Evaluation of Candidate Rules
Frequent itemsets from the previous section can form candidate rules such as X implies Y (X → Y). This
section discusses how measures such as confidence, lift, and leverage can help evaluate the appropriate-
ness of these candidate rules.
Confidence [2] is defined as the measure of certainty or trustworthiness associated with each discov-
ered rule. Mathematically, confidence is the percent of transactions that contain both X and Y out of all
the transactions that contain X (see Equation 5-1).

Confidence(X → Y) = Support(X ∧ Y) / Support(X)    (5-1)
For example, if {bread, eggs, milk} has a support of 0.15 and {bread, eggs} also has a support
of 0.15, the confidence of rule {bread, eggs} → {milk} is 1, which means 100% of the time a customer
buys bread and eggs, milk is bought as well. The rule is therefore correct for 100% of the transactions
containing bread and eggs.
A relationship may be thought of as interesting when the algorithm identifies the relationship with a
measure of confidence greater than or equal to a predefined threshold. This predefined threshold is called
the minimum confidence. A higher confidence indicates that the rule (X → Y) is more interesting or more
trustworthy, based on the sample dataset.
So far, this chapter has talked about two common measures that the Apriori algorithm uses: support
and confidence. All the rules can be ranked based on these two measures to filter out the uninteresting
rules and retain the interesting ones.
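Applying Equation 5-1 is simple arithmetic once the supports are known. The sketch below just restates the bread, eggs, and milk example in R, using the support values assumed in the text.

# Confidence of {bread, eggs} -> {milk}, using the supports given in the text
support_bread_eggs_milk <- 0.15   # Support(X AND Y)
support_bread_eggs      <- 0.15   # Support(X)
support_bread_eggs_milk / support_bread_eggs   # confidence = 1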
Even though confidence can identify the interesting rules from all the candidate rules, it comes with
a problem. Given rules in the form of X → Y, confidence considers only the antecedent (X) and the co-
occurrence of X and Y; it does not take the consequent of the rule (Y) into account. Therefore, confidence
cannot tell if a rule contains true implication of the relationship or if the rule is purely coincidental. X and
Y can be statistically independent yet still receive a high confidence score. Other measures such as lift [6]
and leverage [7] are designed to address this issue.
Lift measures how many times more often X and Y occur together than expected if they are statisti-
cally independent of each other. Lift is a measure [6] of how X and Y are really related rather than coinci-
dentally happening together (see Equation 5-2).

Lift(X → Y) = Support(X ∧ Y) / (Support(X) * Support(Y))    (5-2)
Lift is 1 if X and Y are statistically independent of each other. In contrast, a lift of X → Y greater than 1
indicates that there is some usefulness to the rule. A larger value of lift suggests a greater strength of the
association between X and Y.
Assuming 1,000 transactions, with {milk, eggs} appearing in 300 of them, {milk} appearing in
500, and {eggs} appearing in 400, then Lift(milk → eggs) = 0.3/(0.5*0.4) = 1.5. If {bread} appears
in 400 transactions and {milk, bread} appears in 400, then Lift(milk → bread) = 0.4/(0.5*0.4) = 2.
Therefore it can be concluded that milk and bread have a stronger association than milk and eggs.
Leverage [7] is a similar notion, but instead of using a ratio, leverage uses the difference (see
Equation 5-3). Leverage measures the difference in the probability of X and Y appearing together in the
dataset compared to what would be expected if X and Y were statistically independent of each other.

Leverage(X → Y) = Support(X ∧ Y) − Support(X) * Support(Y)    (5-3)

In theory, leverage is 0 when X and Y are statistically independent of each other. If X and Y have some
kind of relationship, the leverage would be greater than zero. A larger leverage value indicates a stronger
relationship between X and Y. For the previous example, Leverage(milk → eggs) = 0.3 − (0.5*0.4) = 0.1
and Leverage(milk → bread) = 0.4 − (0.5*0.4) = 0.2. It again confirms that milk and bread have a stronger
association than milk and eggs.
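The lift and leverage values above follow directly from Equations 5-2 and 5-3. The sketch below reproduces the same arithmetic in R from the transaction counts assumed in the example.

# Lift and leverage for the 1,000-transaction example
n <- 1000
sup <- c(milk = 500, eggs = 400, bread = 400,
         milk_eggs = 300, milk_bread = 400) / n
sup["milk_eggs"]  / (sup["milk"] * sup["eggs"])    # lift(milk -> eggs)  = 1.5
sup["milk_bread"] / (sup["milk"] * sup["bread"])   # lift(milk -> bread) = 2
sup["milk_eggs"]  - sup["milk"] * sup["eggs"]      # leverage(milk -> eggs)  = 0.1
sup["milk_bread"] - sup["milk"] * sup["bread"]     # leverage(milk -> bread) = 0.2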
Confidence is able to identify trustworthy rules, but it cannot tell whether a rule is coincidental. A
high-confidence rule can sometimes be misleading because confidence does not consider support of
the itemset in the rule consequent. Measures such as lift and leverage not only ensure interesting rules
are identified but also filter out the coincidental rules.
This chapter has discussed four measures of significance and interestingness for association rules:
support, confidence, lift, and leverage. These measures ensure the discovery of interesting and strong
rules from sample datasets. Besides these four, there are alternative measures, such as corre-
lation [8], collective strength [9], conviction [6], and coverage [10]. Refer to the Bibliography to learn how
these measures work.
5.4 Applications of Association Rules
The term market basket analysis refers to a specific implementation of association rules mining that
many companies use for a variety of purposes, including these:
• Broad-scale approaches to better merchandising: what products should be included in or excluded
from the inventory each month
• Cross-merchandising between products and high-margin or high-ticket items
• Physical or logical placement of product within related categories of products
• Promotional programs: multiple product purchase incentives managed through
a loyalty card program
Besides market basket analysis, association rules are commonly used for recommender systems [11]
and clickstream analysis [12].
Many online service providers such as Amazon and Netflix use recommender systems. Recommender
systems can use association rules to discover related products or identify customers who have similar
interests. For example, association rules may suggest that those customers who have bought product A
have also bought product B, or those customers who have bought products A, B, and C are more similar
to this customer. These findings provide opportunities for retailers to cross-sell their products.
Clickstream analysis refers to the analytics on data related to web browsing and user clicks, which
is stored on the client or the server side. Web usage log files generated on web servers contain huge
amounts of information, and association rules can potentially give useful knowledge to web usage data
analysts. For example, association rules may suggest that website visitors who land on page X click on
links A, B, and C much more often than links D, E, and F. This observation provides valuable insight on
how to better personalize and recommend the content to site visitors.
The next section shows an example of grocery store transactions and demonstrates how to use R to
perform association rule mining.
5.5 An Example: Transactions in a Grocery Store
An example illustrates the application of the Apriori algorithm to a relatively simple case that generalizes
to those used in practice. Using R and the arules and arulesViz packages, this example shows how
to use the Apriori algorithm to generate frequent itemsets and rules and to evaluate and visualize the rules.
The following commands install these two packages and import them into the current R workspace:
install.packages('arules')
install.packages('arulesViz')
library('arules')
library('arulesViz')
5.5.1 The Groceries Dataset
The example uses the Groceries dataset from the R arules package. The Groceries dataset is
collected from 30 days of real-world point-of-sale transactions of a grocery store. The dataset contains
9,835 transactions, and the items are aggregated into 169 categories.
data(Groceries)
Groceries
transactions in sparse format with
 9835 transactions (rows) and
 169 items (columns)
The summary shows that the most frequent items in the dataset include items such as whole milk,
other vegetables, rolls/buns, soda, and yogurt. These items are purchased more often than the others.
summary(Groceries)
transactions as itemMatrix in sparse format with
 9835 rows (elements/itemsets/transactions) and
 169 columns (items) and a density of 0.02609146

most frequent items:
      whole milk other vegetables       rolls/buns             soda
            2513             1903             1809             1715
          yogurt          (Other)
            1372            34055

element (itemset/transaction) length distribution:
sizes
   1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16
2159 1643 1299 1005  855  645  545  438  350  246  182  117   78   77   55   46
  17   18   19   20   21   22   23   24   26   27   28   29   32
  29   14   14    9   11    4    6    1    1    1    1    3    1

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  1.000   2.000   3.000   4.409   6.000  32.000

includes extended item information - examples:
       labels  level2           level1
1 frankfurter sausage meet and sausage
2     sausage sausage meet and sausage
3  liver loaf sausage meet and sausage
The class of the dataset is transactions, as defined by the arules package. The
transactions class contains three slots:
• transactionInfo: A data frame with vectors of the same length as the number of transactions
• itemInfo: A data frame to store item labels
• data: A binary incidence matrix that indicates which item labels appear in every transaction
class(Groceries)
[1] "transactions"
attr(,"package")
[1] "arules"
For the Groceries dataset, the transactionInfo is not being used. Enter
Groceries@itemInfo to display all 169 grocery labels as well as their categories. The following
command displays only the first 20 grocery labels. Each grocery label is mapped to two levels of catego-
ries, level2 and level1, where level1 is a superset of level2. For example, grocery label
sausage belongs to the sausage category in level2, and it is part of the meat and sausage
category in level1. (Note that "meet" in level1 is a typo in the dataset.)
Groceries@itemInfo[1:20,]
              labels     level2               level1
1        frankfurter    sausage     meet and sausage
2            sausage    sausage     meet and sausage
3         liver loaf    sausage     meet and sausage
4                ham    sausage     meet and sausage
5               meat    sausage     meet and sausage
6  finished products    sausage     meet and sausage
7    organic sausage    sausage     meet and sausage
8            chicken    poultry     meet and sausage
9             turkey    poultry     meet and sausage
10              pork       pork     meet and sausage
11              beef       beef     meet and sausage
12    hamburger meat       beef     meet and sausage
13              fish       fish     meet and sausage
14      citrus fruit      fruit fruit and vegetables
15    tropical fruit      fruit fruit and vegetables
16         pip fruit      fruit fruit and vegetables
17            grapes      fruit fruit and vegetables
18           berries      fruit fruit and vegetables
19       nuts/prunes      fruit fruit and vegetables
20   root vegetables vegetables fruit and vegetables
The following code displays the 10th to 20th transactions of the Groceries dataset. The
[10:20] can be changed to [1:9835] to display all the transactions.
apply(Groceries@data[,10:20], 2,
      function(r) paste(Groceries@itemInfo[r, "labels"], collapse=", "))
Each row in the output shows a transaction that includes one or more products, and each transaction
corresponds to everything in a customer’s shopping cart. For example, in the first transaction, a cus-
tomer has purchased whole milk and cereals.
[1] "whole milk, cereals"
[2] "tropical fruit, other vegetables, white bread, bottled water, chocolate"
[3] "citrus fruit, tropical fruit, whole milk, butter, curd, yogurt, flour, bottled water, dishes"
[4] "beef"
[5] "frankfurter, rolls/buns, soda"
[6] "chicken, tropical fruit"
[7] "butter, sugar, fruit/vegetable juice, newspapers"
[8] "fruit/vegetable juice"
[9] "packaged fruit/vegetables"
[10] "chocolate"
[11] "specialty bar"
The next section shows how to generate frequent itemsets from the Groceries dataset.
5.5.2 Frequent Itemset Generation
The apriori() function from the arules package implements the Apriori algorithm to create frequent
itemsets. Note that, by default, the apriori() function executes all the iterations at once. However, to
illustrate how the Apriori algorithm works, the code examples in this section manually set the parameters
of the apriori() function to simulate each iteration of the algorithm.
Assume that the minimum support threshold is set to 0.02 based on management discretion.
Because the dataset contains 9,835 transactions, an itemset should appear at least 197 times to be
considered a frequent itemset. The first iteration of the Apriori algorithm computes the support of each
product in the dataset and retains those products that satisfy the minimum support. The following code
identifies 59 frequent 1-itemsets that satisfy the minimum support. The parameters of apriori()
specify the minimum and maximum lengths of the itemsets, the minimum support threshold, and the
target indicating the type of association mined.
itemsets <- apriori(Groceries, parameter=list(minlen=1, maxlen=1,
                    support=0.02, target="frequent itemsets"))
parameter specification:
 confidence minval smax arem  aval originalSupport support minlen
        0.8    0.1    1 none FALSE            TRUE    0.02      1
 maxlen            target   ext
      1 frequent itemsets FALSE

algorithmic control:
 filter tree heap memopt load sort verbose
    0.1 TRUE TRUE  FALSE TRUE    2    TRUE

apriori - find association rules with the apriori algorithm
version 4.21 (2004.05.09)       (c) 1996-2004   Christian Borgelt
set item appearances ... [0 item(s)] done [0.00s].
set transactions ... [169 item(s), 9835 transaction(s)] done [0.00s].
sorting and recoding items ... [59 item(s)] done [0.00s].
creating transaction tree ... done [0.00s].
checking subsets of size 1 done [0.00s].
writing ... [59 set(s)] done [0.00s].
creating S4 object ... done [0.00s].
The summary of the itemsets shows that the support of the 1-itemsets ranges from 0.02105 to 0.25552.
Because the maximum support of the 1-itemsets in the dataset is only 0.25552, to enable the discovery
of interesting rules, the minimum support threshold should not be set too close to that number.
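Before settling on a minimum support threshold, it can be useful to inspect the individual item supports directly; the arules function itemFrequency() returns them without running Apriori. The snippet below is a supplementary check rather than part of the chapter's walkthrough.

# Inspect item supports to help choose a sensible minimum support threshold
item_support <- itemFrequency(Groceries)         # relative support of each of the 169 items
head(sort(item_support, decreasing = TRUE), 5)   # whole milk is largest, about 0.26
itemFrequencyPlot(Groceries, topN = 20)          # bar chart of the 20 most frequent items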
summary(itemsets)
set of 59 itemsets

most frequent items:
frankfurter     sausage         ham        meat     chicken     (Other)
          1           1           1           1           1          54

element (itemset/transaction) length distribution:sizes
 1
59

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
      1       1       1       1       1       1

summary of quality measures:
    support
 Min.   :0.02105
 1st Qu.:0.03015
 Median :0.04809
 Mean   :0.06200
 3rd Qu.:0.07666
 Max.   :0.25552

includes transaction ID lists: FALSE

mining info:
      data ntransactions support confidence
 Groceries          9835    0.02          1
The following code uses the inspect() function to display the top 10 frequent 1-itemsets sorted
by their support. Of all the transaction records, the 59 1-itemsets such as {whole milk},
{other vegetables}, {rolls/buns}, {soda}, and {yogurt} all satisfy the minimum
support. Therefore, they are called frequent 1-itemsets.
inspect(head(sort(itemsets, by="support"), 10))
   items              support
1  {whole milk}       0.25551601
2  {other vegetables} 0.19349263
3  {rolls/buns}       0.18393493
4  {soda}             0.17437722
5  {yogurt}           0.13950178
6  {bottled water}    0.11052364
7  {root vegetables}  0.10899847
8  {tropical fruit}   0.10493137
9  {shopping bags}    0.09852567
10 {sausage}          0.09395018
In the next iteration, the list of frequent 1-itemsets is joined onto itself to form all possible candidate
2-itemsets. For example, 1-itemsets {whole milk} and {soda} would be joined to become a
2-itemset {whole milk, soda}. The algorithm computes the support of each candidate 2-itemset
and retains those that satisfy the minimum support. The output that follows shows that 61 frequent
2-itemsets have been identified.
itemsets <- apriori(Groceries, parameter=list(minlen=2, maxlen=2,
                    support=0.02, target="frequent itemsets"))
parameter specification:
 confidence minval smax arem  aval originalSupport support minlen
        0.8    0.1    1 none FALSE            TRUE    0.02      2
 maxlen            target   ext
      2 frequent itemsets FALSE

algorithmic control:
 filter tree heap memopt load sort verbose
    0.1 TRUE TRUE  FALSE TRUE    2    TRUE

apriori - find association rules with the apriori algorithm
version 4.21 (2004.05.09)       (c) 1996-2004   Christian Borgelt
set item appearances ... [0 item(s)] done [0.00s].
set transactions ... [169 item(s), 9835 transaction(s)] done [0.00s].
sorting and recoding items ... [59 item(s)] done [0.00s].
creating transaction tree ... done [0.00s].
checking subsets of size 1 2 done [0.00s].
writing ... [61 set(s)] done [0.00s].
creating S4 object ... done [0.00s].
The summary of the itemsets shows that the support of 2-itemsets ranges from 0.02003 to 0.07483.
summary(itemsets)
set of 61 itemsets

most frequent items:
      whole milk other vegetables             soda           yogurt
              25               17                9                9
      rolls/buns          (Other)
               9               53

element (itemset/transaction) length distribution:sizes
 2
61

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
      2       2       2       2       2       2

summary of quality measures:
    support
 Min.   :0.02003
 1st Qu.:0.02227
 Median :0.02613
 Mean   :0.02951
 3rd Qu.:0.03223
 Max.   :0.07483

includes transaction ID lists: FALSE

mining info:
      data ntransactions support confidence
 Groceries          9835    0.02          1
The top 10 most frequent 2-itemsets are displayed next, sorted by their support. Notice that whole
milk appears six times in the top 10 2-itemsets ranked by support. As seen earlier, {whole milk} has
the highest support among all the 1-itemsets. These top 10 2-itemsets with the highest support may not
be interesting; this highlights the limitations of using support alone.
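One way to see this limitation is to compute the lift of the most frequent 2-itemset by hand from the supports reported above; the rounded values below come from the earlier inspect() output, so the result is approximate.

# Approximate lift of {other vegetables} -> {whole milk} from the reported supports
sup_both <- 0.0748   # support of {other vegetables, whole milk}
sup_veg  <- 0.1935   # support of {other vegetables}
sup_milk <- 0.2555   # support of {whole milk}
sup_both / (sup_veg * sup_milk)   # about 1.5, a fairly weak association despite the high support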
inspect(head(sort(itemsets, by="support"), 10))
   items                support
1  {other vegetables,
    whole milk}         0.07483477
2  {whole milk,
    rolls/buns}         0.05663447
3  {whole milk,
    yogurt}             0.05602440
4  {root vegetables,
    whole milk}         0.04890696
5  {root vegetables,
    other vegetables}   0.04738180
6  {other vegetables,
    yogurt}             0.04341637
7  {other vegetables,
    rolls/buns}         0.04260295
8  {tropical fruit,
    whole milk}         0.04229792
9  {whole milk,
    soda}               0.04006101
10 {rolls/buns,
    soda}               0.03833249
Next, the list of frequent 2-itemsets is joined onto itself to form candidate 3-itemsets. For example
{other vegetables, whole milk} and {whole milk, rolls/buns} would be joined
as {other vegetables, whole milk, rolls/buns}. The algorithm retains those itemsets
that satisfy the minimum support. The following output shows that only two frequent 3-itemsets have
been identified.
itemsets <- apriori(Groceries, parameter=list(minlen=3, maxlen=3,
                    support=0.02, target="frequent itemsets"))
parameter specification:
 confidence minval smax arem  aval originalSupport support minlen
        0.8    0.1    1 none FALSE            TRUE    0.02      3
 maxlen            target   ext
      3 frequent itemsets FALSE

algorithmic control:
 filter tree heap memopt load sort verbose
    0.1 TRUE TRUE  FALSE TRUE    2    TRUE

apriori - find association rules with the apriori algorithm
version 4.21 (2004.05.09)       (c) 1996-2004   Christian Borgelt
set item appearances ... [0 item(s)] done [0.00s].
set transactions ... [169 item(s), 9835 transaction(s)] done [0.00s].
sorting and recoding items ... [59 item(s)] done [0.00s].
creating transaction tree ... done [0.00s].
checking subsets of size 1 2 3 done [0.00s].
writing ... [2 set(s)] done [0.00s].
creating S4 object ... done [0.00s].
The 3-itemsets are displayed next:
inspect(sort(itemsets, by ="support"))
items support
1 {root vegetables,
other vegetables,
whole milk} 0.02318251
2 {other vegetables,
whole milk,
yogurt} 0.02226741
In the next iteration, there is only one candidate 4-itemset
{root vegetables, other vegetables, whole milk, yogurt}, and its support is
below 0.02. No frequent 4-itemsets have been found, and the algorithm converges.
itemsets <- apriori(Groceries, parameter=list(minlen=4, maxlen=4,
                    support=0.02, target="frequent itemsets"))
parameter specification:
 confidence minval smax arem  aval originalSupport support minlen
        0.8    0.1    1 none FALSE            TRUE    0.02      4
 maxlen            target   ext
      4 frequent itemsets FALSE

algorithmic control:
 filter tree heap memopt load sort verbose
    0.1 TRUE TRUE  FALSE TRUE    2    TRUE

apriori - find association rules with the apriori algorithm
version 4.21 (2004.05.09)       (c) 1996-2004   Christian Borgelt
set item appearances ... [0 item(s)] done [0.00s].
set transactions ... [169 item(s), 9835 transaction(s)] done [0.00s].
sorting and recoding items ... [59 item(s)] done [0.00s].
creating transaction tree ... done [0.00s].
checking subsets of size 1 2 3 done [0.00s].
writing ... [0 set(s)] done [0.00s].
creating S4 object ... done [0.00s].
The previous steps simulate the Apriori algorithm at each iteration. For the Groceries dataset,
the iterations run out of support when k = 4. Therefore, the frequent itemsets contain 59 frequent
1-itemsets, 61 frequent 2-itemsets, and 2 frequent 3-itemsets.
When the maxlen parameter is not set, the algorithm continues each iteration until it runs out
of support or until k reaches the default maxlen=10. As shown in the code output that follows, 122
frequent itemsets have been identified. This matches the total number of 59 frequent 1-itemsets, 61
frequent 2-itemsets, and 2 frequent 3-itemsets.
itemsets <- apriori(Groceries, parameter=list(minlen=1, support=0.02,
                    target="frequent itemsets"))
parameter specification:
 confidence minval smax arem  aval originalSupport support minlen
        0.8    0.1    1 none FALSE            TRUE    0.02      1
 maxlen            target   ext
     10 frequent itemsets FALSE

algorithmic control:
 filter tree heap memopt load sort verbose
    0.1 TRUE TRUE  FALSE TRUE    2    TRUE

apriori - find association rules with the apriori algorithm
version 4.21 (2004.05.09)       (c) 1996-2004   Christian Borgelt
set item appearances ... [0 item(s)] done [0.00s].
set transactions ... [169 item(s), 9835 transaction(s)] done [0.00s].
sorting and recoding items ... [59 item(s)] done [0.00s].
creating transaction tree ... done [0.00s].
checking subsets of size 1 2 3 done [0.00s].
writing ... [122 set(s)] done [0.00s].
creating S4 object ... done [0.00s].
Note that the results are assessed based on the specific business context of the exercise using the
specific dataset. If the dataset changes or a different minimum support threshold is chosen, the Apriori
algorithm must run each iteration again to retrieve the updated frequent itemsets.
5.5.3 Rule Generation and Visualization
The apriori() function can also be used to generate rules. Assume that the minimum support threshold
is now set to a lower value, 0.001, and the minimum confidence threshold is set to 0.6. A lower minimum
support threshold allows more rules to show up. The following code creates 2,918 rules from all the transac-
tions in the Groceries dataset that satisfy both the minimum support and the minimum confidence.
rules <- apriori(Groceries, parameter=list(support=0.001,
                 confidence=0.6, target="rules"))
parameter specification:
 confidence minval smax arem  aval originalSupport support minlen
        0.6    0.1    1 none FALSE            TRUE   0.001      1
 maxlen target   ext
     10  rules FALSE

algorithmic control:
 filter tree heap memopt load sort verbose
    0.1 TRUE TRUE  FALSE TRUE    2    TRUE

apriori - find association rules with the apriori algorithm
version 4.21 (2004.05.09)       (c) 1996-2004   Christian Borgelt
set item appearances ... [0 item(s)] done [0.00s].
set transactions ... [169 item(s), 9835 transaction(s)] done [0.00s].
sorting and recoding items ... [157 item(s)] done [0.00s].
creating transaction tree ... done [0.00s].
checking subsets of size 1 2 3 4 5 6 done [0.01s].
writing ... [2918 rule(s)] done [0.00s].
creating S4 object ... done [0.01s].
The summary of the rules shows the number of rules and ranges of the support, confidence, and lift.
summary(rules)
set of 2918 rules

rule length distribution (lhs + rhs):sizes
   2    3    4    5    6
   3  490 1765  626   34

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  2.000   4.000   4.000   4.068   4.000   6.000

summary of quality measures:
    support           confidence          lift
 Min.   :0.001017   Min.   :0.6000   Min.   : 2.348
 1st Qu.:0.001118   1st Qu.:0.6316   1st Qu.: 2.668
 Median :0.001220   Median :0.6818   Median : 3.168
 Mean   :0.001480   Mean   :0.7028   Mean   : 3.450
 3rd Qu.:0.001525   3rd Qu.:0.7500   3rd Qu.: 3.692
 Max.   :0.009354   Max.   :1.0000   Max.   :18.996

mining info:
      data ntransactions support confidence
 Groceries          9835   0.001        0.6
Enter plot(rules) to display the scatterplot of the 2,918 rules (Figure 5-3), where the horizontal
axis is the support, the vertical axis is the confidence, and the shading is the lift. The scatterplot shows
that, of the 2,918 rules generated from the Groceries dataset, the highest lift occurs at a low support
and a low confidence.
FIGURE 5-3 Scatterplot of the 2,918 rules with minimum support 0.001 and minimum confidence 0.6
Entering plot(rules@quality) displays a scatterplot matrix (Figure 5-4) to compare the sup-
port, confidence, and lift of the 2,918 rules.
Figure 5-4 shows that lift is proportional to confidence and illustrates several linear groupings. As
indicated by Equation 5-1 and Equation 5-2, Lift = Confidence/Support(Y). Therefore, when the support
of Y remains the same, lift is proportional to confidence, and the slope of the linear trend is the recipro-
cal of Support(Y). The following code shows that, of the 2,918 rules, there are only 18 different values for
1/Support(Y), and the majority occurs at slopes 3.91, 5.17, 7.17, 9.17, and 9.53. This matches the slopes shown
in the third row and second column of Figure 5-4, where the x-axis is the confidence and the y-axis is the lift.
# compute the 1/Support(Y) values, which are the slopes
slope <- sort(round(rules@quality$lift / rules@quality$confidence, 2))
# display the number of times each slope value appears
unlist(lapply(split(slope, f=slope), length))
FIGURE 5-4 Scatterplot matrix on the support, confidence, and lift of the 2,918 rules
The inspect() function can display the content of the rules generated previously.
The following code shows the top ten rules sorted by the lift. Rule {Instant food
products, soda} → {hamburger meat} has the highest lift of 18.995654.
inspect(head(sort(rules, by="lift"), 10))
   lhs                        rhs                support     confidence lift
1  {Instant food products,
    soda}                  => {hamburger meat}
    0.001220132 0.6315789 18.995654
2  {soda,
    popcorn}               => {salty snack}
3  {ham,
    processed cheese}      => {white bread}
4 {tropical fruit,
other vegetables,
yogurt,
white bread} => {butter}
0.001016777 0.6666667 12.030581
5 {hamburger meat,
yogurt,
whipped/sour cream} => {butter}
0.001016777 0.6250000 11.278670
6 {tropical fruit,
other vegetables,
whole milk,
yogurt,
domestic eggs} => {butter}
0.001016777 0.6250000 11.278670
7 {liquor,
red/blush wine} => {bottled beer}
0.001931876 0.9047619 11.235269
8  {other vegetables,
    butter,
    sugar}                 => {whipped/sour cream}
    0.001016777 0.7142857 9.964539
9  {whole milk,
    butter,
    hard cheese}           => {whipped/sour cream}
    0.001423488 0.6666667 9.300236
10 {tropical fruit,
    other vegetables,
    butter,
    fruit/vegetable juice} => {whipped/sour cream}
    0.001016777 0.6666667 9.300236
The following code fetches a total of 127 rules whose confidence is above 0.9:
confidentRules <- rules[quality(rules)$confidence > 0.9]
confidentRules
set of 127 rules
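The same kind of filtering can be written with the arules subset() method, which understands both the quality measures and the items on either side of a rule. The thresholds and the item constraint below are arbitrary examples, not values used elsewhere in this chapter.

# Alternative filtering with subset(): combine measure thresholds and item constraints
highLift  <- subset(rules, subset = lift > 10)
milkRules <- subset(rules, subset = rhs %in% "whole milk" & confidence > 0.9)
inspect(head(milkRules, 3))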
The next command produces a matrix-based visualization (Figure 5-5) of the LHS versus the RHS of
the rules. The legend on the right is a color matrix indicating the lift and the confidence to which each
square in the main matrix corresponds.
plot(confidentRules, method="matrix", measure=c("lift", "confidence"),
     control=list(reorder=TRUE))
As the previous plot() command runs, the R console would simultaneously display a distinct list of
the LHS and RHS from the 127 rules. A segment of the output is shown here:
Itemsets in Antecedent (LHS)
[1] “{citrus fruit,other vegetables,soda,fruit/vegetable juice}”
[2] “{tropical fruit,other vegetables,whole milk,yogurt,oil}”
[3] "{tropical fruit,butter,whipped/sour cream,fruit/vegetable juice}"
[4] "{tropical fruit,grapes,whole milk,yogurt}"
[5] "{ham,tropical fruit,pip fruit,whole milk}"
[124] "{liquor,red/blush wine}"
Itemsets in Consequent (RHS)
[1] "{whole milk}"      "{yogurt}"          "{root vegetables}"
[4] "{bottled beer}"    "{other vegetables}"
FIGURE 5-5 Matrix-based visualization of LHS and RHS, colored by lift and confidence
The following code provides a visualization of the top five rules with the highest lift. The plot
is shown in Figure 5-6. In the graph, the arrow always points from an item on the LHS to an item on
the RHS. For example, the arrows that connect ham, processed cheese, and white bread suggest rule
{ham, processed cheese} → {white bread}. The legend on the top right of the graph
shows that the size of a circle indicates the support of the rules ranging from 0.001 to 0.002. The color
(or shade) represents the lift, which ranges from 11.279 to 18.996. The rule with the highest lift is
{Instant food products, soda} → {hamburger meat}.
highLiftRules <- head(sort(rules, by="lift"), 5)
plot(highLiftRules, method="graph", control=list(type="items"))
FIGURE 5-6 Graph visualization of the top five rules sorted by lift
5.6 Validation and Testing
After gathering the output rules, it may become necessary to use one or more methods to validate the
results in the business context for the sample dataset. The first approach can be established through
statistical measures such as confidence, lift, and leverage. Rules that involve mutually independent items
or cover few transactions are considered uninteresting because they may capture spurious relationships.
As mentioned in Section 5.3, confidence measures the chance that X and Y appear together in rela-
tion to the chance X appears. Confidence can be used to identify the interestingness of the rules.
Lift and leverage both compare the support of X and Y against their individual support. While min-
ing data with association rules, some rules generated could be purely coincidental. For example, if 95%
of customers buy X and 90% of customers buy Y, then X and Y would occur together at least 85% of the
time, even if there is no relationship between the two. Measures like lift and leverage ensure that inter-
esting rules are identified rather than coincidental ones.
Another set of criteria can be established through subjective arguments. Even with a high confidence,
a rule may be considered subjectively uninteresting unless it reveals any unexpected profitable actions.
For example, rules like {paper} → {pencil} may not be subjectively interesting or meaningful despite
high support and confidence values. In contrast, a rule like {diaper} → {beer} that satisfies both mini-
mum support and minimum confidence can be considered subjectively interesting because this rule is
unexpected and may suggest a cross-sell opportunity for the retailer. This incorporation of subjective
knowledge into the evaluation of rules can be a difficult task, and it requires collaboration with domain
experts. As seen in Chapter 2, "Data Analytics Lifecycle," the domain experts may serve as the business
users or the business intelligence analysts as part of the Data Science team. In Phase 5, the team can com-
municate the results and decide if it is appropriate to operationalize them.
5.7 Diagnostics
Although the Apriori algorithm is easy to understand and implement, some of the rules generated are
uninteresting or practically useless. Additionally, some of the rules may be generated due to coincidental
relationships between the variables. Measures like confidence, lift, and leverage should be used along with
human insights to address this problem.
Another problem with association rules is that, in Phases 3 and 4 of the Data Analytics Lifecycle
(Chapter 2), the team must specify the minimum support prior to the model execution, which may lead
to too many or too few rules. In related research, a variant of the algorithm [13] can use a predefined
target range for the number of rules so that the algorithm can adjust the minimum support accordingly.
Section 5.2 presented the Apriori algorithm, which is one of the earliest and the most fundamental
algorithms for generating association rules. The Apriori algorithm reduces the computational workload by
only examining itemsets that meet the specified minimum threshold. However, depending on the size of the
dataset, the Apriori algorithm can be computationally expensive. For each level of support, the algorithm
requires a scan of the entire database to obtain the result. Accordingly, as the database grows, it takes more
time to compute in each run. Here are some approaches to improve Apriori's efficiency:
• Partitioning: Any itemset that is potentially frequent in a transaction database must be frequent in
at least one of the partitions of the transaction database.
• Sampling: This extracts a subset of the data with a lower support threshold and uses the subset to
perform association rule mining (see the sketch after this list).
• Transaction reduction: A transaction that does not contain frequent k-itemsets is useless in sub-
sequent scans and therefore can be ignored.
• Hash-based itemset counting: If the corresponding hashing bucket count of a k-itemset is
below a certain threshold, the k-itemset cannot be frequent.
• Dynamic itemset counting: Only add new candidate itemsets when all of their subsets are esti-
mated to be frequent.
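As a rough illustration of the sampling idea, the sketch below mines a random subset of the Groceries transactions with a slightly lowered support threshold. The sample size and threshold are arbitrary assumptions, and itemsets found on the sample would still need to be verified against the full database.

# Sampling sketch: mine a random subset of transactions with a lowered support threshold
set.seed(42)                                           # reproducible sample
sampled <- Groceries[sample(length(Groceries), 3000)]  # 3,000 of the 9,835 transactions
itemsets_sample <- apriori(sampled,
                           parameter = list(support = 0.015,
                                            target = "frequent itemsets"))
summary(itemsets_sample)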
Summary
As an unsupervised analysis technique that uncovers relationships among items, association rules find many
uses in activities, including market basket analysis, clickstream analysis, and recommendation engines.
Although association rules are not used to predict outcomes or behaviors, they are good at identifying
"interesting" relationships within items from a large dataset. Quite often, the disclosed relationships that
the association rules suggest do not seem obvious; they, therefore, provide valuable insights for institutions
to improve their business operations.
The Apriori algorithm is one of the earliest and most fundamental algorithms for association rules. This
chapter used a grocery store example to walk through the steps of Apriori and generate frequent k-itemsets
and useful rules for downstream analysis and visualization. A few measures such as support, confidence,
lift, and leverage were discussed. These measures together help identify the interesting rules and elimi-
nate the coincidental rules. Finally, the chapter discussed some pros and cons of the Apriori algorithm and
highlighted a few methods to improve its efficiency.
Exercises
1. What is the Apriori property?
2. Following is a list of five transactions that include items A, B, C, and D:
• T1: {A, B, C}
• T2: {A, C}
• T3: {B, C}
• T4: {A, D}
• T5: {A, C, D}
Which itemsets satisfy the minimum support of 0.5? (Hint: An itemset may include more
than one item.)
3. How are interesting rules identified? How are interesting rules distinguished from coincidental rules?
4. A local retailer has a database that stores 10,000 transactions of last summer. After analyzing the data,
a data science team has identified the following statistics:
• {battery} appears in 6,000 transactions.
• {sunscreen} appears in 5,000 transactions.
• {sandals} appears in 4,000 transactions.
• {bowls} appears in 2,000 transactions.
• {battery, sunscreen} appears in 1,500 transactions.
• {battery, sandals} appears in 1,000 transactions.
• {battery, bowls} appears in 250 transactions.
• {battery, sunscreen, sandals} appears in 600 transactions.
Answer the following questions:
a. What are the support values of the preceding itemsets?
b. Assuming the minimum support is 0.05, which itemsets are considered frequent?
c. What are the confidence values of {battery} → {sunscreen} and
{battery, sunscreen} → {sandals}? Which of the two rules is more interesting?
d. List all the candidate rules that can be formed from the statistics. Which rules are considered
interesting at the minimum confidence 0.25? Out of these interesting rules, which rule is con-
sidered the most useful (that is, least coincidental)?
Bibliography
[1] P. Hajek, I. Havel, and M. Chytil, "The GUHA Method of Automatic Hypotheses Determination,"
Computing, vol. 1, no. 4, pp. 293-308, 1966.
[2] R. Agrawal, T. Imielinski, and A. Swami, "Mining Association Rules Between Sets of Items in Large
Databases," SIGMOD '93 Proceedings of the 1993 ACM SIGMOD International Conference on
Management of Data, pp. 207-216, 1993.
[3] M.-S. Chen, J. S. Park, and P. Yu, "Efficient Data Mining for Path Traversal Patterns," IEEE Transactions
on Knowledge and Data Engineering, vol. 10, no. 2, pp. 209-221, 1998.
[4] R. Cooley, B. Mobasher, and J. Srivastava, "Web Mining: Information and Pattern Discovery on the
World Wide Web," Proceedings of the 9th IEEE International Conference on Tools with Artificial
Intelligence, pp. 558-567, 1997.
[5] R. Agrawal and R. Srikant, "Fast Algorithms for Mining Association Rules in Large Databases," in
Proceedings of the 20th International Conference on Very Large Data Bases, San Francisco, CA,
USA, 1994.
[6] S. Brin, R. Motwani, J. D. Ullman, and S. Tsur, "Dynamic Itemset Counting and Implication Rules for
Market Basket Data," SIGMOD, vol. 26, no. 2, pp. 255-264, 1997.
[7] G. Piatetsky-Shapiro, "Discovery, Analysis and Presentation of Strong Rules," Knowledge Discovery in
Databases, pp. 229-248, 1991.
[8] S. Brin, R. Motwani, and C. Silverstein, "Beyond Market Baskets: Generalizing Association Rules to
Correlations," Proceedings of the ACM SIGMOD/PODS '97 Joint Conference, vol. 26, no. 2, pp. 265-
276, 1997.
[9] C. C. Aggarwal and P. S. Yu, "A New Framework for Itemset Generation," in Proceedings of the
Seventeenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems (PODS
'98), Seattle, Washington, USA, 1998.
[10] M. Hahsler, "A Comparison of Commonly Used Interest Measures for Association Rules," 9 March
2011. [Online]. Available: http://michael.hahsler.net/research/association_rules/measures.html.
[Accessed 4 March 2014].
[11] W. Lin, S. A. Alvarez, and C. Ruiz, "Efficient Adaptive-Support Association Rule Mining for
Recommender Systems," Data Mining and Knowledge Discovery, vol. 6, no. 1, pp. 83-105, 2002.
[12] B. Mobasher, H. Dai, T. Luo, and M. Nakagawa, "Effective Personalization Based on Association Rule
Discovery from Web Usage Data," in ACM, 2011.
[13] W. Lin, S. A. Alvarez, and C. Ruiz, "Collaborative Recommendation via Adaptive Association
Rule Mining," in Proceedings of the International Workshop on Web Mining for E-Commerce
(WEBKDD), Boston, MA, 2000.
ADVANCED ANALYTICAL THEORY AND METHODS: REGRESSION
In general, regression analysis attempts to explain the influence that a set of variables has on the outcome of
another variable of interest. Often, the outcome variable is called a dependent variable because the out-
come depends on the other variables. These additional variables are sometimes called the input variables
or the independent variables. Regression analysis is useful for answering the following kinds of questions:
• What is a person’s expected income?
• What is the probability that an applicant will default on a loan?
Linear regression is a useful tool for answering the first question, and logistic regression is a popular
method for addressing the second. This chapter examines these two regression techniques and explains
when one technique is more appropriate than the other.
Regression analysis is a useful explanatory tool that can identify the input variables that have the great-
est statistical influence on the outcome. With such knowledge and insight, environmental changes can
be attempted to produce more favorable values of the input variables. For example, if it is found that the
reading level of 10-year-old students is an excellent predictor of the students' success in high school and
a factor in their attending college, then additional emphasis on reading can be considered, implemented,
and evaluated to improve students' reading levels at a younger age.
6.1 Linear Regression
Linear regression is an analytical technique used to model the relationship between several input variables
and a continuous outcome variable. A key assumption is that the relationship between an input variable
and the outcome variable is linear. Although this assumption may appear restrictive, it is often possible to
properly transform the input or outcome variables to achieve a linear relationship between the modified
input and outcome variables. Possible transformations will be covered in more detail later in the chapter.
The physical sciences have well-known linear models, such as Ohm's Law, which states that the electri-
cal current flowing through a resistive circuit is linearly proportional to the voltage applied to the circuit.
Such a model is considered deterministic in the sense that if the input values are known, the value of the
outcome variable is precisely determined. A linear regression model is a probabilistic one that accounts for
the randomness that can affect any particular outcome. Based on known input values, a linear regression
model provides the expected value of the outcome variable based on the values of the input variables,
but some uncertainty may remain in predicting any particular outcome. Thus, linear regression models are
useful in physical and social science applications where there may be considerable variation in a particular
outcome based on a given set of input values. After presenting possible linear regression use cases, the
foundations of linear regression modeling are provided.
6.1.1 Use Cases
Linear regression is often used in business, government, and other scenarios. Some common practical
applications of linear regression in the real world include the following:
• Real estate: A simple linear regression analysis can be used to model residential home prices as a
function of the home's living area. Such a model helps set or evaluate the list price of a home on the
market. The model could be further improved by including other input variables such as number of
bathrooms, number of bedrooms, lot size, school district rankings, crime statistics, and property taxes.
• Demand forecasting: Businesses and governments can use linear regression models to predict
demand for goods and services. For example, restaurant chains can appropriately prepare for
the predicted type and quantity of food that customers will consume based upon the weather,
the day of the week, whether an item is offered as a special, the time of day, and the reservation
volume. Similar models can be built to predict retail sales, emergency room visits, and ambulance
dispatches.
• Medical: A linear regression model can be used to analyze the effect of a proposed radiation treat-
ment on reducing tumor sizes. Input variables might include duration of a single radiation treatment,
frequency of radiation treatment, and patient attributes such as age or weight.
6.1.2 Model Description
As the name of this technique suggests, the linear regression model assumes that there is a linear relation-
ship between the input variables and the outcome variable. This relationship can be expressed as shown
in Equation 6-1.

y = β_0 + β_1 x_1 + β_2 x_2 + ... + β_{p-1} x_{p-1} + ε    (6-1)

where:
y is the outcome variable
x_j are the input variables, for j = 1, 2, ..., p-1
β_0 is the value of y when each x_j equals zero
β_j is the change in y based on a unit change in x_j, for j = 1, 2, ..., p-1
ε is a random error term that represents the difference in the linear model and a particular
observed value for y
Suppose it is desired to build a linear regression model that estimates a person's annual income as a
function of two variables, age and education, both expressed in years. In this case, income is the outcome
variable, and the input variables are age and education. Although it may be an overgeneralization, such
a model seems intuitively correct in the sense that people's income should increase as their skill set and
experience expand with age. Also, the employment opportunities and starting salaries would be expected
to be greater for those who have attained more education.
However, it is also obvious that there is considerable variation in income levels for a group of people with
identical ages and years of education. This variation is represented by ε in the model. So, in this example,
the model would be expressed as shown in Equation 6-2.

Income = β_0 + β_1 Age + β_2 Education + ε    (6-2)

In the linear model, the β_j's represent the unknown p parameters. The estimates for these unknown
parameters are chosen so that, on average, the model provides a reasonable estimate of a person's income
based on age and education. In other words, the fitted model should minimize the overall error between
the linear model and the actual observations. Ordinary Least Squares (OLS) is a common technique to
estimate the parameters.
To illustrate how OLS works, suppose there is only one input variable, x, for an outcome variable y.
Furthermore, n observations of (x, y) are obtained and plotted in Figure 6-1.
FIGURE 6-1 Scatterplot of y versus x
The goal is to find the line that best approximates the relationship between the outcome variable
and the input variables. With OLS, the objective is to find the line through these points that mini-
mizes the sum of the squares of the difference between each point and the line in the vertical direc-
tion. In other words, find the values of β̂_0 and β̂_1 such that the summation shown in Equation 6-3 is
minimized.

Σ_{i=1}^{n} [y_i − (β̂_0 + β̂_1 x_i)]^2    (6-3)

The n individual distances to be squared and then summed are illustrated in Figure 6-2. The vertical
lines represent the distance between each observed y value and the line y = β̂_0 + β̂_1 x.
FIGURE 6-2 Scatterplot of y versus x with vertical distances from the observed points to a fitted line
In Figure 3-7 of Chapter 3, “Review of Basic Data Analytic Methods Using R,” the Anscombe’s Quartet
example used OLS to fit the linear regression line to each of the four data sets. OLS for multiple input vari-
ables is a straightforward extension of the one input variable case provided in Equation 6-3.
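As a quick illustration of the OLS idea (on simulated data rather than anything from this chapter), the sketch below fits a one-variable model with lm() and checks the result against the closed-form least squares estimates.

# OLS sketch on simulated data: lm() versus the closed-form least squares estimates
set.seed(1)
x <- runif(100, 0, 12)                  # one input variable
y <- 5 + 2.5 * x + rnorm(100, sd = 3)   # a true line plus normally distributed error

fit <- lm(y ~ x)                        # OLS fit via lm()
coef(fit)

b1 <- cov(x, y) / var(x)                # closed-form slope estimate
b0 <- mean(y) - b1 * mean(x)            # closed-form intercept estimate
c(b0, b1)                               # matches coef(fit)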
The preceding discussion provided the approach to find the best linear fit to a set of observations.
However, by making some additional assumptions on the error term, it is possible to provide further capa-
bilities in utilizing the linear regression model. In general, these assumptions are almost always made, so
the following model, built upon the earlier described model, is simply called the linear regression model.
Linear Regression Model (with Normally Distributed Errors)
In the previous model description, there were no assumptions made about the error term; no additional
assumptions were necessary for OLS to provide estimates of the model parameters. However, in most
linear regression analyses, it is common to assume that the error term is a normally distributed random
variable with mean equal to zero and constant variance. Thus, the linear regression model is expressed as
shown in Equation 6-4.
y = β_0 + β_1 x_1 + β_2 x_2 + ... + β_{p-1} x_{p-1} + ε    (6-4)

where:
y is the outcome variable
x_j are the input variables, for j = 1, 2, ..., p-1
β_0 is the value of y when each x_j equals zero
β_j is the change in y based on a unit change in x_j, for j = 1, 2, ..., p-1
ε ~ N(0, σ^2) and the ε's are independent of each other
This additional assumption yields the following result about the expected value of y, E(y), for given
(x_1, x_2, ..., x_{p-1}):

E(y) = E(β_0 + β_1 x_1 + β_2 x_2 + ... + β_{p-1} x_{p-1} + ε)
     = β_0 + β_1 x_1 + β_2 x_2 + ... + β_{p-1} x_{p-1} + E(ε)
     = β_0 + β_1 x_1 + β_2 x_2 + ... + β_{p-1} x_{p-1}

Because the β_j and x_j are constants, E(y) is the value of the linear regression model for the given
(x_1, x_2, ..., x_{p-1}). Furthermore, the variance of y, V(y), for given (x_1, x_2, ..., x_{p-1}) is this:

V(y) = V(β_0 + β_1 x_1 + β_2 x_2 + ... + β_{p-1} x_{p-1} + ε)
     = 0 + V(ε) = σ^2

Thus, for a given (x_1, x_2, ..., x_{p-1}), y is normally distributed with mean β_0 + β_1 x_1 + β_2 x_2 + ... + β_{p-1} x_{p-1}
and variance σ^2. For a regression model with just one input variable, Figure 6-3 illustrates the normality
assumption on the error terms and the effect on the outcome variable, y, for a given value of x.
For x = 8, one would expect to observe a value of y near 20, but a value of y from 15 to 25 would appear
possible based on the illustrated normal distribution. Thus, the regression model estimates the expected
value of y for the given value of x. Additionally, the normality assumption on the error term provides some
useful properties that can be utilized in performing hypothesis testing on the linear regression model and
providing confidence intervals on the parameters and the mean of y given (x_1, x_2, ..., x_{p-1}). The application of
these statistical techniques is demonstrated by applying R to the earlier linear regression model on income.
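The effect illustrated in Figure 6-3 can also be simulated. The sketch below draws repeated observations of y at x = 8 from a model with an assumed intercept, slope, and error variance chosen only so that E(y) is about 20 and the spread matches the figure; none of these values come from the chapter's data.

# Simulating the normality assumption at x = 8 (assumed parameters, for illustration only)
set.seed(6)
beta0 <- 4; beta1 <- 2; sigma <- 2.5          # so E(y) = 4 + 2*8 = 20 at x = 8
y_at_8 <- beta0 + beta1 * 8 + rnorm(10000, mean = 0, sd = sigma)
mean(y_at_8)                                  # close to 20
quantile(y_at_8, c(0.025, 0.975))             # roughly 15 to 25, matching the figure's spread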
FIGURE 6-3 Normal distribution about y for a given value of x

Example in R
Returning to the Income example, in addition to the variables age and education, the person's gender,
female or male, is considered an input variable. The following code reads a comma-separated-value (CSV)
file of 1,500 people's incomes, ages, years of education, and gender. The first 10 rows are displayed:
income_input = as.data.frame(read.csv("C:/data/income.csv"))
income_input[1:10,]
   ID Income Age Education Gender
1   1    113  69        12      1
2   2     91  52        18      0
3   3    121  65        14      0
4   4     81  58        12      0
5   5     68  31        16      1
6   6     92  51        15      1
7   7     75  53        15      0
8   8     76  56        13      0
9   9     56  42        15
10 10     53  33        11      1
Each person in the sample has been assigned an identification number, ID. Income is expressed in
thousands of dollars. (For example, 113 denotes $113,000.) As described earlier, Age and Education are
expressed in years. For Gender, a 0 denotes female and a 1 denotes male. A summary of the imported
data reveals that the incomes vary from $14,000 to $134,000. The ages are between 18 and 70 years. The
education experience for each person varies from a minimum of 10 years to a maximum of 20 years.
summary(income_input)
       ID           Income           Age         Education
 Min.   :  1.0   Min.   : 14.00   Min.   :18.00   Min.   :10.00
As described in Chapter 3, a scatterplot matrix is an informative tool to view the pair-wise relationships
of the variables. The basic assumption of a linear regression model is that there is a linear relationship
between the outcome variable and the input variables. Using the lattice package in R, the scatterplot
matrix in Figure 6-4 is generated with the following R code:
FIGURE 6-4 Scatterplot matrix of the variables
library(lattice)
splom(~income_input[c(2:5)], groups=NULL, data=income_input,
      axis.line.tck = 0,
      axis.text.alpha = 0)
Because the dependent variable is typically plotted along the y-axis, examine the set of scatterplots
along the bottom of the matrix. A strong positive linear trend is observed for Income as a function of Age.
Against Education, a slight positive trend may exist, but the trend is not quite as obvious as is the case
with the Age variable. Lastly, there is no observed effect on Income based on Gender.
With this qualitative understanding of the relationships between Income and the input variables, it
seems reasonable to quantitatively evaluate the linear relationships of these variables. Utilizing the normal-
ity assumption applied to the error term, the proposed linear regression model is shown in Equation 6-5.
Income = β_0 + β_1 Age + β_2 Education + β_3 Gender + ε    (6-5)
Using the linear model function, lm(), in R, the income model can be applied to the data as follows:
results <- lm(Income ~ Age + Education + Gender, income_input)
summary(results)
Call:
lm(formula = Income ~ Age + Education + Gender, data = income_input)

Residuals:
    Min      1Q  Median      3Q     Max
-37.340  -8.101   0.139   7.885  37.271

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  7.26299    1.95575   3.714 0.000212 ***
Age          0.99520    0.02057  48.373  < 2e-16 ***
Education    1.75788    0.11581  15.179  < 2e-16 ***
Gender      -0.93433    0.62388  -1.498 0.134443
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 12.07 on 1496 degrees of freedom
Multiple R-squared: 0.6364,    Adjusted R-squared: 0.6357
F-statistic: 873 on 3 and 1496 DF,  p-value: < 2.2e-16
The intercept term, β₀, is implicitly included in the model. The lm() function performs the parameter
estimation for the parameters βⱼ (j = 0, 1, 2, 3) using ordinary least squares and provides several useful
calculations and results that are stored in the variable called results in this example.
After the stated call to lm(), a few statistics on the residuals are displayed in the output. The residuals
are the observed values of the error term for each of the n observations and are defined for i = 1, 2, …, n,
as shown in Equation 6-6.

e_i = y_i − (b₀ + b₁·x_i,1 + b₂·x_i,2 + … + b_(p−1)·x_i,(p−1))    (6-6)

where bⱼ denotes the estimate for parameter βⱼ for j = 0, 1, 2, …, p − 1
From the R output, the residuals vary from approximately -37 to +37, with a median close to 0. Recall
that the residuals are assumed to be normally distributed with a mean near zero and a constant variance.
The normality assumption is examined more carefully later.
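The residuals in Equation 6-6 can be verified directly from the fitted model object. The following is a minimal sketch, reusing results and income_input from above:

# check Equation 6-6: stored residuals equal observed Income minus fitted values
manual_resid <- income_input$Income - fitted(results)
all.equal(as.numeric(residuals(results)), as.numeric(manual_resid))   # TRUE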
The output provides details about the coefficients. The column Estimate provides the OLS estimates
of the coefficients in the fitted linear regression model. In general, the (Intercept) corresponds to
the estimated response variable when all the input variables equal zero. In this example, the intercept cor-
responds to an estimated income of $7,263 for a newborn female with no education. It is important to note
that the available dataset does not include such a person. The minimum age and education in the dataset
are 18 and 10 years, respectively. Thus, misleading results may be obtained when using a linear regression
model to estimate outcomes for input values not representative within the dataset used to train the model.
The coefficient for Age is approximately equal to one. This coefficient is interpreted as follows: For every
one unit increase in a person's age, the person's income is expected to increase by $995. Similarly, for every
unit increase in a person's years of education, the person's income is expected to increase by about $1,758.
Interpreting the Gender coefficient is slightly different. When Gender is equal to zero, the Gender
coefficient contributes nothing to the estimate of the expected income. When Gender is equal to one,
the expected Income is decreased by about $934.
Because the coefficient values are only estimates based on the observed incomes in the sample, there
is some uncertainty or sampling error for the coefficient estimates. The Std. Error column next to
the coefficients provides the sampling error associated with each coefficient and can be used to perform
a hypothesis test, using the t-distribution, to determine if each coefficient is statistically different from
zero. In other words, if a coefficient is not statistically different from zero, the coefficient and the associ-
ated variable in the model should be excluded from the model. In this example, the associated hypothesis
tests' p-values, Pr(>|t|), are very small for the Intercept, Age, and Education parameters. As
seen in Chapter 3, a small p-value corresponds to a small probability that such a large t value would be
observed under the assumptions of the null hypothesis. In this case, for a given j = 0, 1, 2, …, p − 1, the null
and alternate hypotheses follow:
H₀: βⱼ = 0   versus   Hₐ: βⱼ ≠ 0
For small p-values, as is the case for the Intercept, Age, and Education parameters, the null
hypothesis would be rejected. For the Gender parameter, the corresponding p-value is fairly large at
0.13. In other words, at a 90% confidence level, the null hypothesis would not be rejected. So, dropping the
variable Gender from the linear regression model should be considered. The following R code provides
the modified model results:
results2 <- lm(Income ~ Age + Education, income_input)
summary(results2)
Call:
lm(formula = Income ~ Age + Education, data = income_input)

Residuals:
    Min      1Q  Median      3Q     Max
-36.889  -7.892   0.185   8.200  37.740

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  6.75822    1.92728   3.507 0.000467 ***
Age          0.99603    0.02057  48.412  < 2e-16 ***
Education    1.75860    0.11586  15.179  < 2e-16 ***

Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 12.08 on 1497 degrees of freedom
Multiple R-squared:  0.6359, Adjusted R-squared:  0.6354
F-statistic:  1307 on 2 and 1497 DF,  p-value: < 2.2e-16
Dropping the Gender variable from the model resulted in a minimal change to the estimates of the
remaining parameters and their statistical significances.
The last part of the displayed results provides some summary statistics and tests on the linear regression
model. The residual standard error is the standard deviation of the observed residuals. This value, along
with the associated degrees of freedom, can be used to examine the variance of the assumed normally
distributed error terms. R-squared (R²) is a commonly reported metric that measures the variation in the
data that is explained by the regression model. Possible values of R² vary from 0 to 1, with values closer
to 1 indicating that the model is better at explaining the data than values closer to 0. An R² of exactly 1
indicates that the model explains the observed data perfectly (all the residuals are equal to 0). In general,
the R² value can be increased by adding more variables to the model. However, just adding more variables
to explain a given dataset but not to improve the explanatory nature of the model is known as overfitting.
To address the possibility of overfitting the data, the adjusted R² accounts for the number of parameters
included in the linear regression model.
The F-statistic provides a method for testing the entire regression model. In the previous t-tests, indi-
vidual tests were conducted to determine the statistical significance of each parameter. The provided
F-statistic and corresponding p-value enable the analyst to test the following hypotheses:
H₀: β₁ = β₂ = … = β_(p−1) = 0   versus   Hₐ: βⱼ ≠ 0 for at least one j = 1, 2, …, p − 1
In this example, the p-value of 2.2e- 16 is small, which indicates that the null hypothesis should be
rejected.
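These model-level statistics can also be extracted programmatically from the summary object for results2; a minimal sketch:

s <- summary(results2)
s$r.squared       # multiple R-squared, 0.6359
s$adj.r.squared   # adjusted R-squared, 0.6354
s$fstatistic      # F value with its numerator and denominator degrees of freedom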
Categorical Variables
In the previous example, the variable Gender was a simple binary variable that indicated whether a
person is female or male. In general, these variables are known as categorical variables. To illustrate
how to use categorical variables properly, suppose it was decided in the earlier Income example to
include an additional variable, State, to represent the U.S. state where the person resides. Similar
to the use of the Gender variable, one possible, but incorrect, approach would be to include a
State variable that would take a value of 0 for Alabama, 1 for Alaska, 2 for Arizona, and so on. The
problem with this approach is that such a numeric assignment based on an alphabetical ordering of
the states does not provide a meaningful measure of the difference in the states. For example, is it
useful or proper to consider Arizona to be one unit greater than Alaska and two units greater than
Alabama?
In regression, a proper way to implement a categorical variable that can take on m different values is to
add m-1 binary variables to the regression model. To illustrate with the Income example, a binary vari-
able for each of 49 states, excluding Wyoming (arbitrarily chosen as the last of 50 states in an alphabetically
sorted list), could be added to the model.
results3 <- lm(Income ~ Age + Education
               + Alabama
               + Alaska
               + Arizona
               ...
               + WestVirginia
               + Wisconsin,
               income_input)
The input file would have 49 columns added for these variables representing each of the first 49
states. If a person was from Alabama, the Alabama variable would be equal to 1, and the other 48 vari-
ables would be set to 0. This process would be applied for the other state variables. So, a person from
Wyoming, the one state not explicitly stated in the model, would be identified by setting all 49 state
variables equal to 0. In this representation, Wyoming would be considered the reference case, and the
regression coefficients of the other state variables would represent the difference in income between
Wyoming and a particular state.
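In practice, R builds these m-1 binary variables automatically when the categorical variable is stored as a factor. The following minimal sketch assumes a hypothetical State column had been added to income_input (it is not part of the income CSV used earlier); relevel() makes Wyoming the reference case:

# State is a hypothetical column added for illustration
income_input$State <- relevel(factor(income_input$State), ref = "Wyoming")
results3 <- lm(Income ~ Age + Education + State, data = income_input)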
Confidence Intervals on the Parameters
Once an acceptable linear regression model is developed, it is often helpful to use it to draw some infer-
ences about the model and the population from which the observations were drawn. Earlier, we saw that
t-tests could be used to perform hypothesis tests on the individual model parameters, βⱼ, j = 0, 1, …, p − 1.
Alternatively, these t-tests could be expressed in terms of confidence intervals on the parameters. R simpli-
fies the computation of confidence intervals on the parameters with the use of the confint() function.
From the Income example, the following R command provides 95% confidence intervals on the intercept
and the coefficients for the two variables, Age and Education.
confint(results2, level = .95)
                 2.5 %     97.5 %
(Intercept)  2.9777598  10.538690
Age          0.955677    1.036392
Education    1.5313393   1.985862
Based on the data, the earlier estimated value of the Education coefficient was 1.76. Using
confint(), the corresponding 95% confidence interval is (1.53, 1.99), which provides the amount of
uncertainty in the estimate. In other words, in repeated random sampling, the computed confidence interval
straddles the true but unknown coefficient 95% of the time. As expected from the earlier t-test results,
none of these confidence intervals straddles zero.
Confidence Interval on the Expected Outcome
In addition to obtaining confidence intervals on the model parameters, it is often desirable to obtain
a confidence interval on the expected outcome. In the Income example, the fitted linear regression
provides the expected income for a given Age and Education. However, that particular point estimate
does not provide information on the amount of uncertainty in that estimate. Using the predict()
function in R, a confidence interval on the expected outcome can be obtained for a given set of input
variable values.
In this illustration, a data frame is built containing a specific age and education value. Using this set
of input variable values, the predict() function provides a 95% confidence interval on the expected
Income for a 41-year-old person with 12 years of education.
Age <- 41
Education <- 12
new_pt <- data.frame(Age, Education)
conf_int_pt <- predict(results2, new_pt, level=.95, interval="confidence")
conf_int_pt
fit lwr upr
1 68.69884 67.83102 69.56667
For this set of input values, the expected income is $68,699 with a 95% confidence interval of ($67,831,
$69,567).
Prediction Interval on a Particular Outcome
The previous confidence interval was relatively close (+/− approximately $900) to the fitted value. However,
this confidence interval should not be considered as representing the uncertainty in estimating a par-
ticular person's income. The predict() function in R also provides the ability to calculate upper and
lower bounds on a particular outcome. Such bounds provide what are referred to as prediction intervals.
Returning to the Income example, in R the 95% prediction interval on the Income for a 41-year-old
person with 12 years of education is obtained as follows:
pred_int_pt <- predict(results2, new_pt, level=.95, interval="prediction")
pred_int_pt
        fit      lwr      upr
1  68.69884 44.98867 92.40902
Again, the expected income is $68,699. However, the 95% prediction interval is ($44,988, $92,409). If
the reason for this much wider interval is not obvious, recall that in Figure 6-3, for a particular input vari-
able value, the expected outcome falls on the regression line, but the individual observations are normally
distributed about the expected outcome. The confidence interval applies to the expected outcome that
falls on the regression line, but the prediction interval applies to an outcome that may appear anywhere
within the normal distribution.
Thus, in linear regression, confidence intervals are used to draw inferences on the popula-
tion's expected outcome, and prediction intervals are used to draw inferences on the next possible
outcome.
6.1.3 Diagnostics
The use of hypothesis tests, confidence intervals, and prediction intervals is dependent on the model
assumptions being true. The following discussion provides some tools and techniques that can be used to
validate a fitted linear regression model.
Evaluating the Linearity Assumption
A major assumption in linear regression modeling is that the relationship between the input variables and
the outcome variable is linear. The most fundamental way to evaluate such a relationship is to plot the
outcome variable against each input variable. In the Income example, such scatterplots were generated
in Figure 6-4. If the relationship between Age and Income is represented as illustrated in Figure 6-5, a
linear model would not apply. In such a case, it is often useful to do any of the following:
• Transform the outcome variable.
• Transform the input variables.
• Add extra input variables or terms to the regression model.
Common transformations include taking square roots or the logarithm of the variables. Another option
is to create a new input variable such as the age squared and add it to the linear regression model to fit a
quadratic relationship between an input variable and the outcome.
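For example, a quadratic Age term can be added directly in the model formula; a minimal sketch (the model name results_quad is ours):

# I() protects the arithmetic so Age^2 enters as an additional input variable
results_quad <- lm(Income ~ Age + I(Age^2) + Education, data = income_input)
summary(results_quad)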
[Figure 6-5: Income plotted against Age with a clearly nonlinear trend, a case in which a linear model would not apply.]

FIGURE 6-6 Residual plot indicating constant variance (residuals plotted against fitted.values)
The plot in Figure 6-6 indicates that regardless of the income value along the fitted linear regression model,
the residuals are observed somewhat evenly on both sides of the reference zero line, and the spread of the
residuals is fairly constant from one fitted value to the next. Such a plot would support the mean of zero
and the constant variance assumptions on the error terms.
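A residual plot like Figure 6-6 can be generated from the fitted object; a minimal sketch for results2:

# plot residuals against fitted values with a reference zero line
plot(results2$fitted.values, results2$residuals,
     xlab = "fitted.values", ylab = "Residuals")
abline(h = 0)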
If the residual plot appeared like any of those in Figures 6-7 through 6-10, then some of the earlier
discussed transformations or possible input variable additions should be considered and attempted.
Figure 6-7 illustrates the existence of a nonlinear trend in the residuals. Figure 6-8 illustrates that the
residuals are not centered on zero. Figure 6-9 indicates a linear trend in the residuals across the various
outcomes along the linear regression model. This plot may indicate a missing variable or term from the
regression model. Figure 6-10 provides an example in which the variance of the error terms is not a constant
but increases along the fitted linear regression model.
Evaluating the Normality Assumption
The residual plots are useful for confirming that the residuals were centered on zero and have a con-
stant variance. However, the normality assumption still has to be validated. As shown in Figure 6-11,
the following R code provides a histogram plot of the residuals from results2, the output from the
Income example:
hist(results2$residuals, main="")
[Figure: the logistic function f(y) varies from 0 to 1 as y increases.]

In the churn example, an initial logistic regression model of Churned against the input variables Age, Married, Cust_years, and Churned_contacts yields the following coefficient estimates:
                  Estimate Std. Error z value Pr(>|z|)
(Intercept)       3.415201   0.163734  20.858   <2e-16
Age              -0.156643   0.004088 -38.320   <2e-16
Married           0.066432   0.068302   0.973    0.331
Cust_years        0.017857   0.030497   0.586    0.558
Churned_contacts  0.382324   0.027313  13.998   <2e-16

Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
As in the linear regression case, there are tests to determine if the coefficients are significantly differ-
ent from zero. Such significant coefficients correspond to small values of Pr(>|z|), which denote the
p-value for the hypothesis test to determine if the estimated model parameter is significantly different from
zero. Rerunning this analysis without the Cust_years variable, which had the largest corresponding
p-value, yields the following:
Churn_logistic2 <- glm(Churned ~ Age + Married + Churned_contacts,
                       data=churn_input, family=binomial(link="logit"))
summary(Churn_logistic2)
Coefficients:
                  Estimate Std. Error z value Pr(>|z|)
(Intercept)       3.472062   0.132107  26.282   <2e-16
Age              -0.156635   0.004088 -38.318   <2e-16
Married           0.066430   0.068299   0.973    0.331
Churned_contacts  0.381909   0.027302  13.988   <2e-16

Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Because the p-value for the Married coefficient remains quite large, the Married variable is
dropped from the model. The following R code provides the third and final model, which includes only
the Age and Churned_ contacts variables:
Churn_logistic3 <- glm(Churned ~ Age + Churned_contacts,
                       data=churn_input, family=binomial(link="logit"))
summary(Churn_logistic3)

Call:
glm(formula = Churned ~ Age + Churned_contacts,
    family = binomial(link = "logit"), data = churn_input)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.4599 -0.5214 -0.1960 -0.0736 3.3671
Coefficients:
                  Estimate Std. Error z value Pr(>|z|)
(Intercept)       3.502716   0.128430   27.27   <2e-16 ***
Age              -0.156551   0.004085  -38.32   <2e-16 ***
Churned_contacts  0.381857   0.027297   13.99   <2e-16 ***

Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)
Null deviance: 8387.3 on 7999 degrees of freedom
Residual deviance: 5359.2 on 7997 degrees of freedom
AIC: 5365.2
Number of Fisher Scoring iterations: 6
For this final model, the entire summary output is provided. The output offers several values that can
be used to evaluate the fitted model. It should be noted that the model parameter estimates correspond
to the values provided in Equation 6-11 that were used to construct Table 6-1.
Deviance and the Pseudo-R2
In logistic regression, deviance is defined to be −2*log L, where L is the maximized value of the likelihood
function that was used to obtain the parameter estimates. In the R output, two deviance values are pro-
vided. The null deviance is the value where the likelihood function is based only on the intercept term
(y = β₀). The residual deviance is the value where the likelihood function is based on the parameters in
the specified logistic model, shown in Equation 6-12.

y = β₀ + β₁·Age + β₂·Churned_contacts    (6-12)

A metric analogous to R² in linear regression can be computed as shown in Equation 6-13.

pseudo-R² = 1 − (residual dev. / null dev.) = (null dev. − res. dev.) / null dev.    (6-13)
The pseudo-R2 is a measure of how well the fitted model explains the data as compared to the default
model of no predictor variables and only an intercept term. A pseudo-R2 value near 1 indicates a good fit
over the simple null model.
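The pseudo-R² of Equation 6-13 can be computed directly from the deviances stored in the fitted glm object; a minimal sketch for Churn_logistic3:

# (null dev. - residual dev.) / null dev. = (8387.3 - 5359.2) / 8387.3
pseudo_r2 <- 1 - Churn_logistic3$deviance / Churn_logistic3$null.deviance
pseudo_r2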
Deviance and the Log-Likelihood Ratio Test
In the pseudo-R² calculation, the −2 multipliers simply divide out. So, it may appear that including such
a multiplier does not provide a benefit. However, the multiplier in the deviance definition is based on the
log-likelihood test statistic shown in Equation 6-14:

T = −2 * log(L_null / L_alt) = −2*log(L_null) − (−2)*log(L_alt)    (6-14)

where T is approximately Chi-squared distributed (χ²_k) with
k degrees of freedom (df) = df_null − df_alternate

The previous description of the log-likelihood test statistic applies to any estimation using MLE. As can
be seen in Equation 6-15, in the logistic regression case,

T = null deviance − residual deviance ~ χ²_(p−1)    (6-15)

where p is the number of parameters in the fitted model.
So, in a hypothesis test, a large value of T would indicate that the fitted model is significantly better
than the null model that uses only the intercept term.
In the churn example, the log-likelihood ratio statistic would be this:
T = 8387.3- 5359.2 = 3028.1 with 2 degrees of freedom and a corresponding p-value that is essentially
zero.
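The corresponding p-value can be confirmed in R; a minimal sketch:

# p-value for T = 3028.1 with 2 degrees of freedom; effectively zero
pchisq(3028.1, df = 2, lower.tail = FALSE)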
So far, the log-likelihood ratio test discussion has focused on comparing a fitted model to the default
model of using only the intercept. However, the log-likelihood ratio test can also compare one fitted model
to another. For example, consider the logistic regression model when the categorical variable Married
is included with Age and Churned_ contacts in the list of input variables. The partial R output for
such a model is provided here:
summary(Churn_logistic2)

Call:
glm(formula = Churned ~ Age + Married + Churned_contacts,
    family = binomial(link = "logit"),
    data = churn_input)

Coefficients:
                  Estimate Std. Error z value Pr(>|z|)
(Intercept)       3.472062   0.132107  26.282   <2e-16 ***
Age              -0.156635   0.004088 -38.318   <2e-16 ***
Married           0.066430   0.068299   0.973    0.331
Churned_contacts  0.381909   0.027302  13.988   <2e-16 ***

Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 8387.3  on 7999  degrees of freedom
Residual deviance: 5358.3  on 7996  degrees of freedom
The residual deviances from each model can be used to perform a hypothesis test of H₀: β_Married = 0
against Hₐ: β_Married ≠ 0 using the base model that includes the Age and Churned_contacts variables.
The test statistic follows:
T = 5359.2- 5358.3 = 0.9 with 7997- 7996 = 1 degree of freedom
Using R, the corresponding p-value is calculated as follows:
pchisq(.9, 1, lower=FALSE)
[1] 0.3427817
Thus, at a 66% or higher confidence level, the null hypothesis, H₀: β_Married = 0, would not be rejected.
Thus, it seems reasonable to exclude the variable Married from the logistic regression model.
In general, this log-likelihood ratio test is particularly useful for forward and backward step-wise meth-
ods to add variables to or remove them from the proposed logistic regression model.
Receiver Operating Characteristic (ROC) Curve
Logistic regression is often used as a classifier to assign class labels to a person, item, or transaction
based on the predicted probability provided by the model. In the Churn example, a customer can be
classified with the label called Churn if the logistic model predicts a high probability that the customer
will churn. Otherwise, a Remain label is assigned to the customer. Commonly, 0.5 is used as the default
probability threshold to distinguish between any two class labels. However, any threshold value can be
used depending on the preference to avoid false positives (for example, to predict Churn when actu-
ally the customer will Remain) or false negatives (for example, to predict Remain when the customer
will actually Churn).
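As a minimal sketch of this thresholding (the object names pred_prob and pred_label are ours; the 0.5 cutoff is the default just described):

# assign class labels from the fitted churn probabilities
pred_prob  <- predict(Churn_logistic3, type = "response")
pred_label <- ifelse(pred_prob >= 0.5, "Churn", "Remain")
table(pred_label, churn_input$Churned)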
In general, for two class labels, C and ¬C, where "¬C" denotes "not C," some working definitions and
formulas follow:
o True Positive: predict C, when actually C
o True Negative: predict ¬C, when actually ¬C
o False Positive: predict C, when actually ¬C
o False Negative: predict ¬C, when actually C
False Positive Rate (FPR) = # of false positives / # of negatives    (6-16)

True Positive Rate (TPR) = # of true positives / # of positives    (6-17)
The plot of the True Positive Rate (TPR) against the False Positive Rate (FPR) is known as the Receiver
Operating Characteristic (ROC) curve. Using the ROCR package, the following R commands generate
the ROC curve for the Churn example:
library(ROCR)
pred = predict(Churn_logistic3, type="response")
predObj = prediction(pred, churn_input$Churned)
rocObj = performance(predObj, measure="tpr", x.measure="fpr")
aucObj = performance(predObj, measure="auc")
plot(rocObj, main = paste("Area under the curve:",
                          round(aucObj@y.values[[1]], 4)))
The usefulness of this plot in Figure 6-15 is that the preferred outcome of a classifier is to have
a low FPR and a high TPR. So, when moving from left to right on the FPR axis, a good model/
classifier has the TPR rapidly approach values near 1, with only a small change in FPR. The closer the
ROC curve tracks along the vertical axis and approaches the upper-left hand of the plot, near the
point (0,1), the better the model/classifier performs. Thus, a useful metric is to compute the area under
the ROC curve (AUC). By examining the axes, it can be seen that the theoretical maximum for the
area is 1.
FIGURE 6-15 ROC curve for the churn example (area under the curve = 0.8877)
To illustrate how the FPR and TPR values are dependent on the threshold value used for the classifier,
the plot in Figure 6-16 was constructed using the following R code:
# extract the alpha (threshold), FPR, and TPR values from rocObj
alpha <- round(as.numeric(unlist(rocObj@alpha.values)), 4)
fpr <- round(as.numeric(unlist(rocObj@x.values)), 4)
tpr <- round(as.numeric(unlist(rocObj@y.values)), 4)

# adjust margins and plot TPR and FPR
par(mar = c(5,5,2,5))
plot(alpha, tpr, xlab="Threshold", xlim=c(0,1),
     ylab="True positive rate", type="l")
par(new="True")
plot(alpha, fpr, xlab="", ylab="", axes=F, xlim=c(0,1), type="l")
axis(side=4)
mtext(side=4, line=3, "False positive rate")
text(0.18, 0.18, "FPR")
text(0.58, 0.58, "TPR")
[Figure 6-16: the TPR and FPR plotted as functions of the threshold value.]
For a threshold value of 0, every item is classified as a positive outcome. Thus, the TPR value is 1. However,
all the negatives are also classified as a positive, and the FPR value is also 1. As the threshold value increases,
more and more negative class labels are assigned. Thus, the FPR and TPR values decrease. When the thresh-
old reaches 1, no positive labels are assigned, and the FPR and TPR values are both 0.
For the purposes of a classifier, a commonly used threshold value is 0.5. A positive label is assigned for any
probability of 0.5 or greater. Otherwise, a negative label is assigned. As the following R code illustrates, in the
analysis of the Churn dataset, the 0.5 threshold corresponds to a TPR value of 0.56 and an FPR value of 0.08.
i <- which(round(alpha, 2) == .5)
paste("Threshold=", (alpha[i]), "TPR=", tpr[i], "FPR=", fpr[i])
[output: the threshold near 0.5 corresponds to TPR = 0.56 and FPR = 0.08]
Thus, 56% of customers who will churn are properly classified with the Churn label, and 8% of the
customers who will remain as customers are improperly labeled as Churn. If identifying only 56% of the
churners is not acceptable, then the threshold could be lowered. For example, suppose it was decided to
classify with a Churn label any customer with a probability of churning greater than 0.15. Then the fol-
lowing R code indicates that the corresponding TPR and FPR values are 0.91 and 0.29, respectively. Thus,
91% of the customers who will churn are properly identified, but at a cost of misclassifying 29% of the
customers who will remain.
i <- which(round(alpha, 2) == .15)
paste("Threshold=", (alpha[i]), "TPR=", tpr[i], "FPR=", fpr[i])
[output: four thresholds near 0.15 are returned, each with TPR of approximately 0.91 and FPR of approximately 0.29]
The ROC curve is useful for evaluating other classifiers and will be utilized again in Chapter 7, "Advanced
Analytical Theory and Methods: Classification."
Histogram of the Probabilities
It can be useful to visualize the observed responses against the estimated probabilities provided by the
logistic regression. Figure 6-17 provides overlaying histograms for the customers who churned and for the
customers who remained as customers. With a properly fitting logistic model, the customers who remained
tend to have a low probability of churning. Conversely, the customers who churned have a high probability
of churning. This histogram plot helps visualize the number of items to be properly classified or mis-
classified. In the Churn example, an ideal histogram plot would have the remaining customers grouped at
the left side of the plot, the customers who churned at the right side of the plot, and no overlap of these
two groups.
FIGURE 6-17 Customer counts versus estimated churn probability (overlaid histograms for the Churned and Remained groups)
6.3 Reasons to Choose and Cautions
Linear regression is suitable when the input variables are continuous or discrete, including categorical data
types, but the outcome variable is continuous. If the outcome variable is categorical, logistic regression
is a better choice.
Both models assume a linear additive function of the input variables. If such an assumption does not
hold true, both regression techniques perform poorly. Furthermore, in linear regression, the assumption of
normally distributed error terms with a constant variance is important for many of the statistical inferences
that can be considered. If the various assumptions do not appear to hold, the appropriate transformations
need to be applied to the data.
Although a collection of input variables may be a good predictor for the outcome variable, the analyst
should not infer that the input variables directly cause an outcome. For example, it may be identified that
those individuals who have regular dentist visits may have a reduced risk of heart attacks. However, simply
sending someone to the dentist almost certainly has no effect on that person's chance of having a heart
attack. It is possible that regular dentist visits may indicate a person's overall health and dietary choices,
which may have a more direct impact on a person's health. This example illustrates the commonly known
expression, "Correlation does not imply causation."
Use caution when applying an already fitted model to data that falls outside the dataset used to train
the model. The linear relationship in a regression model may no longer hold at values outside the training
dataset. For example, if income was an input variable and the values of income ranged from $35,000 to
$90,000, applying the model to incomes well outside those incomes could result in inaccurate estimates
and predictions.
The income regression example in Section 6.1.2 mentioned the possibility of using categorical variables
to represent the 50 U.S. states. In a linear regression model, the state of residence would provide a simple
additive term to the income model but no other impact on the coefficients of the other input variables,
such as Age and Education. However, if state does influence the other variables' impact to the income
model, an alternative approach would be to build 50 separate linear regression models: one model for
each state. Such an approach is an example of the options and decisions that the data scientist must be
willing to consider.
If several of the input variables are highly correlated to each other, the condition is known as
multicollinearity. Multicollinearity can often lead to coefficient estimates that are relatively large in abso-
lute magnitude and may be of inappropriate direction (negative or positive sign). When possible, the major-
ity of these correlated variables should be removed from the model or replaced by a new variable that is
a function of the correlated variables. For example, in a medical application of regression, height and
weight may be considered important input variables, but these variables tend to be correlated. In this
case, it may be useful to use the Body Mass Index (BMI), which is a function of a person's height and weight.

BMI = weight / height²    where weight is in kilograms and height is in meters

However, in some cases it may be necessary to use the correlated variables. The next section provides
some techniques to address highly correlated variables.
6.4 Additional Regression Models
In the case of multicollinearity, it may make sense to place some restrictions on the magnitudes of the
estimated coefficients. Ridge regression, which applies a penalty based on the size of the coefficients, is
one technique that can be applied. In fitting a linear regression model, the objective is to find the values
of the coefficients that minimize the sum of the residuals squared. In ridge regression, a penalty term
proportional to the sum of the squares of the coefficients is added to the sum of the residuals squared.
Lasso regression is a related modeling technique in which the penalty is proportional to the sum of the
absolute values of the coefficients.
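Ridge and lasso fits are available in several R packages; the following minimal sketch uses the glmnet package (an assumption, not prescribed by the text) with the Income data, where the alpha argument selects the type of penalty:

library(glmnet)
x <- as.matrix(income_input[, c("Age", "Education")])
y <- income_input$Income
ridge_fit <- glmnet(x, y, alpha = 0)   # penalty on the sum of squared coefficients
lasso_fit <- glmnet(x, y, alpha = 1)   # penalty on the sum of absolute coefficients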
Only binary outcome variables were examined in the use of logistic regression. If the outcome variable
can assume more than two states, multinomial logistic regression can be used.
Summary
This chapter discussed the use of linear regression and logistic regression to model historical data and to
predict future outcomes. Using R, examples of each regression technique were presented. Several diag-
nostics to evaluate the models and the underlying assumptions were covered.
Although regression analysis is relatively straightforward to perform using many existing software pack-
ages, considerable care must be taken in performing and interpreting a regression analysis. This chapter
highlighted that in a regression analysis, the data scientist needs to do the following:
• Determine the best input variables and their relationship to the outcome variable.
• Understand the underlying assumptions and their impact on the modeling results.
• Transform the variables, as appropriate, to achieve adherence to the model assumptions.
• Decide whether building one comprehensive model is the best choice or consider building many
models on partitions of the data.
Exercises
1. In the Income linear regression example, consider the distribution of the outcome variable Income.
Income values tend to be highly skewed to the right (the distribution of values has a large tail to the
right). Does such a non-normally distributed outcome variable violate the general assumption of a
linear regression model? Provide supporting arguments.
2. In the use of a categorical variable with n possible values, explain the following:
a. Why only n − 1 binary variables are necessary
b. Why using n variables would be problematic
3. In the example of using Wyoming as the reference case, discuss the effect on the estimated model
parameters, including the intercept, if another state was selected as the reference case.
4. Describe how logistic regression can be used as a classifier.
5. Discuss how the ROC curve can be used to determine an appropriate threshold value for a classifier.
6. If the probability of an event occurring is 0.4, then
a. What is the odds ratio?
b. What is the log odds ratio?
7. If b₃ = −0.5 is an estimated coefficient in a logistic regression model, what is the effect on the odds ratio
for every one unit increase in the value of x₃?
ADVANCED ANALYTICAL THEORY AND METHODS: CLASSIFICATION
In addition to analytical methods such as clustering (Chapter 4, "Advanced Analytical Theory and Methods:
Clustering"), association rule learning (Chapter 5, "Advanced Analytical Theory and Methods: Association
Rules"), and modeling techniques like regression (Chapter 6, "Advanced Analytical Theory and Methods:
Regression"), classification is another fundamental learning method that appears in applications related
to data mining. In classification learning, a classifier is presented with a set of examples that are already
classified and, from these examples, the classifier learns to assign labels to unseen examples. In other words, the
primary task performed by classifiers is to assign class labels to new observations. Logistic regression
from the previous chapter is one of the popular classification methods. The set of labels for classifiers is
predetermined, unlike in clustering, which discovers the structure without a training set and allows the
data scientist optionally to create and assign labels to the clusters.
Most classification methods are supervised, in that they start with a training set of prelabeled observa-
tions to learn how likely the attributes of these observations may contribute to the classification of future
unlabeled observations. For example, existing marketing, sales, and customer demographic data can be
used to develop a classifier to assign a “purchase” or “no purchase” label to potential future customers.
Classification is widely used for prediction purposes. For example, by building a classifier on the tran-
scripts of United States Congressional floor debates, it can be determined whether the speeches represent
support or opposition to proposed legislation [1]. Classification can help health care professionals diagnose
heart disease patients [2]. Based on an e-mail’s content, e-mail providers also use classification to decide
whether the incoming e-mail messages are spam [3].
This chapter mainly focuses on two fundamental classification methods: decision trees and
naïve Bayes.
7.1 Decision Trees
A decision tree (also called prediction tree) uses a tree structure to specify sequences of decisions and
consequences. Given input X = {x1, x2, …, xn}, the goal is to predict a response or output variable Y. Each
member of the set {x1, x2, …, xn} is called an input variable. The prediction can be achieved by constructing
a decision tree with test points and branches. At each test point, a decision is made to pick a specific branch
and traverse down the tree. Eventually, a final point is reached, and a prediction can be made. Each test
point in a decision tree involves testing a particular input variable (or attribute), and each branch represents
the decision being made. Due to its flexibility and easy visualization, decision trees are commonly deployed
in data mining applications for classification purposes.
The input values of a decision tree can be categorical or continuous. A decision tree employs a structure
of test points (called nodes) and branches, which represent the decision being made. A node without fur-
ther branches is called a leaf node. The leaf nodes return class labels and, in some implementations, they
return the probability scores. A decision tree can be converted into a set of decision rules. In the following
example rule, income and mortgage_ amount are input variables, and the response is the output
variable d efa ult with a probability score.
IF <condition on income and mortgage_amount>
THEN default = True WITH PROBABILITY 75%
Decision trees have two varieties: classification trees and regression trees. Classification trees usu-
ally apply to output variables that are categorical-often binary-in nature, such as yes or no, purchase
or not purchase, and so on. Regression trees, on the other hand, can apply to output variables that are
numeric or continuous, such as the predicted price of a consumer good or the likelihood a subscription
will be purchased.
Decision trees can be applied to a variety of situations. They can be easily represented in a visual way,
and the corresponding decision rules are quite straightforward. Additionally, because the result is a series
of logical if-then statements, there is no underlying assumption of a linear (or nonlinear) relationship
between the input variables and the response variable.
7.1.1 Overview of a Decision Tree
Figure 7-1 shows an example of using a decision tree to predict whether customers will buy a product.
The term branch refers to the outcome of a decision and is visualized as a line connecting two nodes. If a
decision is numerical, the “greater than” branch is usually placed on the right, and the “less than” branch
is placed on the left. Depending on the nature of the variable, one of the branches may need to include
an “equal to” component.
Internal nodes are the decision or test points. Each internal node refers to an input variable or an
attribute. The top internal node is called the root. The decision tree in Figure 7-1 is a binary tree in that each
internal node has no more than two branches. The branching of a node is referred to as a split.
[Figure 7-1: a decision tree whose root (the top internal node) tests Gender; branches represent the outcome of each test, and the remaining internal nodes represent decisions on the Income and Age variables.]
FIGURE 7-1 Example of a decision tree
Sometimes decision trees may have more than two branches stemming from a node. For example, if
an input variable Weather is categorical and has three choices-Sunny, Rainy, and Snowy-the
corresponding node Weather in the decision tree may have three branches labeled as Sunny, Rainy,
and Snowy, respectively.
The depth of a node is the minimum number of steps required to reach the node from the root. In
Figure 7-1 for example, nodes Income and Age have a depth of one, and the four nodes on the bottom
of the tree have a depth of two.
Leaf nodes are at the end of the last branches on the tree. They represent class labels-the outcome
of all the prior decisions. The path from the root to a leaf node contains a series of decisions made at vari-
ous internal nodes.
In Figure 7-1, the root node splits into two branches with a Gender test. The right branch contains all
those records with the variable Gender equal to Male, and the left branch contains all those records with
the variable Gender equal to Female to create the depth 1 internal nodes. Each internal node effec-
tively acts as the root of a subtree, and a best test for each node is determined independently of the other
internal nodes. The left-hand side (LHS) internal node splits on a question based on the Income variable
to create leaf nodes at depth 2, whereas the right-hand side (RHS) splits on a question on the Age variable.
The decision tree in Figure 7-1 shows that females with income less than or equal to $45,000 and males
40 years old or younger are classified as people who would purchase the product. In traversing this tree,
age does not matter for females, and income does not matter for males.
Decision trees are widely used in practice. For example, to classify animals, questions (like cold-blooded
or warm-blooded, mammal or not mammal) are answered to arrive at a certain classification. Another
example is a checklist of symptoms during a doctor's evaluation of a patient. The artificial intelligence
engine of a video game commonly uses decision trees to control the autonomous actions of a character in
response to various scenarios. Retailers can use decision trees to segment customers or predict response
rates to marketing and promotions. Financial institutions can use decision trees to help decide if a loan
application should be approved or denied. In the case of loan approval, computers can use the logical
if-then statements to predict whether the customer will default on the loan. For customers with a
clear (strong) outcome, no human interaction is required; for observations that may not generate a clear
response, a human is needed for the decision.
By limiting the number of splits, a short tree can be created. Short trees are often used as components
(also called weak learners or base learners) in ensemble methods. Ensemble methods use multiple
predictive models to vote, and decisions can be made based on the combination of the votes. Some popu-
lar ensemble methods include random forest [4], bagging, and boosting [5]. Section 7.4 discusses these
ensemble methods more.
The simplest short tree is called a decision stump, which is a decision tree with the root immediately
connected to the leaf nodes. A decision stump makes a prediction based on the value of just a single input
variable. Figure 7-2 shows a decision stump to classify two species of an iris flower based on the petal width.
The figure shows that, if the petal width is smaller than 1.75 centimeters, it's Iris versicolor; otherwise, it's
Iris virginica.
FIGURE 7-2 Example of a decision stump
To illustrate how a decision tree works, consider the case of a bank that wants to market its term
deposit products (such as Certificates of Deposit) to the appropriate customers. Given the demographics
of clients and their reactions to previous campaign phone calls, the bank's goal is to predict which clients
would subscribe to a term deposit. The dataset used here is based on the original dataset collected from a
Portuguese bank on directed marketing campaigns as stated in the work by Moro et al. [6]. Figure 7-3 shows
a subset of the modified bank marketing dataset. This dataset includes 2,000 instances randomly drawn
from the original dataset, and each instance corresponds to a customer. To make the example simple, the
subset only keeps the following categorical variables: (1) job, (2) marital status, (3) education level,
(4) if the credit is in default, (5) if there is a housing loan, (6) if the customer currently has a personal
loan, (7) contact type, (8) result of the previous marketing campaign contact (poutcome), and finally
(9) if the client actually subscribed to the term deposit. Attributes (1) through (8) are input variables,
and (9) is considered the outcome. The outcome subscribed is either yes (meaning the customer
will subscribe to the term deposit) or no (meaning the customer won't subscribe). All the variables listed
earlier are categorical.
FIGURE 7-3 A subset of the bank marketing dataset, showing the attributes job, marital, education, default, housing, loan, contact, poutcome, and subscribed for the first 19 customers

A summary of the dataset shows the count of each value for the categorical attributes; a portion of that output follows.
 housing      loan        contact           poutcome        subscribed
 no : 916     no :1717    cellular :1287    failure: 210    no :1789
 yes:1084     yes: 283    telephone: 136    other  :  79    yes: 211
                          unknown  : 577    success:  58
                                            unknown:1653

(The month attribute is also summarized in the output, with may, jul, aug, jun, nov, and apr among the most frequent values.)
Attribute job includes the following values.

       admin.   blue-collar  entrepreneur     housemaid
          235           435            70            63
   management       retired self-employed      services
          423            92            69           168
      student    technician    unemployed       unknown
           36           339            60            10
Figure 7-4 shows a decision tree built over the bank marketing dataset. The root of the tree shows that
the overall fraction of the clients who have not subscribed to the term deposit is 1,789 out of the total
population of 2,000.
[Figure 7-4: the first split is on poutcome (failure, other, unknown versus success); the success branch splits on education (secondary, tertiary versus primary, unknown); and the secondary/tertiary branch splits on job (admin., blue-collar, management, retired, services, technician versus self-employed, student, unemployed).]
FIGURE 7-4 Using a decision tree to predict if a client will subscribe to a term deposit
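A tree like the one in Figure 7-4 can be fit with the rpart package. The following minimal sketch assumes the dataset has been loaded into a data frame named banktrain (a hypothetical name) and that the rpart and rpart.plot packages are installed; the split = "information" option requests the entropy-based criterion described in Section 7.1.2:

library(rpart)
library(rpart.plot)
fit <- rpart(subscribed ~ job + marital + education + default + housing +
               loan + contact + poutcome,
             method = "class", data = banktrain,
             parms = list(split = "information"))
rpart.plot(fit, type = 4, extra = 1)   # draw the fitted tree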
At each split, the decision tree algorithm picks the most informative attribute out of the remaining
attributes. The extent to which an attribute is informative is determined by measures such as entropy and
information gain, as detailed in Section 7.1.2.
At the first split, the decision tree algorithm chooses the poutcome attribute. There are two nodes at
depth=1. The left node is a leaf node representing a group for which the outcome of the previous market-
ing campaign contact is a failure, other, or unknown. For this group, 1,763 out of 1,942 clients have
not subscribed to the term deposit.
The right node represents the rest of the population, for which the outcome of the previous marketing
campaign contact is a success. For the population of this node, 32 out of 58 clients have subscribed to
the term deposit.
This node further splits into two nodes based on the education level. If the education level is either
secondary or tertiary, then 26 out of 50 of the clients have not subscribed to the term deposit. If
the education level is primary or unknown, then 8 out of 8 times the clients have subscribed.
The left node at depth 2 further splits based on attribute job. If the occupation is admin.,
blue-collar, management, retired, services, or technician, then 26 out of 45 clients
have not subscribed. If the occupation is self-employed, student, or unemployed, then 5 out
of 5 times the clients have subscribed.
7.1.2 The General Algorithm
In general, the objective of a decision tree algorithm is to construct a tree T from a training set S. If all the
records in S belong to some class C (subscribed = yes, for example), or if S is sufficiently pure (greater than
a preset threshold), then that node is considered a leaf node and assigned the label C. The purity of a node
is defined as its probability of the corresponding class. For example, in Figure 7-4, at the root,
P(subscribed = yes) = 1 − 1789/2000 = 10.55%; therefore, the root is only 10.55% pure on the subscribed = yes
class. Conversely, it is 89.45% pure on the subscribed = no class.
In contrast, if not all the records in S belong to class C or if S is not sufficiently pure, the algorithm
selects the next most informative attribute A (duration, marital, and so on) and partitions S according
to A's values. The algorithm constructs subtrees T1, T2, … for the subsets of S recursively until one of the
following criteria is met:
following criteria is met:
o All the leaf nodes in the tree satisfy the minimum purity threshold.
o The tree cannot be further split with the preset minimum purity threshold.
o Any other stopping criterion is satisfied (such as the maximum depth of the tree).
The first step in constructing a decision tree is to choose the most informative attribute. A common way
to identify the most informative attribute is to use entropy-based methods, which are used by decision tree
learning algorithms such as ID3 (or Iterative Dichotomiser 3) [7] and C4.5 [8]. The entropy methods select
the most informative attribute based on two basic measures:
o Entropy, which measures the impurity of an attribute
o Information gain, which measures the purity of an attribute
Given a class X and its label x ∈ X, let P(x) be the probability of x. H_X, the entropy of X, is defined as
shown in Equation 7-1.

H_X = − Σ_{x∈X} P(x) · log₂ P(x)    (7-1)
Equation 7-1 shows that entropy H_X becomes 0 when all P(x) is 0 or 1. For a binary classification (true
or false), H_X is zero if P(x), the probability of each label x, is either zero or one. On the other hand, H_X
achieves the maximum entropy when all the class labels are equally probable. For a binary classification,
H_X = 1 if the probability of all class labels is 50/50. The maximum entropy increases as the number of pos-
sible outcomes increases.
As an example of a binary random variable, consider tossing a coin with known, not necessarily
fair, probabilities of coming up heads or tails. The corresponding entropy graph is shown in Figure 7-5. Let
x = 1 represent heads and x = 0 represent tails. The entropy of the unknown result of the next toss is maxi-
mized when the coin is fair. That is, when heads and tails have equal probability P(x = 1) = P(x = 0) = 0.5,
entropy H_X = −(0.5 × log₂ 0.5 + 0.5 × log₂ 0.5) = 1. On the other hand, if the coin is not fair, the probabilities
of heads and tails would not be equal and there would be less uncertainty. As an extreme case, when the
probability of tossing a head is equal to 0 or 1, the entropy is minimized to 0. Therefore, the entropy for a
completely pure variable is 0 and is 1 for a set with equal occurrences for both the classes (head and tail, or
yes and no).
[Figure 7-5: entropy of the coin toss as a function of the probability of heads.]

In the bank marketing example, P(subscribed = yes) = 0.1055 and P(subscribed = no) = 0.8945. According to Equation 7-1, the base entropy is
H_subscribed = −0.1055 · log₂ 0.1055 − 0.8945 · log₂ 0.8945 ≈ 0.4862.
The next step is to identify the conditional entropy for each attribute. Given an attribute X, its value x,
its outcome Y, and its value y, conditional entropy H_{Y|X} is the remaining entropy of Y given X, formally defined
as shown in Equation 7-2.

H_{Y|X} = Σ_{x∈X} P(x) · H(Y | X = x)
        = − Σ_{x∈X} P(x) Σ_{y∈Y} P(y|x) · log₂ P(y|x)    (7-2)
Consider the bank marketing scenario: if the attribute contact is chosen, X = {cellular,
telephone, unknown}. The conditional entropy of contact considers all three values.
Table 7-1 lists the probabilities related to the contact attribute. The top row of the table displays the
probabilities of each value of the attribute. The next two rows contain the probabilities of the class labels
conditioned on the contact.
TABLE 7-1 Conditional Entropy Example

                               Cellular   Telephone   Unknown
P(contact)                       0.6435      0.0680    0.2885
P(subscribed=yes | contact)      0.1399      0.0809    0.0347
P(subscribed=no | contact)       0.8601      0.9192    0.9653
The conditional entropy of the contact attribute is computed as shown here.
H_subscribed|contact = −[ 0.6435 · (0.1399 · log₂ 0.1399 + 0.8601 · log₂ 0.8601)
                        + 0.0680 · (0.0809 · log₂ 0.0809 + 0.9192 · log₂ 0.9192)
                        + 0.2885 · (0.0347 · log₂ 0.0347 + 0.9653 · log₂ 0.9653) ]
                     = 0.4661
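The same arithmetic can be scripted; a minimal sketch reusing the entropy() helper from the earlier sketch and the probabilities in Table 7-1:

p_contact <- c(cellular = 0.6435, telephone = 0.0680, unknown = 0.2885)
p_yes     <- c(0.1399, 0.0809, 0.0347)   # P(subscribed = yes | contact)
# weighted sum of the within-value entropies, as in Equation 7-2
h_cond <- sum(p_contact * sapply(p_yes, function(p) entropy(c(p, 1 - p))))
h_cond   # approximately 0.4661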
Computation inside the parentheses is on the entropy of the class labels within a single contact
value. Note that the conditional entropy is always less than or equal to the base entropy; that is,
H_subscribed|contact ≤ H_subscribed. The conditional entropy is smaller than the base entropy when the attribute
and the outcome are correlated. In the worst case, when the attribute is uncorrelated with the outcome,
the conditional entropy equals the base entropy.
The information gain of an attribute A is defined as the difference between the base entropy and the
conditional entropy of the attribute, as shown in Equation 7-3.

InfoGain_A = H_Y − H_{Y|A}    (7-3)

In the bank marketing example, the information gain of the contact attribute is shown in
Equation 7-4.

InfoGain_contact = H_subscribed − H_subscribed|contact = 0.4862 − 0.4661 = 0.0201    (7-4)
Information gain compares the degree of purity of the parent node before a split with the degree of
purity of the child node after a split. At each split, an attribute with the greatest information gain is consid-
ered the most informative attribute. Information gain indicates the purity of an attribute.
The result of information gain for all the input variables is shown in Table 7-2. Attribute poutcome has
the most information gain and is the most informative variable. Therefore, poutcome is chosen for the
first split of the decision tree, as shown in Figure 7-4. The values of information gain in Table 7-2 are small
in magnitude, but the relative difference matters. The algorithm splits on the attribute with the largest
information gain at each round.
TABLE 7-2 Calculating Information Gain of Input Variables for the First Split

Attribute    Information Gain
poutcome               0.0289
contact                0.0201
housing                0.0133
job                    0.0101
education              0.0034
marital                0.0018
loan                   0.0010
default                0.0005
Detecting Significant Splits
Quite often it is necessary to measure the significance of a split in a decision tree, especially when
the information gain is small, like in Table 7-2.
Let N_A and N_B be the number of class A and class B records in the parent node. Let N_AL represent the number
of class A records going to the left child node, N_BL represent the number of class B records going to the left child node,
N_AR represent the number of class A records going to the right child node, and N_BR represent the number of
class B records going to the right child node.

Let p_L and p_R denote the proportion of data going to the left and right node, respectively.

p_L = (N_AL + N_BL) / (N_A + N_B)
p_R = (N_AR + N_BR) / (N_A + N_B)
The following measure computes the significance of a split. In other words, it measures how much
the split deviates from what would be expected in the random data.

K = Σ_{j∈{L,R}} [ (N_Aj − N′_Aj)² / N′_Aj + (N_Bj − N′_Bj)² / N′_Bj ]

where N′_AL = N_A · p_L, N′_BL = N_B · p_L, N′_AR = N_A · p_R, and N′_BR = N_B · p_R.

If K is small, the information gain from the split is not significant. If K is big, it would suggest the
information gain from the split is significant.
Take the first split of the decision tree in Figure 7-4 on the variable poutcome, for example:

N_A = 1789, N_B = 211, N_AL = 1763, N_BL = 179, N_AR = 26, N_BR = 32

Following are the proportions of data going to the left and right nodes:

p_L = 1942/2000 = 0.971 and p_R = 58/2000 = 0.029

The quantities N'_AL, N'_BL, N'_AR, and N'_BR represent the number of each class going to the left or right node if the data were split at random. Their values follow:

N'_AL = 1737.119, N'_BL = 204.881, N'_AR = 51.881, and N'_BR = 6.119

Therefore, K = 126.0324, which suggests the split on poutcome is significant.
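The following R sketch reproduces the K value from the counts above. It is only an illustration of the calculation described in this subsection; the variable names are chosen here (NA is reserved in R, hence NA_).

NA_ <- 1789; NB <- 211                 # class counts in the parent node
NAL <- 1763; NBL <- 179                # counts going to the left child
NAR <- 26;   NBR <- 32                 # counts going to the right child

pL <- (NAL + NBL) / (NA_ + NB)         # 0.971
pR <- (NAR + NBR) / (NA_ + NB)         # 0.029

# expected counts in each child node if the split were random
expected <- c(NA_ * pL, NB * pL, NA_ * pR, NB * pR)
observed <- c(NAL, NBL, NAR, NBR)

K <- sum((observed - expected)^2 / expected)   # approximately 126.03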
After each split, the algorithm looks at all the records at a leaf node, and the information gain of each
candidate attribute is calculated again over these records. The next split is on the attribute with the high-
est information gain. A record can only belong to one leaf node after all the splits, but depending on the
implementation, an attribute may appear in more than one split of the tree. This process of partitioning the
records and finding the most informative attribute is repeated until the nodes are pure enough, or there
is insufficient information gain by splitting on more attributes. Alternatively, one can stop the growth of the tree when all the records at a leaf node belong to a certain class (for example, subscribed = yes) or all the records have identical attribute values.
In the previous bank marketing example, to keep it simple, the dataset only includes categorical vari-
ables. Assume the dataset now includes a continuous variable called duration, representing the number
of seconds the last phone conversation with the bank lasted as part of the previous marketing campaign.
A continuous variable needs to be divided into a disjoint set of regions with the highest information gain.
A brute-force method is to consider every value of the continuous variable in the training data as a candidate split position. This brute-force method is computationally inefficient. To reduce the complexity, the training records can be sorted based on the duration, and the candidate splits can be identified by taking the midpoints between two adjacent sorted values. For example, if the duration consists of the sorted values {140, 160, 180, 200}, the candidate splits are 150, 170, and 190, as in the sketch that follows.
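A minimal R sketch of this midpoint rule, using the four example values from the text, might look like the following.

duration <- c(180, 140, 200, 160)               # example values from the text
v <- sort(unique(duration))                     # 140 160 180 200
candidates <- (head(v, -1) + tail(v, -1)) / 2   # 150 170 190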
Figure 7-6 shows what the decision tree may look like when considering the duration attribute. The root splits into two partitions: those clients with duration < 456 seconds, and those with duration ≥ 456 seconds. Note that for aesthetic purposes, labels for the job and contact attributes in the figure are abbreviated.
[FIGURE 7-6 Decision tree for the bank marketing example with the duration attribute: the root splits on duration < 456 versus duration >= 456, with further splits on poutcome, contact, job, and duration; labels are abbreviated]
Figure 7-8 shows a dataset partitioned in the space of two attributes (x1 and x2) into decision surfaces; the corresponding decision tree is on the right side of the figure. A decision surface corresponds to a leaf node of the tree, and it can be reached by traversing from the root of the tree following a series of decisions according to the value of an attribute. The decision surface can only be axis-aligned for the decision tree.
FIGURE 7-8 Decision surfaces can only be axis-aligned
The structure of a decision tree is sensitive to small variations in the training data. Although the dataset is
the same, constructing two decision trees based on two different subsets may result in very different trees. If
a tree is too deep, overfitting may occur, because each split reduces the training data for subsequent splits.
Decision trees are not a good choice if the dataset contains many irrelevant variables. This is different from the notion that they are robust to redundant and correlated variables. If the dataset
contains redundant variables, the resulting decision tree ignores all but one of these variables because
the algorithm cannot detect information gain by including more redundant variables. On the other hand,
if the dataset contains irrelevant variables and if these variables are accidentally chosen as splits in the
tree, the tree may grow too large and may end up with less data at every split, where overfitting is likely
to occur. To address this problem, feature selection can be introduced in the data preprocessing phase to
eliminate the irrelevant variables.
Although decision trees are able to handle correlated variables, decision trees are not well suited when
most of the variables in the training set are correlated, since overfitting is likely to occur. To overcome
the issue of instability and potential overfitting of deep trees, one can combine the decisions of several
randomized shallow decision trees-the basic idea of another classifier called random forest [4]-or use
ensemble methods to combine several weak learners for better classification. These methods have been
shown to improve predictive power compared to a single decision tree.
For binary decisions, a decision tree works better if the training dataset consists of records with an even
probability of each result. In other words, the root of the tree has a 50% chance of either classification.
This occurs by randomly selecting training records from each possible classification in equal numbers. It
counteracts the likelihood that a tree will stump out early by passing purity tests because of bias in the
training data.
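As a hedged illustration of such balanced sampling (not code from the text), the following R sketch draws an equal number of records from each class. It assumes a data frame with a binary class column, here the banktrain data frame with the subscribed column used in the ROC example later in this chapter.

# sample an equal number of records from each class
n_per_class <- min(table(banktrain$subscribed))
idx <- unlist(lapply(split(seq_len(nrow(banktrain)), banktrain$subscribed),
                     function(i) i[sample.int(length(i), n_per_class)]))
balanced_train <- banktrain[idx, ]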
When using methods such as logistic regression on a dataset with many variables, decision trees can help
determine which variables are the most useful to select based on information gain. Then these variables
can be selected for the logistic regression. Decision trees can also be used to prune redundant variables.
7.1.5 Decision Trees in R

In R, rpart is for modeling decision trees, and the optional package rpart.plot enables the plotting of a tree. The rest of this section shows an example of how to use decision trees in R with rpart.plot to predict whether to play golf given factors such as weather outlook, temperature, humidity, and wind.

In R, first set the working directory and initialize the packages.

setwd("c:/")
install.packages("rpart.plot")   # install package rpart.plot
library("rpart")                 # load libraries
library("rpart.plot")
The working directory contains a comma-separated-value (CSV) file named DTdata.csv. The file has a header row, followed by 10 rows of training data.

Play,Outlook,Temperature,Humidity,Wind
yes,rainy,cool,normal,FALSE
no,rainy,cool,normal,TRUE
yes,overcast,hot,high,FALSE
no,sunny,mild,high,FALSE
yes,rainy,cool,normal,FALSE
yes,sunny,cool,normal,FALSE
yes,rainy,cool,normal,FALSE
yes,sunny,hot,normal,FALSE
yes,overcast,mild,high,TRUE
no,sunny,mild,high,TRUE
The CSV file contains five attributes: Play, Outlook, Temperature, Humidity, and Wind. Play would be the output variable (or the predicted class), and Outlook, Temperature, Humidity, and Wind would be the input variables. In R, read the data from the CSV file in the working directory and display the content.

play_decision <- read.table("DTdata.csv", header=TRUE, sep=",")
play_decision
       Play  Outlook Temperature Humidity  Wind
1       yes    rainy        cool   normal FALSE
2        no    rainy        cool   normal  TRUE
3       yes overcast         hot     high FALSE
4        no    sunny        mild     high FALSE
5       yes    rainy        cool   normal FALSE
6       yes    sunny        cool   normal FALSE
7       yes    rainy        cool   normal FALSE
8       yes    sunny         hot   normal FALSE
9       yes overcast        mild     high  TRUE
10       no    sunny        mild     high  TRUE
Display a summary of play_decision.

summary(play_decision)
  Play       Outlook  Temperature  Humidity     Wind
 no :3   overcast:2   cool:5       high  :4   Mode :logical
 yes:7   rainy   :4   hot :2       normal:6   FALSE:7
         sunny   :4   mild:3                  TRUE :3
                                               NA's :0
The rpart function builds a model of recursive partitioning and regression trees [9]. The following code snippet shows how to use the rpart function to construct a decision tree.

fit <- rpart(Play ~ Outlook + Temperature + Humidity + Wind,
             method="class",
             data=play_decision,
             control=rpart.control(minsplit=1),
             parms=list(split="information"))
The rpart function has four parameters. The first parameter, Play ~ Outlook + Temperature + Humidity + Wind, is the model formula indicating that attribute Play can be predicted based on attributes Outlook, Temperature, Humidity, and Wind. The second parameter, method, is set to "class", telling R it is building a classification tree. The third parameter, data, specifies the data frame containing those attributes mentioned in the formula. The fourth parameter, control, is optional and controls the tree growth. In the preceding example, control=rpart.control(minsplit=1) requires that each node have at least one observation before attempting a split. The minsplit=1 makes sense for the small dataset, but for larger datasets minsplit could be set to 10% of the dataset size to combat overfitting. Besides minsplit, other parameters are available to control the construction of the decision tree. For example, rpart.control(maxdepth=10, cp=0.001) limits the depth of the tree to no more than 10, and a split must decrease the overall lack of fit by a factor of 0.001 before being attempted. The last parameter (parms) specifies the purity measure being used for the splits. The value of split can be either information (for using the information gain) or gini (for using the Gini index).
Enter summary(fit) to produce a summary of the model built from rpart.

The output includes a summary of every node in the constructed decision tree. If a node is a leaf, the output includes both the predicted class label (yes or no for Play) and the class probabilities, P(Play). The leaf nodes include node numbers 4, 5, 6, and 7. If a node is internal, the output in addition displays the number of observations that lead to each child node and the improvement that each attribute may bring for the next split. These internal nodes include numbers 1, 2, and 3.
summary(fit)
Call:
rpart(formula = Play ~ Outlook + Temperature + Humidity + Wind,
    data = play_decision, method = "class",
    parms = list(split = "information"),
    control = rpart.control(minsplit = 1))
  n= 10
          CP nsplit rel error   xerror      xstd
1  0.3333333      0 1.0000000 1.000000 0.4830459
2  0.0100000      3 0.0000000 1.666667 0.5270463

Variable importance
       Wind     Outlook Temperature
         51          29          20
Node number 1: 10 observations,    complexity param=0.3333333
  predicted class=yes  expected loss=0.3  P(node) =1
    class counts:     3     7
   probabilities: 0.300 0.700
  left son=2 (3 obs) right son=3 (7 obs)
  Primary splits:
      Temperature splits as  RRL,     improve=1.3282860, (0 missing)
      Wind        < 0.5 to the right, improve=1.3282860, (0 missing)
      Outlook     splits as  RLL,     improve=0.8161371, (0 missing)
      Humidity    splits as  LR,      improve=0.6326870, (0 missing)
  Surrogate splits:
      Wind < 0.5 to the right, agree=0.8, adj=0.333, (0 split)
Node number 2: 3 observations,    complexity param=0.3333333
  predicted class=no  expected loss=0.3333333  P(node) =0.3
    class counts:     2     1
   probabilities: 0.667 0.333
  left son=4 (2 obs) right son=5 (1 obs)
  Primary splits:
      Outlook splits as  R-L,        improve=1.9095430, (0 missing)
      Wind    < 0.5 to the left,     improve=0.5232481, (0 missing)
Node number 3: 7 observations, ...
FIGURE 7-9 A decision tree built from DTdata.csv
The decisions in Figure 7-9 are abbreviated. Use the following command to spell out the full names and display the classification rate at each node.

rpart.plot(fit, type=4, extra=2, clip.right.labs=FALSE,
           varlen=0, faclen=0)
The decision tree can be used to predict outcomes for new datasets. Consider a testing set that contains the following record.

Outlook="rainy", Temperature="mild", Humidity="high", Wind=FALSE

The goal is to predict the play decision of this record. The following code loads the data into R as a data frame newdata. Note that the training set does not contain this case.

newdata <- data.frame(Outlook="rainy", Temperature="mild",
                      Humidity="high", Wind=FALSE)
newdata
  Outlook Temperature Humidity  Wind
1   rainy        mild     high FALSE

Next, use the predict function to generate predictions from a fitted rpart object. The format of the predict function follows.

predict(object, newdata = list(),
        type = c("vector", "prob", "class", "matrix"))
The parameter type is a character string denoting the type of the predicted value. Set it to either prob or class to predict using a decision tree model and receive the result as either the class probabilities or just the class. The output shows that the record is assigned probability 1 for Play=no and probability 0 for Play=yes. Therefore, in both cases, the decision tree predicts that the play decision of the testing set is not to play.

predict(fit, newdata=newdata, type="prob")
  no yes
1  1   0

predict(fit, newdata=newdata, type="class")
 1
no
Levels: no yes
7.2 Naïve Bayes

Naïve Bayes is a probabilistic classification method based on Bayes' theorem (or Bayes' law) with a few tweaks. Bayes' theorem gives the relationship between the probabilities of two events and their conditional probabilities. Bayes' law is named after the English mathematician Thomas Bayes.
A naïve Bayes classifier assumes that the presence or absence of a particular feature of a class is unrelated to the presence or absence of other features. For example, an object can be classified based on its attributes such as shape, color, and weight. A reasonable classification for an object that is spherical, yellow, and less than 60 grams in weight may be a tennis ball. Even if these features depend on each other or upon the existence of the other features, a naïve Bayes classifier considers all these properties to contribute independently to the probability that the object is a tennis ball.

The input variables are generally categorical, but variations of the algorithm can accept continuous variables. There are also ways to convert continuous variables into categorical ones. This process is often referred to as the discretization of continuous variables. In the tennis ball example, a continuous variable such as weight can be grouped into intervals to be converted into a categorical variable. For an attribute such as income, the attribute can be converted into categorical values as shown below.

• Low Income: income < $10,000
• Working Class: $10,000 ≤ income < $50,000
• Middle Class: $50,000 ≤ income < $1,000,000
• Upper Class: income ≥ $1,000,000
The output typically includes a class label and its corresponding probability score. The probability score is not the true probability of the class label, but it's proportional to the true probability. As shown later in the chapter, in most implementations, the output includes the log probability for the class, and class labels are assigned based on the highest values.
Because naïve Bayes classifiers are easy to implement and can execute efficiently even without prior knowledge of the data, they are among the most popular algorithms for classifying text documents. Spam filtering is a classic use case of naïve Bayes text classification. Bayesian spam filtering has become a popular mechanism to distinguish spam e-mail from legitimate e-mail. Many modern mail clients implement variants of Bayesian spam filtering.

Naïve Bayes classifiers can also be used for fraud detection [11]. In the domain of auto insurance, for example, based on a training set with attributes such as driver's rating, vehicle age, vehicle price, historical claims by the policy holder, police report status, and claim genuineness, naïve Bayes can provide probability-based classification of whether a new claim is genuine [12].
7.2.1 Bayes' Theorem

The conditional probability of event C occurring, given that event A has already occurred, is denoted as P(C|A), which can be found using the formula in Equation 7-6.

P(C|A) = P(A ∩ C) / P(A)    (7-6)

Equation 7-7 can be obtained with some minor algebra and substitution of the conditional probability:

P(C|A) = P(A|C) · P(C) / P(A)    (7-7)

where C is the class label C ∈ {c1, c2, ..., cn} and A is the observed attributes A = {a1, a2, ..., am}. Equation 7-7 is the most common form of Bayes' theorem.

Mathematically, Bayes' theorem gives the relationship between the probabilities of C and A, P(C) and P(A), and the conditional probabilities of C given A and A given C, namely P(C|A) and P(A|C).

Bayes' theorem is significant because quite often P(C|A) is much more difficult to compute than P(A|C) and P(C) from the training data. By using Bayes' theorem, this problem can be circumvented.
An example better illustrates the use of Bayes' theorem. John flies frequently and likes to upgrade his seat to first class. He has determined that if he checks in for his flight at least two hours early, the probability that he will get an upgrade is 0.75; otherwise, the probability that he will get an upgrade is 0.35. With his busy schedule, he checks in at least two hours before his flight only 40% of the time. Suppose John did not receive an upgrade on his most recent attempt. What is the probability that he did not arrive two hours early?

Let C = {John arrived at least two hours early} and A = {John received an upgrade}; then ¬C = {John did not arrive two hours early} and ¬A = {John did not receive an upgrade}.

John checked in at least two hours early only 40% of the time, or P(C) = 0.4. Therefore, P(¬C) = 1 - P(C) = 0.6.

The probability that John received an upgrade given that he checked in early is 0.75, or P(A|C) = 0.75. The probability that John received an upgrade given that he did not arrive two hours early is 0.35, or P(A|¬C) = 0.35. Therefore, P(¬A|¬C) = 0.65.

The probability that John received an upgrade, P(A), can be computed as shown in Equation 7-8.

P(A) = P(A ∩ C) + P(A ∩ ¬C)
     = P(C) · P(A|C) + P(¬C) · P(A|¬C)
     = 0.4 × 0.75 + 0.6 × 0.35
     = 0.51    (7-8)
Thus, the probability that John did not receive an upgrade is P(¬A) = 0.49. Using Bayes' theorem, the probability that John did not arrive two hours early given that he did not receive his upgrade is shown in Equation 7-9.

P(¬C|¬A) = P(¬A|¬C) · P(¬C) / P(¬A)
         = (0.65 × 0.6) / 0.49 ≈ 0.796    (7-9)
Another example involves computing the probability that a patient carries a disease based on the result
of a lab test. Assume that a patient named Mary took a lab test for a certain disease and the result came
back positive. The test returns a positive result in 95% of the cases in which the disease is actually present,
and it returns a positive result in 6% of the cases in which the disease is not present. Furthermore, 1% of
the entire population has this disease. What is the probability that Mary actually has the disease, given
that the test is positive?
Let C = {having the disease} and A = {testing positive}. The goal is to solve for the probability of having the disease, given that Mary has a positive test result, P(C|A). From the problem description, P(C) = 0.01, P(¬C) = 0.99, P(A|C) = 0.95, and P(A|¬C) = 0.06.

Bayes' theorem defines P(C|A) = P(A|C)P(C)/P(A). The probability of testing positive, that is P(A), needs to be computed first. That computation is shown in Equation 7-10.

P(A) = P(A ∩ C) + P(A ∩ ¬C)
     = P(C) · P(A|C) + P(¬C) · P(A|¬C)
     = 0.01 × 0.95 + 0.99 × 0.06 = 0.0689    (7-10)

According to Bayes' theorem, the probability of having the disease, given that Mary has a positive test result, is shown in Equation 7-11.

P(C|A) = P(A|C)P(C) / P(A) = (0.95 × 0.01) / 0.0689 ≈ 0.1379    (7-11)
That means that the probability of Mary actually having the disease given a positive test result is only
13.79%. This result indicates that the lab test may not be a good one. The likelihood of having the disease
was 1% when the patient walked in the door and only 13.79% when the patient walked out, which would
suggest further tests.
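As an illustrative check (not code from the text), the arithmetic of Equations 7-10 and 7-11 can be written out in a few lines of R.

p_disease       <- 0.01    # P(C)
p_pos_given_d   <- 0.95    # P(A|C)
p_pos_given_nod <- 0.06    # P(A|not C)

# P(A), the probability of testing positive (Equation 7-10)
p_pos <- p_disease * p_pos_given_d + (1 - p_disease) * p_pos_given_nod   # 0.0689

# P(C|A), the probability of disease given a positive test (Equation 7-11)
p_d_given_pos <- p_pos_given_d * p_disease / p_pos                       # approximately 0.1379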
A more general form of Bayes' theorem assigns a classification label to an object with multiple attributes A = {a1, a2, ..., am} such that the label corresponds to the largest value of P(ci|A). The probability that a set of attribute values A (composed of m variables a1, a2, ..., am) should be labeled with a classification label ci equals the probability that the set of variables a1, a2, ..., am given ci is true, times the probability of ci, divided by the probability of a1, a2, ..., am. Mathematically, this is shown in Equation 7-12.

P(ci|A) = P(a1, a2, ..., am | ci) · P(ci) / P(a1, a2, ..., am),   i = 1, 2, ..., n    (7-12)
Consider the bank marketing example presented in Section 7.1 on predicting whether a customer would subscribe to a term deposit. Let A be a list of attributes {job, marital, education, default, housing, loan, contact, poutcome}. According to Equation 7-12, the problem is essentially to calculate P(ci|A), where ci ∈ {subscribed = yes, subscribed = no}.
7.2.2 Naïve Bayes Classifier

With two simplifications, Bayes' theorem can be extended to become a naïve Bayes classifier.

The first simplification is to use the conditional independence assumption. That is, each attribute is conditionally independent of every other attribute given a class label ci. See Equation 7-13.

P(a1, a2, ..., am | ci) = P(a1|ci) · P(a2|ci) ··· P(am|ci) = ∏(j=1..m) P(aj|ci)    (7-13)

Therefore, this naïve assumption simplifies the computation of P(a1, a2, ..., am | ci).
The second simplification is to ignore the denominator P(a1, a2, ..., am). Because P(a1, a2, ..., am) appears in the denominator of P(ci|A) for all values of i, removing the denominator will have no impact on the relative probability scores and will simplify calculations.

Naïve Bayes classification applies the two simplifications mentioned earlier and, as a result, P(ci | a1, a2, ..., am) is proportional to the product of P(aj|ci) times P(ci). This is shown in Equation 7-14.

P(ci|A) ∝ P(ci) · ∏(j=1..m) P(aj|ci),   i = 1, 2, ..., n    (7-14)

The mathematical symbol ∝ indicates that the LHS P(ci|A) is directly proportional to the RHS.
Section 7.1 has introduced a bank marketing dataset (Figure 7-3). This section shows how to use the naïve Bayes classifier on this dataset to predict whether the clients would subscribe to a term deposit.

Building a naïve Bayes classifier requires knowing certain statistics, all calculated from the training set. The first requirement is to collect the probabilities of all class labels, P(ci). In the presented example, these would be the probability that a client will subscribe to the term deposit and the probability the client will not. From the data available in the training set, P(subscribed = yes) ≈ 0.11 and P(subscribed = no) ≈ 0.89.
The second thing the naïve Bayes classifier needs to know is the conditional probability of each attribute aj given each class label ci, namely P(aj|ci). The training set contains several attributes: job, marital, education, default, housing, loan, contact, and poutcome. For each attribute and its possible values, computing the conditional probabilities given subscribed = yes or subscribed = no is required. For example, relative to the marital attribute, the following conditional probabilities are calculated.
P(single | subscribed = yes) ≈ 0.35
P(married | subscribed = yes) ≈ 0.53
P(divorced | subscribed = yes) ≈ 0.12
P(single | subscribed = no) ≈ 0.28
P(married | subscribed = no) ≈ 0.61
P(divorced | subscribed = no) ≈ 0.11
After training the classifier and computing all the required statistics, the naïve Bayes classifier can be tested over the testing set. For each record in the testing set, the naïve Bayes classifier assigns the class label ci that maximizes P(ci) · ∏(j=1..m) P(aj|ci).
Table 7-4 contains a single record for a client who has a career in management, is married, holds a sec-
ondary degree, has credit not in default, has a housing loan but no personal loans, prefers to be contacted
via cellular, and whose outcome of the previous marketing campaign contact was a success. Is this client
likely to subscribe to the term deposit?
TABLE 7-4 Record of an Additional Client
Job Marital Education Default Housing Loan Contact Poutcome
management married secondary no yes no cellular Success
The conditional probabilities shown in Table 7-5 can be calculated after building the classifier with the
training set.
TABLE 7-5 Computed Conditional Probabilities for the New Record

j   aj                       P(aj | subscribed=yes)   P(aj | subscribed=no)
1   job = management         0.22                     0.21
2   marital = married        0.53                     0.61
3   education = secondary    0.46                     0.51
4   default = no             0.99                     0.98
5   housing = yes            0.35                     0.57
6   loan = no                0.90                     0.85
7   contact = cellular       0.85                     0.62
8   poutcome = success       0.15                     0.01
Because P(ci | a1, a2, ..., am) is proportional to the product of P(aj|ci) (j ∈ [1, m]) times P(ci), the naïve Bayes classifier assigns the class label ci that results in the greatest value over all i. Thus, P(ci | a1, a2, ..., am) is computed for each ci with P(ci|A) ∝ P(ci) · ∏(j=1..m) P(aj|ci).

For A = {management, married, secondary, no, yes, no, cellular, success}, the scores are
P(yes|A) ∝ 0.11 × 0.22 × 0.53 × 0.46 × 0.99 × 0.35 × 0.90 × 0.85 × 0.15 ≈ 0.00023 and
P(no|A) ∝ 0.89 × 0.21 × 0.61 × 0.51 × 0.98 × 0.57 × 0.85 × 0.62 × 0.01 ≈ 0.00017.
Because P(yes|A) is the greater of the two scores, the record is assigned the class label subscribed = yes. That is, the client is classified as likely to subscribe to the term deposit.
Although the scores are small in magnitude, it is the ratio of P(yes|A) and P(no|A) that matters. In fact, the scores of P(yes|A) and P(no|A) are not the true probabilities but are only proportional to the true probabilities, as shown in Equation 7-14. After all, if the scores were indeed the true probabilities, the sum of P(yes|A) and P(no|A) would be equal to one. When looking at problems with a large number of attributes, or attributes with a high number of levels, these values can become very small in magnitude (close to zero), resulting in even smaller differences of the scores. This is the problem of numerical underflow, caused by multiplying several probability values that are close to zero. A way to alleviate the problem is to compute
the logarithm of the products, which is equivalent to the summation of the logarithms of the probabilities. Thus, the naïve Bayes formula can be rewritten as shown in Equation 7-15.

P(ci|A) ∝ log P(ci) + Σ(j=1..m) log P(aj|ci),   i = 1, 2, ..., n    (7-15)
Although the risk of underflow may increase as the number of attributes increases, the use of logarithms
is usually applied regardless of the number of attribute dimensions.
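As an illustration (not code from the text), the log-scale scoring of Equation 7-15 for the client in Table 7-4 can be written in R using the priors given earlier and the conditional probabilities from Table 7-5.

prior    <- c(yes=0.11, no=0.89)
cond_yes <- c(0.22, 0.53, 0.46, 0.99, 0.35, 0.90, 0.85, 0.15)   # P(aj | yes), Table 7-5
cond_no  <- c(0.21, 0.61, 0.51, 0.98, 0.57, 0.85, 0.62, 0.01)   # P(aj | no),  Table 7-5

score_yes <- log(prior["yes"]) + sum(log(cond_yes))
score_no  <- log(prior["no"])  + sum(log(cond_no))

# the class with the larger log score is assigned
if (score_yes > score_no) "subscribed = yes" else "subscribed = no"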
7.2.3 Smoothing

If one of the attribute values does not appear with one of the class labels within the training set, the corresponding P(aj|ci) will equal zero. When this happens, the resulting P(ci|A) from multiplying all the P(aj|ci) (j ∈ [1, m]) immediately becomes zero regardless of how large some of the conditional probabilities are. Therefore overfitting occurs. Smoothing techniques can be employed to adjust the probabilities of P(aj|ci) and to ensure a nonzero value of P(ci|A). A smoothing technique assigns a small nonzero probability to rare events not included in the training dataset. Also, the smoothing addresses the possibility of taking the logarithm of zero that may occur in Equation 7-15.

There are various smoothing techniques. Among them is the Laplace smoothing (or add-one) technique, which pretends to see every outcome once more than it actually appears. This technique is shown in Equation 7-16.

P*(x) = (count(x) + 1) / Σx [count(x) + 1]    (7-16)
For example, say that 100 clients subscribe to the term deposit, with 20 of them single, 70 married, and 10 divorced. The "raw" probability is P(single | subscribed = yes) = 20/100 = 0.2. With Laplace smoothing adding one to the counts, the adjusted probability becomes P*(single | subscribed = yes) = (20 + 1) / [(20 + 1) + (70 + 1) + (10 + 1)] ≈ 0.2039.
One problem of the Laplace smoothing is that it may assign too much probability to unseen events. To address this problem, Laplace smoothing can be generalized to use ε instead of 1, where typically ε ∈ [0, 1]. See Equation 7-17.

P*(x) = (count(x) + ε) / Σx [count(x) + ε]    (7-17)
Smoothing techniques are available in most standard software packages for naïve Bayes classifiers. However, if for some reason (like performance concerns) the naïve Bayes classifier needs to be coded directly into an application, the smoothing and logarithm calculations should be incorporated into the implementation.
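A minimal sketch of such a smoothed estimate, applied to the marital counts from the earlier example, might look like the following; the function name is illustrative only.

# add-epsilon (Laplace) smoothing of raw counts, as in Equation 7-17
laplace_smooth <- function(counts, eps = 1) {
  (counts + eps) / sum(counts + eps)
}

marital_counts <- c(single=20, married=70, divorced=10)
laplace_smooth(marital_counts)             # single approximately 0.2039 (add-one)
laplace_smooth(marital_counts, eps=0.01)   # a smaller adjustment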
7.2.4 Diagnostics

Unlike logistic regression, naïve Bayes classifiers can handle missing values. Naïve Bayes is also robust to irrelevant variables, that is, variables that are distributed among all the classes and whose effects are not pronounced.
The model is simple to implement even without using libraries. The prediction is based on counting the occurrences of events, making the classifier efficient to run. Naïve Bayes is computationally efficient and is able to handle high-dimensional data efficiently. Related research [13] shows that the naïve Bayes classifier in many cases is competitive with other learning algorithms, including decision trees and neural networks. In some cases naïve Bayes even outperforms other methods. Unlike logistic regression, the naïve Bayes classifier can handle categorical variables with many levels. Recall that decision trees can handle categorical variables as well, but too many levels may result in a deep tree. The naïve Bayes classifier overall performs better than decision trees on categorical variables with many levels. Compared to decision trees, naïve Bayes is more resistant to overfitting, especially with the presence of a smoothing technique.
Despite the benefits of naïve Bayes, it also comes with a few disadvantages. Naïve Bayes assumes the variables in the data are conditionally independent. Therefore, it is sensitive to correlated variables because the algorithm may double count the effects. As an example, assume that people with low income and low credit tend to default. If the task is to score "default" based on both income and credit as two separate attributes, naïve Bayes would experience the double-counting effect on the default outcome, thus reducing the accuracy of the prediction.
Although probabilities are provided as part of the output for the prediction, naïve Bayes classifiers in general are not very reliable for probability estimation and should be used only for assigning class labels. Naïve Bayes in its simple form is used only with categorical variables. Any continuous variables should be converted into categorical variables with the process known as discretization, as shown earlier. In common statistical software packages, however, naïve Bayes is implemented in a way that enables it to handle continuous variables as well.
7.2.5 Naïve Bayes in R

This section explores two methods of using the naïve Bayes classifier in R. The first method is to build one from scratch by manually computing the probability scores, and the second method is to use the naiveBayes function from the e1071 package. The examples show how to use naïve Bayes to predict whether employees would enroll in an onsite educational program.

In R, first set up the working directory and initialize the packages.

setwd("c:/")
install.packages("e1071")   # install package e1071
library(e1071)              # load the library
The working directory contains a CSV file (sample1.csv). The file has a header row, followed by 14 rows of training data. The attributes include Age, Income, JobSatisfaction, and Desire. The output variable is Enrolls, and its value is either Yes or No. The full content of the CSV file is shown next.

Age,Income,JobSatisfaction,Desire,Enrolls
<=30,High,No,Fair,No
<=30,High,No,Excellent,No
31 to 40,High,No,Fair,Yes
>40,Medium,No,Fair,Yes
>40,Low,Yes,Fair,Yes
>40,Low,Yes,Excellent,No
31 to 40,Low,Yes,Excellent,Yes
<=30,Medium,No,Fair,No
<=30,Low,Yes,Fair,Yes
>40,Medium,Yes,Fair,Yes
<=30,Medium,Yes,Excellent,Yes
31 to 40,Medium,No,Excellent,Yes
31 to 40,High,Yes,Fair,Yes
>40,Medium,No,Excellent,No
<=30,Medium,Yes,Fair,
The last record of the CSV is used later for illustrative purposes as a test case. Therefore, it does not include a value for the output variable Enrolls, which should be predicted using the naïve Bayes classifier built from the training set.

Execute the following R code to read data from the CSV file.

# read the data into a table from the file
sample <- read.table("sample1.csv", header=TRUE, sep=",")

# define the data frames for the NB classifier
traindata <- as.data.frame(sample[1:14,])
testdata <- as.data.frame(sample[15,])
Two data frame objects called traindata and testdata are created for the naïve Bayes classifier. Enter traindata and testdata to display the data frames.

The two data frames are printed on the screen as follows.

traindata
        Age Income JobSatisfaction    Desire Enrolls
1      <=30   High              No      Fair      No
2      <=30   High              No Excellent      No
3  31 to 40   High              No      Fair     Yes
4       >40 Medium              No      Fair     Yes
5       >40    Low             Yes      Fair     Yes
6       >40    Low             Yes Excellent      No
7  31 to 40    Low             Yes Excellent     Yes
8      <=30 Medium              No      Fair      No
9      <=30    Low             Yes      Fair     Yes
10      >40 Medium             Yes      Fair     Yes
11     <=30 Medium             Yes Excellent     Yes
12 31 to 40 Medium              No Excellent     Yes
13 31 to 40   High             Yes      Fair     Yes
14      >40 Medium              No Excellent      No

testdata
    Age Income JobSatisfaction Desire Enrolls
15 <=30 Medium             Yes   Fair
The first method shown here is to build a naïve Bayes classifier from scratch by manually computing the probability scores. The first step in building a classifier is to compute the prior probabilities of the attributes, including Age, Income, JobSatisfaction, and Desire. According to the naïve Bayes classifier, these attributes are conditionally independent. The dependent variable (output variable) is Enrolls.

Compute the prior probabilities P(ci) for Enrolls, where ci ∈ C and C = {Yes, No}.
tprior <- table(traindata$Enrolls)
tprior

         No Yes
  0   5   9

tprior <- tprior/sum(tprior)
tprior

                 No       Yes
0.0000000 0.3571429 0.6428571
The next step is to compute the conditional probabilities P(A|C), where A = {Age, Income, JobSatisfaction, Desire} and C = {Yes, No}. Count the number of "No" and "Yes" entries for each Age group, and normalize by the total number of "No" and "Yes" entries to get the conditional probabilities.

ageCounts <- table(traindata[, c("Enrolls", "Age")])
ageCounts
       Age
Enrolls <=30 >40 31 to 40
           0   0        0
    No     3   2        0
    Yes    2   3        4

ageCounts <- ageCounts/rowSums(ageCounts)
ageCounts
       Age
Enrolls      <=30       >40  31 to 40
  No    0.6000000 0.4000000 0.0000000
  Yes   0.2222222 0.3333333 0.4444444
Do the same for the other attributes, including Income, JobSatisfaction, and Desire.

incomeCounts <- table(traindata[, c("Enrolls", "Income")])
incomeCounts <- incomeCounts/rowSums(incomeCounts)
incomeCounts
       Income
Enrolls      High       Low    Medium
  No    0.4000000 0.2000000 0.4000000
  Yes   0.2222222 0.3333333 0.4444444

jsCounts <- table(traindata[, c("Enrolls", "JobSatisfaction")])
jsCounts <- jsCounts/rowSums(jsCounts)
jsCounts
       JobSatisfaction
Enrolls        No       Yes
  No    0.8000000 0.2000000
  Yes   0.3333333 0.6666667

desireCounts <- table(traindata[, c("Enrolls", "Desire")])
desireCounts <- desireCounts/rowSums(desireCounts)
desireCounts
       Desire
Enrolls Excellent      Fair
  No    0.6000000 0.4000000
  Yes   0.3333333 0.6666667
According to Equation 7-14, the probability P(ci|A) is proportional to the product of P(aj|ci) times P(ci), where c1 = Yes and c2 = No. The larger of P(Yes|A) and P(No|A) determines the predicted result of the output variable. Given the test data, use the following code to predict Enrolls.
prob_yes <-
   ageCounts["Yes", testdata[, c("Age")]] *
   incomeCounts["Yes", testdata[, c("Income")]] *
   jsCounts["Yes", testdata[, c("JobSatisfaction")]] *
   desireCounts["Yes", testdata[, c("Desire")]] *
   tprior["Yes"]

prob_no <-
   ageCounts["No", testdata[, c("Age")]] *
   incomeCounts["No", testdata[, c("Income")]] *
   jsCounts["No", testdata[, c("JobSatisfaction")]] *
   desireCounts["No", testdata[, c("Desire")]] *
   tprior["No"]

max(prob_yes, prob_no)

As shown below, the predicted result of the test set is Enrolls = Yes.

prob_yes
       Yes
0.02821869
prob_no
         No
0.006857143
max(prob_yes, prob_no)
[1] 0.02821869
The e1071 package in R has a built-in naiveBayes function that can compute the conditional probabilities of a categorical class variable given independent categorical predictor variables using the Bayes rule. The function takes the form naiveBayes(formula, data, ...), where the arguments are defined as follows.

o formula: A formula of the form class ~ x1 + x2 + ..., assuming x1, x2, ... are conditionally independent
o data: A data frame of factors
Use the following code snippet to execute the model and display the results.

model <- naiveBayes(Enrolls ~ Age+Income+JobSatisfaction+Desire,
                    traindata)
# display model
model

The output that follows shows that the probabilities of model match the probabilities from the previous method. The naiveBayes function uses its default setting laplace=0 here, so no Laplace smoothing is applied.
Naive Bayes Classifier for Discrete Predictors

Call:
naiveBayes.default(x = X, y = Y, laplace = laplace)

A-priori probabilities:
Y
                 No       Yes
0.0000000 0.3571429 0.6428571

Conditional probabilities:
   Age
Y          <=30       >40  31 to 40
  No  0.6000000 0.4000000 0.0000000
  Yes 0.2222222 0.3333333 0.4444444

   Income
Y        High       Low    Medium
  No  0.4000000 0.2000000 0.4000000
  Yes 0.2222222 0.3333333 0.4444444

   JobSatisfaction
Y          No       Yes
  No  0.8000000 0.2000000
  Yes 0.3333333 0.6666667

   Desire
Y   Excellent      Fair
  No  0.6000000 0.4000000
  Yes 0.3333333 0.6666667
7.2 Na’ive Bayes
Next, predicting the outcome of Enrolls with the testdata shows the result is Enrolls = Yes.

# predict with testdata
results <- predict(model, testdata)
# display results
results
[1] Yes
Levels:  No Yes
The naiveBayes function accepts a laplace parameter that allows the customization of the ε value of Equation 7-17 for the Laplace smoothing. The code that follows shows how to build a naïve Bayes classifier with Laplace smoothing ε = 0.01 for prediction.

# use the NB classifier with Laplace smoothing
model1 = naiveBayes(Enrolls ~., traindata, laplace=.01)
# display model
model1
Naive Bayes Classifier for Discrete Predictors

Call:
naiveBayes.default(x = X, y = Y, laplace = laplace)

A-priori probabilities:
Y
                 No       Yes
0.0000000 0.3571429 0.6428571

Conditional probabilities:
   Age
Y            <=30         >40    31 to 40
      0.333333333 0.333333333 0.333333333
  No  0.598409543 0.399602386 0.001988072
  Yes 0.222591362 0.333333333 0.444075305

   Income
Y        High       Low    Medium
      0.3333333 0.3333333 0.3333333
  No  0.3996024 0.2007952 0.3996024
  Yes 0.2225914 0.3333333 0.4440753
The test case is again classified as Enrolls = Yes.

# predict with testdata
results1 <- predict(model1, testdata)
results1
[1] Yes
Levels:  No Yes
7.3 Diagnostics of Classifiers

So far, this book has talked about three classifiers: logistic regression, decision trees, and naïve Bayes. These three methods can be used to classify instances into distinct groups according to the similar characteristics they share. Each of these classifiers faces the same issue: how to evaluate if they perform well.

A few tools have been designed to evaluate the performance of a classifier. Such tools are not limited to the three classifiers in this book but rather serve the purpose of assessing classifiers in general.

A confusion matrix is a specific table layout that allows visualization of the performance of a classifier. Table 7-6 shows the confusion matrix for a two-class classifier. True positives (TP) are the number of positive instances the classifier correctly identified as positive. False positives (FP) are the number of instances the classifier identified as positive but that in reality are negative. True negatives (TN) are the number of negative instances the classifier correctly identified as negative. False negatives (FN) are the number of instances classified as negative but that in reality are positive. In a two-class classification, a preset threshold may be used to separate positives from negatives. TP and TN are the correct guesses. A good classifier should have large TP and TN and small (ideally zero) numbers for FP and FN.
TABLE 7-6 Confusion Matrix

                     Predicted as Positive     Predicted as Negative
Actual: Positive     True Positives (TP)       False Negatives (FN)
Actual: Negative     False Positives (FP)      True Negatives (TN)
In the bank marketing example, the training set includes 2,000 instances. An additional 100 instances are included as the testing set. Table 7-7 shows the confusion matrix of a naïve Bayes classifier on 100 clients to predict whether they would subscribe to the term deposit. Of the 11 clients who subscribed to the term deposit, the model predicted 3 subscribed and 8 not subscribed. Similarly, of the 89 clients who did not subscribe to the term deposit, the model predicted 2 subscribed and 87 not subscribed. All correct guesses are located from top left to bottom right of the table. It's easy to visually inspect the table for errors, because they will be represented by any nonzero values outside the diagonal.

TABLE 7-7 Confusion Matrix of Naïve Bayes from the Bank Marketing Example

                           Predicted as Subscribed    Predicted as Not Subscribed
Actual: Subscribed                        3                          8
Actual: Not Subscribed                    2                         87
The accuracy (or the overall success rate) is a metric defining the rate at which a model has classified the records correctly. It is defined as the sum of TP and TN divided by the total number of instances, as shown in Equation 7-18.

Accuracy = (TP + TN) / (TP + TN + FP + FN) × 100%    (7-18)
A good model should have a high accuracy score, but having a high accuracy score alone does not
guarantee the model is well established. The following measures can be introduced to better evaluate the
performance of a classifier.
As seen in Chapter 6, the true positive rate (TPR) shows what percent of positive instances the classifier correctly identified. It's also illustrated in Equation 7-19.

TPR = TP / (TP + FN)    (7-19)
The false positive rate (FPR) shows what percent of negatives the classifier marked as positive. The FPR is also called the false alarm rate or the type I error rate and is shown in Equation 7-20.

FPR = FP / (FP + TN)    (7-20)
The false negative rate (FNR) shows what percent of positives the classifier marked as negatives. It is also known as the miss rate or type II error rate and is shown in Equation 7-21. Note that the sum of TPR and FNR is 1.

FNR = FN / (TP + FN)    (7-21)
A well-performing model should have a high TPR that is ideally 1 and a low FPR and FNR that are ideally 0.
In reality, it's rare to have TPR = 1, FPR = 0, and FNR = 0, but these measures are useful to compare the perfor-
mance of multiple models that are designed for solving the same problem. Note that in general, the model
that is more preferable may depend on the business situation. During the discovery phase of the data analytics
lifecycle, the team should have learned from the business what kind of errors can be tolerated. Some business
situations are more tolerant of type I errors, whereas others may be more tolerant of type II errors. In some
cases, a model with a TPR of 0.95 and an FPR of 0.3 is more acceptable than a model with a TPR of 0.9 and an
FPR of 0.1 even if the second model is more accurate overall. Consider the case of e-mail spam filtering. Some
people (such as busy executives) only want important e-mail in their inbox and are tolerant of having some
less important e-mail end up in their spam folder as long as no spam is in their inbox. Other people may not
want any important or less important e-mail to be specified as spam and are willing to have some spam in their inboxes as long as no important e-mail makes it into the spam folder.
Precision and recall are accuracy metrics used by the information retrieval community, but they can be
used to characterize classifiers in general. Precision is the percentage of instances marked positive that
really are positive, as shown in Equation 7-22.
Precision = TP / (TP + FP)    (7-22)
Recall is the percentage of positive instances that were correctly identified. Recall is equivalent to the
TPR. Chapter 9, "Advanced Analytical Theory and Methods: Text Analysis," discusses how to use precision
and recall for evaluation of classifiers in the context of text analysis.
Given the confusion matrix from Table 7-7, the metrics can be calculated as follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN) × 100% = (3 + 87) / (3 + 87 + 2 + 8) × 100% = 90%

TPR (or Recall) = TP / (TP + FN) = 3 / (3 + 8) ≈ 0.273

FPR = FP / (FP + TN) = 2 / (2 + 87) ≈ 0.022

FNR = FN / (TP + FN) = 8 / (3 + 8) ≈ 0.727

Precision = TP / (TP + FP) = 3 / (3 + 2) = 0.6
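The same figures can be verified with a short R sketch based on the counts in Table 7-7; the variable names are illustrative.

TP <- 3; FN <- 8; FP <- 2; TN <- 87            # confusion matrix counts from Table 7-7

accuracy  <- (TP + TN) / (TP + TN + FP + FN)   # 0.90
tpr       <- TP / (TP + FN)                    # recall, approximately 0.273
fpr       <- FP / (FP + TN)                    # approximately 0.022
fnr       <- FN / (TP + FN)                    # approximately 0.727
precision <- TP / (TP + FP)                    # 0.6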
These metrics show that for the bank marketing example, the naïve Bayes classifier performs well with accuracy and FPR measures and relatively well on precision. However, it performs poorly on TPR and FNR. To improve the performance, try to include more attributes in the datasets to better distinguish the characteristics of the records. There are other ways to evaluate the performance of a classifier in general, such as N-fold cross-validation (Chapter 6) or bootstrap [14].
Chapter 6 has introduced the ROC curve, which is a common tool to evaluate classifiers. The abbre-
viation stands for receiver operating characteristic, a term used in signal detection to characterize the
trade-off between hit rate and false-alarm rate over a noisy channel. A ROC curve evaluates the performance
of a classifier based on the TP and FP, regardless of other factors such as class distribution and error costs.
The vertical axis is the True Positive Rate (TPR), and the horizontal axis is the False Positive Rate (FPR).
As seen in Chapter 6, any classifier can achieve the bottom left of the graph where TPR = FPR = 0 by
classifying everything as negative. Similarly, any classifier can achieve the top right of the graph where
TPR = FPR = 1 by classifying everything as positive. If a classifier performs "at chance" by random guessing
the results, it can achieve any point on the diagonal line TPR=FPR by choosing an appropriate threshold of
positive/negative. An ideal classifier should perfectly separate positives from negatives and thus achieve
the top-left corner (TPR = 1, FPR = 0). The ROC curve of such classifiers goes straight up from TPR = FPR
= 0 to the top-left corner and moves straight right to the top-right corner. In reality, it can be difficult to
achieve the top-left corner. But a better classifier should be closer to the top left, separating it from other
classifiers that are closer to the diagonal line.
Related to the ROC curve is the area under the curve (AUC). The AUC is calculated by measuring the
area under the ROC curve. Higher AUC scores mean the classifier performs better. The score can range from
0.5 (for the diagonal line TPR=FPR) to 1.0 (with ROC passing through the top-left corner).
In the bank marketing example, the training set includes 2,000 instances. An additional 100 instances are included as the testing set. Figure 7-10 shows an ROC curve of the naïve Bayes classifier built on the training set of 2,000 instances and tested on the testing set of 100 instances. The figure is generated by the following R script. The ROCR package is required for plotting the ROC curve, and the e1071 package provides the naiveBayes function. The 2,000 instances are in a data frame called banktrain, and the additional 100 instances are in a data frame called banktest.
library(ROCR)
library(e1071)    # provides the naiveBayes function

# training set
banktrain <- read.table("bank-sample.csv", header=TRUE, sep=",")

# drop a few columns
drops <- c("balance", "day", "campaign", "pdays", "previous", "month")
banktrain <- banktrain[, !(names(banktrain) %in% drops)]

# testing set
banktest <- read.table("bank-sample-test.csv", header=TRUE, sep=",")
banktest <- banktest[, !(names(banktest) %in% drops)]

# build the naive Bayes classifier
nb_model <- naiveBayes(subscribed~.,
                       data=banktrain)

# perform on the testing set
nb_prediction <- predict(nb_model,
                         # remove column "subscribed"
                         banktest[, -ncol(banktest)],
                         type='raw')
score <- nb_prediction[, c("yes")]
actual_class <- banktest$subscribed == 'yes'
pred <- prediction(score, actual_class)
perf <- performance(pred, "tpr", "fpr")
plot(perf, lwd=2, xlab="False Positive Rate (FPR)",
     ylab="True Positive Rate (TPR)")
abline(a=0, b=1, col="gray50", lty=3)

The following R code shows that the corresponding AUC score of the ROC curve is about 0.915.

auc <- performance(pred, "auc")
auc <- unlist(slot(auc, "y.values"))
auc
FIGURE 7-10 ROC curve of the naïve Bayes classifier on the bank marketing dataset (True Positive Rate versus False Positive Rate)
7.4 Additional Classification Methods

Besides the two classifiers introduced in this chapter, several other methods are commonly used for classification, including bagging [15], boosting [5], random forest [4], and support vector machines (SVM) [16]. Bagging, boosting, and random forest are all examples of ensemble methods that use multiple models to obtain better predictive performance than can be obtained from any of the constituent models.

Bagging (or bootstrap aggregating) [15] uses the bootstrap technique that repeatedly samples with replacement from a dataset according to a uniform probability distribution. "With replacement" means that when a sample is selected for a training or testing set, the sample is still kept in the dataset and may be selected again. Because the sampling is with replacement, some samples may appear several times in a training or testing set, whereas others may be absent. A model or base classifier is trained separately on each bootstrap sample, and a test sample is assigned to the class that received the highest number of votes.

Similar to bagging, boosting (or AdaBoost) [17] uses votes for classification to combine the output of individual models. In addition, it combines models of the same type. However, boosting is an iterative procedure
where a new model is influenced by the performance of those models built previously. Furthermore, boosting assigns a weight to each training sample that reflects its importance, and the weight may adaptively change at the end of each boosting round. Bagging and boosting have been shown to have better performance [5] than a decision tree.

Random forest [4] is a class of ensemble methods using decision tree classifiers. It is a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. A special case of random forest uses bagging on decision trees, where samples are randomly chosen with replacement from the original training set.
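As a hedged sketch (not from the text), a random forest could be fit to the same bank marketing data with the randomForest package, assuming the banktrain and banktest data frames from the ROC example in Section 7.3 and that subscribed is stored as a factor.

library(randomForest)

# fit 500 randomized trees and classify the testing set by majority vote
set.seed(1)
rf_model <- randomForest(subscribed ~ ., data = banktrain, ntree = 500)
rf_pred  <- predict(rf_model, newdata = banktest[, -ncol(banktest)])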
SVM [16] is another common classification method that combines linear models with instance-based learning techniques. Support vector machines select a small number of critical boundary instances called support vectors from each class and build a linear decision function that separates them as widely as possible. SVM by default can efficiently perform linear classifications and can be configured to perform nonlinear classifications as well.
Summary

This chapter focused on two classification methods: decision trees and naïve Bayes. It discussed the theory behind these classifiers and used a bank marketing example to explain how the methods work in practice. These classifiers, along with logistic regression (Chapter 6), are often used for the classification of data. As this book has discussed, each of these methods has its own advantages and disadvantages. How does one pick the most suitable method for a given classification problem? Table 7-8 offers a list of things to consider when selecting a classifier.
TABLE 7-8 Choosing a Suitable Classifier

Concerns                                                        Recommended Method(s)
Output of the classification should include class               Logistic regression, decision tree
probabilities in addition to the class labels.
Analysts want to gain an insight into how the                   Logistic regression, decision tree
variables affect the model.
The problem is high dimensional.                                Naïve Bayes
Some of the input variables might be correlated.                Logistic regression, decision tree
Some of the input variables might be irrelevant.                Decision tree, naïve Bayes
The data contains categorical variables with a                  Decision tree, naïve Bayes
large number of levels.
The data contains mixed variable types.                         Logistic regression, decision tree
There is nonlinear data or discontinuities in the               Decision tree
input variables that would affect the output.
After the classification, one can use a few evaluation tools to measure how well a classifier has performed or to compare the performances of multiple classifiers. These tools include the confusion matrix, TPR, FPR, FNR, precision, recall, ROC curves, and AUC.

In addition to decision trees and naïve Bayes, other methods are commonly used as classifiers. These methods include but are not limited to bagging, boosting, random forest, and SVM.
Exercises

1. For a binary classification, describe the possible values of entropy. On what conditions does entropy reach its minimum and maximum values?

2. In a decision tree, how does the algorithm pick the attributes for splitting?

3. John went to see the doctor about a severe headache. The doctor selected John at random to have a blood test for swine flu, which is suspected to affect 1 in 5,000 people in this country. The test is 99% accurate, in the sense that the probability of a false positive is 1%. The probability of a false negative is zero. John's test came back positive. What is the probability that John has swine flu?

4. Which classifier is considered computationally efficient for high-dimensional problems? Why?

5. A data science team is working on a classification problem in which the dataset contains many correlated variables, and most of them are categorical variables. Which classifier should the team consider using? Why?

6. A data science team is working on a classification problem in which the dataset contains many correlated variables, and most of them are continuous. The team wants the model to output the probabilities in addition to the class labels. Which classifier should the team consider using? Why?

7. Consider the following confusion matrix:

What are the true positive rate, false positive rate, and false negative rate?
Bibliography
[1] M. Thomas, B. Pang, and L. Lee, "Get Out the Vote: Determining Support or Opposition from Congressional Floor-Debate Transcripts," in Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, Sydney, Australia, 2006.
[2] M. Shouman, T. Turner, and R. Stocker, "Using Decision Tree for Diagnosing Heart Disease Patients," in Proceedings of the Ninth Australasian Data Mining Conference (AusDM '11), Australian Computer Society, Inc., Ballarat, Australia.
[3] I. Androutsopoulos, J. Koutsias, K. V. Chandrinos, G. Paliouras, and C. D. Spyropoulos, "An Evaluation of Naïve Bayesian Anti-Spam Filtering," in Proceedings of the Workshop on Machine Learning in the New Information Age, Barcelona, Spain, 2000.
[4] L. Breiman, "Random Forests," Machine Learning, vol. 45, no. 1, pp. 5-32, 2001.
[5] J. R. Quinlan, "Bagging, Boosting, and C4.5," AAAI/IAAI, vol. 1, 1996.
[6] S. Moro, P. Cortez, and R. Laureano, "Using Data Mining for Bank Direct Marketing: An Application of the CRISP-DM Methodology," in Proceedings of the European Simulation and Modelling Conference (ESM'2011), Guimaraes, Portugal, 2011.
[7] J. R. Quinlan, "Induction of Decision Trees," Machine Learning, vol. 1, no. 1, pp. 81-106, 1986.
[8] J. R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, 1993.
[9] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and Regression Trees, Belmont, CA: Wadsworth International Group, 1984.
[10] T. M. Mitchell, "Decision Tree Learning," in Machine Learning, New York, NY, USA: McGraw-Hill, Inc., 1997, p. 68.
[11] C. Phua, V. C. S. Lee, K. Smith, and R. W. Gayler, "A Comprehensive Survey of Data Mining-Based Fraud Detection," CoRR, vol. abs/1009.6119, 2010.
[12] R. Bhowmik, "Detecting Auto Insurance Fraud by Data Mining Techniques," Journal of Emerging Trends in Computing and Information Sciences, vol. 2, no. 4, pp. 156-162, 2011.
[13] D. Michie, D. J. Spiegelhalter, and C. C. Taylor, Machine Learning, Neural and Statistical Classification, New York: Ellis Horwood, 1994.
[14] I. H. Witten, E. Frank, and M. A. Hall, "The Bootstrap," in Data Mining, Burlington, Massachusetts: Morgan Kaufmann, 2011, pp. 155-156.
[15] L. Breiman, "Bagging Predictors," Machine Learning, vol. 24, no. 2, pp. 123-140, 1996.
[16] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge, United Kingdom: Cambridge University Press, 2000.
[17] Y. Freund and R. E. Schapire, "A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting," Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119-139, 1997.
ADVANCED ANALYTICAL THEORY AND METHODS: TIME SERIES ANALYSIS
This chapter examines the topic of time series analysis and its applications. Emphasis is placed on identifying the underlying structure of the time series and fitting an appropriate Autoregressive Integrated Moving Average (ARIMA) model.
8.1 Overview of Time Series Analysis
Time series analysis attempts to model the underlying structure of observations taken over time. A time series, denoted $Y = y_1, y_2, \ldots, y_n$, is an ordered sequence of equally spaced values over time. For example, Figure 8-1 provides a plot of the monthly number of international airline passengers over a 12-year period.
FIGURE 8-1 Monthly international airline passengers (x-axis: Time in months)
In this example, the time series consists of an ordered sequence of 144 values. The analyses presented in this chapter are limited to equally spaced time series of one variable. Following are the goals of time series analysis:
• Identify and model the structure of the time series.
• Forecast future values in the time series.
Time series analysis has many applications in finance, economics, biology, engineering, retail, and manufacturing. Here are a few specific use cases:
• Retail sales: For various product lines, a clothing retailer is looking to forecast future monthly sales. These forecasts need to account for the seasonal aspects of the customer's purchasing decisions. For example, in the northern hemisphere, sweater sales are typically brisk in the fall season, and swimsuit sales are the highest during the late spring and early summer. Thus, an appropriate time series model needs to account for fluctuating demand over the calendar year.
• Spare parts planning: Companies' service organizations have to forecast future spare part demands to ensure an adequate supply of parts to repair customer products. Often the spares inventory consists of thousands of distinct part numbers. To forecast future demand, complex models for each part number can be built using input variables such as expected part failure rates, service diagnostic effectiveness, forecasted new product shipments, and forecasted trade-ins/decommissions.
However, time series analysis can provide accurate short-term forecasts based simply on prior spare
part demand history.
• Stock trading: Some high-frequency stock traders utilize a technique called pairs trading. In pairs trading, an identified strong positive correlation between the prices of two stocks is used to detect a market opportunity. Suppose the stock prices of Company A and Company B consistently move together. Time series analysis can be applied to the difference of these companies' stock prices over time. A statistically larger than expected price difference indicates that it is a good time to buy the stock of Company A and sell the stock of Company B, or vice versa. Of course, this trading approach depends on the ability to execute the trade quickly and be able to detect when the correlation in the stock prices is broken. Pairs trading is one of many techniques that falls into a trading strategy called statistical arbitrage.
8.1.1 Box-Jenkins Methodology
In this chapter, a time series consists of an ordered sequence of equally spaced values over time. Examples
of a time series are monthly unemployment rates, daily website visits, or stock prices every second. A time
series can consist of the following components:
• Trend
• Seasonality
• Cyclic
• Random
The trend refers to the long-term movement in a time series. It indicates whether the observation values are increasing or decreasing over time. Examples of trends are a steady increase in sales month over month or an annual decline of fatalities due to car accidents.
The seasonality component describes the fixed, periodic fluctuation in the observations over time. As the name suggests, the seasonality component is often related to the calendar. For example, monthly retail sales can fluctuate over the year due to the weather and holidays.
A cyclic component also refers to a periodic fluctuation, but one that is not as fixed as in the case of a seasonality component. For example, retail sales are influenced by the general state of the economy. Thus, a retail sales time series can often follow the lengthy boom-bust cycles of the economy.
After accounting for the other three components, the random component is what remains. Although noise is certainly part of this random component, there is often some underlying structure to this random component that needs to be modeled to forecast future values of a given time series.
Developed by George Box and Gwilym Jenkins, the Box-Jenkins methodology for time series analysis
involves the following three main steps:
1. Condition data and select a model.
• Identify and account for any trends or seasonality in the time series.
• Examine the remaining time series and determine a suitable model.
2. Estimate the model parameters.
3. Assess the model and return to Step 1, if necessary.
The primary focus of this chapter is to use the Box-Jenkins methodology to apply an ARIMA model to a given time series.

8.2 ARIMA Model

To fully explain an ARIMA (Autoregressive Integrated Moving Average) model, this section describes the model's various parts and how they are combined. As stated in the first step of the Box-Jenkins methodology, it is necessary to remove any trends or seasonality in the time series. This step is necessary to achieve a time series with certain properties to which autoregressive and moving average models can be applied. Such a time series is known as a stationary time series. A time series, $Y_t$ for $t = 1, 2, 3, \ldots$, is a stationary time series if the following three conditions are met:
(a) The expected value (mean) of $Y_t$ is a constant for all values of $t$.
(b) The variance of $Y_t$ is finite.
(c) The covariance of $Y_t$ and $Y_{t+h}$ depends only on the value of $h = 0, 1, 2, \ldots$ for all $t$.
The covariance of $Y_t$ and $Y_{t+h}$ is a measure of how the two variables, $Y_t$ and $Y_{t+h}$, vary together. It is expressed in Equation 8-1.

$\operatorname{cov}(y_t, y_{t+h}) = E\big[(y_t - E[y_t])(y_{t+h} - E[y_{t+h}])\big]$   (8-1)
If two variables are independent of each other, their covariance is zero. If the variables change together in the same direction, the variables have a positive covariance. Conversely, if the variables change together in the opposite direction, the variables have a negative covariance.
For a stationary time series, by condition (a), the mean is a constant, say $\mu$. So, for a given stationary sequence, $y_t$, the covariance notation can be simplified to what's shown in Equation 8-2.

$\operatorname{cov}(h) = \operatorname{cov}(y_t, y_{t+h}) = E\big[(y_t - \mu)(y_{t+h} - \mu)\big]$   (8-2)

By part (c), the covariance between two points in the time series can be nonzero, as long as the value of the covariance is only a function of $h$. Equation 8-3 is an example for $h = 3$.

$\operatorname{cov}(3) = \operatorname{cov}(y_1, y_4) = \operatorname{cov}(y_2, y_5) = \cdots$   (8-3)

It is important to note that for $h = 0$, $\operatorname{cov}(0) = \operatorname{cov}(y_t, y_t) = \operatorname{var}(y_t)$ for all $t$. Because $\operatorname{var}(y_t) < \infty$ by condition (b), the variance of $y_t$ is a constant for all $t$. So the constant variance coupled with part (a), $E[y_t] = \mu$ for all $t$ and some constant $\mu$, suggests that a stationary time series can look like Figure 8-2. In this plot, the points appear to be centered about a fixed constant, zero, and the variance appears to be somewhat constant over time.
8.2.1 Autocorrelation Function (ACF)
Although there is not an overall trend in the time series plotted in Figure 8-2, it appears that each point is somewhat dependent on the past points. The difficulty is that the plot does not provide insight into the covariance of the variables in the time series and its underlying structure. The plot of the autocorrelation function (ACF) provides this insight. For a stationary time series, the ACF is defined as shown in Equation 8-4.
$\mathrm{ACF}(h) = \dfrac{\operatorname{cov}(y_t, y_{t+h})}{\sqrt{\operatorname{cov}(y_t, y_t)\operatorname{cov}(y_{t+h}, y_{t+h})}} = \dfrac{\operatorname{cov}(h)}{\operatorname{cov}(0)}$   (8-4)
FIGURE 8-2 A plot of a stationary series (x-axis: Time, 0 to 600)
Because $\operatorname{cov}(0)$ is the variance, the ACF is analogous to the correlation function of two variables, $\operatorname{corr}(y_t, y_{t+h})$, and the value of the ACF falls between −1 and 1. Thus, the closer the absolute value of $\mathrm{ACF}(h)$ is to 1, the more useful $y_t$ can be as a predictor of $y_{t+h}$.
Using the same dataset plotted in Figure 8-2, the plot of the ACF is provided in Figure 8-3.
FIGURE 8-3 Autocorrelation function (ACF) of the series plotted in Figure 8-2
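As a brief illustration that is not part of the original text, the following R sketch uses base R's arima.sim() and acf() functions to simulate a stationary AR(1) series and plot its sample ACF; the autocorrelations decay toward zero as the lag h grows, much like the behavior just described.

# simulate a stationary AR(1) series and examine its ACF (illustrative sketch)
set.seed(1)
y <- arima.sim(model = list(ar = 0.6), n = 600)   # AR(1) with coefficient 0.6
plot(y, ylab = "y")                               # centered about zero, roughly constant variance
acf(y, main = "")                                 # sample autocorrelations by lag h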
An autoregressive moving average model of orders p and q, denoted ARMA(p,q), combines an autoregressive component of order p with a moving average component of order q, as expressed in Equation 8-15.

$y_t = \delta + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q}$   (8-15)

where $\delta$ is a constant for a nonzero-centered time series
$\phi_j$ is a constant for $j = 1, 2, \ldots, p$
$\phi_p \neq 0$
$\theta_k$ is a constant for $k = 1, 2, \ldots, q$
$\theta_q \neq 0$
$\varepsilon_t \sim N(0, \sigma_\varepsilon^2)$ for all $t$
If $q = 0$ and $p \neq 0$, then the ARMA(p,q) model is simply an AR(p) model. Similarly, if $p = 0$ and $q \neq 0$, then the ARMA(p,q) model is an MA(q) model.
To apply an ARMA model properly, the time series must be a stationary one. However, many time series exhibit some trend over time. Figure 8-7 illustrates a time series with an increasing linear trend over time. Since such a time series does not meet the requirement of a constant expected value (mean), the data needs to be adjusted to remove the trend. One transformation option is to perform a regression analysis on the time series and then to subtract the value of the fitted regression line from each observed y-value.
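As an illustrative sketch that is not from the text, this regression-based detrending can be carried out in R with lm() on a toy series that has a linear trend; the residuals of the fit are the detrended values.

# remove a linear trend by regressing the series on a time index (illustrative sketch)
set.seed(1)
y <- 0.5 * (1:50) + rnorm(50)        # toy series with an increasing linear trend
t_index <- seq_along(y)
trend_fit <- lm(y ~ t_index)         # fit a straight line to the series
detrended <- residuals(trend_fit)    # subtract the fitted regression line from each y-value
plot.ts(detrended)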
If detrending using a linear or higher order regression model does not provide a stationary series, a
second option is to compute the difference between successive y-values. This is known as differencing.
In other words, for the n values in a given time series compute the differences as shown in Equation 8-16.
$d_t = y_t - y_{t-1}$ for $t = 2, 3, \ldots, n$   (8-16)
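For a concrete check that is not part of the original text, Equation 8-16 applied to a short vector can be verified by hand and with R's diff() function.

y <- c(3, 5, 9, 14, 20)
y[-1] - y[-length(y)]   # differences computed by hand: 2 4 5 6
diff(y)                 # same result using diff()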
FIGURE 8-7 A time series with a trend (x-axis: Time, 10 to 50)
The mean of the time series plotted in Figure 8-8 is certainly not a constant. Applying differencing to
the time series results in the plot in Figure 8-9. This plot illustrates a time series with a constant mean and
a fairly constant variance over time.
FIGURE 8-10 Twice differenced series (x-axis: Time, 0 to 150)
Because the need to make a time series stationary is common, the differencing can be included (integrated) into the ARMA model definition by defining the Autoregressive Integrated Moving Average model, denoted ARIMA(p,d,q). The structure of the ARIMA model is identical to the expression in Equation 8-15, but the ARMA(p,q) model is applied to the time series, $y_t$, after applying differencing d times.
Additionally, it is often necessary to account for seasonal patterns in time series. For example, in the retail sales use case example in Section 8.1, monthly clothing sales track closely with the calendar month. Similar to the earlier option of detrending a series by first applying linear regression, the seasonal pattern could be determined and the time series appropriately adjusted. An alternative is to use a seasonal autoregressive integrated moving average model, denoted ARIMA(p,d,q) × (P,D,Q)ₛ, where:
• p, d, and q are the same as defined previously.
• s denotes the seasonal period.
• P is the number of terms in the AR model across the s periods.
• D is the number of differences applied across the s periods.
• Q is the number of terms in the MA model across the s periods.
For a time series with a seasonal pattern, following are typical values of s:
• 52 for weekly data
• 12 for monthly data
• 7 for daily data
The next section presents a seasonal ARIMA example and describes several techniques and approaches
to identify the appropriate model and forecast the future.
8.2.5 Building and Evaluating an ARIMA Model
For a large country, the monthly gasoline production measured in millions of barrels has been obtained
for the past 240 months (20 years). A market research firm requires some short-term gasoline production
forecasts to assess the petroleum industry’s ability to deliver future gasoline supplies and the effect on
gasoline prices.
library(forecast)

# read in gasoline production time series
# monthly gas production expressed in millions of barrels
gas_prod_input <- as.data.frame(read.csv("c:/data/gas_prod.csv"))

# create a time series object
gas_prod <- ts(gas_prod_input[, 2])

# examine the time series
plot(gas_prod, xlab = "Time (months)",
     ylab = "Gasoline production (millions of barrels)")
Using R, the dataset is plotted in Figure 8-11.
FIGURE 8-11 Monthly gasoline production (x-axis: Time in months, 0 to 200)
In R, the ts() function creates a time series object from a vector or a matrix. The use of time series objects in R simplifies the analysis by providing several methods that are tailored specifically for handling equally spaced data series. For example, the plot() function does not require an explicitly specified variable for the x-axis.
To apply an ARMA model, the dataset needs to be a stationary time series. Using the diff() function, the gasoline production time series is differenced once and plotted in Figure 8-12.
plot(diff(gas_prod))
abline(a = 0, b = 0)
FIGURE 8-12 Differenced gasoline production time series (x-axis: Time, 0 to 200)
The differenced time series has a constant mean near zero with a fairly constant variance over time. Thus, a stationary time series has been obtained. Using the following R code, the ACF and PACF plots for the differenced series are provided in Figures 8-13 and 8-14, respectively.
# examine ACF and PACF of differenced series
acf(diff(gas_prod), xaxp = c(0, 48, 4), lag.max = 48, main = "")
pacf(diff(gas_prod), xaxp = c(0, 48, 4), lag.max = 48, main = "")
The dashed lines provide upper and lower bounds at a 95% significance level. Any value of the ACF or PACF outside of these bounds indicates that the value is significantly different from zero.
Figure 8-13 shows several significant ACF values. The slowly decaying ACF values at lags 12, 24, 36, and 48 are of particular interest. A similar behavior in the ACF was seen in Figure 8-3, but for lags 1, 2, 3, ... Figure 8-13 indicates a seasonal autoregressive pattern every 12 months. Examining the PACF plot in Figure 8-14, the PACF value at lag 12 is quite large, but the PACF values are close to zero at lags 24, 36, and 48. Thus, a seasonal AR(1) model with period = 12 will be considered. It is often useful to address the seasonal portion of the overall ARMA model before addressing the nonseasonal portion of the model.
FIGURE 8-13 ACF of the differenced gasoline time series (x-axis: Lag, 0 to 48)
FIGURE 8-14 PACF of the differenced gasoline time series (x-axis: Lag, 0 to 48)
The arima() function in R is used to fit a (0,1,0) × (1,0,0)₁₂ model. The analysis is applied to the original time series variable, gas_prod. The differencing, d = 1, is specified by the order=c(0,1,0) term.
arima_1 <- arima(gas_prod,
                 order = c(0, 1, 0),
                 seasonal = list(order = c(1, 0, 0), period = 12))
arima_1

Series: gas_prod
ARIMA(0,1,0)(1,0,0)[12]

Coefficients:
        sar1
      0.8335
s.e.  0.0324

sigma^2 estimated as 37.29:  log likelihood=-778.69
AIC=1561.38   AICc=1561.43   BIC=1568.33
The value of the coefficient for the seasonal AR(1) model is estimated to be 0.8335 with a standard error of 0.0324. Because the estimate is several standard errors away from zero, this coefficient is considered significant. The output from this first pass ARIMA analysis is stored in the variable arima_1, which contains several useful quantities, including the residuals. The next step is to examine the residuals from fitting the (0,1,0) × (1,0,0)₁₂ ARIMA model. The ACF and PACF plots of the residuals are provided in Figures 8-15 and 8-16, respectively.
# examine ACF and PACF of the (0,1,0)x(1,0,0)12 residuals
acf(arima_1$residuals, xaxp = c(0, 48, 4), lag.max = 48, main = "")
pacf(arima_1$residuals, xaxp = c(0, 48, 4), lag.max = 48, main = "")
FIGURE 8-15 ACF of residuals from seasonal AR(1) model (x-axis: Lag, 0 to 48)
The ACF plot of the residuals in Figure 8-15 indicates that the autoregressive behavior at lags 12, 24, 36, and 48 has been addressed by the seasonal AR(1) term. The only remaining ACF value of any significance occurs at lag 1. In Figure 8-16, there are several significant PACF values at lags 1, 2, 3, and 4.
Because the PACF plot in Figure 8-16 exhibits a slowly decaying PACF, and the ACF cuts off sharply at lag 1, an MA(1) model should be considered for the nonseasonal portion of the ARMA model on the differenced series. In other words, a (0,1,1) × (1,0,0)₁₂ ARIMA model will be fitted to the original gasoline production time series.
arima_2 <- arima(gas_prod,
                 order = c(0, 1, 1),
                 seasonal = list(order = c(1, 0, 0), period = 12))
arima_2

Series: gas_prod
ARIMA(0,1,1)(1,0,0)[12]

Coefficients:
         ma1    sar1
acf(arima_2$residuals, xaxp = c(0, 48, 4), lag.max = 48, main = "")
pacf(arima_2$residuals, xaxp = c(0, 48, 4), lag.max = 48, main = "")
FIGURE 8-16 PACF of residuals from seasonal AR(1) model (x-axis: Lag, 0 to 48)
Based on the standard errors associated with each coefficient estimate, the coefficients are significantly different from zero. In Figures 8-17 and 8-18, the respective ACF and PACF plots for the residuals from the second pass ARIMA model indicate that no further terms need to be considered in the ARIMA model.
FIGURE 8-19 Plot of residuals from the fitted (0,1,1) × (1,0,0)₁₂ model (x-axis: Time, 0 to 200)
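As a hedged illustration that is not taken from the text, short-term forecasts can be produced from the fitted arima_2 object with the predict() method from the stats package; the reported standard errors give approximate 95% forecast bounds.

# forecast the next 12 months from the fitted (0,1,1) x (1,0,0)[12] model (illustrative sketch)
fcast <- predict(arima_2, n.ahead = 12)
fcast$pred                      # point forecasts
fcast$pred + 1.96 * fcast$se    # approximate upper 95% bounds
fcast$pred - 1.96 * fcast$se    # approximate lower 95% bounds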
… be considered instead of an ARMA(p,q) model?
ADVANCED ANALYTICAL THEORY AND METHODS: TEXT ANALYSIS
Text analysis, sometimes called text analytics, refers to the representation, processing, and modeling of textual data to derive useful insights. An important component of text analysis is text mining, the process of discovering relationships and interesting patterns in large text collections.
Text analysis suffers from the curse of high dimensionality. Take the popular children's book Green Eggs and Ham [1] as an example. Author Theodor Geisel (Dr. Seuss) was challenged to write an entire book with just 50 distinct words. He responded with the book Green Eggs and Ham, which contains 804 total words, only 50 of them distinct. These 50 words are:
a, am, and, anywhere, are, be, boat, box, car, could, dark, do, eat, eggs, fox, goat, good, green, ham, here, house, I, if, in, let, like, may, me, mouse, not, on, or, rain, Sam, say, see, so, thank, that, the, them, there, they, train, tree, try, will, with, would, you
There's a substantial amount of repetition in the book. Yet, as repetitive as the book is, modeling it as a vector of counts, or features, for each distinct word still results in a 50-dimension problem.
Green Eggs and Ham is a simple book. Text analysis often deals with textual data that is far more complex. A corpus (plural: corpora) is a large collection of texts used for various purposes in Natural Language Processing (NLP). Table 9-1 lists a few example corpora that are commonly used in NLP research.
TABLE 9-1 Example Corpora in Natural Language Processing

Corpus                                            Word Count     Domain               Website
Shakespeare                                       0.88 million   Written              http://shakespeare.mit.edu/
Brown Corpus                                      1 million      Written              http://icame.uib.no/brown/bcm.html
Penn Treebank                                     1 million      Newswire             http://www.cis.upenn.edu/~treebank/
Switchboard Phone Conversations                   3 million      Spoken               http://catalog.ldc.upenn.edu/LDC97S62
British National Corpus                           100 million    Written and spoken   http://www.natcorp.ox.ac.uk/
NA News Corpus                                    350 million    Newswire             http://catalog.ldc.upenn.edu/LDC95T21
European Parliament Proceedings Parallel Corpus   600 million    Legal                http://www.statmt.org/europarl/
Google N-Grams Corpus                             1 trillion     Written              http://catalog.ldc.upenn.edu/LDC2006T13
The smallest corpus in the list, the complete works of Shakespeare, contains about 0.88 million words. In contrast, the Google n-gram corpus contains one trillion words from publicly accessible web pages. Out of the one trillion words in the Google n-gram corpus, there might be one million distinct words, which would correspond to one million dimensions. The high dimensionality of text is an important issue, and it has a direct impact on the complexities of many text analysis tasks.
Another major challenge with text analysis is that most of the time the text is not structured. As introduced in Chapter 1, "Introduction to Big Data Analytics," this may include quasi-structured, semi-structured, or unstructured data. Table 9-2 shows some example data sources and data formats that text analysis may have to deal with. Note that this is not meant as an exhaustive list; rather, it highlights the challenge of text analysis.
TABLE 9-2 Example Data Sources and Formats for Text Analysis

Data Source                    Data Format                  Data Structure Type
News articles                  TXT, HTML, or Scanned PDF    Unstructured
Literature                     TXT, DOC, HTML, or PDF       Unstructured
E-mail                         TXT, MSG, or EML             Unstructured
Web pages                      HTML                         Semi-structured
Server logs                    LOG or TXT                   Semi-structured or quasi-structured
Social network API firehoses   XML, JSON, or RSS            Semi-structured
Call center transcripts        TXT                          Unstructured
9.1 Text Analysis Steps
A text analysis problem usually consists of three important steps: parsing, search and retrieval, and text mining. Note that a text analysis problem may also consist of other subtasks (such as discourse and segmentation) that are outside the scope of this book.
Parsing is the process that takes unstructured text and imposes a structure for further analysis. The unstructured text could be a plain text file, a weblog, an Extensible Markup Language (XML) file, a HyperText Markup Language (HTML) file, or a Word document. Parsing deconstructs the provided text and renders it in a more structured way for the subsequent steps.
Search and retrieval is the identification of the documents in a corpus that contain search items such as specific words, phrases, topics, or entities like people or organizations. These search items are generally called key terms. Search and retrieval originated from the field of library science and is now used extensively by web search engines.
Text mining uses the terms and indexes produced by the prior two steps to discover meaningful insights pertaining to domains or problems of interest. With the proper representation of the text, many of the techniques mentioned in the previous chapters, such as clustering and classification, can be adapted to text mining. For example, the k-means from Chapter 4, "Advanced Analytical Theory and Methods: Clustering," can be modified to cluster text documents into groups, where each group represents a collection of documents with a similar topic [2]. The distance of a document to a centroid represents how closely the document talks about that topic. Classification tasks such as sentiment analysis and spam filtering are prominent use
cases for the naïve Bayes classifier (Chapter 7, "Advanced Analytical Theory and Methods: Classification"). Text mining may utilize methods and techniques from various fields of study, such as statistical analysis, information retrieval, data mining, and natural language processing.
Note that, in reality, all three steps do not have to be present in a text analysis project. If the goal is to construct a corpus or provide a catalog service, for example, the focus would be the parsing task using one or more text preprocessing techniques, such as part-of-speech (POS) tagging, named entity recognition, lemmatization, or stemming. Furthermore, the three tasks do not have to be sequential. Sometimes their orders might even look like a tree. For example, one could use parsing to build a data store and choose to either search and retrieve the related documents or use text mining on the entire data store to gain insights.
Part-of-Speech (POS) Tagging, Lemmatization, and Stemming
The goal of POS tagging is to build a model whose input is a sentence, such as:
he saw a fox
and whose output is a tag sequence. Each tag marks the POS for the corresponding word, such as:
PRP VBD DT NN
according to the Penn Treebank POS tags [3]. Therefore, the four words are mapped to pronoun (personal), verb (past tense), determiner, and noun (singular), respectively.
Both lemmatization and stemming are techniques to reduce the number of dimensions and reduce inflections or variant forms to the base form to more accurately measure the number of times each word appears.
With the use of a given dictionary, lemmatization finds the correct dictionary base form of a word. For example, given the sentence:
obesity causes many problems
the output of lemmatization would be:
obesity cause many problem
Different from lemmatization, stemming does not need a dictionary, and it usually refers to a crude process of stripping affixes based on a set of heuristics with the hope of correctly achieving the goal to reduce inflections or variant forms. After the process, words are stripped to become stems. A stem is not necessarily an actual word defined in the natural language, but it is sufficient to differentiate itself from the stems of other words. A well-known rule-based stemming algorithm is Porter's stemming algorithm. It defines a set of production rules to iteratively transform words into their stems. For the sentence shown previously:
obesity causes many problems
the output of Porter's stemming algorithm is:
obes caus mani problem
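As an aside that is not part of the original text, a Porter-style stemmer is available in R through the SnowballC package (assuming it is installed); its wordStem() function reproduces the stems shown above.

library(SnowballC)
wordStem(c("obesity", "causes", "many", "problems"), language = "porter")
# expected output: "obes" "caus" "mani" "problem"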
9.2 A Text Analysis Example
To further describe the three text analysis steps, consider the fictitious company ACME, maker of two products: bPhone and bEbook. ACME is in strong competition with other companies that manufacture and sell similar products. To succeed, ACME needs to produce excellent phones and eBook readers and increase sales.
One of the ways the company does this is to monitor what is being said about ACME products in social media. In other words, what is the buzz on its products? ACME wants to search all that is said about ACME products in social media sites, such as Twitter and Facebook, and popular review sites, such as Amazon and ConsumerReports. It wants to answer questions such as these.
• Are people mentioning its products?
• What is being said? Are the products seen as good or bad? If people think an ACME product is bad, why? For example, are they complaining about the battery life of the bPhone, or the response time in their bEbook?
ACME can monitor the social media buzz using a simple process based on the three steps outlined in Section 9.1. This process is illustrated in Figure 9-1, and it includes the modules in the next list.
FIGURE 9-1 ACME's Text Analysis Process
1. Collect raw text (Section 9.3). This corresponds to Phase 1 and Phase 2 of the Data Analytic Lifecycle. In this step, the Data Science team at ACME monitors websites for references to specific products. The websites may include social media and review sites. The team could interact with social network application programming interfaces (APIs), process data feeds, or scrape pages and use product names as keywords to get the raw data. Regular expressions are commonly used in this case to identify text that matches certain patterns. Additional filters can be applied to the raw data for a more focused study. For example, only retrieving the reviews originating in New York instead of the entire United States would allow ACME to conduct regional studies on its products. Generally, it is a good practice to apply filters during the data collection phase. They can reduce I/O workloads and minimize the storage requirements.
2. Represent text (Section 9.4). Convert each review into a suitable document representation with proper indices, and build a corpus based on these indexed reviews. This step corresponds to Phases 2 and 3 of the Data Analytic Lifecycle.
3. Compute the usefulness of each word in the reviews using methods such as TFIDF (Section 9.5). This and the following two steps correspond to Phases 3 through 5 of the Data Analytic Lifecycle.
4. Categorize documents by topics (Section 9.6). This can be achieved through topic models (such as latent Dirichlet allocation).
5. Determine sentiments of the reviews (Section 9.7). Identify whether the reviews are positive or negative. Many product review sites provide ratings of a product with each review. If such information is not available, techniques like sentiment analysis can be used on the textual data to infer the underlying sentiments. People can express many emotions. To keep the process simple, ACME considers sentiments as positive, neutral, or negative.
6. Review the results and gain greater insights (Section 9.8). This step corresponds to Phases 5 and 6 of the Data Analytic Lifecycle. Marketing gathers the results from the previous steps. Find out what exactly makes people love or hate a product. Use one or more visualization techniques to report the findings. Test the soundness of the conclusions and operationalize the findings if applicable.
This process organizes the topics presented in the rest of the chapter and calls out some of the difficulties that are unique to text analysis.
9.3 Collecting Raw Text
Recall that in the Data Analytic Lifecycle seen in Chapter 2, "Data Analytics Lifecycle," discovery is the first phase. In it, the Data Science team investigates the problem, understands the necessary data sources, and formulates initial hypotheses. Correspondingly, for text analysis, data must be collected before anything can happen. The Data Science team starts by actively monitoring various websites for user-generated contents. The user-generated contents being collected could be related articles from news portals and blogs, comments on ACME's products from online shops or review sites, or social media posts that contain keywords bPhone or bEbook. Regardless of where the data comes from, it's likely that the team would deal with semi-structured data such as HTML web pages, Really Simple Syndication (RSS) feeds, XML, or JavaScript Object Notation (JSON) files. Enough structure needs to be imposed to find the part of the raw text that the team really cares about. In the brand management example, ACME is interested in what the reviews say about bPhone or bEbook and when the reviews are posted. Therefore, the team will actively collect such information.
Many websites and services offer public APIs [4, 5] for third-party developers to access their data. For example, the Twitter API [6] allows developers to choose from the Streaming API or the REST API to retrieve public Twitter posts that contain the keywords bPhone or bEbook. Developers can also read tweets in real time from a specific user or tweets posted near a specific venue. The fetched tweets are in the JSON format. As an example, a sample tweet that contains the keyword bPhone fetched using the Twitter Streaming API version 1.1 is shown next.
03   "coordinates": {
04     "coordinates": [
05       -157.81538521787621,
06       21.3002578885766
07     ],
08     "type": "Point"
09   },
10 “favorite_count”: 0,
11 “id”: 36810148827682401J.
12 “id_str”: “36810148827632401–i”,
13 “lang”: “en”,
14
15
16
17
18
19
20
“metadata”: {
“iso_language_code”: “en”,
“result_type”: “recent”
},
“retweet count”: 0,
“retto.’eeted”: false,
“source”: “Twitter for bPhone“,
“text”: “I once had a gf back in the day. Then the bPhone
came out lol”,
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
-!1
42
43
45
“truncated”: false,
“user”: {
“contributors enabled”: false,
“created at”: ,.i·ion Jun 24 09:15:54 ~oooo 2013″,
“default_profile”: false,
“default_profile_image”: false,
“description”: “Love Life and Live Good”,
“favourites count”: 23,
“follow_request_sent”: false,
“followers_count”: 96,
“following”: false,
“friends count”: 347,
“geo enabled”: false,
“id”: 2542887414,
“id str”: “2542887414”,
“is_translator”: false,
11 lang”: “e:1-gb”,
“listed_count”: 0,
“location”: “Beautiful Ha\·:aii”,
“name”: “The Original DJ Ice”,
“notifications”: false,
“profile_background_color”: “CO;)EED”,
46 “profile_background_image_url”:
47 “http://aO.twimg.com/profile_bg_imgs/378BOOOOO/b12e56725ee “,
48 “profile_background_tile”: true,
49 “profile_image_url”:
50 “http://aO.twimg.com/profile_imgs/378800010/2d55a4388bcffd5 “,
51
52
“profile_link_color”: “008484”,
“profile_sidebar_bcrder_color”: “FFFFFF”,
53
54
55
56
57
58
59
60
61
62
63
64
“profile sidebar fill colo!.-“: “DDEEFE”,
“profile_tezt_color”: “333333”,
“prof::._ le _use_ bac~:ground _image”: true,
11 protec~ed”: false,
“screen_nameu: “DJ~Ice”,
“statuses_ccunt”: 186,
“time zone”: “Ha•t\faii”,
“url”: null,
“utc_offset”: -36000,
“verified”: false
Fields created_at at line 2 and text at line 22 in the previous tweet provide the information that interests ACME. The created_at entry stores the timestamp that the tweet was published, and the text field stores the main content of the Twitter post. Other fields could be useful, too. For example, utilizing fields such as coordinates (line 3 to 9), user's local language (lang, line 40), user's location (line 42), time_zone (line 59), and utc_offset (line 61) allows the analysis to focus on tweets from a specific region. Therefore, the team can research what people say about ACME's products at a more granular level.
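As an illustrative sketch that is not from the text, a fetched tweet stored as a JSON string can be parsed in R with the jsonlite package (assuming it is installed); the timestamp below is a hypothetical placeholder rather than the value from the sample tweet.

library(jsonlite)
# a hypothetical, heavily trimmed tweet; field names follow the Twitter API v1.1 format
tweet_json <- '{"created_at": "Mon Jun 24 09:15:54 +0000 2013",
                "text": "I once had a gf back in the day. Then the bPhone came out lol"}'
tweet <- fromJSON(tweet_json)
tweet$created_at   # timestamp the tweet was published
tweet$text         # main content of the Twitter post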
Many news portals and blogs provide data feeds that are in an open standard format, such as RSS or
XML. As an example, an RSS feed for a phone review blog is shown next.
01 <rss version="2.0">
02   <channel>
03     <title>...</title>
04     <link>http://www.phones.com/link.htm</link>
05     <description>...</description>
06     <item>
07       <title>...</title>
08       <description>...</description>
09       <link>http://www.phones.com/link.htm</link>
10       ...
11       <pubDate>...</pubDate>
12     </item>
13   </channel>
The content from the title (line 7), the description (line 8), and the published date (pubDate, line 11) is what ACME is interested in.
If the plan is to collect user comments on ACME’s products from online shops and review sites where
APIs or data feeds are not provided, the team may have to write web scrapers to parse web pages and
automatically extract the interesting data from those HTML files. A web scraper is a software program
(bot) that systematically browses the World Wide Web, downloads web pages, extracts useful information,
and stores it somewhere for further study.
Unfortunately, it is nearly impossible to write a one-size-fits-all web scraper. This is because websites
like online shops and review sites have different structures. It is common to customize a web scraper for a
specific website. In addition, the website formats can change over time, which requires the web scraper to
be updated every now and then. To build a web scraper for a specific website, one must study the HTML
source code of its web pages to find patterns before extracting any useful content. For example, the team
may find out that each user comment in the HTML is enclosed by a DIV element inside another DIV with
the ID usrcommt, or it might be enclosed by a DIV element with the CLASS commtcls.
The team can then construct the web scraper based on the identified patterns. The scraper can use the curl tool [7] to fetch HTML source code given specific URLs, use XPath [8] and regular expressions to select and extract the data that match the patterns, and write them into a data store.
Regular expressions can find words and strings that match particular patterns in the text effectively and efficiently. Table 9-3 shows some regular expressions. The general idea is that once text from the fields of interest is obtained, regular expressions can help identify if the text is really interesting for the project. In this case, do those fields mention bPhone, bEbook, or ACME? When matching the text, regular expressions can also take into account capitalizations, common misspellings, common abbreviations, and special formats for e-mail addresses, dates, and telephone numbers. A short R sketch applying these patterns follows Table 9-3.
TABLE 9-3 Example Regular Expressions

Regular Expression   Matches                                              Note
b(P|p)hone           bPhone, bphone                                       Pipe "|" means "or"
bEbo*k               bEbk, bEbok, bEbook, bEboook, bEbooook, ...          "*" matches zero or more occurrences of the preceding letter
bEbo+k               bEbok, bEbook, bEboook, bEbooook, ...                "+" matches one or more occurrences of the preceding letter
bEbo{2,4}k           bEbook, bEboook, bEbooook                            "{2,4}" matches from two to four repetitions of the preceding letter "o"
^I love              Text starting with "I love"                          "^" matches the start of a string
ACME$                Text ending with "ACME"                              "$" matches the end of a string
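As an illustrative sketch that is not from the text, the patterns in Table 9-3 can be applied in R with grepl(), which reports whether a regular expression matches each string.

reviews <- c("I love my bPhone", "the bEbook is slow", "ACME makes nothing I want")
grepl("b(P|p)hone", reviews)   # TRUE FALSE FALSE
grepl("bEbo+k", reviews)       # FALSE TRUE FALSE
grepl("^I love", reviews)      # TRUE FALSE FALSE
grepl("ACME$", reviews)        # FALSE FALSE FALSE ("ACME" is not at the end of any string)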
This section has discussed three different sources where raw data may come from: tweets that contain keywords bPhone or bEbook, related articles from news portals and blogs, and comments on ACME's products from online shops or review sites.
If one chooses not to build a data collector from scratch, many companies such as GNIP [9] and DataSift
[10] can provide data collection or data reselling services.
Depending on how the fetched raw data will be used, the Data Science team needs to be careful
not to violate the rights of the owner of the information and user agreements about use of websites
during the data collection. Many websites place a file called robots.txt in the root directory, that is, http://.../robots.txt (for example, http://www.amazon.com/robots.txt). It lists the directories and files that are allowed or disallowed to be visited so that web scrapers or web crawlers know how to treat the website correctly.
9.4 Representing Text
After the previous step, the team now has some raw text to start with. In this data representation step, raw text is first transformed with text normalization techniques such as tokenization and case folding. Then it is represented in a more structured way for analysis.
Tokenization is the task of separating (also called tokenizing) words from the body of text. Raw text is converted into collections of tokens after the tokenization, where each token is generally a word.
A common approach is tokenizing on spaces. For example, with the tweet shown previously:
I once had a gf back in the day. Then the bPhone came out lol
tokenization based on spaces would output a list of tokens.
{I, once, had, a, gf, back, in, the, day., Then, the, bPhone, came, out, lol}
Note that token "day." contains a period. This is the result of only using space as the separator. Therefore, tokens "day." and "day" would be considered different terms in the downstream analysis unless an additional lookup table is provided. One way to fix the problem without the use of a lookup table is to remove the period if it appears at the end of a sentence. Another way is to tokenize the text based on punctuation marks and spaces. In this case, the previous tweet would become:
{I, once, had, a, gf, back, in, the, day, Then, the, bPhone, came, out, lol}
However, tokenizing based on punctuation marks might not be well suited to certain scenarios. For example, if the text contains contractions such as we'll, tokenizing based on punctuation will split them into separated words we and ll. For words such as can't, the output would be can and t. It would be more preferable either not to tokenize them or to tokenize we'll into we and 'll, and can't into can and 't. The 't token is more recognizable as negative than the t token. If the team is dealing with certain tasks such as information extraction or sentiment analysis, tokenizing solely based on punctuation marks and spaces may obscure or even distort meanings in the text.
Tokenization is a much more difficult task than one may expect. For example, should words like state-of-the-art, Wi-Fi, and San Francisco be considered one token or more? Should words like Résumé, resumé, and resume all map to the same token? Tokenization is even more difficult beyond English. In German, for example, there are many unsegmented compound nouns. In Chinese, there are no spaces between words. Japanese has several alphabets intermingled. This list can go on.
It's safe to say that there is no single tokenizer that will work in every scenario. The team needs to decide what counts as a token depending on the domain of the task and select an appropriate tokenization technique that fits most situations well. In reality, it's common to pair a standard tokenization technique with a lookup table to address the contractions and terms that should not be tokenized. Sometimes it may not be a bad idea to develop one's own tokenization from scratch.
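As an illustrative sketch that is not from the text, the two simple tokenizers discussed above can be compared in R with strsplit(); note how the space-only version keeps the trailing period on "day.".

text <- "I once had a gf back in the day. Then the bPhone came out lol"
strsplit(text, " ")[[1]]                       # tokenize on spaces only; yields the token "day."
strsplit(text, "[[:punct:][:space:]]+")[[1]]   # tokenize on punctuation marks and spaces; yields "day"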
Another text normalization technique is called case folding, which reduces all letters to lowercase (or the opposite if applicable). For the previous tweet, after case folding the text would become this:
i once had a gf back in the day. then the bphone came out lol
One needs to be cautious applying case folding to tasks such as information extraction, sentiment analysis, and machine translation. If implemented incorrectly, case folding may reduce or change the meaning of the text and create additional noise. For example, when General Motors becomes general and motors, the downstream analysis may very likely consider them as separated words rather than the name of a company. When the abbreviation of the World Health Organization WHO or the rock band The Who become who, they may both be interpreted as the pronoun who.
If case folding must be present, one way to reduce such problems is to create a lookup table of words not to be case folded. Alternatively, the team can come up with some heuristics or rules-based strategies for the case folding. For example, the program can be taught to ignore words that have uppercase in the middle of a sentence.
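As an illustrative sketch that is not from the text, one such heuristic can be expressed in a few lines of R: lowercase a token only when it starts the sentence or does not begin with an uppercase letter, which crudely preserves likely proper nouns.

tokens <- c("The", "Who", "played", "a", "show", "for", "General", "Motors")
fold <- function(tokens) {
  sapply(seq_along(tokens), function(i) {
    # keep capitalized tokens that are not at the start of the sentence
    if (i > 1 && grepl("^[A-Z]", tokens[i])) tokens[i] else tolower(tokens[i])
  })
}
fold(tokens)
# "the" "Who" "played" "a" "show" "for" "General" "Motors"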
After normalizing the text by tokenization and case folding, it needs to be represented in a more structured way. A simple yet widely used approach to represent text is called bag-of-words. Given a document, bag-of-words represents the document as a set of terms, ignoring information such as order, context, inferences, and discourse. Each word is considered a term or token (which is often the smallest unit for the analysis). In many cases, bag-of-words additionally assumes every term in the document is independent. The document then becomes a vector with one dimension for every distinct term in the space, and the terms are unordered. The permutation D* of a document D contains the same words exactly the same number of times but in a different order. Therefore, using the bag-of-words representation, document D and its permutation D* would share the same representation.
Bag-of-words takes quite a naïve approach, as order plays an important role in the semantics of text. With bag-of-words, many texts with different meanings are combined into one form. For example, the texts "a dog bites a man" and "a man bites a dog" have very different meanings, but they would share the same representation with bag-of-words.
Although the bag-of-words technique oversimplifies the problem, it is still considered a good approach
to start with, and it is widely used for text analysis. A paper by Salton and Buckley [11] states the effective-
ness of using single words as identifiers as opposed to multiple-term identifiers, which retain the order
of the words:
In reviewing the extensive literature accumulated during the past 25 years in the
area of retrieval system evaluation, the overwhelming evidence is that the judicious
use of single-term identifiers is preferable to the incorporation of more complex
entities extracted from the texts themselves or obtained from available vocabulary
schedules.
Although the work by Salton and Buckley was published in 1988, there has been little, if any, substantial
evidence to discredit the claim. Bag-of-words uses single-term identifiers, which are usually sufficient for
the text analysis in place of multiple-term identifiers.
Using single words as identifiers with the bag-of-words representation, the term frequency (TF) of each word can be calculated. Term frequency represents the weight of each term in a document, and it is proportional to the number of occurrences of the term in that document. Figure 9-2 shows the 50 most frequent words and the numbers of occurrences from Shakespeare's Hamlet. The word frequency distribution roughly follows Zipf's Law [12, 13], that is, the i-th most common word occurs approximately 1/i as
often as the most frequent term. In other words, the frequency of a word is inversely proportional to its rank in the frequency table. Term frequency is revisited later in this chapter.
FIGURE 9-2 The 50 most frequent words in Shakespeare's Hamlet (y-axis: number of occurrences)
What’s Beyond Bag-of-Words?
Bag-of-words is a common technique to start with. But sometimes the Data Science team prefers
other methods of text representation that are more sophisticated. These more advanced methods
consider factor s such as word order, context, inferences, and discourse. For example, one such
method can keep track of the word order of every document and compare the normalized dif-
ferences of the word orders [14]. These advanced techniques are outside the scope of this book.
Besides extracting the terms, their morphological features may need to be included. The morphological features specify additional information about the terms, which may include root words, affixes, part-of-speech tags, named entities, or intonation (variations of spoken pitch). The features from this step contribute to the downstream analysis in classification or sentiment analysis.
The set of features that need to be extracted and stored highly depends on the specific task to be performed. If the task is to label and distinguish the part of speech, for example, the features will include all the words in the text and their corresponding part-of-speech tags. If the task is to annotate the named entities
like names and organizations, the features highlight such information appearing in the text. Constructing the features is no trivial task; quite often this is done entirely manually, and sometimes it requires domain expertise.
Sometimes creating features is a text analysis task all to itself. One such example is topic modeling. Topic modeling provides a way to quickly analyze large volumes of raw text and identify the latent topics. Topic modeling may not require the documents to be labeled or annotated. It can discover topics directly from an analysis of the raw text. A topic consists of a cluster of words that frequently occur together and that share the same theme. Probabilistic topic modeling, discussed in greater detail later in Section 9.6, is a suite of algorithms that aim to parse large archives of documents and discover and annotate the topics.
It is important not only to create a representation of a document but also to create a representation of a corpus. As introduced earlier in the chapter, a corpus is a collection of documents. A corpus could be so large that it includes all the documents in one or more languages, or it could be smaller or limited to a specific domain, such as technology, medicine, or law. For a web search engine, the entire World Wide Web is the relevant corpus. Most corpora are much smaller. The Brown Corpus [15] was the first million-word electronic corpus of English, created in 1961 at Brown University. It includes text from around 500 sources, and the sources have been categorized into 15 genres, such as news, editorial, fiction, and so on. Table 9-4 lists the genres of the Brown Corpus as an example of how to organize information in a corpus.
TABLE 9-4 Categories of the Brown Corpus

Category                                           Number of Sources   Example Source
A. Reportage                                       44                  Chicago Tribune
B. Editorial                                       27                  Christian Science Monitor
C. Reviews                                         17                  Life
D. Religion                                        17                  William Pollard: Physicist and Christian
E. Skills and Hobbies                              36                  Joseph E. Choate: The American Boating Scene
F. Popular Lore                                    48                  David Boroff: Jewish Teen-Age Culture
G. Belles Lettres, Biography, Memoirs, and so on   75                  Selma J. Cohen: Avant-Garde Choreography
H. Miscellaneous                                   30                  U.S. Dep't of Defense: Medicine in National Defense
J. Learned                                         80                  J. F. Vedder: Micrometeorites
K. General Fiction                                 29                  David Stacton: The Judges of the Secret Court
L. Mystery and Detective Fiction                   24                  S. L. M. Barlow: Monologue of Murder
M. Science Fiction                                 6                   Jim Harmon: The Planet with No Nightmare
N. Adventure and Western Fiction                   29                  Paul Brock: Toughest Lawman in the Old West
P. Romance and Love Story                          29                  Morley Callaghan: A Passion in Rome
R. Humor                                           9                   Evan Esar: Humorous English
Many corpora focus on specific domains. For example, the BioCreative corpora [16] are from biology, the Switchboard corpus [17] contains telephone conversations, and the European Parliament Proceedings Parallel Corpus [18] was extracted from the proceedings of the European Parliament in 21 European languages.
Most corpora come with metadata, such as the size of the corpus and the domains from which the text is extracted. Some corpora (such as the Brown Corpus) include the information content of every word appearing in the text. Information content (IC) is a metric to denote the importance of a term in a corpus. The conventional way [19] of measuring the IC of a term is to combine the knowledge of its hierarchical structure from an ontology with statistics on its actual usage in text derived from a corpus. Terms with higher IC values are considered more important than terms with lower IC values. For example, the word necklace generally has a higher IC value than the word jewelry in an English corpus because jewelry is more general and is likely to appear more often than necklace. Research shows that IC can help measure the semantic similarity of terms [20]. In addition, such measures do not require an annotated corpus, and they generally achieve strong correlations with human judgment [21, 20].
In the brand management example, the team has collected the ACME product reviews and turned them into the proper representation with the techniques discussed earlier. Next, the reviews and the representation need to be stored in a searchable archive for future reference and research. This archive could be a SQL database, XML or JSON files, or plain text files from one or more directories.
Corpus statistics such as IC can help identify the importance of a term from the documents being analyzed. However, IC values included in the metadata of a traditional corpus (such as the Brown Corpus) sitting externally as a knowledge base cannot satisfy the need to analyze the dynamically changed, unstructured data from the web. The problem is twofold. First, both traditional corpora and IC metadata do not change over time. Any term not existing in the corpus text and any newly invented words would automatically receive a zero IC value. Second, the corpus represents the entire knowledge base for the algorithm being used in the downstream analysis. The nature of the unstructured text determines that the data being analyzed can contain any topics, many of which may be absent in the given knowledge base. For example, if the task is to research people's attitudes on musicians, a traditional corpus constructed 50 years ago would not know that the term U2 is a band; therefore, it would receive a zero on IC, which means it's not an
important term. A better approach would go through all the fetched documents and find out that most of them are related to music, with U2 appearing too often to be an unimportant term. Therefore, it is necessary to come up with a metric that can easily adapt to the context and nature of the text instead of relying on a traditional corpus. The next section discusses such a metric. It's known as Term Frequency-Inverse Document Frequency (TFIDF), which is based entirely on all the fetched documents and which keeps track of the importance of terms occurring in each of the documents.
Note that the fetched documents may change constantly over time. Consider the case of a web search engine, in which each fetched document corresponds to a matching web page in a search result. The documents are added, modified, or removed and, as a result, the metrics and indices must be updated correspondingly. Additionally, word distributions can change over time, which reduces the effectiveness of classifiers and filters (such as spam filters) unless they are retrained.
9.5 Term Frequency-Inverse Document Frequency (TFIDF)
This section presents TFIDF, a measure widely used in information retrieval and text analysis. Instead of using a traditional corpus as a knowledge base, TFIDF directly works on top of the fetched documents and treats these documents as the "corpus." TFIDF is robust and efficient on dynamic content, because document changes require only the update of frequency counts.
Given a term $t$ and a document $d = \{t_1, t_2, \ldots, t_n\}$ containing $n$ terms, the simplest form of the term frequency of $t$ in $d$ can be defined as the number of times $t$ appears in $d$, as shown in Equation 9-1.

$\mathrm{TF}_1(t, d) = \sum_{i=1}^{n} f(t, t_i)$, where $f(t, t') = 1$ if $t = t'$ and $0$ otherwise   (9-1)
To understand how the term frequency is computed, consider a bag-of-words vector space of 10 words: i, love, acme, my, bebook, bphone, fantastic, slow, terrible, and terrific. Given the text I love LOVE my bPhone extracted from the RSS feed in Section 9.3, Table 9-5 shows its corresponding term frequency vector after case folding and tokenization.

TABLE 9-5 A Sample Term Frequency Vector

Term        Frequency
i           1
love        2
acme        0
my          1
bebook      0
bphone      1
fantastic   0
slow        0
terrible    0
terrific    0
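As an illustrative sketch that is not from the text, the term frequency vector in Table 9-5 can be reproduced in R by combining case folding, tokenization, and table() over the assumed 10-word vocabulary.

vocab <- c("i", "love", "acme", "my", "bebook", "bphone",
           "fantastic", "slow", "terrible", "terrific")
text <- "I love LOVE my bPhone"
tokens <- strsplit(tolower(text), "[[:punct:][:space:]]+")[[1]]   # case folding + tokenization
tf <- table(factor(tokens, levels = vocab))                       # counts for every vocabulary term, zeros included
tf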
The term frequency function can be logarithmically scaled. Recall that Figures 3-11 and 3-12 of Chapter 3, "Review of Basic Data Analytic Methods Using R," show that the logarithm can be applied to a distribution with a long tail to enable more data detail. Similarly, the logarithm can be applied to word frequencies, whose distribution also contains a long tail, as shown in Equation 9-2.

$\mathrm{TF}_2(t, d) = \log\big[\mathrm{TF}_1(t, d) + 1\big]$   (9-2)
Because longer documents contain more terms, they tend to have higher term frequency values. They also tend to contain more distinct terms. These factors can conspire to raise the term frequency values of longer documents and lead to undesirable bias favoring longer documents. To address this problem, the term frequency can be normalized. For example, the term frequency of term $t$ in document $d$ can be normalized based on the number of terms in $d$, as shown in Equation 9-3.

$\mathrm{TF}_3(t, d) = \dfrac{\mathrm{TF}_1(t, d)}{n}$   (9-3)
Besides the three common definitions mentioned earlier, there are other less common variations [22] of term frequency. In practice, one needs to choose the term frequency definition that is the most suitable to the data and the problem to be solved.
A term frequency vector (shown in Table 9-5) can become very high dimensional because the bag-of-words vector space can grow substantially to include all the words in English. The high dimensionality makes it difficult to store and parse the text and contributes to performance issues related to text analysis.
For the purpose of reducing dimensionality, not all the words from a given language need to be included in the term frequency vector. In English, for example, it is common to remove words such as the, a, of, and, to, and other articles that are not likely to contribute to semantic understanding. These common words are called stop words. Lists of stop words are available in various languages for automating the identification of stop words. Among them is the Snowball stop words list [23] that contains stop words in more than ten languages.
Another simple yet effective way to reduce dimensionality is to store a term and its frequency only if the term appears at least once in a document. Any term not existing in the term frequency vector by default has a frequency of 0. Therefore, the previous term frequency vector would be simplified to what is shown in Table 9-6.
TABLE 9-6 A Simpler Form of the Term Frequency Vector

Term      Frequency
i         1
love      2
my        1
bphone    1
Some NLP techniques such as lemmatization and stemming can also reduce high dimensionality. Lemmatization and stemming are two different techniques that combine various forms of a word. With these techniques, words such as play, plays, played, and playing can be mapped to the same term.
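As an illustration, the NLTK library (used later in this chapter for sentiment analysis) provides both a stemmer and a lemmatizer. The snippet below is a minimal sketch, assuming NLTK and its WordNet data have been installed; it is not part of the original example.

from nltk.stem import PorterStemmer, WordNetLemmatizer

words = ["play", "plays", "played", "playing"]

# Stemming chops suffixes according to fixed rules
stemmer = PorterStemmer()
print([stemmer.stem(w) for w in words])                    # ['play', 'play', 'play', 'play']

# Lemmatization uses a vocabulary and part of speech; here the words are treated as verbs
lemmatizer = WordNetLemmatizer()
print([lemmatizer.lemmatize(w, pos="v") for w in words])   # ['play', 'play', 'play', 'play']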
As has been shown, the term frequency is based on the raw count of a term occurring in a stand-alone document. Term frequency by itself suffers from a critical problem: It regards the stand-alone document as the entire world. The importance of a term is based solely on its presence in this particular document. Stop words such as the, and, and a could be inappropriately considered the most important because they have the highest frequencies in every document. For example, the top three most frequent words in Shakespeare's Hamlet are all stop words (the, and, and of, as shown in Figure 9-2). Besides stop words, words that are more general in meaning tend to appear more often, thus having higher term frequencies. In an article about consumer telecommunications, the word phone would be likely to receive a high term frequency. As a result, important keywords such as bPhone and bEbook and their related words could appear to be less important. Consider a search engine that responds to a search query and fetches relevant documents. Using term frequency alone, the search engine would not properly assess how relevant each document is to the search query.
A quick fix for the problem is to introduce an additional variable that has a broader view of the world: it considers the importance of a term not only in a single document but in a collection of documents, or in a corpus. The additional variable should reduce the effect of the term frequency as the term appears in more documents.
Indeed, that is the intention of the inverse document frequency (IDF). The IDF inversely corresponds to the document frequency (DF), which is defined as the number of documents in the corpus that contain a term. Let a corpus D contain N documents. The document frequency of a term t in corpus D = {d_1, d_2, ..., d_N} is defined as shown in Equation 9-4.
DF(t) = \sum_{i=1}^{N} f'(t, d_i),   d_i ∈ D, |D| = N    (9-4)

where f'(t, d_i) = 1 if t ∈ d_i, and 0 otherwise.
The inverse document frequency of a term t is obtained by dividing N by the document frequency of the term and then taking the logarithm of that quotient, as shown in Equation 9-5.

IDF_1(t) = log[N / DF(t)]    (9-5)
If the term is not in the corpus, this leads to a division by zero. A quick fix is to add 1 to the denominator, as demonstrated in Equation 9-6.

IDF_2(t) = log[N / (DF(t) + 1)]    (9-6)
The precise base of the logarithm is not material to the ranking of a term. Mathematically, the base
constitutes a constant multiplicative factor towards the overall result.
Figure 9-3 shows 50 words with (a) the highest corpus-wide term frequencies (TF), (b) the highest document frequencies (DF), and (c) the highest inverse document frequencies (IDF) from the news category of the Brown Corpus. Stop words tend to have higher TF and DF because they are likely to appear more often in most documents.
Words with higher IDF tend to be more meaningful over the entire corpus. In other words, the IDF of
a rare term would be high, and the IDF of a frequent term would be low. For example, if a corpus contains
1,000 documents, 1,000 of them might contain the word the, and 10 of them might contain the word
bPhone. With Equation 9-5, the IDF of the would be 0, and the IDF of bPhone would be log 100, which
is greater than the IDF of the. If a corpus consists of mostly phone reviews, the word phone would prob-
ably have high TF and DF but low IDF.
Despite the fact that IDF encourages words that are more meaningful, it comes with a caveat. Because the total document count of a corpus (N) remains constant, IDF depends solely on the DF. All words having the same DF value therefore receive the same IDF value. IDF scores words that occur less frequently across the documents higher; those words with the lowest DF receive the same, highest IDF. In Figure 9-3(c), for example, sunbonnet and narcotic appeared in an equal number of documents in the Brown Corpus; therefore, they received the same IDF values. In many cases, it is useful to distinguish between two words that appear in an equal number of documents. Methods to further weight words should be considered to refine the IDF score.
The TFIDF (or TF-IDF) is a measure that considers both the prevalence of a term within a document (TF) and the scarcity of the term over the entire corpus (IDF). The TFIDF of a term t in a document d is defined as the term frequency of t in d multiplied by the inverse document frequency of t in the corpus, as shown in Equation 9-7:

TFIDF(t, d) = TF(t, d) × IDF(t)    (9-7)

FIGURE 9-3 Fifty words with the highest corpus-wide TF (a), the highest DF (b), and the highest IDF (c) from the news category of the Brown Corpus
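To make Equations 9-4 through 9-7 concrete, the following sketch computes DF, IDF, and TFIDF over a toy corpus of tokenized documents. It is illustrative only; the corpus, the function name, and the use of raw term counts for TF are assumptions, not part of the original example.

import math
from collections import Counter

def tfidf_scores(corpus):
    """corpus: list of tokenized documents (lists of terms).
    Returns one {term: TFIDF} dictionary per document."""
    N = len(corpus)
    # Document frequency (Equation 9-4): number of documents containing each term
    df = Counter()
    for doc in corpus:
        df.update(set(doc))
    # Inverse document frequency (Equation 9-5): IDF(t) = log(N / DF(t))
    idf = {t: math.log(N / df[t]) for t in df}
    # TFIDF (Equation 9-7): raw term frequency multiplied by IDF
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        scores.append({t: tf[t] * idf[t] for t in tf})
    return scores

corpus = [
    "i love love my bphone".split(),
    "the bphone is terrific".split(),
    "the bebook is terrible and slow".split(),
]
print(tfidf_scores(corpus)[0])   # 'love' scores highest; 'bphone' is discounted because it appears in two documents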
TFIDF is efficient in that the calculations are simple and straightforward, and it does not require knowledge of the underlying meanings of the text. But this approach also reveals little of the inter-document or intra-document statistical structure. The next section shows how topic models can address this shortcoming of TFIDF.
9.6 Categorizing Documents by Topics
With the reviews collected and represented, the data science team at ACME wants to categorize the reviews
by topics. As discussed earlier in the chapter, a topic consists of a cluster of words that frequently occur
together and share the same theme.
The topics of a document are not as straightforward as they might initially appear. Consider these two
reviews:
1. The bPhone5x has coverage everywhere. It's much less flaky than my old bPhone4G.

2. While I love ACME's bPhone series, I've been quite disappointed by the bEbook. The text is illegible, and it makes even my old NBook look blazingly fast.

Is the first review about bPhone5x or bPhone4G? Is the second review about bPhone, bEbook, or NBook?
For machines, these questions can be difficult to answer.
Intuitively, if a review is talking about bPhone5x, the term bPhone5x and related terms (such as phone and ACME) are likely to appear frequently. A document typically consists of multiple themes running through the text in different proportions; for example, 30% on a topic related to phones, 15% on a topic related to appearance, 10% on a topic related to shipping, 5% on a topic related to service, and so on.
Document grouping can be achieved with clustering methods such as k-means clustering [24] or classification methods such as support vector machines [25], k-nearest neighbors [26], or naïve Bayes [27]. However, a more feasible and prevalent approach is to use topic modeling. Topic modeling provides tools to automatically organize, search, understand, and summarize vast amounts of information. Topic models [28, 29] are statistical models that examine words from a set of documents, determine the themes over the text, and discover how the themes are associated or change over time. The process of topic modeling can be simplified to the following.
1. Uncover the hidden topical patterns within a corpus.
2. Annotate documents according to these topics.
3. Use annotations to organize, search, and summarize texts.
A topic is formally defined as a distribution over a fixed vocabulary of words [29]. Different topics would have different distributions over the same vocabulary. A topic can be viewed as a cluster of words with related meanings, and each word has a corresponding weight inside this topic. Note that a word from the vocabulary can reside in multiple topics with different weights. Topic models do not necessarily require prior knowledge of the texts. The topics can emerge solely based on analyzing the text.
The simplest topic model is latent Dirichlet allocation (LDA) [29], a generative probabilistic model of a corpus proposed by David M. Blei and two other researchers. In generative probabilistic modeling, data is treated as the result of a generative process that includes hidden variables. LDA assumes that there is a fixed vocabulary of words, and the number of latent topics is predefined and remains constant. LDA assumes that each latent topic follows a Dirichlet distribution [30] over the vocabulary, and each document is represented as a random mixture of latent topics.
Figure 9-4 illustrates the intuitions behind LDA. The left side of the figure shows four topics built from a corpus, where each topic contains a list of the most important words from the vocabulary. The four example topics are related to problem, policy, neural, and report. For each document, a distribution over the topics is chosen, as shown in the histogram on the right. Next, a topic assignment is picked for each word in the document, and the word from the corresponding topic (colored discs) is chosen. In reality, only the documents (as shown in the middle of the figure) are available. The goal of LDA is to infer the underlying topics, topic proportions, and topic assignments for every document.
FIGURE 9-4 The intuitions behind LDA (topics, document, topic assignments, and topic proportions)
The reader can refer to the original paper [29] for the mathematical details of LDA. Basically, LDA can be viewed as a case of hierarchical Bayesian estimation with a posterior distribution to group data such as documents with similar topics.
Many programming tools provide software packages that can perform LDA over datasets. R comes with an lda package [31] that has built-in functions and sample datasets. The lda package was developed by David M. Blei's research group [32]. Figure 9-5 shows the distributions of ten topics on nine scientific documents randomly drawn from the cora dataset of the lda package. The cora dataset is a collection of 2,410 scientific documents extracted from the Cora search engine [33].
FIGURE 9-5 Distributions of ten topics over nine scientific documents from the Cora dataset
The code that follows shows how to generate a graph similar to Figure 9-5 using R and add-on packages such as lda and ggplot2.
require("ggplot2")
require("reshape2")
require("lda")

# load documents and vocabulary
data(cora.documents)
data(cora.vocab)

theme_set(theme_bw())

# number of topic clusters
K <- 10

# number of documents to display
N <- 9

result <- lda.collapsed.gibbs.sampler(cora.documents,
                                      K,      ## Num clusters
                                      cora.vocab,
                                      25,     ## Num iterations
                                      0.1,
                                      0.1,
                                      compute.log.likelihood=TRUE)

# get the top words in the cluster
top.words <- top.topic.words(result$topics, 5, by.score=TRUE)

# build the distribution of topics over the sampled documents
topic.props <- t(result$document_sums) / colSums(result$document_sums)
document.samples <- sample(1:dim(topic.props)[1], N)
topic.props <- topic.props[document.samples, ]
topic.props[is.na(topic.props)] <- 1 / K
colnames(topic.props) <- apply(top.words, 2, paste, collapse=" ")

topic.props.df <- melt(cbind(data.frame(topic.props),
                             document=factor(1:N)),
                       variable.name="topic",
                       id.vars="document")

qplot(topic, value * 100, fill=topic, stat="identity",
      ylab="proportion (%)", data=topic.props.df,
      geom="histogram") +
  theme(axis.text.x = element_text(angle=0, hjust=1, size=12)) +
  coord_flip() +
  facet_wrap(~ document, ncol=3)
Topic models can be used in document modeling, document classification, and collaborative filtering [29]. Topic models not only can be applied to textual data, they can also help annotate images. Just as a document can be considered a collection of topics, images can be considered a collection of image features.
9.7 Determining Sentiments
In addition to TFIDF and topic models, the Data Science team may want to identify the sentiments in user comments and reviews of the ACME products. Sentiment analysis refers to a group of tasks that use statistics and natural language processing to mine opinions and to identify and extract subjective information from texts.

Early work on sentiment analysis focused on detecting the polarity of product reviews from Epinions [34] and movie reviews from the Internet Movie Database (IMDb) [35] at the document level. Later work handles sentiment analysis at the sentence level [36]. More recently, the focus has shifted to phrase-level [37] and short-text forms in response to the popularity of micro-blogging services such as Twitter [38, 39, 40, 41, 42].
Intuitively, to conduct sentiment analysis, one can manually construct lists of words with positive sentiments (such as brilliant, awesome, and spectacular) and negative sentiments (such as awful, stupid, and hideous). Related work has pointed out that such an approach can be expected to achieve accuracy around 60% [35], and it is likely to be outperformed by examination of corpus statistics [43].
Classification methods such as naïve Bayes (as introduced in Chapter 7), maximum entropy (MaxEnt), and support vector machines (SVM) are often used to extract corpus statistics for sentiment analysis. Related research has found that these classifiers can score around 80% accuracy [35, 41, 42] on sentiment analysis over unstructured data. One or more of such classifiers can be applied to unstructured data, such as movie reviews or even tweets.
The movie review corpus by Pang et al. [35] includes 2,000 movie reviews collected from an IMDb
archive of the rec.arts.movies.reviews newsgroup [43]. These movie reviews have been manually tagged
into 1,000 positive reviews and 1,000 negative reviews.
Depending on the classifier, the data may need to be split into training and testing sets. As seen previously in Chapter 7, a useful rule of thumb is to make the training set much bigger than the testing set. For example, an 80/20 split would use 80% of the data as the training set and 20% as the testing set.
Next, one or more classifiers are trained over the training set to learn the characteristics or patterns
residing in the data. The sentiment tags in the testing data are hidden away from the classifiers. After the
training, classifiers are tested over the testing set to infer the sentiment tags. Finally, the result is compared
against the original sentiment tags to evaluate the overall performance of the classifier.
The code that follows is written in Python using the Natural Language Toolkit (NLTK) library (http://nltk.org/). It shows how to perform sentiment analysis using the naïve Bayes classifier over the movie review corpus.
The code splits the 2,000 reviews into 1,600 reviews as the training set and 400 reviews as the testing set. The naïve Bayes classifier learns from the training set. The sentiments in the testing set are hidden from the classifier. For each review in the training set, the classifier learns how each feature impacts the outcome sentiment. Next, the classifier is given the testing set. For each review in that set, it predicts what the corresponding sentiment should be, given the features in the current review.
import nltk.classify.util
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews
from collections import defaultdict
import numpy as np

# define an 80/20 split for train/test
SPLIT = 0.8

def word_feats(words):
    feats = defaultdict(lambda: False)
    for word in words:
        feats[word] = True
    return feats

posids = movie_reviews.fileids('pos')
negids = movie_reviews.fileids('neg')

posfeats = [(word_feats(movie_reviews.words(fileids=[f])), 'pos')
            for f in posids]
negfeats = [(word_feats(movie_reviews.words(fileids=[f])), 'neg')
            for f in negids]

cutoff = int(len(posfeats) * SPLIT)

trainfeats = negfeats[:cutoff] + posfeats[:cutoff]
testfeats = negfeats[cutoff:] + posfeats[cutoff:]

print 'Train on %d instances\nTest on %d instances' % (len(trainfeats),
                                                       len(testfeats))

classifier = NaiveBayesClassifier.train(trainfeats)
print 'Accuracy:', nltk.classify.util.accuracy(classifier, testfeats)
classifier.show_most_informative_features()

# prepare confusion matrix
pos = [classifier.classify(fs) for (fs, l) in posfeats[cutoff:]]
pos = np.array(pos)
neg = [classifier.classify(fs) for (fs, l) in negfeats[cutoff:]]
neg = np.array(neg)

print 'Confusion matrix:'
print '\t' * 2, 'Predicted class'
print '-' * 40
print '|\t %d (TP) \t|\t %d (FN) \t| Actual class' % \
    ((pos == 'pos').sum(), (pos == 'neg').sum())
print '-' * 40
print '|\t %d (FP) \t|\t %d (TN) \t|' % \
    ((neg == 'pos').sum(), (neg == 'neg').sum())
print '-' * 40
The output that follows shows that the naïve Bayes classifier is trained on 1,600 instances and tested on 400 instances from the movie corpus. The classifier achieves an accuracy of 73.5%. The most informative features for positive reviews from the corpus include words such as outstanding, vulnerable, and astounding; words such as insulting, ludicrous, and uninvolving are the most informative features for negative reviews. At the end, the output also shows the confusion matrix corresponding to the classifier to further evaluate the performance.
Train on 1600 instances
Test on 400 instances
Accuracy: 0.735
Most Informative Features
...
As discussed earlier in Chapter 7, a confusion matrix is a specific table layout that allows visualization of the performance of a model over the testing set. Every row and column corresponds to a possible class in the dataset. Each cell in the matrix shows the number of test examples for which the actual class is the row and the predicted class is the column. Good results correspond to large numbers down the main diagonal (TP and TN) and small, ideally zero, off-diagonal elements (FP and FN). Table 9-7 shows the confusion matrix from the previous program output for the testing set of 400 reviews. Because a well-performing classifier should have a confusion matrix with large numbers for TP and TN and ideally near-zero numbers for FP and FN, it can be concluded that the naïve Bayes classifier has many false positives, and it does not perform very well on this testing set.
TABLE 9-7 Confusion Matrix for the Example Testing Set

                      Predicted: positive    Predicted: negative
Actual: positive      195 (TP)               5 (FN)
Actual: negative      101 (FP)               99 (TN)
Chapter 7 has introduced a few measures to evaluate the performance of a classifier beyond the confu-
sion matrix. Precision and recall are two measures commonly used to evaluate tasks related to text analysis.
Definitions of precision and recall are given in Equations 9-8 and 9-9.
Precision = TP / (TP + FP)    (9-8)

Recall = TP / (TP + FN)    (9-9)
Precision is defined as the percentage of documents in the results that are relevant. If, for the keyword bPhone, the search engine returns 100 documents and 70 of them are relevant, the precision of the search engine result is 0.7.

Recall is the percentage of returned documents among all relevant documents in the corpus. If, for the keyword bPhone, the search engine returns 100 documents, only 70 of which are relevant, while failing to return 10 additional relevant documents, the recall is 70/(70 + 10) = 0.875.
Therefore, the naïve Bayes classifier from Table 9-7 receives a recall of 195/(195 + 5) = 0.975 and a precision of 195/(195 + 101) ≈ 0.659.
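Using the counts in Table 9-7, both measures can be computed directly. The helper function below is a simple sketch and is not part of the NLTK example.

def precision_recall(tp, fp, fn):
    """Compute precision (Equation 9-8) and recall (Equation 9-9)."""
    precision = tp / float(tp + fp)
    recall = tp / float(tp + fn)
    return precision, recall

# Counts from the confusion matrix in Table 9-7
precision, recall = precision_recall(tp=195, fp=101, fn=5)
print(precision, recall)   # approximately 0.659 and 0.975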
Precision and recall are important concepts, whether the task is information retrieval by a search engine or text analysis over a finite corpus. A good classifier ideally should achieve both precision and recall close to 1.0. In information retrieval, a perfect precision score of 1.0 means that every result retrieved by a search was relevant (but says nothing about whether all relevant documents were retrieved), whereas a perfect recall score of 1.0 means that all relevant documents were retrieved by the search (but says nothing about how many irrelevant documents were also retrieved). Both precision and recall are therefore based on an understanding and measure of relevance. In reality, it is difficult for a classifier to achieve both high precision and high recall. For the example in Table 9-7, the naïve Bayes classifier has a high recall but a low precision. Therefore, the Data Science team needs to check the cleanliness of the data, optimize the classifier, and find out if there are ways to improve the precision while retaining the high recall.
Classifiers determine sentiments solely based on the datasets on which they are trained. The domain
of the data sets and the characteristics of the features determine what the knowledge classifiers can learn.
For example, lightweight is a positive feature for reviews on laptops but not necessarily for reviews on
wheelbarrows or textbooks. In addition, the training and the testing sets should share similar traits for
classifiers to perform well. For example, classifiers trained on movie reviews generally should not be tested
on tweets or blog comments.
Note that an absolute sentiment level is not necessarily very informative. Instead, a baseline should be established and then compared against the latest observed values. For example, a ratio of 40% positive tweets on a topic versus 60% negative might not be considered a sign that a product is unsuccessful if other similar successful products have a similar ratio, based on the psychology of when people tweet.
The previous example demonstrates how to use naïve Bayes to perform sentiment analysis. The example can be applied to tweets on ACME's bPhone and bEbook simply by replacing the movie review corpus with pretagged tweets. Other classifiers can also be used in place of naïve Bayes.
The movie review corpus contains only 2,000 reviews; therefore, it is relatively easy to manually tag each review. For sentiment analysis based on larger amounts of streaming data, such as millions or billions of tweets, it is less feasible to collect and construct datasets of tweets that are big enough or to manually tag each of the tweets to train and test one or more classifiers. There are two popular ways to cope with this problem. The first way to construct pretagged data, as illustrated in recent work by Go et al. [41] and Pak and Paroubek [42], is to apply supervision and use emoticons such as :) and :( to indicate whether a tweet contains positive or negative sentiments. Words from these tweets can in turn be used as clues to classify the sentiments of future tweets. Go et al. [41] use classification methods including naïve Bayes, MaxEnt, and SVM over the training and testing datasets to perform sentiment classifications. Their demo is available at http://www.sentiment140.com. Figure 9-6 shows the sentiments resulting from a query against the term "Boston weather" on a set of tweets. Viewers can mark the result as accurate or inaccurate, and such feedback can be incorporated in future training of the algorithm.
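A minimal sketch of this emoticon-based labeling idea follows. The sample tweets and the labeling rule are purely illustrative assumptions; a production system would handle many more emoticons and edge cases before training a classifier on the resulting labels.

def emoticon_label(tweet):
    """Assign a noisy sentiment label to a tweet based on emoticons."""
    if ":)" in tweet or ":-)" in tweet:
        return "pos"
    if ":(" in tweet or ":-(" in tweet:
        return "neg"
    return None   # no emoticon; leave the tweet unlabeled

tweets = [
    "loving my new bphone :)",
    "the bebook screen is illegible :(",
    "waiting for the next acme release",
]
labeled = [(t, emoticon_label(t)) for t in tweets if emoticon_label(t)]
print(labeled)   # the labeled tweets can then train a classifier such as naive Bayes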
FIGURE 9-6 Sentiment140 sentiment analysis results for tweets about "Boston weather"

9.8 Gaining Insights
So far this chapter has discussed several text analysis tasks, including text collection, text representation, TFIDF, topic models, and sentiment analysis. This section shows how ACME uses these techniques to gain insights into customer opinions about its products. To keep the example simple, this section only uses bPhone to illustrate the steps.
Corresponding to the data collection phase, the Data Science team has used bPhone as the keyword
to collect more than 300 reviews from a popular technical review website.
The 300 reviews are visualized as a word cloud after removing stop words. A word cloud (or tag cloud) is a visual representation of textual data. Tags are generally single words, and the importance of each word is shown with font size or color. Figure 9-9 shows the word cloud built from the 300 reviews. The reviews have been previously case folded and tokenized into lowercased words, and stop words have been removed from the text. A more frequently appearing word in Figure 9-9 is shown with a larger font size. The orientation of each word is only for aesthetic purposes. Most of the graph is taken up by the words phone and bphone, which occur frequently but are not very informative. Overall, the graph reveals little information. The team needs to conduct further analyses on the data.
FIGURE 9-9 Word cloud of the 300 bPhone reviews

FIGURE 9-12 Reviews highlighted by TFIDF values
Topic models such as LDA can categorize the reviews into topics. Figures 9-13 and 9-14 show circular graphs of topics as results of the LDA. These figures are produced with tools and technologies such as Python, NoSQL, and D3.js. Figure 9-13 visualizes ten topics built from the five-star reviews. Each topic focuses on a different aspect that can characterize the reviews. The disc size represents the weight of a word. In an interactive environment, hovering the mouse over a topic displays the full words and their corresponding weights.

Figure 9-14 visualizes ten topics from one-star reviews. For example, the bottom-right topic contains words such as button, power, and broken, which may indicate that bPhone has problems related to the button and power supply. The Data Science team can track down these reviews and find out if that's really the case.
FIGURE 9-13 Ten topics on five-star reviews

FIGURE 9-14 Ten topics on one-star reviews
Figure 9-15 provides a different way to visualize the topics. Five topics are extracted from five-star reviews and one-star reviews, respectively. In an interactive environment, hovering the mouse over a topic highlights the corresponding words in this topic. The screenshots in Figure 9-15 were taken when Topic 4 is highlighted for both groups. The weight of a word in a topic is indicated by the disc size.
FIGURE 9-15 Five topics on five-star reviews (left) and one-star reviews (right)
The Data Science team has also conducted sentiment analysis over 100 tweets from the popular micro-blogging site Twitter. The result is shown in Figure 9-16. The left side represents negative sentiments, and the right side represents positive sentiments. Vertically, the tweets have been randomly placed for aesthetic purposes. Each tweet is shown as a disc, where the size represents the number of followers of the user who made the original tweet. The color shade of a disc represents how frequently this tweet has been retweeted. The figure indicates that most customers are satisfied with ACME's bPhone.
FIGURE 9-16 Sentiment analysis on tweets related to bPhone
Summary
This chapter has discussed several subtasks of text analysis, including parsing, search and retrieval, and text mining. With a brand management example, the chapter walks through a typical text analysis process: (1) collecting raw text, (2) representing text, (3) using TFIDF to compute the usefulness of each word in the texts, (4) categorizing documents by topics using topic modeling, (5) determining sentiments, and (6) gaining greater insights.

Overall, text analysis is no trivial task. Corresponding to the Data Analytics Lifecycle, the most time-consuming parts of a text analysis project often are not performing the statistics or implementing algorithms. Chances are the team would spend most of the time formulating the problem, getting the data, and preparing the data.
Exercises
1. What are the main challenges of text analysis?
2. What is a corpus?
3. What are common words (such as a, and, of) called?
4. Why can't we use TF alone to measure the usefulness of the words?
5. What is a caveat of IDF? How does TFIDF address the problem?
6. Name three benefits of using TFIDF.
7. What methods can be used for sentiment analysis?
8. What is the definition of topic in topic models?
9. Explain the trade-offs for precision and recall.
10. Perform LDA topic modeling on the Reuters-21578 corpus using Python and LDA. The NLTK already comes with the Reuters-21578 corpus. To import this corpus, enter the following command at the Python prompt:

from nltk.corpus import reuters
LDA has already been implemented by several Python libraries, such as gensim [45]. Either use one such library or implement your own LDA to perform topic modeling on the Reuters-21578 corpus.

11. Choose a topic of your interest, such as a movie, a celebrity, or any buzzword. Then collect 100 tweets related to this topic. Hand-tag them as positive, neutral, or negative. Next, split them into 80 tweets as the training set and the remaining 20 as the testing set. Run one or more classifiers over these tweets to perform sentiment analysis. What are the precision and recall of these classifiers? Which classifier performs better than the others?
Bibliography
[1] Dr. Seuss, "Green Eggs and Ham," New York, NY, USA: Random House, 1960.
[2] M. Steinbach, G. Karypis, and V. Kumar, "A Comparison of Document Clustering Techniques," KDD Workshop on Text Mining, 2000.
[3] "The Penn Treebank Project," University of Pennsylvania [Online]. Available: http://www.cis.upenn.edu/~treebank/home.html. [Accessed 26 March 2014].
[4] Wikipedia, "List of Open APIs" [Online]. Available: http://en.wikipedia.org/wiki/List_of_open_APIs. [Accessed 27 March 2014].
[5] ProgrammableWeb, "API Directory" [Online]. Available: http://www.programmableweb.com/apis/directory. [Accessed 27 March 2014].
[6] Twitter, "Twitter Developers Site" [Online]. Available: https://dev.twitter.com/. [Accessed 27 March 2014].
[7] "Curl and libcurl Tools" [Online]. Available: http://curl.haxx.se/. [Accessed 27 March 2014].
[8] "XML Path Language (XPath) 2.0," World Wide Web Consortium, 14 December 2010. [Online]. Available: http://www.w3.org/TR/xpath20/. [Accessed 27 March 2014].
[9] "Gnip: The Source for Social Data," GNIP [Online]. Available: http://gnip.com/. [Accessed 12 June 2014].
[10] "DataSift: Power Decisions with Social Data," DataSift [Online]. Available: http://datasift.com/. [Accessed 12 June 2014].
[11] G. Salton and C. Buckley, "Term-Weighting Approaches in Automatic Text Retrieval," in Information Processing and Management, 1988, pp. 513-523.
[12] G. K. Zipf, Human Behavior and the Principle of Least Effort, Reading, MA: Addison-Wesley, 1949.
[13] M. E. Newman, "Power Laws, Pareto Distributions, and Zipf's Law," Contemporary Physics, vol. 46, no. 5, pp. 323-351, 2005.
[14] Y. Li, D. McLean, Z. A. Bandar, J. D. O'Shea, and K. Crockett, "Sentence Similarity Based on Semantic Nets and Corpus Statistics," IEEE Transactions on Knowledge and Data Engineering, vol. 18, no. 8, pp. 1138-1150, 2006.
[15] W. N. Francis and H. Kucera, "Brown Corpus Manual," 1979. [Online]. Available: http://icame.uib.no/brown/bcm.html.
[16] "Critical Assessment of Information Extraction in Biology (BioCreative)" [Online]. Available: http://www.biocreative.org/. [Accessed 2 April 2014].
[17] J. J. Godfrey and E. Holliman, "Switchboard-1 Release 2," Linguistic Data Consortium, Philadelphia, 1997. [Online]. Available: http://catalog.ldc.upenn.edu/LDC97S62. [Accessed 2 April 2014].
[18] P. Koehn, "Europarl: A Parallel Corpus for Statistical Machine Translation," MT Summit, 2005.
[19] N. Seco, T. Veale, and J. Hayes, "An Intrinsic Information Content Metric for Semantic Similarity in WordNet," ECAI, vol. 16, pp. 1089-1090, 2004.
[20] P. Resnik, "Using Information Content to Evaluate Semantic Similarity in a Taxonomy," Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI'95), vol. 1, pp. 448-453, 1995.
[21] T. Pedersen, "Information Content Measures of Semantic Similarity Perform Better Without Sense-Tagged Text," Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 329-332, June 2010.
[22] C. D. Manning, P. Raghavan, and H. Schutze, "Document and Query Weighting Schemes," in Introduction to Information Retrieval, Cambridge, United Kingdom: Cambridge University Press, 2008, p. 128.
[23] M. Porter, "Porter's English Stop Word List," 12 February 2007. [Online]. Available: http://snowball.tartarus.org/algorithms/english/stop.txt. [Accessed 2 April 2014].
[24] M. Steinbach, G. Karypis, and V. Kumar, "A Comparison of Document Clustering Techniques," KDD Workshop on Text Mining, vol. 400, no. 1, 2000.
[25] T. Joachims, "Transductive Inference for Text Classification Using Support Vector Machines," ICML, vol. 99, pp. 200-209, 1999.
[26] P. Soucy and G. W. Mineau, "A Simple KNN Algorithm for Text Categorization," ICDM, pp. 647-648, 2001.
[27] B. Liu, X. Li, W. S. Lee, and P. S. Yu, "Text Classification by Labeling Words," AAAI, vol. 4, pp. 425-430, 2004.
[28] D. M. Blei, "Probabilistic Topic Models," Communications of the ACM, vol. 55, no. 4, pp. 77-84, 2012.
[29] D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet Allocation," Journal of Machine Learning Research, vol. 3, pp. 993-1022, 2003.
[30] T. Minka, "Estimating a Dirichlet Distribution," 2000.
[31] J. Chang, "lda: Collapsed Gibbs Sampling Methods for Topic Models," CRAN, 14 October 2012. [Online]. Available: http://cran.r-project.org/web/packages/lda/. [Accessed 3 April 2014].
[32] D. M. Blei, "Topic Modeling Software" [Online]. Available: http://www.cs.princeton.edu/~blei/topicmodeling.html. [Accessed 11 June 2014].
[33] A. McCallum, K. Nigam, J. Rennie, and K. Seymore, "A Machine Learning Approach to Building Domain-Specific Search Engines," IJCAI, vol. 99, 1999.
[34] P. D. Turney, "Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews," Proceedings of the Association for Computational Linguistics, pp. 417-424, 2002.
[35] B. Pang, L. Lee, and S. Vaithyanathan, "Thumbs Up? Sentiment Classification Using Machine Learning Techniques," Proceedings of EMNLP, pp. 79-86, 2002.
[36] M. Hu and B. Liu, "Mining and Summarizing Customer Reviews," Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 168-177, 2004.
[37] A. Agarwal, F. Biadsy, and K. R. McKeown, "Contextual Phrase-Level Polarity Analysis Using Lexical Affect Scoring and Syntactic N-Grams," Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pp. 24-32, 2009.
[38] B. O'Connor, R. Balasubramanyan, B. R. Routledge, and N. A. Smith, "From Tweets to Polls: Linking Text Sentiment to Public Opinion Time Series," Proceedings of the Fourth International Conference on Weblogs and Social Media, ICWSM '10, pp. 122-129, 2010.
[39] A. Agarwal, B. Xie, I. Vovsha, O. Rambow, and R. Passonneau, "Sentiment Analysis of Twitter Data," Proceedings of the Workshop on Languages in Social Media, pp. 30-38, 2011.
[40] H. Saif, Y. He, and H. Alani, "Semantic Sentiment Analysis of Twitter," Proceedings of the 11th International Conference on The Semantic Web (ISWC'12), pp. 508-524, 2012.
[41] A. Go, R. Bhayani, and L. Huang, "Twitter Sentiment Classification Using Distant Supervision," CS224N Project Report, Stanford, pp. 1-12, 2009.
[42] A. Pak and P. Paroubek, "Twitter as a Corpus for Sentiment Analysis and Opinion Mining," Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), pp. 19-21, 2010.
[43] B. Pang and L. Lee, "Opinion Mining and Sentiment Analysis," Foundations and Trends in Information Retrieval, vol. 2, no. 1-2, pp. 1-135, 2008.
[44] "Amazon Mechanical Turk" [Online]. Available: http://www.mturk.com/. [Accessed 7 April 2014].
[45] R. Rehurek, "Python Gensim Library" [Online]. Available: http://radimrehurek.com/gensim/. [Accessed 8 April 2014].
Advanced Analytics - Technology and Tools: MapReduce and Hadoop
Chapter 4, "Advanced Analytical Theory and Methods: Clustering," through Chapter 9, "Advanced Analytical Theory and Methods: Text Analysis," covered several useful analytical methods to classify, predict, and examine relationships within the data. This chapter and Chapter 11, "Advanced Analytics - Technology and Tools: In-Database Analytics," address several aspects of collecting, storing, and processing unstructured and structured data, respectively. This chapter presents some key technologies and tools related to the Apache Hadoop software library, "a framework that allows for the distributed processing of large datasets across clusters of computers using simple programming models" [1].
This chapter focuses on how Hadoop stores data in a distributed system and how Hadoop implements a simple programming paradigm known as MapReduce. Although this chapter makes some Java-specific references, the only intended prerequisite knowledge is a basic understanding of programming. Furthermore, the Java-specific details of writing a MapReduce program for Apache Hadoop are beyond the scope of this text. This omission may appear troublesome, but tools in the Hadoop ecosystem, such as Apache Pig and Apache Hive, can often eliminate the need to explicitly code a MapReduce program. Along with other Hadoop-related tools, Pig and Hive are covered in the portion of this chapter dealing with the Hadoop ecosystem.
To illustrate the power of Hadoop in handling unstructured data, the following discussion provides
several Hadoop use cases.
10.1 Analytics for Unstructured Data
Prior to conducting data analysis, the required data must be collected and processed to extract the useful information. The degree of initial processing and data preparation depends on the volume of data, as well as how straightforward it is to understand the structure of the data.
Recall the four types of data structures discussed in Chapter 1, "Introduction to Big Data Analytics":

• Structured: A specific and consistent format (for example, a data table)
• Semi-structured: A self-describing format (for example, an XML file)
• Quasi-structured: A somewhat inconsistent format (for example, a hyperlink)
• Unstructured: An inconsistent format (for example, text or video)
Structured data, such as relational database management system (RDBMS) tables, is typically the easiest data format to interpret. However, in practice it is still necessary to understand the various values that may appear in a certain column and what these values represent in different situations (based, for example, on the contents of the other columns for the same record). Also, some columns may contain unstructured text or stored objects, such as pictures or videos. Although the tools presented in this chapter focus on unstructured data, these tools can also be utilized for more structured datasets.
10.1.1 Use Cases
The following material provides several use cases for MapReduce. The MapReduce paradigm offers the means to break a large task into smaller tasks, run tasks in parallel, and consolidate the outputs of the individual tasks into the final output. Apache Hadoop includes a software implementation of MapReduce. More details on MapReduce and Hadoop are provided later in this chapter.
IBM Watson
In 2011, IBM's computer system Watson participated in the U.S. television game show Jeopardy against
two of the best Jeopardy champions in the show's history. In the game, the contestants are provided a
clue such as "He likes his martinis shaken, not stirred" and the correct response, phrased in the form of a
question, would be, "Who is James Bond?" Over the three-day tournament, Watson was able to defeat the
two human contestants.
To educate Watson, Hadoop was utilized to process various data sources such as encyclopedias, diction-
aries, news wire feeds, literature, and the entire contents of Wikipedia [2]. For each clue provided during
the game, Watson had to perform the following tasks in less than three seconds [3]:
• Deconstruct the provided clue into words and phrases
• Establish the grammatical relationship between the words and the phrases
• Create a set of similar terms to use in Watson's search for a response
• Use Hadoop to coordinate the search for a response across terabytes of data
• Determine possible responses and assign their likelihood of being correct
• Actuate the buzzer
• Provide a syntactically correct response in English
Among other applications, Watson is being used in the medical profession to diagnose patients and
provide treatment recommendations [4].
LinkedIn

LinkedIn is an online professional network of 250 million users in 200 countries as of early 2014 [5]. LinkedIn provides several free and subscription-based services, such as company information pages, job postings, talent searches, social graphs of one's contacts, personally tailored news feeds, and access to discussion groups, including a Hadoop users group. LinkedIn utilizes Hadoop for the following purposes [6]:

• Process daily production database transaction logs
• Examine the users' activities such as views and clicks
• Feed the extracted data back to the production systems
• Restructure the data to add to an analytical database
• Develop and test analytical models
Yahoo!

As of 2012, Yahoo! has one of the largest publicly announced Hadoop deployments at 42,000 nodes across several clusters utilizing 350 petabytes of raw storage [7]. Yahoo!'s Hadoop applications include the following [8]:

• Search index creation and maintenance
• Web page content optimization
• Web ad placement optimization
• Spam filters
• Ad-hoc analysis and analytic model development

Prior to deploying Hadoop, it took 26 days to process three years' worth of log data. With Hadoop, the processing time was reduced to 20 minutes.
10.1.2 MapReduce
As mentioned earlier, the MapReduce paradigm provides the means to break a large task into smaller tasks, run the tasks in parallel, and consolidate the outputs of the individual tasks into the final output. As its name implies, MapReduce consists of two basic parts, a map step and a reduce step, detailed as follows:

Map:
• Applies an operation to a piece of data
• Provides some intermediate output

Reduce:
• Consolidates the intermediate outputs from the map steps
• Provides the final output
Each step uses key/value pairs, denoted as <key, value>, as input and output. It is useful to think of the key/value pairs as a simple ordered pair. However, the pairs can take fairly complex forms. For example, the key could be a filename, and the value could be the entire contents of the file.
The simplest illustration of MapReduce is a word count example, in which the task is to simply count the number of times each word appears in a collection of documents. In practice, the objective of such an exercise is to establish a list of words and their frequency for purposes of search or establishing the relative importance of certain words. Chapter 9 provides more details on text analytics. Figure 10-1 illustrates the MapReduce processing for a single input, in this case, a line of text.
FIGURE 10-1 Example of how MapReduce works, starting from the input <1234, "For each word in each string">
In this example, the map step parses the provided text string into individual words and emits a set of key/value pairs of the form <word, 1>. For each key/value pair with the same key, the reduce step sums the 1 values and outputs the <word, count> key/value pair. Because the word each appeared twice in the given line of text, the reduce step provides a corresponding key/value pair of <each, 2>.

It should be noted that, in this example, the original key, 1234, is ignored in the processing. In a typical word count application, the map step may be applied to millions of lines of text, and the reduce step will summarize the key/value pairs generated by all the map steps.
Expanding on the word count example, the final output of a MapReduce process applied to a set of documents might have the key as an ordered pair and the value as an ordered tuple of length 2n. A possible representation of such a key/value pair follows:

<(filename, datetime), (word_1, 5, word_2, 7, ..., word_n, 6)>

In this construction, the key is the ordered pair (filename, datetime). The value consists of the n pairs of the words and their individual counts in the corresponding file.
Of course, a word count problem could be addressed in many ways other than MapReduce. However, MapReduce has the advantage of being able to distribute the workload over a cluster of computers and run the tasks in parallel. In a word count, the documents, or even pieces of the documents, could be processed simultaneously during the map step. A key characteristic of MapReduce is that the processing of one portion of the input can be carried out independently of the processing of the other inputs. Thus, the workload can be easily distributed over a cluster of machines.
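The word count logic itself can be sketched outside of Hadoop in a few lines of Python. The map and reduce functions below mimic the paradigm on a single machine; they are an illustration of the idea, not the Hadoop API.

from collections import defaultdict

def map_step(key, line):
    """Emit a <word, 1> pair for every word in the input line; the key is ignored."""
    return [(word, 1) for word in line.split()]

def reduce_step(pairs):
    """Sum the values for each word across all intermediate pairs."""
    counts = defaultdict(int)
    for word, value in pairs:
        counts[word] += value
    return dict(counts)

intermediate = map_step(1234, "For each word in each string")
print(reduce_step(intermediate))
# {'For': 1, 'each': 2, 'word': 1, 'in': 1, 'string': 1}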
U.S. Navy rear admiral Grace Hopper (1906-1992), who was a pioneer in the field of computers, provided
one of the best explanations of the need for using a group of computers. She commented that during pre-
industrial times, oxen were used for heavy pulling, but when one ox couldn’t budge a log, people didn’t try
to raise a larger ox; they added more oxen. Her point was that as computational problems grow, instead of
building a bigger, more powerful, and more expensive computer, a better alternative is to build a system
of computers to share the workload. Thus, in the MapReduce context, a large processing task would be
distributed across many computers.
Although the concept of MapReduce has existed for decades, Google led the resurgence in its interest and adoption starting in 2004 with the published work by Dean and Ghemawat [9]. This paper described Google's approach for crawling the web and building Google's search engine. As the paper describes, MapReduce has been used in functional programming languages such as Lisp, which obtained its name from being readily able to process lists (list processing).

In 2007, a well-publicized MapReduce use case was the conversion of 11 million New York Times newspaper articles from 1851 to 1980 into PDF files. The intent was to make the PDF files openly available to users on the Internet. After some development and testing of the MapReduce code on a local machine, the 11 million PDF files were generated on a 100-node cluster in about 24 hours [10].

What allowed the development of the MapReduce code and its execution to proceed easily was that the MapReduce paradigm had already been implemented in Apache Hadoop.
10.1.3 Apache Hadoop

Although MapReduce is a simple paradigm to understand, it is not as easy to implement, especially in a distributed system. Executing a MapReduce job (the MapReduce code run against some specified data) requires the management and coordination of several activities:

• MapReduce jobs need to be scheduled based on the system's workload.
• Jobs need to be monitored and managed to ensure that any encountered errors are properly handled so that the job continues to execute if the system partially fails.
• Input data needs to be spread across the cluster.
• Map step processing of the input needs to be conducted across the distributed system, preferably on the same machines where the data resides.
• Intermediate outputs from the numerous map steps need to be collected and provided to the proper machines for the reduce step execution.
• Final output needs to be made available for use by another user, another application, or perhaps another MapReduce job.
Fortunately, Apache Hadoop handles these activities and more. Furthermore, many of these activities are transparent to the developer/user. The following material examines the implementation of MapReduce in Hadoop, an open source project managed and licensed by the Apache Software Foundation [11].

The origins of Hadoop began as a search engine called Nutch, developed by Doug Cutting and Mike Cafarella. Based on two Google papers [9] [12], versions of MapReduce and the Google File System were added to Nutch in 2004. In 2006, Yahoo! hired Cutting, who helped to develop Hadoop based on the code in Nutch [13]. The name "Hadoop" came from the name of Cutting's child's stuffed toy elephant, which also inspired the well-recognized symbol for the Hadoop project.

Next, an overview of how data is stored in a Hadoop environment is presented.
Hadoop Distributed File System (HDFS)

Based on the Google File System [12], the Hadoop Distributed File System (HDFS) is a file system that provides the capability to distribute data across a cluster to take advantage of the parallel processing of MapReduce. HDFS is not an alternative to common file systems, such as ext3, ext4, and XFS. In fact, HDFS depends on each disk drive's file system to manage the data being stored to the drive media. The Hadoop Wiki [14] provides more details on disk configuration options and considerations.
For a given file, HDFS breaks the file into, say, 64 MB blocks and stores the blocks across the cluster. So, if a file size is 300 MB, the file is stored in five blocks: four 64 MB blocks and one 44 MB block. If a file size is smaller than 64 MB, the block is assigned the size of the file.
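The block arithmetic can be verified with a short sketch; the function name is illustrative, and the 64 MB block size matches the example above rather than any particular cluster configuration.

def hdfs_blocks(file_size_mb, block_size_mb=64):
    """Return the list of block sizes HDFS would use to store the file."""
    full_blocks = file_size_mb // block_size_mb
    remainder = file_size_mb % block_size_mb
    blocks = [block_size_mb] * full_blocks
    if remainder:
        blocks.append(remainder)   # the last block is only as large as the leftover data
    return blocks

print(hdfs_blocks(300))   # [64, 64, 64, 64, 44] -- four 64 MB blocks and one 44 MB block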
Whenever possible, HDFS attempts to store the blocks for a file on different machines so the map
step can operate on each block of a file in parallel. Also, by default, HDFS creates three copies of each
block across the cluster to provide the necessary redundancy in case of a failure. If a machine fails, HDFS
replicates an accessible copy of the relevant data blocks to another available machine. HDFS is also rack
aware, which means that it distributes the blocks across several equipment racks to prevent an entire rack
failure from causing a data unavailable event. Additionally, the three copies of each block allow Hadoop
some flexibility in determining which machine to use for the map step on a particular block. For example,
an idle or underutilized machine that contains a data block to be processed can be scheduled to process
that data block.
To manage the data access, HDFS utilizes three Java daemons (background processes): NameNode,
DataNode, and Secondary NameNode. Running on a single machine, the NameNode daemon determines and tracks where the various blocks of a data file are stored. The DataNode daemon manages the data stored on each machine. If a client application wants to access a particular file stored in HDFS, the application contacts the NameNode, and the NameNode provides the application with the locations of the various blocks for that file. The application then communicates with the appropriate DataNodes to access the file.

Each DataNode periodically builds a report about the blocks stored on the DataNode and sends the report to the NameNode. If one or more blocks are not accessible on a DataNode, the NameNode ensures that an accessible copy of an inaccessible data block is replicated to another machine. For performance reasons, the NameNode resides in a machine's memory. Because the NameNode is critical to the operation of HDFS, any unavailability or corruption of the NameNode results in a data unavailability event on the cluster. Thus, the NameNode is viewed as a single point of failure in the Hadoop environment [15]. To
minimize the chance of a NameNode failure and to improve performance, the NameNode is typically run
on a dedicated machine.
A third daemon, the Secondary NameNode, provides the capability to perform some of the NameNode
tasks to reduce the load on the NameNode. Such tasks include updating the file system image with the
contents of the file system edit logs. It is important to note that the Secondary NameNode is not a backup
or redundant NameNode. In the event of a NameNode outage, the NameNode must be restarted and
initialized with the last file system image file and the contents of the edit logs. The latest versions of
Hadoop provide an HDFS High Availability (HA) feature. This feature enables the use of two NameNodes:
one in an active state, and the other in a standby state. If the active NameNode fails, the standby NameNode
takes over. When using the HDFS HA feature, a Secondary NameNode is unnecessary [16].
Figure 10-2 illustrates a Hadoop cluster with ten machines and the storage of one large file requiring
three HDFS data blocks. Furthermore, this file is stored using triple replication. The machines running the
NameNode and the Secondary NameNode are considered master nodes. Because the DataNodes take their
instructions from the master nodes, the machines running the DataNodes are referred to as worker nodes.
Structuring a MapReduce Job in Hadoop
Hadoop provides the ability to run MapReduce jobs as described, at a high level, in Section 10.1.2. This
section offers specific details on how a MapReduce job is run in Hadoop. A typical MapReduce program in
Java consists of three classes: the driver, the mapper, and the reducer.
The driver provides details such as input file locations, the provisions for adding the input file to the
map task, the names of the mapper and reducer Java classes, and the location of the reduce task output.
Various job configuration options can also be specified in the driver. For example, the number of reducers
can be manually specified in the driver. Such options are useful depending on how the MapReduce job
output will be used in later downstream processing.
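As an illustration, here is a minimal sketch of what such a driver might look like for the word count
example, using the org.apache.hadoop.mapreduce API. The class names WordCountDriver, WordCountMapper,
and WordCountReducer are assumed names for this sketch rather than classes defined in the text:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);     // mapper class named in the driver
        job.setReducerClass(WordCountReducer.class);   // reducer class named in the driver
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setNumReduceTasks(1);                      // job option: number of reducers
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input location
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output location
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}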
The mapper provides the logic to be processed on each data block corresponding to the specified input
files in the driver code. For example, in the word count MapReduce example provided earlier, a map task
is instantiated on a worker node where a data block resides. Each map task processes a fragment of the
text, line by line, parses a line into words, and emits a key/value pair of the form <word, 1> for each
occurrence, regardless of how many times the word appears in the line of text. The key/value pairs are
stored temporarily in the worker node’s memory (or cached to the node’s disk).
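A corresponding mapper sketch, again with assumed class names and using the org.apache.hadoop.mapreduce
API, might look like the following:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Parse the line into words and emit a <word, 1> pair for each occurrence.
        StringTokenizer tokens = new StringTokenizer(line.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}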
FIGURE 10-2 A file stored in HDFS: input_file.txt occupies three HDFS blocks (Blocks 1, 2, and 3), each
replicated three times across eight worker nodes on two racks; the master nodes run the NameNode and
the Secondary NameNode.
Next, the key/value pairs are processed by the built-in shuffle and sort functionality based on the
number of reducers to be executed. In this simple example, there is only one reducer, so all the intermediate
data is passed to it. From the various map task outputs, for each unique key, arrays (lists in Java) of the
associated values in the key/value pairs are constructed. Also, Hadoop ensures that the keys are passed to
each reducer in sorted order. In Figure 10-3, the sorted key/value pairs are passed to the reducer one at a
time, beginning alphabetically with <For, (1)> and continuing with the rest of the key/value pairs until the
last key/value pair is passed to the reducer. The ( ) denotes a list of values, which, in this case, is just an
array of ones.
In general, each reducer processes the values for each key and emits a key/value pair as defined by the
reduce logic. The output is then stored in HDFS like any other file in, say, 64MB blocks replicated three
times across the nodes.
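Continuing the sketch, a word count reducer that sums the list of values for each key might look like this
(the class name is again an assumption):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        // Sum the list of counts for a word and emit a single <word, total> pair.
        int total = 0;
        for (IntWritable count : counts) {
            total += count.get();
        }
        context.write(word, new IntWritable(total));
    }
}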
Additional Considerations in Structuring a MapReduce Job
The preceding discussion presented the basics of structuring and running a MapReduce job on a Hadoop
cluster. Several Hadoop features provide additional functionality to a MapReduce job.
First, a combiner is a useful option to apply, when possible, between the map task and the shuffle and
sort. Typically, the combiner applies the same logic used in the reducer, but it also applies this logic on the
output of each map task. In the word count example, a combiner sums up the number of occurrences of
each word from a mapper’s output. Figure 10-4 illustrates how a combiner processes a single string in the
simple word count example.
<1234, "For each word in each string" >
• Map
Shuffle and Sort
• Reduce
FIGURE 1C .) Shuffle and sort
<1234, "For each word in each string">
• Map
• Combine
• Shuffle and Sort
FIGURE 1 C Using a combiner
Thus, in a production setting, instead of ten thousand possible <the, 1> key/value pairs being emitted
from the map task to the shuffle and sort, the combiner emits a single <the, 10000> key/value pair.
The reduce step still obtains a list of values for each word, but instead of receiving a list of up to a million
ones, list(1, 1, ..., 1), for a key, the reduce step obtains a list such as list(10000, 964,
..., 8345), which might be as long as the number of map tasks that were run. The use of a combiner
minimizes the amount of intermediate map output that the reducer must store, transfer over the network,
and process.
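Because summing counts is both associative and commutative, the reducer logic can usually double as the
combiner. In the driver sketch shown earlier, this would be a single additional configuration line
(WordCountReducer is the assumed reducer class from that sketch):

// Reuse the reducer as the combiner; Hadoop may apply it zero or more times per map task.
job.setCombinerClass(WordCountReducer.class);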
Another useful option is the partitioner. It determines the reducers that receive keys and the
corresponding lists of values. Using the simple word count example, Figure 10-5 shows that a partitioner
can send every word that begins with a vowel to one reducer and the words that begin with a consonant
to another reducer.
<1234, "For each word in each string">
.. Map
.. Partition (Shuffle)
.. Reduce
fiGURE 10 5 Using a custom partitioner
.. Reduce
As a more practical example, a user could use a partitioner to separate the output into separate files
for each calendar year for subsequent analysis. Also, a partitioner could be used to ensure that the
workload is evenly distributed across the reducers. For example, if a few keys are known to be associated
with a large majority of the data, it may be useful to ensure that these keys go to separate reducers to
achieve better overall performance. Otherwise, one reducer might be assigned the majority of the data,
and the MapReduce job will not complete until that one long-running reduce task completes.
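A minimal sketch of the vowel/consonant partitioner from Figure 10-5 might look like the following; the
class name is an assumption, and the driver would also need to register it with
job.setPartitionerClass(VowelConsonantPartitioner.class) and configure two reducers:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class VowelConsonantPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        // Route words beginning with a vowel to reducer 0 and all other words to reducer 1.
        char first = Character.toLowerCase(key.toString().charAt(0));
        int partition = ("aeiou".indexOf(first) >= 0) ? 0 : 1;
        return partition % numReduceTasks;   // guard against a job configured with one reducer
    }
}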
Developing and Executing a Hadoop MapReduce Program
A common approach to developing a Hadoop MapReduce program is to write Java code using an Integrated
Development Environment (IDE) tool such as Eclipse [17]. Compared to a plaintext editor or a command-
line interface (CLI), IDE tools offer a better experience to write, compile, test, and debug code. A typical
MapReduce program consists of three Java files: one each for the driver code, map code, and reduce code.
Additional Java files can be written for the combiner or the custom partitioner, if applicable. The Java
code is compiled and stored as a Java Archive (JAR) file. This JAR file is then executed against the specified
HDFS input files.
Beyond learning the mechanics of submitting a MapReduce job, three key challenges to a new Hadoop
developer are defining the logic of the code to use the MapReduce paradigm; learning the Apache Hadoop
Java classes, methods, and interfaces; and implementing the driver, map, and reduce functionality in Java.
Some prior experience with Java makes it easier for a new Hadoop developer to focus on learning Hadoop
and writing the MapReduce job.
For users who prefer to use a programming language other than Java, there are some other options.
One option is to use the Hadoop Streaming API, which allows the user to write and run Hadoop jobs
with no direct knowledge of Java [18]. However, knowledge of some other programming language, such
as Python, C, or Ruby, is necessary. Apache Hadoop provides the hadoop-streaming.jar file that
accepts the HDFS paths for the input/output files and the paths for the files that implement the map and
reduce functionality.
Here are some important considerations when preparing and running a Hadoop streaming job:
o Although the shuffle and sort output is provided to the reducer in key-sorted order, the reducer
does not receive the corresponding values as a list; rather, it receives individual key/value pairs. The
reduce code has to monitor for changes in the value of the key and appropriately handle the new key.
o The map and reduce code must already be in an executable form, or the necessary interpreter must
already be installed on each worker node.
o The map and reduce code must already reside on each worker node, or the location of the code must
be provided when the job is submitted. In the latter case, the code is copied to each worker node.
o Some functionality, such as a partitioner, still needs to be written in Java.
o The inputs and outputs are handled through stdin and stdout. Stderr is also available to track the
status of the tasks, implement counter functionality, and report execution issues to the display [18].
o The streaming API may not perform as well as similar functionality written in Java.
A second alternative is to use Hadoop Pipes, a mechanism that uses compiled C++ code for the map
and reduce functionality. An advantage of using C++ is the extensive numerical libraries available to
include in the code [19].
To work directly with data in HDFS, one option is to use the C API (libhdfs) or the Java API provided
with Apache Hadoop. These APIs allow reads and writes to HDFS data files outside the typical MapReduce
paradigm [20]. Such an approach may be useful when attempting to debug a MapReduce job by examining
the input data or when the objective is to transform the HDFS data prior to running a MapReduce job.
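For example, a short Java sketch using the org.apache.hadoop.fs.FileSystem API (the class name HdfsPeek
and the command-line argument convention are assumptions) could print the contents of an HDFS file for
inspection:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPeek {
    public static void main(String[] args) throws Exception {
        // Open an HDFS file directly, outside the MapReduce paradigm, and print its contents.
        FileSystem fs = FileSystem.get(new Configuration());
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(new Path(args[0]))))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}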
Yet Another Resource Negotiator (YARN)
Apache Hadoop continues to undergo further development and frequent updates. An important change
was to separate the MapReduce functionality from the functionality that manages the running of the jobs
and the associated responsibilities in a distributed environment. This rewrite is sometimes called MapReduce
2.0, or Yet Another Resource Negotiator (YARN). YARN separates the resource management of the cluster
from the scheduling and monitoring of jobs running on the cluster. The YARN implementation makes it
possible for paradigms other than MapReduce to be utilized in Hadoop environments. For example, a Bulk
Synchronous Parallel (BSP) [21] model may be more appropriate for graph processing than MapReduce
[22] is. Apache Hama, which implements the BSP model, is one of several applications being modified to
utilize the power of YARN [23].
YARN replaces the functionality previously provided by the JobTracker and TaskTracker daemons.
In earlier releases of Hadoop, a MapReduce job is submitted to the JobTracker daemon. The JobTracker
communicates with the NameNode to determine which worker nodes store the required data blocks
for the MapReduce job. The JobTracker then assigns individual map and reduce tasks to the TaskTracker
running on worker nodes. To optimize performance, each task is preferably assigned to a worker node
that is storing an input data block. The TaskTracker periodically communicates with the JobTracker on
the status of its executing tasks. If a task appears to have failed, the JobTracker can assign the task to a
different TaskTracker.
10.2 The Hadoop Ecosystem
So far, this chapter has provided an overview of Apache Hadoop relative to its implementation of HDFS
and the MapReduce paradigm. Hadoop’s popularity has spawned proprietary and open source tools to
make Apache Hadoop easier to use and to provide additional functionality and features. This portion of the
chapter examines the following Hadoop-related Apache projects:
• Pig: Provides a high-level data-flow programming language
• Hive: Provides SQL-like access
• Mahout: Provides analytical tools
• HBase: Provides real-time reads and writes
By masking the details necessary to develop a MapReduce program, Pig and Hive each enable a developer
to write high-level code that is later translated into one or more MapReduce programs. Because
MapReduce is intended for batch processing, Pig and Hive are also intended for batch processing use cases.
Once Hadoop processes a dataset, Mahout provides several tools that can analyze the data in a Hadoop
environment. For example, a k-means clustering analysis, as described in Chapter 4, can be conducted
using Mahout.
Differentiating itself from Pig and Hive batch processing, HBase provides the ability to perform real-time
reads and writes of data stored in a Hadoop environment. This real-time access is accomplished partly by
storing data in memory as well as in HDFS. Also, HBase does not rely on MapReduce to access the HBase
data. Because the design and operation of HBase are significantly different from relational databases and
the other Hadoop tools examined, a detailed description of HBase will be presented.
10.2.1 Pig
Apache Pig consists of a data flow language, Pig Latin, and an environment to execute the Pig code. The
main benefit of using Pig is to utilize the power of MapReduce in a distributed system, while simplifying
the tasks of developing and executing a MapReduce job. In most cases, it is transparent to the user that
a MapReduce job is running in the background when Pig commands are executed. This abstraction layer
on top of Hadoop simplifies the development of code against data in HDFS and makes MapReduce more
accessible to a larger audience.
Like Hadoop, Pig’s origin began at Yahoo! in 2006. Pig was transferred to the Apache Software
Foundation in 2007 and had its first release as an Apache Hadoop subproject in 2008. As Pig evolves over
time, three main characteristics persist: ease of programming, behind-the-scenes code optimization, and
extensibility of capabilities [24].
With Apache Hadoop and Pig already installed, the basics of using Pig include entering the Pig execution
environment by typing pig at the command prompt and then entering a sequence of Pig instruction
lines at the grunt prompt.
An example of Pig-specific commands is shown here:
$ pig
grunt> records = LOAD '/user/customer.txt' AS
           (cust_id:INT, first_name:CHARARRAY,
            last_name:CHARARRAY,
            email_address:CHARARRAY);
grunt> filtered_records = FILTER records
           BY email_address matches '.*@isp.com';
grunt> STORE filtered_records INTO '/user/isp_customers';
grunt> quit
$
At the first grunt prompt, a text file is designated by the Pig variable records with four defined
fields: cust_id, first_name, last_name, and email_address. Next, the variable
filtered_records is assigned those records where the email_address ends with @isp.com to
extract the customers whose e-mail address is from a particular Internet service provider (ISP). Using the
STORE command, the filtered records are written to an HDFS folder, /user/isp_customers. Finally, to
exit the interactive Pig environment, execute the quit command. Alternatively, these individual Pig
commands could be written to a file, say filter_script.pig, and submitted at the command prompt
as follows:
$ pig filter_script.pig
Such Pig instructions are translated, behind the scenes, into one or more MapReduce jobs. Thus, Pig
simplifies the coding of a MapReduce job and enables the user to quickly develop, test, and debug the
Pig code. In this particular example, the MapReduce job would be initiated after the STORE command
is processed. Prior to the STORE command, Pig had begun to build an execution plan but had not yet
initiated MapReduce processing.
Pig provides for the execution of several common data manipulations, such as inner and outer joins
between two or more files (tables), as would be expected in a typical relational database. Writing these
joins explicitly in MapReduce using Hadoop would be quite involved and complex. Pig also provides a
GROUP BY functionality that is similar to the GROUP BY functionality offered in SQL. Chapter 11 has
more details on using GROUP BY and other SQL statements.
An additional feature of Pig is that it provides many built-in functions that are easily utilized in Pig code.
Table 10-1 includes several useful functions by category.
TABLE 10-1 Built-In Pig Functions

Eval         Load/Store     Math         String            DateTime
AVG          BinStorage()   ABS          INDEXOF           AddDuration
CONCAT       JsonLoader     CEIL         LAST_INDEX_OF     CurrentTime
COUNT        JsonStorage    COS, ACOS    LCFIRST           DaysBetween
COUNT_STAR   PigDump        EXP          LOWER             GetDay
DIFF         PigStorage     FLOOR        REGEX_EXTRACT     GetHour
IsEmpty      TextLoader     LOG, LOG10   REPLACE           GetMinute
MAX          HBaseStorage   RANDOM       STRSPLIT          GetMonth
MIN                         ROUND        SUBSTRING         GetWeek
SIZE                        SIN, ASIN    TRIM              GetWeekYear
SUM                         SQRT         UCFIRST           GetYear
TOKENIZE                    TAN, ATAN    UPPER             MinutesBetween
                                                           SubtractDuration
                                                           ToDate
Other functions and the details of these built-in functions can be found at the pig.apache.org
website [25].
In terms of extensibility, Pig allows the execution of user-defined functions (UDFs) in its environment.
Thus, some complex operations can be coded in the user’s language of choice and executed in the Pig
environment. Users can share their UDFs in a repository called the Piggybank hosted on the Apache site
[26]. Over time, the most useful UDFs may be included as built-in functions in Pig.
10.2.2 Hive
Similar to Pig, Apache Hive enables users to process data without explicitly writing MapReduce code. One
key difference from Pig is that the Hive language, HiveQL (Hive Query Language), resembles Structured Query
Language (SQL) rather than a scripting language.
A Hive table structure consists of rows and columns. The rows typically correspond to some record,
transaction, or particular entity (for example, customer) detail. The values of the corresponding columns
represent the various attributes or characteristics for each row. Hadoop and its ecosystem are used to
apply some structure to unstructured data. Therefore, if a table structure is an appropriate way to view
the restructured data, Hive may be a good tool to use.
Additionally, a user may consider using Hive if the user has experience with SQL and the data is already
in HDFS. Another consideration in using Hive may be how data will be updated or added to the Hive tables.
If data will simply be added to a table periodically, Hive works well, but if there is a need to update data
in place, it may be beneficial to consider another tool, such as HBase, which will be discussed in the next
section.
Although Hive’s performance may be better in certain applications than a conventional SQL database,
Hive is not intended for real-time querying. A Hive query is first translated into a MapReduce job, which
is then submitted to the Hadoop cluster. Thus, the execution of the query has to compete for resources
with any other submitted job. Like Pig, Hive is intended for batch processing. Again, HBase may be a better
choice for real-time query needs.
To summarize the preceding discussion, consider using Hive when the following conditions exist:
o Data easily fits into a table structure.
o Data is already in HDFS. (Note: Non-HDFS files can be loaded into a Hive table.)
o Developers are comfortable with SQL programming and queries.
o There is a desire to partition datasets based on time. (For example, daily updates are added to the
Hive table.)
o Batch processing is acceptable.
The remainder of the Hive discussion covers some HiveQL basics. From the command prompt, a user
enters the interactive Hive environment by simply entering hive:
$ hive
hive>
From this environment, a user can define new tables, query them, or summarize their contents. To
illustrate how to use HiveQL, the following example defines a new Hive table to hold customer data, loads
existing HDFS data into the Hive table, and queries the table.
The first step is to create a table called customer to store customer details. Because the table will be
populated from an existing tab (‘\t’)-delimited HDFS file, this format is specified in the table creation query.
hive> create table customer (
cust_id bigint,
first_name string,
last_name string,
email_address string)
row format delimited
fields terminated by ‘\t’;
The following HiveQL query is executed to count the number of records in the newly created table,
customer. Because the table is currently empty, the query returns a result of zero, the last line of the
provided output. The query is converted and run as a MapReduce job, which results in one map task and
one reduce task being executed.
hive> select count(*) from customer;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
Starting Job = job_1394125045435_0001, Tracking URL =
  http://pivhdsne:8088/proxy/application_1394125045435_0001/
Kill Command = /usr/lib/gphd/hadoop/bin/hadoop job
  -kill job_1394125045435_0001
Hadoop job information for Stage-1: number of mappers: 1;
  number of reducers: 1
2014-03-06 12:30:23,542 Stage-1 map = 0%, reduce = 0%
2014-03-06 12:30:36,586 Stage-1 map = 100%, reduce = 0%,
  Cumulative CPU 1.71 sec
2014-03-06 12:30:48,500 Stage-1 map = 100%, reduce = 100%,
  Cumulative CPU 3.76 sec
MapReduce Total cumulative CPU time: 3 seconds 760 msec
Ended Job = job_1394125045435_0001
...
Total MapReduce CPU Time Spent: 3 seconds 760 msec
OK
0
When querying large tables, Hive outperforms and scales better than most conventional database
queries. As stated earlier, Hive translates HiveQL queries into MapReduce jobs that process pieces of large
datasets in parallel.
To load the customer table with the contents of the HDFS file customer.txt, it is only necessary to
provide the HDFS directory path to the file.
hive> load data inpath ‘/user/customer.txt’ into table customer;
The following query displays three rows from the customer table.
hive> select * from customer limit 3;
(the first three rows of the customer table are displayed)
It is often necessary to join one or more Hive tables based on one or more columns. The following
example provides the mechanism to join the customer table with another table, orders, which stores
the details about the customer’s orders. Instead of placing all the customer details in the orders table, only
the corresponding cust_id appears in the orders table.
hive> select o.order_number, o.order_date, c.*
      from orders o inner join customer c
      on o.cust_id = c.cust_id
      where c.email_address = 'mary.jones@isp.com';
Total MapReduce jobs = 1
...
(the MapReduce job log and the matching order and customer row are displayed)
The use of joins and SQL in general will be covered in Chapter 11. To exit the Hive interactive environ-
ment, use quit.
hive> quit;
$
An alternative to running in the interactive environment is to collect the HiveQL statements in a script
(for example, my_script.sql) and then execute the file as follows:
$ hive -f my_script.sql
This introduction to Hive provided some of the basic HiveQL commands and statements. The reader
is encouraged to research and utilize, when appropriate, other Hive functionality such as external tables,
explain plans, partitions, and the INSERT INTO command to append data to the existing content of
a Hive table.
Following are some Hive use cases:
o Exploratory or ad-hoc analysis of HDFS data: Data can be queried, transformed, and exported
to analytical tools, such as R.
o Extracts or data feeds to reporting systems, dashboards, or data repositories such as
HBase: Hive queries can be scheduled to provide such periodic feeds.
o Combining external structured data with data already residing in HDFS: Hadoop is excellent
for processing unstructured data, but often there is structured data residing in an RDBMS, such as
Oracle or SQL Server, that needs to be joined with the data residing in HDFS. The data from an RDBMS
can be periodically added to Hive tables for querying with existing data in HDFS.
10.2.3 HBase
Unlike Pig and Hive, which are intended for batch applications, Apache HBase is capable of providing
real-time read and write access to data sets with billions of rows and millions of columns. To illustrate the
differences between HBase and a relational database, this section presents considerable details about the
implementation and use of HBase.
The HBase design is based on Google’s 2006 paper on Bigtable. This paper described Bigtable as
a “distributed storage system for managing structured data.” Google used Bigtable to store Google
product-specific data for sites such as Google Earth, which provides satellite images of the world. Bigtable
was also used to store web crawler results, data for personalized search optimization, and website
clickstream data. Bigtable was built on top of the Google File System. MapReduce was also utilized to
process data into or out of a Bigtable. For example, the raw clickstream data was stored in a Bigtable.
Periodically, a scheduled MapReduce job would run that would process and summarize the newly added
clickstream data and append the results to a second Bigtable [27].
The development of HBase began in 2006. HBase was included as part of a Hadoop distribution at the
end of 2007. In May 2010, HBase became an Apache Top Level Project. Later in 2010, Facebook began to
use HBase for its user messaging infrastructure, which accommodated 350 million users sending 15 billion
messages per month [28].
HBase Architecture and Data Model
HBase is a data store that is intended to be distributed across a cluster of nodes. Like Hadoop and many of
its related Apache projects, HBase is built upon HDFS and achieves its real-time access speeds by sharing
the workload over a large number of nodes in a distributed cluster. An HBase table consists of rows and
columns. However, an HBase table also has a third dimension, version, to maintain the different values of
a row and column intersection over time.
To illustrate this third dimension, a simple example would be that for any given online customer, several
shipping addresses could be stored. So, the row would be indicated by a customer number. One column
would provide the shipping address. The value of the shipping address would be added at the intersection
of the customer number and the shipping address column, along with a timestamp corresponding to when
the customer last used this shipping address.
During a customer’s checkout process from an online retailer, a website might use such a table to retrieve
and display the customer’s previous shipping addresses. As shown in Figure 10-6, the customer can then
select the appropriate address, add a new address, or delete any addresses that are no longer relevant.
FIGURE 10-6 Choosing a shipping address at checkout: the page lists the customer's stored addresses
along with the date each was last used and offers options to select an address, add a new address, or
delete an address.
Of course, in addition to a customer’s shipping address, other customer information, such as billing
address, preferences, billing credits/debits, and customer benefits (for example, free shipping) must be
stored. For this type of application, real-time access is required. Thus, the use of the batch processing of
Pig, Hive, or Hadoop’s MapReduce is not a reasonable implementation approach. The following discussion
examines how HBase stores the data and provides real-time read and write access.
As mentioned, HBase is built on top of HDFS. HBase uses a key/value structure to store the contents
of an HBase table. Each value is the data to be stored at the intersection of the row, column, and version.
Each key consists of the following elements [29]:
• Row length
• Row (sometimes called the row key)
• Column family length
• Column family
• Column qualifier
• Version
• Key type
The row is used as the primary attribute to access the contents of an HBase table. The row is the basis
for how the data is distributed across the cluster and allows a query of an HBase table to quickly retrieve
the desired elements. Thus, the structure or layout of the row has to be specifically designed based on how
the data will be accessed. In this respect, an HBase table is purpose built and is not intended for general
ad-hoc querying and analysis. In other words, it is important to know how the HBase table will be used;
this understanding of the table’s usage helps to optimally define the construction of the row and the table.
For example, if an HBase table is to store the content of e-mails, the row may be constructed as the
concatenation of an e-mail address and the date sent. Because the HBase table will be stored based on
the row, the retrieval of the e-mails by a given e-mail address will be fairly efficient, but the retrieval of all
e-mails in a certain date range will take much longer. The later discussion on regions provides more details
on how data is stored in HBase.
A column in an HBase table is designated by the combination of the column family and the column
qualifier. The column family provides a high-level grouping for the column qualifiers. In the earlier
shipping address example, the row could contain the order_number, and the order details could be
stored under the column family orders, using column qualifiers such as shipping_address,
billing_address, and order_date. In HBase, a column is specified as column family:column
qualifier. In the example, the column orders:shipping_address refers to an order’s shipping address.
A cell is the intersection of a row and a column in a table. The version, sometimes called the timestamp,
provides the ability to maintain different values for a cell’s contents in HBase. Although the user
can define a custom value for the version when writing an entry to the table, a typical HBase implementation
uses HBase’s default, the current system time. In Java, this timestamp is obtained with
System.currentTimeMillis(), the number of milliseconds since January 1, 1970. Because it is likely
that only the most recent version of a cell may be required, the cells are stored in descending order of the
version. If the application requires the cells to be stored and retrieved in ascending order of their creation
time, the approach is to use Long.MAX_VALUE - System.currentTimeMillis() in
Java as the version number. Long.MAX_VALUE corresponds to the maximum value that a long integer
can be in Java. In this case, the storing and sorting is still in descending order of the version values.
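A minimal sketch of such a reverse-timestamp write using the HBase Java client might look like the
following; the specific client calls (Put.addColumn with an explicit version against an already-opened
Table) reflect the HBase 1.x API and should be treated as assumptions rather than code from the text:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReverseVersionWrite {
    // "table" is assumed to be an already opened Table handle for my_table.
    static void writeWithReverseVersion(Table table) throws IOException {
        long version = Long.MAX_VALUE - System.currentTimeMillis();  // reverse timestamp
        Put put = new Put(Bytes.toBytes("000700"));
        put.addColumn(Bytes.toBytes("cf2"), Bytes.toBytes("cq3"), version,
                Bytes.toBytes("data3"));
        table.put(put);
    }
}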
Key type is used to identify whether a particular key corresponds to a write operation to the HBase
table or a delete operation from the table. Technically, a delete from an HBase table is accomplished with
a write to the table. The key type indicates the purpose of the write. For deletes, a tombstone marker is
written to the table to indicate that all cell versions equal to or older than the specified timestamp should
be deleted for the corresponding row and column family:column qualifier.
Once an HBase environment is installed, the user can enter the HBase shell environment by entering
hbase shell at the command prompt. An HBase table, my_table, can then be created as follows:
$ hbase shell
hbase> create 'my_table', 'cf1', 'cf2',
       {SPLITS => ['250000', '500000', '750000']}
Two column families, cf1 and cf2, are defined in the table. The SPLITS option specifies how the table
will be divided based on the row portion of the key. In this example, the table is split into four parts, called
regions. Rows less than 250000 are added to the first region; rows from 250000 to less than 500000
are added to the second region, and likewise for the remaining splits. These splits provide the primary
mechanism for achieving the real-time read and write access. In this example, my_table is split into four
regions, each on its own worker node in the Hadoop cluster. Thus, as the table size increases or the user load
increases, additional worker nodes and region splits can be added to scale the cluster appropriately. The
reads and writes are based on the contents of the row. HBase can quickly determine the appropriate region
to direct a read or write command. More about regions and their implementation will be discussed later.
Only column families, not column qualifiers, need to be defined during HBase table creation. New
column qualifiers can be defined whenever data is written to the HBase table. Unlike most relational data-
bases, in which a database administrator needs to add a column and define the data type, columns can be
added to an HBase table as the need arises. Such flexibility is one of the strengths of HBase and is certainly
desirable when dealing with unstructured data. Over time, the unstructured data will likely change. Thus,
the new content with new column qualifiers must be extracted and added to the HBase table.
Column families help to define how the table will be physically stored. An HBase table is split into
regions, but each region is split into column families that are stored separately in HDFS. From the Linux
command prompt, running hadoop fs -ls -R /hbase shows how the HBase table, my_table,
is stored in HBase.
$ hadoop fs -ls -R /hbase
  0 2014-02-28 16:40 /hbase/my_table/028ed22e02ad07d2d73344cd53a11fb4
243 2014-02-28 16:40 /hbase/my_table/028ed22e02ad07d2d73344cd53a11fb4/.regioninfo
  0 2014-02-28 16:40 /hbase/my_table/028ed22e02ad07d2d73344cd53a11fb4/cf1
  0 2014-02-28 16:40 /hbase/my_table/028ed22e02ad07d2d73344cd53a11fb4/cf2
  0 2014-02-28 16:40 /hbase/my_table/2327b09784889e6198909d8b8f342289
255 2014-02-28 16:40 /hbase/my_table/2327b09784889e6198909d8b8f342289/.regioninfo
  0 2014-02-28 16:40 /hbase/my_table/2327b09784889e6198909d8b8f342289/cf1
  0 2014-02-28 16:40 /hbase/my_table/2327b09784889e6198909d8b8f342289/cf2
  0 2014-02-28 16:40 /hbase/my_table/4b4fc9ad951297efe2b9b38640f7a5fd
267 2014-02-28 16:40 /hbase/my_table/4b4fc9ad951297efe2b9b38640f7a5fd/.regioninfo
  0 2014-02-28 16:40 /hbase/my_table/4b4fc9ad951297efe2b9b38640f7a5fd/cf1
  0 2014-02-28 16:40 /hbase/my_table/4b4fc9ad951297efe2b9b38640f7a5fd/cf2
  0 2014-02-28 16:40 /hbase/my_table/e40be0371f43135e36ea67edec6e31e3
267 2014-02-28 16:40 /hbase/my_table/e40be0371f43135e36ea67edec6e31e3/.regioninfo
  0 2014-02-28 16:40 /hbase/my_table/e40be0371f43135e36ea67edec6e31e3/cf1
  0 2014-02-28 16:40 /hbase/my_table/e40be0371f43135e36ea67edec6e31e3/cf2
As can be seen, four subdirectories have been created under /hbase/my_table. Each subdirectory
is named by taking the hash of its respective region name, which includes the start and end rows. Under
each of these directories are the directories for the column families, cf1 and cf2 in the example, and the
.regioninfo file, which contains several options and attributes for how the regions will be maintained.
The column family directories store keys and values for the corresponding column qualifiers. The column
qualifiers from one column family should seldom be read with the column qualifiers from another column
family. The reason for the separate column families is to minimize the amount of unnecessary data that
HBase has to sift through within a region to find the requested data. Requesting data from two column
families means that multiple directories have to be scanned to pull all the desired columns, which defeats
the purpose of creating the column families in the first place. In such cases, the table design may be better
off with just one column family. In practice, the number of column families should be no more than two or
three. Otherwise, performance issues may arise [30].
The following operations add data to the table using the put command. From these three put operations,
data1 and data2 are entered into column qualifiers cq1 and cq2, respectively, in column family
cf1. The value data3 is entered into column qualifier cq3 in column family cf2. The row is designated
by row key 000700 in each operation.
hbase> put 'my_table', '000700', 'cf1:cq1', 'data1'
0 row(s) in 0.0030 seconds
hbase> put 'my_table', '000700', 'cf1:cq2', 'data2'
0 row(s) in 0.0030 seconds
hbase> put 'my_table', '000700', 'cf2:cq3', 'data3'
0 row(s) in 0.0040 seconds
Data can be retrieved from the HBase table by using the get command. As mentioned earlier, the
timestamp defaults to the milliseconds since January 1, 1970.
hbase> get ‘my_table’, ‘000700’, ‘cf2:cq3’
COLUMN                  CELL
 cf2:cq3                timestamp=1393866138714, value=data3
1 row(s) in 0.0350 seconds
By default, the get command returns the most recent version. To illustrate, after executing a second
put operation in the same row and column, a subsequent get provides the most recently added value
of data4.
hbase> put 'my_table', '000700', 'cf2:cq3', 'data4'
0 row(s) in 0.0040 seconds
hbase> get 'my_table', '000700', 'cf2:cq3'
COLUMN                  CELL
 cf2:cq3                timestamp=1393866431669, value=data4
1 row(s) in 0.0080 seconds
The get operation can provide multiple versions by specifying the number of versions to retrieve. This
example illustrates that the cells are presented in descending version order.
hbase> get 'my_table', '000700', {COLUMN => 'cf2:cq3', VERSIONS => 2}
COLUMN                  CELL
 cf2:cq3                timestamp=1393866431669, value=data4
 cf2:cq3                timestamp=1393866138714, value=data3
2 row(s) in 1.0200 seconds
A similar operation to the get command is scan. A scan retrieves all the rows between a specified
STARTROW and a STOPROW, but excluding the STOPROW. Note: if the STOPROW were set to 000700,
only row 000600 would have been returned.
hbase> scan 'my_table', {STARTROW => '000600', STOPROW => '000800'}
ROW                     COLUMN+CELL
 000600                 column=cf1:cq2, timestamp=1393866792008, value=data5
 000700                 column=cf1:cq1, timestamp=1393866105687, value=data1
 000700                 column=cf1:cq2, timestamp=1393866122073, value=data2
 000700                 column=cf2:cq3, timestamp=1393866431669, value=data4
2 row(s) in 0.0400 seconds
The next operation deletes the oldest entry for column cf2:cq3 for row 000700 by specifying the
timestamp.
hbase> delete 'my_table', '000700', 'cf2:cq3', 1393866138714
0 row(s) in 0.0110 seconds
Repeating the earlier get operation to obtain both versions only provides the last version for that cell.
After all, the older version was deleted.
hbase> get 'my_table', '000700', {COLUMN => 'cf2:cq3', VERSIONS => 2}
COLUMN                  CELL
 cf2:cq3                timestamp=1393866431669, value=data4
1 row(s) in 0.0130 seconds
However, running a scan operation with the RAW option set to true reveals that the deleted entry
actually remains. Such a raw scan shows the tombstone marker, which informs the default get and scan
operations to ignore all older cell versions of the particular row and column.
When will the deleted entries be permanently removed? To understand this process, it is necessary to
understand how HBase processes operations and achieves the real-time read and write access. As men-
tioned earlier, an HBase table is split into regions based on the row. Each region is maintained by a worker
node. During a put or delete operation against a particular region, the worker node first writes the
command to a Write Ahead Log (WAL) file for the region. The WAL ensures that the operations are not lost
if a system fails. Next, the results of the operation are stored within the worker node’s RAM in a repository
called MemStore [31].
Writing the entry to the MemStore provides the real-time access required. Any client can access the
entries in the MemStore as soon as they are written. As the MemStore increases in size or at predetermined
time intervals, the sorted MemStore is then written (flushed) to a file, known as an HFile, in HDFS on the
same worker node. A typical HBase implementation flushes the MemStore when its contents are slightly
less than the HDFS block size. Over time, these flushed files accumulate, and the worker node performs a
minor compaction that performs a sorted merge of the various flushed files.
Meanwhile, any get or scan requests that the worker node receives examine these possible storage
locations:
• MemStore
• HFiles resulting from MemStore flushes
• HFiles from minor compactions
Thus, in the case of a delete operation followed relatively quickly by a get operation on the same
row, the tombstone marker is found in the MemStore and the corresponding previous versions in the smaller
HFiles or previously merged HFiles. The get command is instantaneously processed and the appropriate
data returned to the client.
Over time, as the smaller HFiles accumulate, the worker node runs a major compaction that merges the
smaller HFiles into one large HFile. During the major compaction, the deleted entries and the tombstone
markers are permanently removed from the files.
Use Cases for HBase
As described in Google’s Bigtable paper, a common use case for a data store such as HBase is to store the
results from a web crawler. Using this paper’s example, the row com.cnn.www, for example, corresponds
to a website URL, www.cnn.com. A column family, called anchor, is defined to capture the website
URLs that provide links to the row’s website. What may not be an obvious implementation is that those
anchoring website URLs are used as the column qualifiers. For example, if sportsillustrated.cnn.com
provides a link to www.cnn.com, the column qualifier is sportsillustrated.cnn.com.
Additional websites that provide links to www.cnn.com appear as additional column qualifiers.
The value stored in the cell is simply the text on the website that provides the link. Here is how the CNN
example may look in HBase following a get operation.
hbase> get 'web_table', 'com.cnn.www', {VERSIONS => 2}
Additional results are returned for each corresponding website that provides a link to www.cnn.com.
Finally, an explanation is required for using com.cnn.www for the row instead of www.cnn.com.
By reversing the URLs, the various suffixes (.com, .gov, or .net) that correspond to the Internet’s
top-level domains are stored in order. Also, the next part of the domain name (cnn) is stored in order. So,
all of the cnn.com websites could be retrieved by a scan with the STARTROW of com.cnn and the
appropriate STOPROW.
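The reversal itself is simple string manipulation. A hypothetical Java helper (not part of HBase) that
produces such row keys might look like this:

public class RowKeyUtil {
    // Reverse a hostname so that, for example, "www.cnn.com" becomes "com.cnn.www".
    static String reverseDomain(String host) {
        String[] parts = host.split("\\.");
        StringBuilder reversed = new StringBuilder();
        for (int i = parts.length - 1; i >= 0; i--) {
            reversed.append(parts[i]);
            if (i > 0) {
                reversed.append('.');
            }
        }
        return reversed.toString();
    }

    public static void main(String[] args) {
        System.out.println(reverseDomain("www.cnn.com"));   // prints com.cnn.www
    }
}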
This simple use case illustrates several important points. First, it is possible to get to a billion rows and
millions of columns in an HBase table. As of February 2014, more than 920 million websites have been
identified [32]. Second, the row needs to be defined based on how the data will be accessed. An HBase
table needs to be designed with a specific purpose in mind and a well-reasoned plan for how data will be
read and written. Finally, it may be advantageous to use the column qualifiers to actually store the data
of interest, rather than simply storing it in a cell. In the example, as new hosting websites are established,
they become new column qualifiers.
A second use case is the storage and search access of messages. In 2010, Facebook implemented such
a system using HBase. At the time, Facebook’s system was handling more than 15 billion user-to-user
messages per month and 120 billion chat messages per month [33]. The following describes Facebook’s
approach to building a search index for user inboxes. Using each word in each user’s message, an HBase
table was designed as follows:
o The row was defined to be the user ID.
o The column qualifier was set to a word that appears in the message.
o The version was the message ID.
o The cell’s content was the offset of the word in the message.
This implementation allowed Facebook to provide auto-complete capability in the search box and to
return the results of the query quickly, with the most recent messages at the top. As long as the message
IDs increase over time, the versions, stored in descending order, ensure that the most recent messages are
returned first to the user [34].
These two use cases help illustrate the importance of the upfront design of the HBase table based on
how the data will be accessed. Also, these examples illustrate the power of being able to add new columns
by adding new column qualifiers, on demand. In a typical RDBMS implementation, new columns require
the involvement of a DBA to alter the structure of the table.
Other HBase Usage Considerations
In addition to the HBase design aspects presented in the use case discussions, the following considerations
are important for a successful implementation.
o Java API: Previously, several HBase shell commands and operations were presented. The shell com-
mands are useful for exploring the data in an HBase environment and illustrating their use. However,
in a production environment, the HBase Java API could be used to program the desired operations
and the conditions in which to execute the operations.
o Column family and column qualifier names: It is important to keep the name lengths of the
column families and column qualifiers as short as possible. Although short names tend to go against
conventional wisdom about using meaningful, descriptive names, the names of the column family
and the column qualifier are stored as part of the key of each key/value pair. Thus, every additional
byte added to a name over each row can quickly add up. Also, by default, three copies of each HDFS
block are replicated across the Hadoop cluster, which triples the storage requirement.
o Defining rows: The definition of the row is one of the most important aspects of the HBase table
design. In general, this is the main mechanism to perform read/write operations on an HBase table.
The row needs to be constructed in such a way that the requested columns can be easily and quickly
retrieved.
o Avoid creating sequential rows: A natural tendency is to create rows sequentially. For example,
if the row key is to contain the customer identification number, and the customer identification
numbers are created sequentially, HBase may run into a situation in which all the new users and their
data are being written to just one region, which does not distribute the workload across the cluster as
intended [35]. An approach to resolve such a problem is to randomly assign a prefix to the sequential
number, as shown in the sketch after this list.
o Versioning control: HBase table options that can be defined during table creation or altered later
control how long a version of a cell’s contents will exist. There is a TimeToLive (TTL) option, after
which any older versions will be deleted. Also, there are options for the minimum and maximum
number of versions to maintain.
o Zookeeper: HBase uses Apache Zookeeper to coordinate and manage the various regions running
on the distributed cluster. In general, Zookeeper is “a centralized service for maintaining configura-
tion information, naming, providing distributed synchronization, and providing group services. All
of these kinds of services are used in some form or another by distributed applications.” [36] Instead
of building its own coordination service, HBase uses Zookeeper. Relative to HBase, there are some
Zookeeper configuration considerations [37].
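A minimal sketch of the row-key prefixing mentioned earlier (the bucket count of four and the key format
are assumptions for illustration) follows:

public class SaltedRowKey {
    private static final int BUCKETS = 4;   // assumed to match the number of regions

    // Prefix a sequential customer ID with a deterministic bucket number so that
    // consecutive IDs are spread across regions instead of landing in one region.
    static String saltedKey(long customerId) {
        int bucket = Math.floorMod(Long.hashCode(customerId), BUCKETS);
        return String.format("%d_%010d", bucket, customerId);
    }
}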
10.2.4 Mahout
The majority of this chapter has focused on processing, structuring, and storing large datasets using Apache
Hadoop and various parts of its ecosystem. After a dataset is available in HDFS, the next step may be to
apply an analytical technique presented in Chapters 4 through 9. Tools such as R are useful for analyzing
relatively small datasets, but they may suffer from performance issues with the large datasets stored in
Hadoop. To apply the analytical techniques within the Hadoop environment, an option is to use Apache
Mahout. This Apache project provides executable Java libraries to apply analytical techniques in a scalable
manner to Big Data. In general, a mahout is a person who controls an elephant. Apache Mahout is the toolset
that directs Hadoop, the elephant in this case, to yield meaningful analytic results.
Mahout provides Java code that implements the algorithms for several techniques in the following
three categories [38]:
Classification:
• Logistic regression
• Naïve Bayes
• Random forests
• Hidden Markov models
Clustering:
• Canopy clustering
• K-means clustering
• Fuzzy k-means
• Expectation maximization (EM)
Recommenders/collaborative filtering:
• Nondistributed recommenders
• Distributed item-based collaborative filtering
Pivotal HD Enterprise with HAWQ
Users can download and install Apache Hadoop and the described ecosystem tools directly from the
www.apache.org website. Another installation option is downloading commercially packaged
distributions of the various Apache Hadoop projects. These distributions often include additional
user functionality as well as cluster management utilities. Pivotal is a company that provides a
distribution called Pivotal HD Enterprise, as illustrated in Figure 10-7.
Pivotal HD Enterprise includes several Apache software components that have been presented in
this chapter. Additional Apache software includes the following:
• Oozie: Manages Apache Hadoop jobs by acting as a workflow scheduler system
• Sqoop: Efficiently moves data between Hadoop and relational databases
• Flume: Collects and aggregates streaming data (for example, log data)
Additional functionality provided by Pivotal includes the following [39]:
• Command Center is a robust cluster management tool that allows users to install, configure,
monitor, and manage Hadoop components and services through a web graphical interface.
It simplifies Hadoop cluster installation, upgrades, and expansion using a comprehensive
dashboard with instant views of the health of the cluster and key performance metrics. Users
can view live and historical information about the host, application, and job-level metrics across
the entire Pivotal HD cluster. Command Center also provides CLI and web services APIs for
integration into enterprise monitoring services.
FIGURE 10-7 Components of Pivotal HD Enterprise: advanced data services such as HAWQ layered over
HDFS and the Apache Hadoop components.
• GraphLab on OpenMPI (Message Passing Interface) is a highly used and mature
graph-based, high-performing, distributed computation framework that easily scales
to graphs with billions of vertices and edges. It is now able to run natively within an
existing Hadoop cluster, eliminating costly data movement. This allows data scientists
and analysts to leverage popular algorithms such as PageRank, collaborative filtering,
and computer vision natively in Hadoop rather than copying the data somewhere
else to run the analytics, which would lengthen data science cycles. Combined with
MADlib’s machine learning algorithms for relational data, Pivotal HD becomes the
leading advanced analytical platform for machine learning in the world.
• Hadoop Virtualization Extensions (HVE) plug-ins make Hadoop aware of the virtual
topology and scale Hadoop nodes dynamically in a virtual environment. Pivotal HD is
the first Hadoop distribution to include HVE plug-ins, enabling easy deployment of
Hadoop in an enterprise environment. With HVE, Pivotal HD can deliver truly elastic
scalability in the cloud, augmenting on-premises deployment options.
• HAWQ (HAdoop With Query) adds SQL’s expressive power to Hadoop to accelerate
data analytics projects, simplify development while increasing productivity, expand
Hadoop’s capabilities, and cut costs. HAWQ can help render Hadoop queries faster
than any Hadoop-based query interface on the market by adding rich, proven, parallel
SQL processing facilities. HAWQ leverages existing business intelligence and analytics
products and a workforce’s existing SQL skills to bring more than 100 times performance
improvement to a wide range of query types and workloads.
10.3 NoSQL
NoSQL (Not only Structured Query Language) is a term used to describe those data stores that are applied
to unstructured data. As described earlier, HBase is such a tool that is ideal for storing key/values in column
families. In general, the power of NoSQL data stores is that as the size of the data grows, the implemented
solution can scale by simply adding additional machines to the distributed system. Four major categories
of NoSQL tools and a few examples are provided next [40].
Key/value stores contain data (the value) that can be simply accessed by a given identifier (the key). As
described in the MapReduce discussion, the values can be complex. In a key/value store, there is no stored
structure of how to use the data; the client that reads and writes to a key/value store needs to maintain
and utilize the logic of how to meaningfully extract the useful elements from the key and the value. Here
are some uses for key/value stores:
• Using a customer’s login ID as the key, the value contains the customer’s preferences.
• Using a web session ID as the key, the value contains everything that was captured during the
session.
Document stores are useful when the value of the key/value pair is a file and the file itself is self-
describing (for example, JSON or XML). The underlying structure of the documents can be used to query
and customize the display of the documents’ content. Because the document is self-describing, the
document store can provide additional functionality over a key/value store. For example, a document store
may provide the ability to create indexes to speed the searching of the documents. Otherwise, every
document in the data store would have to be examined. Document stores may be useful for the following:
• Content management of web pages
• Web analytics of stored log data
Column family stores are useful for sparse datasets: records with thousands of columns, of which only
a few columns have entries. The key/value concept still applies, but in this case a key is associated with a
collection of columns. In this collection, related columns are grouped into column families. For example,
columns for age, gender, income, and education may be grouped into a demographic family. Column family
data stores are useful in the following instances:
• To store and render blog entries, tags, and viewers’ feedback
• To store and update various web page metrics and counters
Graph databases are intended for use cases such as networks, where there are items (people or web
page links) and relationships between these items. While it is possible to store graphs such as trees in a
relational database, it often becomes cumbersome to navigate, scale, and add new relationships. Graph
databases help to overcome these possible obstacles and can be optimized to quickly traverse a
graph (move from one item in the network to another item in the network). Following are examples of
graph database implementations:
• Social networks such as Facebook and LinkedIn
• Geospatial applications such as delivery and traffic systems to optimize the time to reach one or more
destinations
Table 10-2 provides a few examples of NoSQL data stores. As is often the case, the choice of a specific data store should be made based on the functional and performance requirements. A particular data store may provide exceptional functionality in one aspect, but that functionality may come at a loss of other functionality or performance.
TABLE 10-2 Examples of NoSQL Data Stores

Category       Data Store  Website
Key/Value      Redis       redis.io
               Voldemort   www.project-voldemort.com/voldemort
Document       CouchDB     couchdb.apache.org
               MongoDB     www.mongodb.org
Column family  Cassandra   cassandra.apache.org
               HBase       hbase.apache.org
Graph          FlockDB     github.com/twitter/flockdb
               Neo4j       www.neo4j.org
Summary
This chapter examined the MapReduce paradigm and its application in Big Data analytics. Specifically, it examined the implementation of MapReduce in Apache Hadoop. The power of MapReduce is realized with the use of the Hadoop Distributed File System (HDFS) to store data in a distributed system. The ability to run a MapReduce job on the data stored across a cluster of machines enables the parallel processing of petabytes or exabytes of data. Furthermore, by adding additional machines to the cluster, Hadoop can scale as the data volumes grow.
This chapter examined several Apache projects within the Hadoop ecosystem. By providing a higher-level programming language, Apache Pig and Hive simplify code development by masking the underlying MapReduce logic to perform common data processing tasks such as filtering, joining data sets, and restructuring data. Once the data is properly conditioned within the Hadoop cluster, Apache Mahout can be used to conduct data analyses such as clustering, classification, and collaborative filtering.
The strength of MapReduce in Apache Hadoop and of the projects mentioned so far in the Hadoop ecosystem is in batch processing environments. When real-time processing, including reads and writes, is required, Apache HBase is an option. HBase uses HDFS to store large volumes of data across the cluster, but it also maintains recent changes within memory to ensure the real-time availability of the latest data. Whereas MapReduce in Hadoop, Pig, and Hive are more general-purpose tools that can address a wide range of tasks, HBase is a somewhat more purpose-specific tool. Data will be retrieved from and written to HBase in a well-understood manner.
HBase is one example of the NoSQL (Not only Structured Query Language) data stores that are being developed to address specific Big Data use cases. Maintaining and traversing social network graphs is one example of a use case for which relational databases are not the best choice of data store. However, relational databases and SQL remain powerful and common tools and will be examined in more detail in Chapter 11.
Exercises
1. Research and document additional use cases and actual implementations for Hadoop.
2. Compare and contrast Hadoop, Pig, Hive, and HBase. List strengths and weaknesses of each toolset. Research and summarize three published use cases for each toolset.
Exercises 3 through 5 require some programming background and a working Hadoop environment. The text of the novel War and Peace can be downloaded from http://onlinebooks.library.upenn.edu/ and used as the dataset for these exercises. However, other data sets can easily be substituted. Document all processing steps applied to the data.
3. Use MapReduce in Hadoop to perform a word count on the specified dataset.
4. Use Pig to perform a word count on the specified dataset.
5. Use Hive to perform a word count on the specified dataset.
Bibliography
[1] Apache, "Apache Hadoop," [Online]. Available: http://hadoop.apache.org/. [Accessed 8 May 2014].
[2] Wikipedia, "IBM Watson," [Online]. Available: http://en.wikipedia.org/wiki/IBM_Watson. [Accessed 11 February 2014].
[3] D. Davidian, "IBM.com," 14 February 2011. [Online]. Available: https://www-304.ibm.com/connections/blogs/davidian/tags/hadoop?lang=en_us. [Accessed 11 February 2014].
[4] IBM, "IBM.com," [Online]. Available: http://www-03.ibm.com/innovation/us/watson/watson_in_healthcare.shtml. [Accessed 11 February 2014].
[5] LinkedIn, "LinkedIn," [Online]. Available: http://www.linkedin.com/about-us. [Accessed 11 February 2014].
[6] LinkedIn, "Hadoop," [Online]. Available: http://data.linkedin.com/projects/hadoop. [Accessed 11 February 2014].
[7] S. Singh, "http://developer.yahoo.com/," [Online]. Available: http://developer.yahoo.com/blogs/hadoop/apache-hbase-yahoo-multi-tenancy-helm-again-171710422.html. [Accessed 11 February 2014].
[8] E. Baldeschwieler, "http://www.slideshare.net," [Online]. Available: http://www.slideshare.net/ydn/hadoop-yahoo-internet-scale-data-processing. [Accessed 11 February 2014].
[9] J. Dean and S. Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters," [Online]. Available: http://research.google.com/archive/mapreduce.html. [Accessed 11 February 2014].
[10] D. Gottfrid, "Self-Service, Prorated Supercomputing Fun!," 01 November 2007. [Online]. Available: http://open.blogs.nytimes.com/2007/11/01/self-service-prorated-super-computing-fun/. [Accessed 11 February 2014].
[11] "apache.org," [Online]. Available: http://www.apache.org/. [Accessed 11 February 2014].
[12] S. Ghemawat, H. Gobioff, and S.-T. Leung, "The Google File System," [Online]. Available: http://static.googleusercontent.com/media/research.google.com/en/us/archive/gfs-sosp2003.pdf. [Accessed 11 February 2014].
[13] D. Cutting, "Free Search: Ramblings About Lucene, Nutch, Hadoop and Other Stuff," [Online]. Available: http://cutting.wordpress.com. [Accessed 11 February 2014].
[14] "Hadoop Wiki Disk Setup," [Online]. Available: http://wiki.apache.org/hadoop/DiskSetup. [Accessed 20 February 2014].
[15] "wiki.apache.org/hadoop," [Online]. Available: http://wiki.apache.org/hadoop/NameNode. [Accessed 11 February 2014].
[16] "HDFS High Availability," [Online]. Available: http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithNFS.html. [Accessed 8 May 2014].
[17] Eclipse. [Online]. Available: https://www.eclipse.org/downloads/. [Accessed 27 February 2014].
[18] Apache, "Hadoop Streaming," [Online]. Available: https://wiki.apache.org/hadoop/HadoopStreaming. [Accessed 8 May 2014].
[19] "Hadoop Pipes," [Online]. Available: http://hadoop.apache.org/docs/r1.2.1/api/org/apache/hadoop/mapred/pipes/package-summary.html. [Accessed 19 February 2014].
[20] "HDFS Design," [Online]. Available: http://hadoop.apache.org/docs/stable1/hdfs_design.html. [Accessed 19 February 2014].
[21] "BSP Tutorial," [Online]. Available: http://hama.apache.org/hama_bsp_tutorial.html. [Accessed 20 February 2014].
[22] "Hama," [Online]. Available: http://hama.apache.org/. [Accessed 20 February 2014].
[23] "PoweredByYarn," [Online]. Available: http://wiki.apache.org/hadoop/PoweredByYarn. [Accessed 20 February 2014].
[24] "pig.apache.org," [Online]. Available: http://pig.apache.org/.
[25] "Pig," [Online]. Available: http://pig.apache.org/. [Accessed 11 February 2014].
[26] "Piggybank," [Online]. Available: https://cwiki.apache.org/confluence/display/PIG/PiggyBank. [Accessed 28 February 2014].
[27] F. Chang, J. Dean, S. Ghemawat, W.C. Hsieh, D.A. Wallach, M. Burrows, T. Chandra, A. Fikes, and R.E. Gruber, "Bigtable: A Distributed Storage System for Structured Data," [Online]. Available: http://research.google.com/archive/bigtable.html. [Accessed 11 February 2014].
[28] K. Muthukkaruppan, "The Underlying Technology of Messages," 15 November 2010. [Online]. Available: http://www.facebook.com/notes/facebook-engineering/the-underlying-technology-of-messages/454991608919. [Accessed 11 February 2014].
[29] "HBase Key Value," [Online]. Available: http://hbase.apache.org/book/regions.arch.html. [Accessed 28 February 2014].
[30] "Number of Column Families," [Online]. Available: http://hbase.apache.org/book/number.of.cfs.html.
[31] "HBase Regionserver," [Online]. Available: http://hbase.apache.org/book/regionserver.arch.html. [Accessed 3 March 2014].
[32] "Netcraft," [Online]. Available: http://news.netcraft.com/archives/2014/02/03/february-2014-web-server-survey.html. [Accessed 21 February 2014].
[33] K. Muthukkaruppan, "The Underlying Technology of Messages," 15 November 2010. [Online]. Available: http://www.facebook.com/notes/facebook-engineering/the-underlying-technology-of-messages/454991608919. [Accessed 11 February 2014].
[34] N. Spiegelberg. [Online]. Available: http://www.slideshare.net/brizzzdotcom/facebook-messages-hbase. [Accessed 11 February 2014].
[35] "HBase Rowkey," [Online]. Available: http://hbase.apache.org/book/rowkey.design.html. [Accessed 4 March 2014].
[36] "Zookeeper," [Online]. Available: http://zookeeper.apache.org/. [Accessed 11 February 2014].
[37] "Zookeeper," [Online]. Available: http://hbase.apache.org/book/zookeeper.html. [Accessed 21 February 2014].
[38] "Mahout," [Online]. Available: http://mahout.apache.org/users/basics/algorithms.html. [Accessed 19 February 2014].
[39] "Pivotal HD," [Online]. Available: http://www.gopivotal.com/big-data/pivotal-hd. [Accessed 8 May 2014].
[40] P. J. Sadalage and M. Fowler, NoSQL Distilled: A Brief Guide to the Emerging World of Polyglot Persistence, Upper Saddle River, NJ: Addison-Wesley, 2013.
ADVANCED ANALYTICS-TECHNOLOGY AND TOOLS: IN-DATABASE ANALYTICS
In-database analytics is a broad term that describes the processing of data within its repository. In
many of the earlier R examples, data was extracted from a data source and loaded into R. One advantage of
in-database analytics is that the need for movement of the data into an analytic tool is eliminated. Also, by
performing the analysis within the database, it is possible to obtain almost real-time results. Applications
of in-database analytics include credit card transaction fraud detection, product recommendations, and
web advertisement selection tailored for a particular user.
A popular open-source database is PostgreSQL. This name references an important in-database analytic language known as Structured Query Language (SQL). This chapter examines basic as well as advanced topics in SQL. The provided examples of SQL code were tested against Greenplum database 4.1.1.1, which is based on PostgreSQL 8.2.15. However, the presented concepts are applicable to other SQL environments.
11.1 SQL Essentials
A relational database, part of a Relational Database Management System (RDBMS), organizes data in tables with established relationships between the tables. Figure 11-1 shows the relationships between five tables used to store details about orders placed at an e-commerce retailer.
The five tables and their columns are as follows:
• orders: order_id, product_id, customer_id, item_shipment_status_code, order_datetime, ship_datetime, payment_method_code, tax_amount, item_quantity, item_price, ordering_session_id, website_url
• product: product_id, category_id, product_name, price
• category: category_id, category_name
• customer: customer_id, first_name, last_name, email_address
• customer_demographics: customer_id, city, state_code, country, customer_gender, customer_age, frequent_shopper_class, tenure_with_company_in_years, parent, multiple_kids, student, innovation_adapter_category, fashion_buying_category
FIGURE 11-1 Relationship diagram
The table orders contains records for each order transaction. Each record contains data elements such as the product_id ordered, the customer_id for the customer who placed the order, the order_datetime, and so on. The other four tables provide additional details about the ordered items and the customer. The lines between the tables in Figure 11-1 illustrate the relationships between the tables. For example, a customer’s first name, last name, and gender from the customer table can be associated with an orders record based on equality of the customer_id in these two tables.
Although it is possible to build one large table to hold all the order and customer details, the use of five tables has its advantages. The first advantage is disk storage savings. Instead of storing the product name, which can be several hundred characters in length, in the orders table, a much shorter product_id, of perhaps a few bytes, can be used and stored in place of the product’s name.
Another advantage is that changes and corrections are easily made. In this example, the table category is used to categorize each product. If it is discovered that an incorrect category was assigned to a particular product item, only the category_id in the product table needs to be updated. Without the product and category tables, it may be necessary to update hundreds of thousands of records in the orders table.
A third advantage is that products can be added to the database prior to any orders being placed.
Similarly, new categories can be created in anticipation of entirely new product lines being added to the
online retailer’s offerings later.
In a relational database design, the preference is not to duplicate pieces of data such as the customer’s
name across multiple records. The process of reducing such duplication is known as normalization. It is
important to recognize that a database that is designed to process transactions may not necessarily be
optimally designed for analytical purposes. Transactional databases are often optimized to handle the
insertion of new records or updates to existing records, but not optimally tuned to perform ad-hoc query-
ing. Therefore, in designing analytical data warehouses, it is common to combine several of the tables and
create one larger table, even though some pieces of data may be duplicated.
Regardless of a database’s purpose, SQL is typically used to query the contents of the relational data-
base tables as well as to insert, update, and delete data. A basic SQL query against the customer table
may look like this.
SELECT first_name,
last_name
FROM customer
WHERE customer_id = 666730
first_name  last_name
Mason Hu
This query returns the customer information for the customer with a customer_id of 666730. This SQL query consists of three key parts:
• SELECT: Specifies the table columns to be displayed
• FROM: Specifies the name of the table to be queried
• WHERE: Specifies the criterion or filter to be applied
In a relational database, it is often necessary to access related data from multiple tables at once. To
accomplish this task, the SQL query uses JOIN statements to specify the relationships between the mul-
tiple tables.
11.1.1 Joins
Joins enable a database user to appropriately select columns from two or more tables. Based on the rela-
tionship diagram in Figure 11-1, the following SQL query provides an example of the most common type
of join: an inner join.
SELECT c.customer_id,
       o.order_id,
       o.product_id,
       o.item_quantity AS qty
FROM orders o
INNER JOIN customer c
  ON o.customer_id = c.customer_id
WHERE c.first_name = 'Mason'
  AND c.last_name = 'Hu'
customer_id  order_id              product_id  qty
666730       51965-1172-6384-6923  33611       5
666730       79487-2349-4233-6891  34098       4
666730       39489-4031-0789-6076  33928       1
666730       29892-1218-2722-3191  33625
666730       07751-7728-7969-3140  34140
666730       85394-8022-6681-4716  33571
This query returns details of the orders placed by customer Mason Hu. The SQL query joins the two
tables in the FROM clause based on the equality of the customer_id values. In this query, the specific
customer_id value for Mason Hu does not need to be known by the programmer; only the customer’s
full name needs to be known.
Some additional functionality beyond the use of the INNER JOIN is introduced in this SQL query. Aliases o and c are assigned to tables orders and customer, respectively. Aliases are used in place of the full table names to improve the readability of the query. By design, the column names specified in the SELECT clause are also provided in the output. However, the outputted column name can be modified with the AS keyword. In the SQL query, the values of item_quantity are displayed, but this outputted column is now called qty.
The INNER JOIN returns those rows from the two tables where the ON criterion is met. From
the earlier query on the customer table, there is only one row in the table for customer Mason Hu.
Because the corresponding customer_id for Mason Hu appears six times in the orders table, the
INNER JOIN query returns six records. If the WHERE clause was not included, the query would have
returned millions of rows for all the orders that had a matching customer.
Suppose an analyst wants to know which customers have created an online account but have
not yet placed an order. The next query uses a RIGHT OUTER JOIN to identify the first five
customers, alphabetically, who have not placed an order. The sorting of the records is accomplished with the
ORDER BY clause.
SELECT c.customer_id,
       c.first_name,
       c.last_name,
       o.order_id
FROM orders o
RIGHT OUTER JOIN customer c
  ON o.customer_id = c.customer_id
WHERE o.order_id IS NULL
ORDER BY c.last_name,
         c.first_name
LIMIT 5
customer_id  first_name  last_name  order_id
143915       Abigail     Aaron
965886       Audrey      Aaron
982042       Carter      Aaron
125302       Daniel      Aaron
103964       Emily       Aaron
In the SQL query, a RIGHT OUTER JOIN is used to specify that all rows from the table customer, on the right-hand side (RHS) of the join, should be returned, regardless of whether there is a matching customer_id in the orders table. In this query, the WHERE clause restricts the results to only those joined customer records where there is no matching order_id. NULL is a special SQL keyword that denotes an unknown value. Without the WHERE clause, the output also would have included all the records that had a matching customer_id in the orders table, as seen in the following SQL query.
SELECT c.customer_id,
       c.first_name,
       c.last_name,
       o.order_id
FROM orders o
RIGHT OUTER JOIN customer c
  ON o.customer_id = c.customer_id
ORDER BY c.last_name,
         c.first_name
LIMIT 5
customer_id  first_name  last_name  order_id
143915       Abigail     Aaron
222599       Addison     Aaron      ...
222599       Addison     Aaron      ...
222599       Addison     Aaron      ...
222599       Addison     Aaron      69225-1638-2944-0264
In the query results, the first customer, Abigail Aaron, had not placed an order, but the next customer,
Addison Aaron, has placed at least four orders.
There are several other types of join statements. The LEFT OUTER JOIN performs the same
functionality as the RIGHT OUTER JOIN except that all records from the table on the left-hand side
(LHS) of the join are considered. A FULL OUTER JOIN includes all records from both tables regardless of
whether there is a matching record in the other table. A CROSS JOIN combines two tables by matching
every row of the first table with every row of the second table. If the two tables have 100 and 1,000 rows,
respectively, then the resulting CROSS JOIN of these tables will have 100,000 rows.
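To make these descriptions concrete, the following sketch (not from the text; the selected columns are arbitrary) shows how a LEFT OUTER JOIN and a FULL OUTER JOIN might be written against the same orders and customer tables.

SELECT o.order_id,
       c.first_name,
       c.last_name
FROM orders o
LEFT OUTER JOIN customer c          /* keep every orders row (the LHS table) */
  ON o.customer_id = c.customer_id

SELECT o.order_id,
       c.customer_id
FROM orders o
FULL OUTER JOIN customer c          /* keep all rows from both tables */
  ON o.customer_id = c.customer_id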
The actual records returned from any join operation depend on the criteria stated in the WHERE
clause. Thus, careful consideration needs to be taken in using a WHERE clause, especially with outer joins.
Otherwise, the intended use of the outer join may be undone.
11.1.2 Set Operations
SQL provides the ability to perform set operations, such as unions and intersections, on rows of data. For
example, suppose all the records in the orders table are split into two tables. The orders_arch table,
short for orders archived, contains the orders entered prior to January 2013. The orders transacted in or
after January 2013 are stored in the orders_recent table. However, all the orders for product_id
33611 are required for an analysis. One approach would be to write and run two separate queries against
the two tables. The results from the two queries could then be merged later into a separate file or table.
Alternatively, one query could be written using the UNION ALL operator as follows:
SELECT customer_id,
       order_id,
       order_datetime,
       product_id,
       item_quantity AS qty
FROM orders_arch
WHERE product_id = 33611
UNION ALL
SELECT customer_id,
       order_id,
       order_datetime,
       product_id,
       item_quantity AS qty
FROM orders_recent
WHERE product_id = 33611
ORDER BY order_datetime
customer_id  order_id              order_datetime       product_id  qty
643126       13501-6446-6326-0182  2005-01-02 19:28:08  33611       1
725940       70733-4014-1618-2531  2005-01-08 06:16:31  33611
742448       03107-1712-8668-9967  2005-01-08 16:11:39  33611
640847       73619-0127-0657-7416  2013-01-05 14:53:27  33611
660446       55160-7129-2408-9181  2013-01-07 03:59:36  33611
647335       75014-7339-1214-6447  2013-01-27 13:02:10  33611
The first three records from each table are shown in the output. Because the resulting records from
both tables are appended together in the output, it is important that the columns are specified in the
same order and that the data types of the columns are compatible. UNION ALL merges the results of
the two SELECT statements regardless of any duplicate records appearing in both SELECT statements.
If only UNION was used, any duplicate records, based on all the specified columns, would be eliminated.
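For instance, a sketch (not in the text) of replacing UNION ALL with UNION for a single product illustrates the deduplication:

SELECT product_id
FROM orders_arch
WHERE product_id = 33611
UNION
SELECT product_id
FROM orders_recent
WHERE product_id = 33611

Because both SELECT statements return only the value 33611, the UNION collapses the matching rows into a single row.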
The INTERSECT operator determines any identical records that are returned by two SELECT state-
ments. For example, if one wanted to know what items were purchased prior to 2013 as well as later, the
SQL query using the INTERSECT operator would be this.
SELECT product_id
FROM orders_arch
INTERSECT
SELECT product_id
FROM orders_recent
product_id
30
31
It is important to note that the intersection only returns a product_id if it appears in both tables and returns exactly one instance of such a product_id. Thus, only a list of distinct product IDs is returned by the query.
To count the number of products that were ordered prior to 2013 but not after that point in time, the EXCEPT operator can be used to exclude the product IDs in the orders_recent table from the product IDs in the orders_arch table, as shown in the following SQL query.
SELECT COUNT(e.*)
FROM (SELECT product_id
FROM orders_arch
EXCEPT
SELECT product_id
FROM orders_recent) e
13569
The preceding query uses the COUNT aggregate function to determine the number of returned rows from a second SQL query that includes the EXCEPT operator. This SQL query within a query is sometimes called a subquery or a nested query. Subqueries enable the construction of fairly complex queries without having to first execute the pieces, dump the rows to temporary tables, and then execute another SQL query to process those temporary tables. Subqueries can be used in place of a table within the FROM clause or can be used in the WHERE clause.
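As an illustration of the latter case, a sketch (not from the text, using columns from the tables in Figure 11-1) of a subquery in the WHERE clause might look like this.

SELECT order_id,
       order_datetime
FROM orders
WHERE customer_id IN (SELECT customer_id          /* subquery supplies the filter list */
                      FROM customer_demographics
                      WHERE customer_gender = 'F')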
11.1.3 Grouping Extensions
Previously, the COUNT() aggregate function was used to count the number of returned rows from a query. Such aggregate functions often summarize a dataset after applying some grouping operation to it. For example, it may be desired to know the revenue by year or shipments per week. The following SQL query uses the SUM() aggregate function along with the GROUP BY operator to provide the top three ordered items based on item_quantity.
SELECT i.product_id,
SUM(i.item_quantity) AS total
FROM orders_recent i
GROUP BY i.product_id
ORDER BY SUM(i.item_quantity) DESC
LIMIT 3
product_id  total
15072       6089
15066       6082
15060       6053
GROUP BY can use the ROLLUP operator to calculate subtotals and grand totals. The following SQL query employs the previous query as a subquery in the WHERE clause to supply the number of items ordered by year for the top three items ordered overall. The ROLLUP operator provides the subtotals, which match the previous output for each product_id, as well as the grand total.
SELECT r.product_id,
       DATE_PART('year', r.order_datetime) AS year,
       SUM(r.item_quantity) AS total
FROM orders_recent r
WHERE r.product_id IN (SELECT o.product_id
                       FROM orders_recent o
                       GROUP BY o.product_id
                       ORDER BY SUM(o.item_quantity) DESC
                       LIMIT 3)
GROUP BY ROLLUP( r.product_id, DATE_PART('year', r.order_datetime) )
ORDER BY r.product_id,
         DATE_PART('year', r.order_datetime)
product_id  year  total
15060       2013  5996
15060       2014  57
15060             6053
15066       2013  6030
15066       2014  52
15066             6082
15072       2013  6023
15072       2014  66
15072             6089
                  18224
The CUBE operator expands on the functionality of the ROLLUP operator by providing subtotals for each column specified in the CUBE statement. Modifying the prior query by replacing the ROLLUP operator with the CUBE operator results in the same output with the addition of the subtotals for each year.
SELECT r.product_id,
       DATE_PART('year', r.order_datetime) AS year,
       SUM(r.item_quantity) AS total
FROM orders_recent r
WHERE r.product_id IN (SELECT o.product_id
                       FROM orders_recent o
                       GROUP BY o.product_id
                       ORDER BY SUM(o.item_quantity) DESC
                       LIMIT 3)
GROUP BY CUBE( r.product_id, DATE_PART('year', r.order_datetime) )
ORDER BY r.product_id,
         DATE_PART('year', r.order_datetime)
product_id  year  total
15060       2013  5996
15060       2014  57
15060             6053
15066       2013  6030
15066       2014  52
15066             6082
15072       2013  6023
15072       2014  66
15072             6089
            2013  18049   <- additional row
            2014  175     <- additional row
                  18224
Because null values in the output indicate the subtotal and grand total rows, care must be taken when null values appear in the columns being grouped. For example, null values may be part of the dataset being analyzed. The GROUPING() function can identify which rows with null values are used for the subtotals or grand totals.
SELECT r.product_id,
       DATE_PART('year', r.order_datetime) AS year,
       SUM(r.item_quantity) AS total,
       GROUPING(r.product_id) AS group_id,
       GROUPING(DATE_PART('year', r.order_datetime)) AS group_year
FROM orders_recent r
WHERE r.product_id IN (SELECT o.product_id
                       FROM orders_recent o
                       GROUP BY o.product_id
                       ORDER BY SUM(o.item_quantity) DESC
                       LIMIT 3)
GROUP BY CUBE( r.product_id, DATE_PART('year', r.order_datetime) )
ORDER BY r.product_id,
         DATE_PART('year', r.order_datetime)
product id year total group_id group_year
15060 2013 5996 0 0
15060 2014 57 0 0
15060 6053 0 1
15066 2013 6030 0 0
15066 2014 52 0 0
15066 6082 0 1
15072 2013 6023 0 0
15072 2014 66 0 0
15072 6089 0 1
2013 18049 1 0
2014 175 1 0
18224 1 1
In the preceding query, group_year is set to 1 when a total is calculated across the values of year. Similarly, group_id is set to 1 when a total is calculated across the values of product_id.
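One common use of these indicator columns, sketched below (this query is not from the text), is to replace the null values in the subtotal and grand total rows with readable labels.

SELECT CASE WHEN GROUPING(r.product_id) = 1
            THEN 'All products'
            ELSE CAST(r.product_id AS varchar) END AS product,
       CASE WHEN GROUPING(DATE_PART('year', r.order_datetime)) = 1
            THEN 'All years'
            ELSE CAST(DATE_PART('year', r.order_datetime) AS varchar) END AS year,
       SUM(r.item_quantity) AS total
FROM orders_recent r
GROUP BY CUBE( r.product_id, DATE_PART('year', r.order_datetime) )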
The functionality of ROLLUP and CUBE can be customized via GROUPING SETS. The SQL query using the CUBE operator can be replaced with the following query that employs GROUPING SETS to provide the same results.
SELECT r.product_id,
DATE_PART(‘year’, r.order_datetime) AS year,
SUM(r.item_quantity) AS total
FROM orders_recent r
WHERE r.product_id IN (SELECT o.product_id
FROM orders_recent o
GROUP BY o.product_id
ORDER BY SUM(o.item_quantity) DESC
LIMIT 3)
GROUP BY GROUPING SETS( ( r.product_id,
                          DATE_PART('year', r.order_datetime) ),
                        ( r.product_id ),
                        ( DATE_PART('year', r.order_datetime) ),
                        ( ) )
ORDER BY r.product_id,
         DATE_PART('year', r.order_datetime)
The listed grouping sets define the columns for which subtotals will be provided. The last grouping set, ( ), specifies that the overall total is supplied in the query results. For example, if only the grand total was desired, the following SQL query using GROUPING SETS could be used.
SELECT r.product_id,
DATE_PART(‘year’, r.order_datetime) AS year,
SUM(r.item_quantity) AS total
FROM orders_recent r
WHERE r.product_id IN (SELECT o.product_id
FROM orders_recent o
GROUP BY o.product_id
ORDER BY SUM(o.item_quantity) DESC
LIMIT 3)
GROUP BY GROUPING SETS( ( r.product_id,
                          DATE_PART('year', r.order_datetime) ),
                        ( ) )
ORDER BY r.product_id,
DATE_PART(‘year’, r.order_datetime)
product_id year total
15060 2013 5996
15060 2014 57
15066 2013 6030
15066 2014 52
15072 2013 6023
15072 2014 66
18224
Because the GROUP BY clause can contain multiple CUBE, ROLLUP, or column specifications, duplicate grouping sets might occur. The GROUP_ID() function identifies the unique rows with a 0 and the redundant rows with a 1, 2, .... To illustrate the function GROUP_ID(), both ROLLUP and CUBE are used when only one specific product_id is being examined.
SELECT r.product_id,
DATE_PART(‘year’, r.order_datetime) AS year,
SUM(r.item_quantity) AS total,
GROUP_ID() AS group_id
FROM orders_recent r
WHERE r.product_id IN ( 15060 )
GROUP BY ROLLUP( r.product_id, DATE_PART(‘year’, r.order_datetime) ),
CUBE( r.product_id, DATE_PART(‘year’, r.order_datetime) )
ORDER BY r.product_id,
DATE_PART(‘year’, r.order_datetime),
GROUP_ID()
product_id  year  total  group_id
15060       2013  5996   0
15060       2013  5996   1
15060       2013  5996   2
15060       2013  5996   3
15060       2013  5996   4
15060       2013  5996   5
15060       2013  5996   6
15060       2014  57     0
15060       2014  57     1
15060       2014  57     2
15060       2014  57     3
15060       2014  57     4
15060       2014  57     5
15060       2014  57     6
15060             6053   0
15060             6053   1
15060             6053   2
            2013  5996   0
            2014  57     0
                  6053   0
Filtering on the group_id values equal to zero yields unique records. This filtering can be accomplished with the HAVING clause, as illustrated in the next SQL query.
SELECT r.product_id,
       DATE_PART('year', r.order_datetime) AS year,
       SUM(r.item_quantity) AS total,
       GROUP_ID() AS group_id
FROM orders_recent r
WHERE r.product_id IN ( 15060 )
GROUP BY ROLLUP( r.product_id, DATE_PART('year', r.order_datetime) ),
         CUBE( r.product_id, DATE_PART('year', r.order_datetime) )
HAVING GROUP_ID() = 0
ORDER BY r.product_id,
         DATE_PART('year', r.order_datetime),
         GROUP_ID()

product_id  year  total  group_id
15060       2013  5996   0
15060       2014  57     0
15060             6053   0
            2013  5996   0
            2014  57     0
                  6053   0
11.2 In-Database Text Analysis
SQL offers several basic text string functions as well as wildcard search functionality. Related SELECT statements and their results, enclosed in the SQL comment delimiters /* */, include the following:
SELECT SUBSTRING('1234567890', 3, 2)   /* returns '34'    */
SELECT '1234567890' LIKE '%7%'         /* returns True    */
SELECT '1234567890' LIKE '7%'          /* returns False   */
SELECT '1234567890' LIKE '_2%'         /* returns True    */
SELECT '1234567890' LIKE '_3%'         /* returns False   */
SELECT '1234567890' LIKE '__3%'        /* returns True    */
This section examines more dynamic and flexible tools for text analysis, called regular expressions, and their use in SQL queries to perform pattern matching. Table 11-1 includes several forms of the comparison operator used with regular expressions and related SQL examples that produce a True result.
TABLE 11-1 Regular Expression Operators

Operator  Description                                                  Example
~         Contains the regular expression (case sensitive)             '123a567' ~ 'a'
~*        Contains the regular expression (case insensitive)           '123a567' ~* 'A'
!~        Does not contain the regular expression (case sensitive)     '123a567' !~ 'A'
!~*       Does not contain the regular expression (case insensitive)   '123a567' !~* 'b'
More complex forms of the patterns that are specified at the RHS of the comparison operator can be constructed by using the elements in Table 11-2.
TABLE 11-2 Regular Expression Elements

Element  Description
|        Matches item a or b (a|b)
^        Looks for matches at the beginning of the string
$        Looks for matches at the end of the string
.        Matches any single character
*        Matches the preceding item zero or more times
+        Matches the preceding item one or more times
?        Makes the preceding item optional
{n}      Matches the preceding item exactly n times
( )      Matches the contents exactly
[ ]      Matches any of the characters in the content, such as [0-9]
\\x      Matches a nonalphanumeric character named x
\\y      Matches an escape string \y
To illustrate the use of these elements, the following SELECT statements include examples in which the comparisons are True or False.
/* matches either item a or b */
SELECT '123a567' ~ '23|b'      /* returns True  */
SELECT '123a567' ~ '32|b'      /* returns False */

/* matches the beginning of the string */
SELECT '123a567' ~ '^123a'     /* returns True  */
SELECT '123a567' ~ '^123a7'    /* returns False */

/* matches the end of the string */
SELECT '123a567' ~ 'a567$'     /* returns True  */
SELECT '123a567' ~ '27$'       /* returns False */

/* matches any single character */
SELECT '123a567' ~ '2.a'       /* returns True  */
SELECT '123a567' ~ '2..5'      /* returns True  */
SELECT '123a567' ~ '2...5'     /* returns False */

/* matches the preceding item zero or more times */
SELECT '123a567' ~ '2*'        /* returns True  */
SELECT '123a567' ~ '2*a'       /* returns True  */
SELECT '123a567' ~ '7*a'       /* returns True  */
SELECT '123a567' ~ '37*'       /* returns True  */
SELECT '123a567' ~ '87*'       /* returns False */

/* matches the preceding item one or more times */
SELECT '123a567' ~ '2+'        /* returns True  */
SELECT '123a567' ~ '2+a'       /* returns False */
SELECT '123a567' ~ '7+a'       /* returns False */
SELECT '123a567' ~ '37+'       /* returns False */
SELECT '123a567' ~ '87+'       /* returns False */
11.3 Advanced SQL

11.3.1 Window Functions

[Query output showing year, week, sales, and the 5-week moving average of weekly sales for weeks 1 through 26 of 2014; the annotations beside the final rows note that the averaging window narrows to weeks 22 through 26, 23 through 26, and finally 24 through 26.]
The windowing function uses the built-in aggregate function AVG(), which computes the arithmetic average of a set of values. The ORDER BY clause sorts the records in chronological order and specifies which rows should be included in the averaging process with the current row. In this SQL query, the moving average is based on the current row, the preceding two rows, and the following two rows. Because the dataset does not include the last two weeks of 2013, the first moving average value of 1,572,999.333 is the average of the first three weeks of 2014: the current week and the two subsequent weeks. The moving average value for the second week, 1,579,941.75, is the sales value for week 2 averaged with the prior week and the two subsequent weeks. For weeks 3 through 24, the moving average is based on the sales from 5-week periods, centered on the current week. At week 25, the window begins to include fewer weeks because the following weeks are unavailable. Figure 11-3 illustrates the applied smoothing process against the weekly sales figures.
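The query that produces this output is not reproduced here; a sketch of how such a centered moving average might be written with a window function, assuming the sales_by_week table (year, week, sales) used in the EWMA example later in this section, follows.

SELECT year,
       week,
       sales,
       AVG(sales)
         OVER (
           ORDER BY year, week
           ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING ) AS moving_avg
FROM sales_by_week
WHERE year = 2014
  AND week <= 26
ORDER BY year,
         week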
FIGURE 11-3 Weekly sales with moving averages
Built-in window functions may vary by SQL implementation. Table 11-3 [1] from the PostgreSQL docu-
mentation includes the list of general-purpose window functions.
TABLE 11-3 Window Functions

Function                            Description
row_number()                        Number of the current row within its partition, counting from 1.
rank()                              Rank of the current row with gaps; same as row_number of its first peer.
dense_rank()                        Rank of the current row without gaps; this function counts peer groups.
percent_rank()                      Relative rank of the current row: (rank - 1) / (total rows - 1).
cume_dist()                         Relative rank of the current row: (number of rows preceding or peer with current row) / (total rows).
ntile(num_buckets integer)          Integer ranging from 1 to the argument value, dividing the partition as equally as possible.
lag(value any [, offset integer [, default any ]])
                                    Returns the value evaluated at the row that is offset rows before the current row within the partition; if there is no such row, instead returns default. Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to null.
lead(value any [, offset integer [, default any ]])
                                    Returns the value evaluated at the row that is offset rows after the current row within the partition; if there is no such row, instead returns default. Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to null.
first_value(value any)              Returns the value evaluated at the first row of the window frame.
last_value(value any)               Returns the value evaluated at the last row of the window frame.
nth_value(value any, nth integer)   Returns the value evaluated at the nth row of the window frame (counting from 1); null if no such row.

http://www.postgresql.org/docs/9.3/static/functions-window.html
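As a brief illustration (not from the text), two of these functions might be applied to the sales_by_week table used elsewhere in this section as follows.

SELECT year,
       week,
       sales,
       rank() OVER (PARTITION BY year ORDER BY sales DESC) AS sales_rank,   /* rank weeks by sales within each year */
       lag(sales, 1) OVER (ORDER BY year, week) AS prior_week_sales         /* sales from the previous week */
FROM sales_by_week
WHERE year = 2014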
11.3.2 User-Defined Functions and Aggregates
When the built-in SQL functions are insufficient for a particular task or analysis, SQL enables the user to
create user-defined functions and aggregates. This custom functionality can be incorporated into SQL
queries in the same ways that the built-in functions and aggregates are used. User-defined functions can
also be created to simplify processing tasks that a user may commonly encounter.
ADVANCED ANAL YTICS-TECHNOLOGY AND TOOLS: IN-DATABASE ANALYTICS
For example, a user-defined function can be written to translate text strings for female (F) and male (M) to 0 and 1, respectively. Such a function may be helpful when formatting data for use in a regression analysis. Such a function, fm_convert(), could be implemented as follows:
CREATE FUNCTION fm_convert(text) RETURNS integer AS
'SELECT CASE
    WHEN $1 = ''F'' THEN 0
    WHEN $1 = ''M'' THEN 1
    ELSE NULL
 END'
LANGUAGE SQL
IMMUTABLE
RETURNS NULL ON NULL INPUT
In declaring the function, the SQL query is placed within single quotes. The first and only passed value is referenced by $1. The SQL query is followed by a LANGUAGE statement that explicitly states that the preceding statement is written in SQL. Another option is to write the code in C. IMMUTABLE indicates that the function does not update the database and does not use the database for lookups. The IMMUTABLE declaration informs the database's query optimizer how best to implement the function. The RETURNS NULL ON NULL INPUT statement specifies how the function addresses the case when any of the inputs are null values.
In the online retail example, the fm_convert() function can be applied to the customer_gender column in the customer_demographics table as follows.
SELECT customer_gender,
       fm_convert(customer_gender) AS male
FROM customer_demographics
LIMIT 5
customer_gender male
M 1
F 0
F 0
M 1
M 1
Built-in and user-defined functions can be incorporated into user-defined aggregates, which can then
be used as a window function. In Section 11.3.1, a window function is used to calculate moving averages
to smooth a data series. In this section, a user-defined aggregate is created to calculate an Exponentially
Weighted Moving Average (EWMA). For a given time series, the EWMA series is defined as shown in
Equation 11-1.
\[
\mathrm{EWMA}_t =
\begin{cases}
y_t & \text{for } t = 1 \\
\alpha\, y_t + (1-\alpha)\,\mathrm{EWMA}_{t-1} & \text{for } t \ge 2
\end{cases}
\qquad \text{where } 0 \le \alpha \le 1
\tag{11-1}
\]
The smoothing factor, α, determines how much weight to place on the latest point in a given time series. By repeatedly substituting into Equation 11-1 for the prior value of the EWMA series, it can be shown that the weights against the original series decay exponentially backward in time.
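For instance, expanding the recursion twice (a small derivation added here for illustration, not part of the text) gives

\[
\mathrm{EWMA}_3 = \alpha\, y_3 + \alpha(1-\alpha)\, y_2 + (1-\alpha)^2\, y_1
\]

so the weight placed on an observation k steps in the past is proportional to (1-α)^k.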
To implement EWMA smoothing as a user-defined aggregate in SQL, the functionality in Equation 11-1
needs to be implemented first as a user-defined function.
CREATE FUNCTION ewma_calc(numeric, numeric, numeric) RETURNS numeric AS
/* $1 = prior value of EWMA */
/* $2 = current value of series */
/* $3 = alpha, the smoothing factor */
'SELECT CASE
    WHEN $3 IS NULL                      /* bad alpha */
      OR $3 < 0
      OR $3 > 1 THEN NULL
    WHEN $1 IS NULL THEN $2              /* t = 1 */
    WHEN $2 IS NULL THEN $1              /* y is unknown */
    ELSE ($3 * $2) + (1 - $3) * $1       /* t >= 2 */
 END'
LANGUAGE SQL
IMMUTABLE
Accepting three numeric inputs as defined in the comments, the ewma_calc() function addresses possible bad values of the smoothing factor as well as the special case in which the other inputs are null. The ELSE statement performs the usual EWMA calculation. Once this function is created, it can be referenced in the user-defined aggregate, ewma().
CREATE AGGREGATE ewma(numeric, numeric)
(SFUNC = ewma_calc,
STYPE = numeric,
PREFUNC = dummy_function)
In the CREATE AGGREGATE statement for ewma(), SFUNC assigns the state transition function (ewma_calc in this example) and STYPE assigns the data type of the variable that stores the current state of the aggregate. The variable for the current state is made available to the ewma_calc() function as the first variable, $1. In this case, because the ewma_calc() function requires three inputs, the ewma() aggregate requires only two inputs; the state variable is always internally available to the aggregate. The PREFUNC assignment is required in the Greenplum database for use in a massively parallel processing (MPP) environment. For some aggregates, it is necessary to perform some preliminary functionality on the current state variables for a couple of servers in the MPP environment. In this example, the assigned PREFUNC function is added as a placeholder and is not utilized in the proper execution of the ewma() aggregate function.
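The definition of dummy_function is not shown in the text; a minimal placeholder of the kind described, written here as an assumption rather than the book's code, might simply pass the first state variable through.

CREATE FUNCTION dummy_function(numeric, numeric) RETURNS numeric AS
'SELECT $1'    /* placeholder only; returns the first state variable unchanged */
LANGUAGE SQL
IMMUTABLE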
As a window function, the ewma() aggregate, with a smoothing factor of 0.1, can be applied to the weekly sales data as follows.
SELECT year,
week,
sales,
ewma(sales, .1)
OVER (
ORDER BY year, week)
FROM sales_by_week
WHERE year = 2014
AND week <= 26
ORDER BY year ,
week
year  week  sales    ewma
2014  1     1564539  1564539.00
2014  2     1572128  1565297.90
2014  3     1582331  1567001.21
2014  4     1600769  1570377.99
2014  5     1580146  1571354.79
...
Figure 11-4 adds the EWMA-smoothed series to the plot from Figure 11-3.
FIGURE 12-16 Forty-five years of store opening data
Even showing somewhat less data is still difficult to read through for most people. Figure 12-17 hides the first 10 years, leaving 35 years of data in the table.
FIGURE 12-17 Thirty-five years of store opening data
As most readers will observe, it is challenging to make sense of data, even at relatively small scales. There are several observations in the data that one may notice, if one looks closely at the data tables:
• Big Box experienced strong growth in the 1980s and 1990s.
• By the 1980s, Big Box began adding more SuperBox stores to its mix of chain stores.
• SuperBox stores outnumber Big Box stores nearly 2 to 1 in aggregate.
Depending on the point trying to be made, the analyst must take care to organize the information in a way that intuitively enables the viewer to take away the same main point that the author intended. If the analyst fails to do this effectively, the person consuming the data must guess at the main point and may interpret something different from what was intended.
Figure 12-18 shows a map of the United States, with the points representing the geographic locations of the stores. This map is a more powerful way to depict data than a small table would be. The approach is well suited to a sponsor audience. This map shows where the Big Box store has market saturation, where the company has grown, and where it has SuperBox stores and other Big Box stores, based on the color and shading. The visualization in Figure 12-18 clearly communicates more effectively than the dense tables in Figure 12-16 and Figure 12-17. For a sponsor audience, the analytics team can also use other simple visualization techniques to portray data, such as bar charts or line charts.
FIGURE 12-18 Forty-five years of store opening data, shown as map
12.3.2 Evolution of a Graph
Visualization allows people to portray data in a more compelling way than tables of data and in a way that can be understood on an intuitive, precognitive level. In addition, analysts and data scientists can use visualization to interact with and explore data. Following is an example of the steps a data scientist may go through in exploring pricing data to understand the data better, model it, and assess whether a current pricing model is working effectively. Figure 12-19 shows a distribution of pricing data as a user score reflecting price sensitivity.
FIGURE 12-19 Frequency distribution of user scores
A data scientist's first step may be to view the data as a raw distribution of the pricing levels of users. Because the values have a long tail to the right, in Figure 12-19, it may be difficult to get a sense of how tightly clustered the data is between user scores of zero and five.
To understand this better, a data scientist may rerun this distribution showing a log distribution (Chapter 3) of the user score, as demonstrated in Figure 12-20.
This shows a less skewed distribution that may be easier for a data scientist to understand. Figure 12-21 illustrates a rescaled view of Figure 12-20, with the median of the distribution around 2.0. This plot provides the distribution of a new user score, or index, that may gauge the level of price sensitivity of a user when expressed in log form.
FIGURE 12-20 Frequency distribution with log of user score
FIGURE 12-21 Distribution of new user score
FIGURE 12-23 Graph comparing the price in U.S. dollars with a customer loyalty score
FIGURE 12-24 Graph comparing the price in U.S. dollars with a customer loyalty score (with rug representation)
This rug indicates that the majority of customers in this example are in a tight band of loyalty scores, between about 1 and 3 on the x-axis, all of whom were offered the same set of prices, which are high (between 0.9 and 1.0 on the y-axis). The y-axis in this example may represent a pricing score, or the raw value of a customer in millions of dollars. The important aspect is to recognize that the pricing is high and is offered consistently to most of the customers in this example.
Based on what was shown in Figure 12-25, the team may decide to develop a new pricing model. Rather than offering static prices to customers regardless of their level of loyalty, a new pricing model might offer more dynamic price points to customers. In this visualization, the data shows the price increases as more of a curvilinear slope relative to the customer loyalty score. The rug at the bottom of the graph indicates that most customers remain between 1 and 3 on the x-axis, but now rather than offering all these customers the same price, the proposal suggests offering progressively higher prices as customer loyalty increases. In one sense, this may seem counterintuitive. It could be argued that the best prices should be offered to the most loyal customers. However, in reality, the opposite is often the case, with the most attractive prices being offered to the least loyal customers. The rationale is that loyal customers are less price sensitive and may enjoy the product and stay with it regardless of small fluctuations in price. Conversely, customers who are not very loyal may defect unless they are offered more attractive prices to stay. In other words, less loyal customers are more price sensitive. To address this issue, a new pricing model that accounts for this may enable an organization to maximize revenue and minimize attrition by offering higher prices to more loyal customers and lower prices to less loyal customers. Creating an iterative depiction of the data visually allows the viewer to see these changes in a more concrete way than by looking at tables of numbers or raw values.
FIGURE 12-25 New proposed pricing model compared to prices in U.S. dollars with rug
Data scientists typically iterate and view data in many different ways, framing hypotheses, testing them, and exploring the implications of a given model. This case explores visual examples of pricing distributions, fluctuations in pricing, and the differences in price tiers before and after implementing a new model to optimize price. The visualization work illustrates how the data may look as the result of the model, and helps a data scientist understand the relationships within the data at a glance.
The resulting graph in the pricing scenario appears to be technical regarding the distribution of prices throughout a customer base and would be suitable for a technical audience composed of other data scientists. Figure 12-26 shows an example of how one may present this graphic to an audience of other data scientists or data analysts. This demonstrates a curvilinear relationship between price tiers and customer loyalty when expressed as an index. Note that the comments to the right of the graph relate to the precision of the price targeting, the amount of variability in robustness of the model, and the expectations of model speed when run in a production environment.
FIGURE 12-26 Evolution of a graph, analyst example with supporting points. The supporting comments read:
• Implementing new price tiering approach increases the precision of price promotions by 23%
• Price optimization model explains 92% of customer behavior
• Model can be run in production environment on daily basis, if needed, to tailor changes to direct mail campaigns and web promotional offers
Figure 12-27 portrays another example of the output from the price optimization project scenario, showing how one may present this to an audience of project sponsors. This demonstrates a simple bar chart depicting the average price per customer or user segment. Figure 12-27 shows a much simpler-looking visual than Figure 12-26. It clearly portrays that customers with lower loyalty scores tend to get lower prices due to targeting from price promotions. Note that the right side of the image focuses on the business impact and cost savings rather than the detailed characteristics of the model.
FIGURE 12-27 Evolution of a graph, sponsor example. The supporting comments read:
• Before the project, pricing promotions were offered to all customers equally
• With the new approach, highly loyal customers do not receive as many price promotions, since their loyalty is not strongly influenced by price; customers with low loyalty are influenced by price, and we can now target them better for this purpose
• We project multiple cost savings with this approach: $2M in lost customers, $1.5M in new customer acquisition costs, and $1M in reductions for pricing promotions
The comments to the right side of the graphic in Figure 12-27 explain the impact of the model at a high
level and the cost savings of implementing this approach to price optimization.
12.3.3 Common Representation Methods
Although there are many types of data visualizations, several fundamental types of charts portray data and information. It is important to know when to use a particular type of chart or graph to express a given kind of data. Table 12-3 shows some basic chart types to guide the reader in understanding that different types of charts are more suited to a situation depending on specific kinds of data and the message the team is attempting to portray. Using a type of chart for data it is not designed for may look interesting or unusual, but it generally confuses the viewer. The objective for the author is to find the best chart for expressing the data clearly so the visual does not impede the message, but rather supports the reader in taking away the intended message.
TABLE 12-3 Common Representation Methods for Data and Charts
Data for Visualization Type of Chart
Components (parts of whole) Pie chart
Item Bar chart
Time series Line chart
Frequency Line chart or histogram
Correlation Scatterplot, side-by-side bar charts
Table 12-3 shows the most fundamental and common data representations, which can be combined, embellished, and made more sophisticated depending on the situation and the audience. It is recommended that the team consider the message it is trying to communicate and then select the appropriate type of visual to support the point. Misusing charts tends to confuse an audience, so it is important to take into account the data type and desired message when choosing a chart.
Pie charts are designed to show the components, or parts, relative to a whole set of things. A pie chart is also the most commonly misused kind of chart. If the situation calls for using a pie chart, employ it only when showing 2-3 items in a chart, and only for sponsor audiences.
Bar charts and line charts are used much more often and are useful for showing comparisons and trends over time. Even though people use vertical bar charts more often, horizontal bar charts allow an author more room to fit the text labels. Vertical bar charts tend to work well when the labels are small, such as when showing comparisons over time using years.
For frequency, histograms are useful for demonstrating the distribution of data to an analyst audience or to data scientists. As shown in the pricing example earlier in this chapter, data distributions are typically one of the first steps when visualizing data to prepare for model planning. To qualitatively evaluate correlations, scatterplots can be useful to compare relationships among variables.
As with any presentation, consider the audience and level of sophistication when selecting the chart to convey the intended message. These charts are simple examples but can easily become more complex when adding data variables, combining charts, or adding animation where appropriate.
12.3.4 How to Clean Up a Graphic
Many times software packages generate a graphic for a dataset, but the software adds too many things to the graphic. These added visual distractions can make the visual appear busy or otherwise obscure the main points that are to be made with the graphic. In general, it is a best practice to strive for simplicity when creating graphics and data visualization graphs. Knowing how to simplify graphics or clean up a messy chart is helpful for conveying the key message as clearly as possible. Figure 12-28 portrays a line chart with several design problems.
FIGURE 12-28 How to clean up a graphic, example 1 (before)
How to Clean Up a Graphic
The line chart shown in Figure 12-28 compares two trends over time. The chart looks busy and contains a lot of chart junk that distracts the viewer from the main message. Chart junk refers to elements of a data visualization that provide additional material but do not contribute to the data portion of the graphic. If the chart junk were removed, the meaning and understanding of the graphic would not be diminished; it would instead be made clearer. There are five main kinds of chart junk in Figure 12-28:
• Horizontal grid lines: These serve no purpose in this graphic. They do not provide additional information for the chart.
• Chunky data points: The data points, represented as large square blocks, draw the viewer's attention but do not convey any specific meaning aside from the data points themselves.
• Overuse of emphasis colors in the lines and border: The border of the graphic is a thick, bold line. This forces the viewer's attention to the perimeter of the graphic, which contains no information value. In addition, the lines showing the trends are relatively thick.
• No context or labels: The chart contains no legend to provide context as to what is being shown. The lines also lack labels to explain what they represent.
• Crowded axis labels: There are too many axis labels, so they appear crowded. There is no need for labels on the y-axis to appear every five units or for values on the x-axis to appear every two units. Shown in this way, the axis labels distract the viewer from the actual data represented by the trend lines in the chart.
The five forms of chart junk in Figure 12-28 are easily corrected, as shown in Figure 12-29. Note that there is no clear message associated with the chart and no legend to provide context for what is shown in Figure 12-28.
FIGURE 12-29 How to clean up a graphic, example 1 (after): "Growth of SuperBox Stores," SuperBox vs. BigBox store counts by year, 1962-2002
FIGURE 12-30 How to clean up a graphic, example 1 (alternate "after" view): "Difference in Store Openings," difference in SuperBox vs. BigBox store openings by year, 1962-2002
Figures 12-29 and 12-30 portray two examples of cleaned-up versions of the chart shown in Figure 12-28. Note that the problems with chart junk have been addressed. There is a clear label and title for each chart to reinforce the message, and color has been used in ways that highlight the point the author is trying to make. In Figure 12-29, a strong green color represents the count of SuperBox stores, because this is where the viewer's focus should be drawn, whereas the count of BigBox stores is shown in a light gray. In addition, note the amount of white space used in each of the two charts shown in Figures 12-29 and 12-30. Removing grid lines, excessive axis labels, and the visual noise within the chart allows clear contrast between the emphasis colors (the green lines) and the standard colors (the lighter gray of the BigBox stores). When creating charts, it is best to draw most of the main visuals in standard colors, light tones, or color shades so that stronger emphasis colors can highlight the main points. In this case, the BigBox trend in light gray fades into the background without disappearing, while rendering the SuperBox trend in a darker shade (bright green in the online chart) makes it prominent and supports the author's message about the growth of the SuperBox stores.
An alternative to Figure 12-29 is shown in Figure 12-30. If the main message is to show the difference in the growth of new stores, Figure 12-30 can be created to further simplify Figure 12-28 and graph only the difference between SuperBox stores and regular BigBox stores. Two examples are shown to illustrate different ways to convey the message, depending on what the author of these charts would like to emphasize.
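The same kind of cleanup can be sketched in base R graphics. The snippet below is a minimal illustration rather than the authors' code; the store counts and color choices are hypothetical. It suppresses the default box and grid, thins the lines, labels the axes sparsely, and reserves the strong color for the SuperBox series.

# Hypothetical store counts, roughly mirroring Figures 12-28 and 12-29
year     <- seq(1962, 2002, by = 5)
superbox <- c(2, 6, 12, 22, 38, 60, 85, 110, 135)
bigbox   <- c(10, 18, 28, 40, 52, 62, 70, 76, 80)

# Cleaned-up line chart: no grid, no heavy border, sparse axis labels,
# emphasis color for SuperBox and a muted gray for BigBox
plot(year, superbox, type = "l", lwd = 2, col = "darkgreen",
     xlab = "Year", ylab = "Count of stores opening",
     main = "Growth of SuperBox Stores", axes = FALSE)
lines(year, bigbox, lwd = 1, col = "gray70")
axis(1, at = seq(1962, 2002, by = 10))   # label every 10 years, not every 2
axis(2, at = seq(0, 140, by = 20))       # label every 20 units, not every 5
legend("topleft", legend = c("SuperBox", "BigBox"),
       col = c("darkgreen", "gray70"), lwd = c(2, 1), bty = "n")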
How to Clean Up a Graphic, Second Example
Another example of cleaning up a chart is portrayed in Figure 12-31. This vertical bar chart suffers from more of the typical problems related to chart junk, including misuse of color schemes and lack of context.
FIGURE 12-31 How to clean up a graphic, example 2 (before): vertical bar chart of SuperBox, BigBox, and Grand Total store counts
There are five main kinds of chart junk in Figure 12-31:
• Vertical grid lines: These vertical grid lines are not needed in this graphic. They provide no additional information to help the viewer understand the message in the data. Instead, they only distract the viewer from looking at the data.
• Too much emphasis color: This bar chart uses strong colors and too much high-contrast dark grayscale. In general, it is best to use subtle tones, with a low-contrast gray as the neutral color, and then emphasize the data underscoring the key message in a dark tone or strong color.
• No chart title: Because the graphic lacks a chart title, the viewer is not oriented to what he is viewing and does not have proper context.
• Legend at right restricting chart space: Although there is a legend for the chart, it is shown on the right side, which causes the vertical bar chart to be compressed horizontally. The legend would make more sense placed across the top, above the chart, where it would not interfere with the data being expressed.
• Small labels: The horizontal and vertical axis labels have appropriate spacing, but the font size is too small to be easily read. These should be slightly larger so they can be read easily without appearing too prominent.
Figures 12-32 and 12-33 portray two examples of cleaned-up versions of the chart shown in Figure 12-31. The problems with chart junk have been addressed. There is a clear label and title for each chart to reinforce the message, and appropriate colors have been used in ways that highlight the point the author is trying to make. Figures 12-32 and 12-33 show two options for modifying the graphic, depending on the main point the presenter is trying to make.
Figure 12-32 shows a strong emphasis color (dark blue) representing the SuperBox stores to support the chart title: Growth of SuperBox Stores.
FIGURE 12-32 How to clean up a graphic, example 2 (after): "Growth of SuperBox Stores" bar chart, count of stores by year (1996-2006), SuperBox vs. BigBox
Suppose the presenter wanted to talk about the total growth of BigBox stores instead. A line chart showing the trends over time would be a better choice, as shown in Figure 12-33.
FIGURE 12-33 How to clean up a graphic, example 2 (alternate view of "after"): "Growth of Stores, Over Time" line chart, store counts by year (1996-2006)
In both cases, the noise and distractions within the chart have been removed. As a result, the data that merely provides context has been deemphasized, while the data that reinforces the key point, as stated in the chart's title, has been made more prominent.
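A minimal base R sketch of the second cleanup, again with hypothetical counts rather than the chapter's data, shows the main fixes: a chart title, a legend placed above the bars instead of to the right, an emphasis color for SuperBox, a muted gray for BigBox, no grid lines, and readable axis labels.

# Hypothetical counts for the grouped bar chart in Figures 12-31 and 12-32
years    <- 1996:2006
superbox <- c(5, 9, 14, 20, 27, 35, 44, 54, 65, 77, 90)
bigbox   <- c(30, 32, 34, 35, 36, 36, 37, 37, 38, 38, 38)
counts   <- rbind(SuperBox = superbox, BigBox = bigbox)

# Emphasis color for SuperBox, muted gray for BigBox; legend across the top
barplot(counts, beside = TRUE, names.arg = years,
        col = c("darkblue", "gray80"), border = NA,
        main = "Growth of SuperBox Stores",
        xlab = "Year", ylab = "Count of Stores",
        cex.names = 0.9, cex.axis = 0.9)
legend("top", legend = rownames(counts), fill = c("darkblue", "gray80"),
       horiz = TRUE, bty = "n")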
12.3.5 Additional Considerations
As stated in the previous examples, the emphasis should be on simplicity when creating charts and graphs. Create graphics that are free of chart junk, and use the simplest method that portrays the information clearly. The goal of data visualization should be to support the key messages as clearly as possible and with few distractions.
Similar to the idea of removing chart junk is being cognizant of the data-ink ratio. Data-ink refers to the portion of a graphic that actually portrays the data, while non-data ink refers to labels, edges, colors, and other decoration. If one imagined the ink required to print a data visualization on paper, the data-ink ratio could be thought of as (data-ink) / (total ink used to print the graphic). In other words, the greater the ratio of data-ink in the visual, the more data rich it is and the fewer distractions it has [4].
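As a simple illustration (with made-up numbers), suppose a chart requires 100 units of ink to print and only 40 of those units render the data series themselves: the data-ink ratio is 40/100 = 0.4. Erasing 30 units of purely decorative ink, such as grid lines, heavy borders, and shading, raises the ratio to 40/70, or roughly 0.57, without removing any information.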
Avoid Using Three Dimensions in Most Graphics
One more area where people typically err is in adding unnecessary shading, depth, or dimensions to graphics. Figure 12-34 shows a vertical bar chart with two visible dimensions. This example is simple and easy to understand, and the focus is on the data, not the graphics. The author of the chart has chosen to highlight the SuperBox stores in a dark blue color, while the BigBox bars in the chart are in a lighter blue. The title is about the growth of SuperBox stores, and the SuperBox bars in the chart are in a dark, high-contrast shade that draws the viewer's attention to them.
FIGURE 12-34 Simple bar chart, with two dimensions: "Growth of SuperBox Stores," count of stores by year (1996-2006), SuperBox vs. BigBox
Compare Figure 12-34 to Figure 12-35, which shows a three-dimensional chart. Figure 12-35 shows the original bar chart at an angle, with some attempt at showing depth. This kind of three-dimensional perspective makes it more difficult for the viewer to gauge the actual data, and the scaling becomes deceptive.
Three-dimensional charts often distort scales and axes and impede viewer cognition. Adding a third dimension for depth, as in Figure 12-35, does not make the chart fancier, just more difficult to understand.
FIGURE 12-35 Misleading bar chart, with three dimensions: the same "Growth of SuperBox Stores" data rendered with 3-D perspective
The charts in Figures 12-34 and 12-35 portray the same data, but it is more difficult to judge the actual height of the bars in Figure 12-35. Moreover, the shadowing and shape of the chart cause most viewers to spend time looking at the perspective of the chart rather than the height of the bars, which is the key message and purpose of this data visualization.
Summary
Communicating the value of analytical projects is critical for sustaining the momentum of a project and building support within organizations. This support is instrumental in turning a successful project into a system or integrating it properly into an existing production environment. Because an analytics project may need to be communicated to audiences with mixed backgrounds, this chapter recommends creating four deliverables to satisfy most of the needs of various stakeholders:
• A presentation for a project sponsor
• A presentation for an analytical audience
• Technical specification documents
• Well-annotated production code
Creating these deliverables enables the analytics project team to communicate and evangelize the work that it did, whereas the code and technical documentation assist the team that wants to implement the models within the production environment.
This chapter illustrates the importance of selecting clear and simple visual representations to support the key points in the final presentations or for portraying data. Most data representations and graphs can be improved by simply removing the visual distractions. This means minimizing or removing chart junk, which distracts the viewer from the main purpose of a chart or graph and does not add information value.
Following several common-sense principles about minimizing distractions in slides and visualizations, communicating clearly and simply, using color in a deliberate way, and taking time to provide context addresses most of the common problems in charts and slides. These few guidelines support the creation of crisp, clear visuals that convey the key messages.
In most cases, the best data visualizations use the simplest, clearest visual to illustrate the key point. Avoid unnecessary embellishment, and focus on finding the best, simplest method for transmitting the message. Context is critical to orient the viewer to a chart or graph, because people have immediate reactions to imagery on a precognitive level. To this end, make sure to employ thoughtful use of color and orient the viewer with scales, legends, and axes.
Exercises
1. Describe four common deliverables for an analytics project.
2. What is the focus of a presentation for a project sponsor?
3. Give examples of appropriate charts to create in a presentation for other data analysts and data scientists as part of a final presentation. Explain why the charts are appropriate to show each audience.
4. Explain what types of graphs would be appropriate to show data changing over time, and why.
5. As part of operationalizing an analytics project, which deliverable would you expect to provide to a Business Intelligence analyst?
References and Further Reading
Following are additional references to learn more about best practices for giving presentations.
• Say It with Charts, by Gene Zelazny [3]: A simple reference book on how to select the right graphical approach for portraying data and for ensuring the message is clearly conveyed in presentations.
• Pyramid Principle, by Barbara Minto [5]: Minto pioneered the approach of constructing logical structures for presentations in threes: three sections to the presentation, each with three main points. This teaches people how to weave a story out of disparate pieces.
• Presentation Zen, by Garr Reynolds [6]: Teaches how to convey ideas simply and clearly and use imagery in presentations. Shows many before-and-after versions of graphics and slides.
• Now You See It, by Stephen Few [4]: Provides many examples of matching the appropriate kind of data visualization to a given dataset.
Bibliography
[1] N. Yau, "flowingdata.com" [Online]. Available: http://flowingdata.com.
[2] N. Yau, Visualize This, Indianapolis: Wiley, 2011.
[3] G. Zelazny, Say It with Charts: The Executive's Guide to Visual Communication, McGraw-Hill, 2001.
[4] S. Few, Now You See It: Simple Visualization Techniques for Quantitative Analysis, Analytics Press, 2009.
[5] B. Minto, The Minto Pyramid Principle: Logic in Writing, Thinking, and Problem Solving, Prentice Hall, 2010.
[6] G. Reynolds, Presentation Zen: Simple Ideas on Presentation Design and Delivery, Berkeley: New Riders, 2011.
Index
Numbers & Symbols
\ (backward slash) as separator, 69
/ (forward slash) as separator, 69
1-itemsets, 147
2-itemsets, 148-149
3 Vs (volume, variety, velocity), 2-3
3-itemsets, 149-150
4-itemsets, 150-151
A
accuracy, 225
ACF (autocorrelation function), 236-237
ACME text analysis example, 259-260
raw text collection, 260-263
aggregates (SQL)
ordered, 351-352
user-defined, 347-351
aggregators of data, 18
AIE (Applied Information Economics), 28
algorithms
clustering, 134-135
decision trees, 197-200
C4.5, 203-204
CART, 204
ID3, 203
Alpine Miner, 42
alternative hypothesis, 102-103
analytic projects
Approach, 369-371
BI analyst, 362
business users, 361
code,362,376-377
communication, 360-361
data engineer, 362
data scientists, 362
DBA (Database Administrator), 362
deliverables, 362-364
audiences, 364-365
core material, 364-365
key points, 372
Main Findings, 367-369
model description, 371
model details, 372-374
operationalizing, 360-361
outputs, 361
presentations, 362
Project Goals, 365-367
project manager, 362
project sponsor, 361
recommendations, 374-375
stakeholders, 361-362
technical specifications, 376-377
analytic sandboxes. See sandboxes
analytical architecture, 13-15
analytics
business drivers, 11
examples, 22-23
new approaches, 16-19
ANOVA, 110-114
Anscombe’s quartet, 82-83
aov ( ) function, 78
Apache Hadoop. See Hadoop
APIs (application programming interfaces), Hadoop, 304-305
apriori ( ) function, 146,152-157
Apriori algorithm, 139
grocery store example, 143
Groceries dataset, 144-146
itemset generation, 146-151
rule generation, 152-157
itemsets, 139, 140-141
counting, 158
partitioning and, 158
sampling and, 158
transaction reduction and, 158
architecture, analytical, 13-15
arima ( ) function, 246
ARIMA (Autoregressive Integrated Moving Average) model,
236
ACF, 236-237
ARMA model, 241-244
autoregressive models, 238-239
building, 244-252
cautions, 252-253
constant variance, 250-251
evaluating, 244-252
fitted time series models, 249-250
forecasting, 251-252
moving average models, 239-241
normality, 250-251
PACF, 238-239
reasons to choose, 252-253
seasonal autoregressive integrated moving average
model, 243-244
VARIMA,253
ARMA (Autoregressive Moving Average) model, 241-244
array ( ) function, 74
arrays
matrices, 74
R, 74-75
association rules, 138-139
application, 143
candidate rules, 141-142
diagnostics, 158
testing and, 157-158
validation, 157-158
attributes
objects, k-means, 130-131
R, 71-72
AUC (area under the curve), 227
autoregressive models, 238-239
averages, moving average models, 239-241
B
bagging, 228
bag-of-words in text analysis,
265-266
banking, 18
barplot { ) function, 88
barplots, 93-94
Bayes’ Theorem, 212-214. See also na’ive Bayes
conditional probability, 212
Bl (business intelligence)
analytical tools, 10
versus Data Science, 12-13
Big Data
3 Vs, 2-3
analytics, examples, 22-23
characteristics, 2
definitions, 2-3
drivers, 15-16
ecosystem, 16-19
key roles, 19-22
McKinsey & Co. on, 3
volume,2-3
boosting, 228-229
bootstrap aggregation, 228
box-and-whisker plots, 95-96
Box-Jenkins methodology, 235-236
ARIMA model, 236
branches (decision trees), 193
Brown Corpus, 267-268
business drivers for analytics, 11
Business Intelligence Analyst, Operationalize phase,
52
Business Intelligence Analyst role, 27
Business User, Operationalize phase, 52
Business User role, 27
buyers of data, 18
c
C4.5 algorithm, 203-204
cable TV providers, 17
candidate rules, 141-142
CART (Classification And Regression Trees), 204
case folding in text analysis, 264-265
categorical algorithms, 205
categorical variables, 170-171
cbind { ) function, 78
centroids, 120-122
starting positions, 134
character data types, R, 72
charts, 386-387
churn rate (customers),
120
logistic regression, 180-181
class ( ) function, 72
classification
bagging, 228
boosting, 228-229
bootstrap aggregation, 228
decision trees, 192-193
algorithms, 197-200, 203-204
binary decisions, 206
branches, 193
categorical attributes, 205
classification trees, 193
correlated variables, 206
decision stump, 194
evaluating, 204-206
greedy algorithm, 204
internal nodes, 193
irrelevant variables, 205
nodes, 193
numerical attributes, 205
Rand, 206-211
redundant variables, 206
regions, 205
regression trees, 193
root, 193
shorttrees, 194
splits, 193, 194, 197,200-203
structure, 205
uses, 194
na’ive Bayes, 211-212
Bayes’theorem, 212-214
diagnostics, 217-218
na’ive Bayes classifier, 214-217
Rand, 218-224
smoothing, 217
classification trees, 193
classifiers
accuracy, 225
diagnostics, 224-228
recall, 225
clickstream, 9
clustering, 118
algorithms, 134-135
centroids, 120-122
starting positions, 134
diagnostics, 128-129
k-means, 118-119
algorithm, 120-122
customer segmentation, 120
image processing and, 119
medical uses, 119
reasons to choose, 130-134
rescaling, 133-134
units of measure, 132-133
labels, 127
numberofclusters, 123-127
code, technical specifications in project, 376-377
coefficients, linear regression, 169
combiners, 302-303
Communicate Results phase of lifecycle, 30, 49-50
components, short trees as, 194
conditional entropy, 199
conditional probability, 212
na”ive Bayes classifier, 215-216
confidence, 141-142
outcome, 172
parameters, 171
confidence interval, 107
conf int ( ) function, 171
confusion matrix, 224, 280
contingency tables, 79
continuous variables, discretization, 211
corpora
Brown Corpus, 267-268
corpora in Natural language Processing, 256
IC (information content), 268-269
sentiment analysis and, 278
correlated variables, 206
credit card companies, 2
CRISP-OM, 28
crowdsourcing, 17
CSV (comma-separated-value) files, 64-65
importing, 64-65
customer segmentation
k-means, 120
logistic regression, 180-181
CVS files, 6
cyclic components oftime series analysis, 235
D
data
growth needs, 9-10
sources, 15-16
data ( ) function, 84
data aggregators, 17-18
data analysis, exploratory, 80-82
visualization and, 82-85
Data Analytics lifecycle
Business Intelligence Analyst role, 27
Business User role, 27
Communicate Results phase, 30, 49-50
GINA case study, 58-59
Data Engineer role, 27-28
Data preparation phase, 29,
36-37
Alpine Miner,42
data conditioning, 40-41
data visualization, 41-42
Data Wrangler, 42
dataset inventory, 39-40
ETLT,38-39
GINA case study, 55-56
Hadoop,42
OpenRefine, 42
sandbox preparation, 37-38
tools,42
Data Scientist role, 28
DBA (Database Administrator) role, 27
Discovery phase, 29
business domain, 30-31
data source identification, 35-36
framing, 32-33
GINA case study, 54-55
hypothesis development, 35
resources, 31-32
sponsor interview, 33-34
stakeholder identification, 33
GINA case study, 53-60
Model Building phase, 30, 46-48
Alpine Miner, 48
GINA case study, 56-58
Mathematica, 48
Matlab,48
Octave,48
PUR,48
Python,48
R,48
SAS Enterprise Miner, 48
SPSS Modeler, 48
SQL,48
STATISTICA, 48
WEKA,48
Model Planning phase, 29-30, 42-44
data exploration, 44-45
GINA case study, 56
model selection, 45
R,45-46
SAS/ ACCESS, 46
SOL Analysis services, 46
variable selection, 44-45
Operationalize phase, 30, 50-53, 360
Business Intelligence Analyst and, 52
Business User and, 52
Data Engineer and, 52
Data Scientist and, 52
DBA (Database Administrator) and, 52
GINA case study, 59-60
Project Manager and, 52
Project Sponsor and, 52
processes, 28
Project Manager role, 27
Project Sponsor role, 27
roles, 26-28
data buyers, 18
data cleansing, 86
data collectors, 17
data conditioning, 40-41
data creation rate, 3
data devices, 17
Data Engineer, Operationalize phase, 52
Data Engineer role, 27-28
data formats, text analysis, 257
data frames, 75-76
data marts, 1 0
Data preparation phase of lifecycle, 29, 36-37
data conditioning, 40-41
data visualization, 41-42
dataset inventory, 39-40
ETLT,38-39
sandbox preparation, 37-38
data repositories, 9-11
types, 10-11
Data Savvy Professionals, 20
Data Science versus Bl, 12-13
Data Scientists, 28
activities, 20-21
business challenges, 20
characteristics, 21-22
Operationalize phase and, 52
recommendations and, 21
statistical models and, 20-21
data sources
Discovery phase, 35-36
text analysis, 257
data structures, 5-9
quasi-structured data, 6, 7
semi-structured data, 6
structured data, 6
unstructured data, 6
data types in R, 71-72
character, 72
logical, 72
numeric, 72
vectors, 73-74
data users, 18
data visualization, 41-42,377-378
CSS and,378
GGobi, 377-378
Gnuplot, 377-378
graphs, 380-386
clean up, 387-392
three-dimensional,392-393
HTML and, 378
key points with support, 378-379
representation methods, 386-387
SVGand,378
data warehouses, 11
Data Wrangler, 42
datasets
exporting, Rand, 69-71
importing, Rand, 69-71
inventory, 39-40
Davenport, Tom, 28
DBA (Database Administrator), 10,27
Operational phase and, 52
decision trees, 192-193
algorithms, 197-200
C4.5, 203-204
CART,204
categorical, 205
greedy,204
ID3,203
numerical, 205
binary decisions, 206
branches, 193
classification trees, 193
correlated variables, 206
evaluating, 204-206
greedy algorithms, 204
internal nodes, 193
irrelevant variables, 205
nodes
depth, 193
leaf, 193
Rand, 206-211
redundant variables, 206
regions, 205
regression trees, 193
root, 193
short trees, 194
decision stump, 194
splits, 193, 197
detecting, 200-203
limiting, 194
structure, 205
uses, 194
Deep Analytical Talent, 19-20
DELTA framework, 28
demand forecasting, linear regression and, 162
density plots, exploratory data analysis, 88-91
dependent variables, 162
descriptive statistics, 79-80
deviance, 183-184
devices, 17
mobile, 16
nontraditional, 16
smart devices, 16
OF (document frequency), 271-272
diagnostic imaging, 16
diagnostics
association rules, 158
classifiers, 224-228
linear regression
linearity assumption, 173
N-fold cross-validation, 177-178
normality assumption, 174-177
residuals, 173-174
logistic regression
deviance, 183-184
histogram of probabilities, 188
log-likelihood test, 184-185
pseudo-R2, 183
ROC curve, 185-187
na”ive Bayes, 217-218
diff ( ) function, 245
difference in means, 104
confidence interval, 107
student’s t-testing, 104-106
Welch’s t-test, 106-108
differencing, 241-242
dirty data, 85-87
Discovery phase of lifecycle, 29
data source identification, 35-36
framing, 32-33
hypothesis development, 35
sponsor interview, 33-34
stakeholder identification, 33
discretization of continuous variables, 211
documents, categorization, 274-277
dotchart ( ) function, 88
E
Eclipse, 304
ecosystem of Big Data, 16-19
Data Savvy Professionals, 20
Deep Analytical Talent, 19-20
key roles, 19-22
Technology and Data Enablers, 20
EDWs (Enterprise Data Warehouses), 10
effect size, 11 0
EMC Google search example, 7-9
emoticons, 282
engineering, logistic regression and, 179
ensemble methods, decision trees, 194
error distribution
linear regression model, 165-166
residual standard error, 170
ETLT, 38-39
EXCEPT operator (SQL), 333-334
exploratory data analysis, 80-82
density plot, 88-91
dirty data, 85-87
histograms, 88-91
multiple variables, 91-92
analysis over time, 99
barplots, 93-94
box-and-whisker plots, 95-96
dotcharts, 93-94
hexbinplots, 96-97
versus presentation, 99-101
scatterplot matrix, 97-99
visualization and, 82-85
single variable, 88-91
exporting datasets in R, 69-71
expressions, regular, 263
F
Facebook, 2, 3-4
factors, 77-78
financial information, logistic regression and, 179
FNR (false negative rate), 225
forecasting
ARIMA (Autoregressive Integrated Moving Average)
model, 251-252
linear regression and, 162
FP (false positives), confusion matrix, 224
FPR (false positive rate), 225
framing in Discovery phase, 32-33
functions
aov( ) , 78
apriori ( ) 1 1461 152-157
arima ( ) I 246
array( ) I 74
barplot ( ) I 88
cbind( ) I 78
class ( ) 1 72
confint ( ) 1 171
G
data ( ) , 84
diff ( ) 1 245
dotchart ( ) , 88
gl ( ) 1 84
glm( ) , 183
hclust ( ) , 135
head( ) , 65
inspect ( ) , 147,154-155
integer ( ) , 72
IQR( ) I 80
is.data.frame( ) , 75
is .na ( ) , 86
is.vector( ) , 73
jpeg ( ) , 71
kmeans ( ) , 134
kmode ( ) , 134-135
length ( ) , 72
library ( ) , 70
lm( ) , 66
load. image ( ) , 68-69
matrix.inverse( ), 74
mean( ) , 86
my_range( ) , 80
na.exclude( ),86
pamk ( ) , 135
Pig, 307-308
plot ( ) , 65, 153-154,245
predict ( ) , 172
rbind( ) , 78
read. csv ( ) , 64-65, 75
read.csv2 ( ) , 70
read.delim2( ), 70
rpart, 207
SQL, 347-351
sqlQuery ( ) , 70
str ( ) , 75
summary ( ) , 65, 66-67, 79, 80-82
t ( ) 1 74
ts ( ) , 245
typeof ( ) , 72
wilcox. test ( ) , 109
window functions (SQL), 343-347
write. csv ( ) , 70
write.csv2( ) , 70
write.table( ), 70
Generalized linear Model function, 182
genetic sequencing, 3, 4
genomics, 4, 16
genotyping, 4
GGobi, 377-378
GINA (Global Innovation Network and Analysis), Data
Analytics lifecycle case study, 53-60
gl ( ) function,84
glm ( ) function, 183
Gnu plot, 377-378
GPS systems, 16
Graph Search (Facebook), 3-4
graphs, 380-386
clean up, 387-392
three-dimensional, 392-393
greedy algorithms, 204
Green Eggs and Ham, text analysis and, 256
grocery store example of Apriori algorithm, 143
Groceries dataset, 144-146
itemsets, frequent generation, 146-151
rules, generating, 152-157
growth needs of data, 9-10
GUis (graphical user interfaces), Rand, 67-69
H
Hadoop
Data preparation phase, 42
Hadoop Streaming API, 304-305
HBase, 311-312
architecture, 312-317
column family names, 319
column qualifier names, 319
data model, 312-317
Java API and, 319
rows,319
use cases, 317-319
versioning, 319
Zookeeper, 319
HDFS, 300-301
Hive, 308-311
LinkedIn, 297
Mahout, 319-320
MapReduce, 22
combiners, 302-303
development, 304-305
drivers, 301
execution, 304-305
mappers, 301-302
partitioners, 304
structuring,301-304
natural language processing, 18
Pig, 306-308
pipes,305
Watson (IBM), 297
Yahoo!, 297-298
YARN (Yet Another Resource Negotiator), 305
hash-based itemsets, Apriori algorithm and, 158
HAWQ (HAdoop With Query), 321
HBase, 311-312
architecture, 312-317
column family names, 319
column qualifier names, 319
data model, 312-317
Java API and, 319
rows, 319
use cases, 317-319
versioning, 319
Zookeeper, 319
hclust ( ) function, 135
HDFS (Hadoop Distributed File System), 300-301
head ( ) function, 65
hexbinplots, 96-97
histograms
exploratory data analysis, 88-91
logistic regression, 188
Hive, 308-311
HiveQL (Hive Query Language), 308
Hopper, Grace, 299
Hubbard, Doug, 28
HVE (Hadoop Virtualization Extensions), 321
hypotheses
alternative hypothesis, 102-103
Discovery phase, 35
null hypothesis, 102
hypothesis testing, 102-104
two-sided hypothesis testing, 105
type I errors, 109-110
type II errors, 109-110
IBM Watson, 297
103 algorithm, 203
IDE (Interactive Development Environment), 304
IDF (inverted document frequency), 271-272
importing datasets in R, 69-71
in-database analytics
SQL, 328-338
text analysis, 338-339
independent variables, 162
input variables, 192
inspect ( ) function, 147, 154-155
integer ( ) function, 72
internal nodes (decision trees), 193
Internet ofThings, 17-18
INTERSECT operator (SQL), 333
IQR ( ) function, 80
is. data. frame ( ) function, 75
is. na ( ) function, 86
is. vector ( ) function, 73
itemsets, 139
J
1-itemsets, 147
2-itemsets, 148-149
3-itemsets, 149-150
4-itemsets, 150-1S1
A priori algorithm, 139
Apriori property, 139
downward closure property, 139
dynamic counting, Apriori algorithm and, 158
frequent itemset, 139
generation, frequent, 146-151
hash-based, Apriori algorithm and, 158
k-itemset, 139, 140-141
joins (SQL), 330-332
j peg ( ) function, 71
K
k clusters
finding, 120-122
numberof, 123-127
k-itemset, 139, 140-141
k-means, 118-119
customer segmentation, 120
image processing and, 119
k clusters
finding, 120-122
numberof, 123-127
medical uses, 119
objects, attributes, 130-131
Rand, 123-127
reasons to choose, 130-134
rescaling, 133-134
units of measure, 132-133
kmeans ( ) function, 134
kmode ( ) function, 134-135
L
lag, 237
Laplace smoothing, 217
lasso regression, 189
LOA (latent Dirichlet allocation), 274-275
leaf nodes, 192, 193
lemmatization, text analysis and, 258
length ( ) function, 72
leverage, 142
1 ibrary ( ) function, 70
lifecycle. See also Data Analytics Lifecycle
lift, 142
linear regression, 162
coefficients, 169
diagnostics
linearity assumption, 173
N-fold cross-validation, 177-178
normality assumption, 174-177
residuals, 173-17 4
model, 163-165
categorica I variables, 170-171
normally distributed errors, 165-166
outcome confidence intervals, 172
parameter confidence intervals, 171
prediction interval on outcome, 172
R, 166-170
p-values, 169-170
use cases, 162-163
LinkedIn, 2, 22-23, 297
lists in R, 76-77
lm ( ) function, 66
load. image ( ) function, 68-69
logical data types, R, 72
logistic regression, 178
cautions, 188-189
diagnostics, 181-182
deviance, 183-184
histogram of probabilities, 188
log-likelihood test, 184-185
pseudo-R2, 183
ROC curve, 185-187
Generalized Linear Model function, 182
model, 179-181
multinomial, 190
reasons to choose, 188-189
use cases, 179
log-likelihood test, 184-185
loyalty cards, 17
M
MAD (Magnetic/Agile/Deep) skills, 28, 352-356
MADiib, 352-356
Mahout, 319-320
MapReduce, 22, 298-299
combiners, 302-303
development, 304-305
drivers, 301-302
execution, 304-305
mappers, 301-302
partitioners, 304
structuring, 301-304
market basket analysis, 139
association rules, 143
marketing, logistic regression and, 179
master nodes, 301
matrices
confusion matrix, 224
R, 74-75
scatterplot matrices, 97-99
matrix. inverse ( ) function, 74
MaxEnt (maximum entropy), 278
McKinsey & Co. definition of Big Data, 3
mean ( ) function, 86
medical information, 16
k-means and, 119
linear regression and, 162
logistic regression and, 179
minimum confidence, 141
missing data, 86
mobile devices, 16
mobile phone companies, 2
Model Building phase of lifecycle, 30, 46-48
Alpine Miner, 48
Mathematica, 48
Matlab,48
Octave, 48
PL/R,48
Python,48
R,48
SAS Enterprise Miner,
48
SPSS Modeler, 48
SQL,48
STATISTICA, 48
WEKA,48
Model Planning phase of lifecycle, 29-30, 42-44
data exploration, 44-45
model selection, 45
R,45-46
SAS/ACCESS, 46
SQL Analysis services, 46
variables, selecting, 44-45
morphological features in text analysis, 266-267
moving average models, 239-241
MPP (massively parallel processing), 5
MTurk (Mechanical Turk), 282
multinomial logistic regression, 190
multivariate time series analysis, 253
my_ range ( ) function, 80
N
na . exclude ( ) function, 86
na”ive Bayes, 211-212
Bayes’ theorem, 212-214
diagnostics, 217-218
narve Bayes classifier, 214-217
Rand, 218-224
sentiment analysis and, 278
smoothing, 217
natural language processing, 18
N-fold cross-validation, 177-178
NLP (Natural language Processing), 256
nodes
master, 301
worker, 301
nodes (decision trees), 192
depth, 193
leaf, 193
leaf nodes, 192, 193
nonparametric tests, 108-109
nontraditional devices, 16
normality
ARIMA model, 250-251
linear regression, 174-177
normalization, data conditioning, 40-41
NoSQL, 322-323
null deviance, 183
null hypothesis, 102
numeric data types, R, 72
numerical algorithms, 205
numerical underflow, 216-217
0
objects, k-means, attributes, 130-131
OLAP (online analytical processing), 6
cubes, 10
OpenRefine, 42
Operationalize phase of lifecycle, 30, 50-53, 360
Business Intelligence Analyst and, 52
Business User and, 52
Data Engineer and, 52
Data Scientist and, 52
DBA (Database Administrator) and, 52
Project Manager and, 52
Project Sponsor and, 52
operators, subsetting, 75
outcome
p
confidence intervals, 172
prediction interval, 172
PACF (partial autocorrelation function), 238-239
pamk ( ) function, 135
parameters, confidence intervals, 171
parametric tests, 108-109
parsing, text analysis and, 257
partitioning
Apriori algorithm and, 158
MapReduce, 304
photographs, 16
Pig, 306-308
Pivotal HD Enterprise, 320-321
plot ( ) function, 65, 153-154, 245
POS (part-of-speech) tagging,
258
power of a test, 11 0
precision in sentiment analysis, 281
predict ( ) function, 172
prediction trees. See decision trees
presentation versus data exploration, 99-101
probability, conditional, 212
na’ive Bayes classifier, 215-216
Project Manager, Operationalize phase, 52
Project Manager role, 27
Project Sponsor, Operationalize phase, 52
Project Sponsor role, 27
pseudo-R2, 183
p-values,linear regression, 169-170
Q
quasi-structured data, 6, 7
queries, SQl, 329-330
nested, 3334
subqueries, 3334
R
arrays, 74-75
attributes, types, 71-72
data frames, 75-76
data types, 71-72
character, 72
logical,72
numeric, 72
vectors, 73-74
decision trees, 206-211
descriptive statistics, 79-80
exploratory data analysis, 80-82
density plot,88-91
dirty data, 85-87
histograms, 88-91
multiple variables, 91-99
versus presentation, 99-101
visualization and, 82-85,88-91
factors, 77-78
functions
aov( ) , 78
array( ) , 74
barplot() ,88
cbind( ) , 78
class ( ) , 72
data ( ) , 84
dotchart( ),88
gl ( ) 184
head( ) , 65
import function defaults, 70
integer ( ) , 72
IQR( ) I 80
is.data.frame( ),
75
is .na ( ) , 86
is.vector( ) , 73
jpeg ( ) , 71
length ( ) , 72
library ( ) , 70
lm( ) , 66
load. image ( ) , 68-69
my _range ( ) , 80
plot ( ) function, 65
rbind( ) , 78
read. csv ( ) , 65, 75
read.csv2( ), 70
read.delim( ) , 69
read.delim2( ), 70
read.table( ),69
str ( ) , 75
summary ( ) , 65,66-67,79
t ( ) 1 74
typeof ( ) , 72
visualizing single variable, 88
write.csv( ) , 70
write.csv2(), 70
write. table( ) , 70
GUis,67-69
import/export, 69-71
k-means analysis, 123-127
linear regression model, 166-170
lists, 76-77
matrices, 74-75
model planning and, 45-46
na’ive Bayes and,218-224
operators, subsetting, 75
overview, 64-67
statistical techniques, 101-102
ANOVA, 110-114
difference in means, 1 04-1 08
effect size, 11 0
hypothesis testing, 102-104
poweroftest, 110
sample size, 110
type I errors, 1 09-11 0
type II errors, 1 09-11 0
tables, contingency tables, 79
R commander GUI, 67
random components of time series analysis, 235
Rattle GUI, 67
raw text
collection, 260-263
tokenization, 264
rbind ( ) function, 78
RDBMS,6
read. csv ( ) function, 64-65, 75
read. csv2 ( ) function, 70
read.delim( ) function,69
read. delim2 ( ) function, 70
read. table ( ) function, 69
real estate, linear regression and, 162
recall in sentiment analysis, 281
redundant variables, 206
regression
lasso, 189
linear, 162
coefficients, 169
diagnostics, 173-178
model, 163-172
p-values, 169-170
use cases, 162-163
logistic, 178
cautions, 188-189
diagnostics, 181-188
model, 179-181
multinomial logistic,
190
reasons to choose, 188-189
use cases, 179
multinomial logistic, 190
ridge, 189
variables
dependent, 162
independent, 162
regression trees, 193
regular expressions, 263, 339-340
relationships, 141
repositories, 9-11
types, 10-11
representation methods, 386-387
rescaling, k-means, 133-134
residual deviance, 183
residual standard error, 170
residuals, linear regression, 173-174
resources, Discovery phase of lifecycle, 31-32
RFID readers, 16
ridge regression, 189
ROC (receiver operating characteristic) curve, 185-187,225
roots (decision trees), 193
rpart function, 207
RStudio GUI, 67-68
rules
s
association rules, 138-139
application, 143
candidate rules, 141-142
diagnostics, 158
testing and, 157-158
validation, 1 57-158
generating, grocery store example (Apriori), 152-157
sales, time series analysis and, 234
sample size, 110
sampling, Apriori algorithm and, 158
sandboxes, 10, 11. See also work spaces
Data preparation phase, 37-38
SAS/ACCESS, model planning, 46
scatterplot matrix, 97-99
scatterplots, 81
Anscom be’s quartet, 83
multiple variables, 91-92
scientific method, 28
searches, text analysis and, 257
seasonal autoregressive integrated moving average model,
243-244
seasonality components of time series analysis, 235
seismic processing, 16
semi-structured data, 6
SensorNet, 17-18
sentiment analysis in text analysis, 277-283
confusion matrix, 280
precision, 281
recall, 281
shopping
loyalty cards, 17
RFID chips in carts, 17
short trees, 194
smart devices, 16
smartphones, 17
smoothing, 217
social media, 3-4
sources of data, 15-16
spare parts planning, time series analysis and, 234-235
splits (decision trees), 193
detecting, 200-203
sponsor interview, Discovery phase, 33
spreadmarts, 10
spreadsheets, 6, 9, 10
SQL (Structured Query Language), 328-329
aggregates
ordered, 351-352
user-defined, 347-351
EXCEPT operator, 333-3334
functions, user-defined, 347-351
grouping, 334-338
INTERSECT operator, 333
joins, 330-332
MADiib, 352-356
queries, 329-330
nested,3334
subqueries, 3334
set operations, 332-334
UNION ALL operator, 332-333
window functions, 343-347
SQL Analysis services, model planning and, 46
sqlQuery ( ) function, 70
stakeholders, Discovery phase of lifecycle, 33
stationary time series, 236
statistical techniques, 101-102
ANOVA, 110-114
difference in means, 104
student’s t-test, 1 04-1 06
Welch’s t-test, 106-108
effect size, 110
hypothesis testing, 102-104
power of test, 11 0
sample size, 110
type I errors, 109-110
type II errors, 109-110
Wilcoxon rank-sum test, 108-109
statistics
Anscom be’s quartet, 82-83
descriptive, 79-80
stemming, text analysis and, 258
stock trading, time series analysis and, 235
stop words, 270-271
str ( ) function, 75
structured data, 6
subsetting operators, 75
summary ( ) function, 65, 66-67, 79, 80-82
SVM (support vector machines), 278
T
t ( ) function, 74
tables, contingency tables, 79
Target stores, 22
t-distribution
ANOVA, 110-114
student’s t-test, 104-106
Welch’s t-test, 106-108
technical specifications in project, 376-377
Technology and Data Enablers, 20
testing, association rules and, 157-158
text analysis, 256
ACME example, 259-263
bag-of-words, 265-266
corpora, 264-265
Brown Corpus, 267-268
corpora in Natural Language Processing, 256
IC (information corpora), 268-269
data formats, 257
data sources, 257
document categorization, 274-277
Green Eggs and Ham, 256
in-database, 338-339
lemmatization, 258
morphological features, 266-267
NLP (Natural Language Processing), 256
parsing, 257
POS (part-of-speech) tagging, 258
raw text, collection, 260-263
search and retrieval, 257
sentiment analysis, 277-283
stemming, 258
stop words, 270-271
text mining, 257-258
TF (term frequency) of words, 265-266
DF,271-272
IDF, 271-272
lemmatization, 271
stemming, 271
stop words, 270-271
TFIDF, 269-274
tokenization, 264
topic modeling, 267, 274
LOA (latent Dirichlet allocation), 274-275
web scraper, 262-263
word clouds, 284
Zipf’s Law, 265-266
text mining, 257
textual data files, 6
TF (term frequency) of words,
265-266
OF (document frequency), 271-272
IOF (inverted document frequency), 271-272
lemmatization, 271
stemming, 271
stop words, 270-271
TFIDF, 269-274
TFIDF (Term Frequency-Inverse Document Frequency),
269-274, 285-286
time series analysis
ARIMA model, 236
ACF, 236-237
ARMA model, 241-244
autoregressive models, 238-239
building, 244-252
cautions, 252-253
constant variance, 250-251
evaluating, 244-252
fitted models, 249-250
forecasting, 251-252
moving average models, 239-241
normality, 250-251
PACF, 238-239
reasons to choose, 252-253
seasonal autogregressive integrated moving average
model, 243-244
ARMAX (Autoregressive Moving Average with
Exogenous inputs), 253
Box-Jenkins methodology, 235-236
cyclic components, 235
differencing, 241-242
fitted models, 249-250
GARCH (Generalized Autoregressive Conditionally
Heteroscedastic), 253
Kalman filtering, 253
multivariate time series analysis, 253
random components, 235
seasonal autoregressive integrated moving average
model, 243-244
seasonality, 235
spectral analysis, 253
stationary time series, 236
trends, 235
use cases, 234-235
white noise process, 239
tokenization in text analysis, 264
topic modeling in text analysis, 267, 274
LOA (latent Dirichlet allocation), 274-275
TP (true positives), confusion matrix, 224
TPR (true positive rate), 225
transaction data, 6
transaction reduction, Apriori algorithm and, 158
trends, time series analysis, 235
TRP (True Positive Rate), 185-187
ts ( ) function, 245
two-sided hypothesis test, 105
type I errors, 109-110
type II errors, 109-110
typeof ( ) function, 72
u
UNION ALL operator (SQL), 332-333
units of measure, k-means, 132-133
unstructured data, 6
Apache Hadoop, HDFS, 300-301
LinkedIn, 297
MapReduce,298-299
natural language processing,
18
use cases, 296-298
Watson (IBM), 297
Yahoo!, 297-298
unsupervised techniques. See clustering
users of data, 18
v
validation, association rules and, 157-158
variables
categorical, 170-171
continuous, discretization, 211
correlated, 206
decision trees, 205
dependent, 162
factors, 77-78
independent, 162
input, 192
redundant, 206
VARIMA (Vector ARIMA), 253
vectors, R, 73-74
video footage, 16
k-means and, 119
video surveillance, 16
visualization, 41-42. See also data visualization
exploratory data analysis, 82-85
single variable, 88-91
grocery store example (Apriori), 152-157
volume, variety, velocity. See 3 Vs (volume, variety, velocity)
w
Watson (IBM), 297
web scraper, 262-263
white noise process, 239
Wilcoxon rank-sum test, 108-109
wilcox. test { ) function, 109
window functions (SQL), 343-347
word clouds, 284
work spaces, 10, 11. See also sandboxes
Data preparation phase, 37-38
worker nodes, 301
write. csv ( ) function, 70
write. csv2 ( ) function, 70
write. table ( ) function, 70
WSS (Within Sum of Squares), 123-127
X-Z
XML (eXtensible Markup language), 6
Yahoo!, 297-298
YARN (Yet Another Resource Negotiator),
305
Zipf’s law, 265-266