Data Just Right: Introduction to Large-Scale Data & Analytics, Rough Cuts

Rough Cuts

  • Available to Safari Subscribers

Rough Cuts are manuscripts that are developed but not yet published, available through Safari. Rough Cuts provide you access to the very latest information on a given topic and offer you the opportunity to interact with the author to influence the final publication.

Description

  • Copyright 2013
  • Dimensions: 7" x 9-1/8"
  • Pages: 300
  • Edition: 1st
  • Rough Cuts
  • ISBN-10: 0-13-335905-0
  • ISBN-13: 978-0-13-335905-3

This is the Rough Cut version of the printed book.

Making Big Data Work: Real-World Use Cases and Examples, Practical Code, Detailed Solutions

Large-scale data analysis is now vitally important to virtually every business. Mobile and social technologies are generating massive datasets; distributed cloud computing offers the resources to store and analyze them; and professionals have radically new technologies at their command, including NoSQL databases. Until now, however, most books on “Big Data” have been little more than business polemics or product catalogs. Data Just Right is different: It’s a completely practical and indispensable guide for every Big Data decision-maker, implementer, and strategist.

Michael Manoochehri, a former Google engineer and data hacker, writes for professionals who need practical solutions that can be implemented with limited resources and time. Drawing on his extensive experience, he helps you focus on building applications, rather than infrastructure, because that’s where you can derive the most value.

Manoochehri shows how to address each of today’s key Big Data use cases in a cost-effective way by combining technologies in hybrid solutions. You’ll find expert approaches to managing massive datasets, visualizing data, building data pipelines and dashboards, choosing tools for statistical analysis, and more. Throughout, the author demonstrates techniques using many of today’s leading data analysis tools, including Hadoop, Hive, Shark, R, Apache Pig, Mahout, and Google BigQuery.

Coverage includes

  • Mastering the four guiding principles of Big Data success—and avoiding common pitfalls
  • Emphasizing collaboration and avoiding problems with siloed data
  • Hosting and sharing multi-terabyte datasets efficiently and economically
  • “Building for infinity” to support rapid growth
  • Developing a NoSQL Web app with Redis to collect crowd-sourced data (see the sketch after this list)
  • Running distributed queries over massive datasets with Hadoop, Hive, and Shark
  • Building a data dashboard with Google BigQuery
  • Exploring large datasets with advanced visualization
  • Implementing efficient pipelines for transforming immense amounts of data
  • Automating complex processing with Apache Pig and the Cascading Java library
  • Applying machine learning to classify, recommend, and predict incoming information
  • Using R to perform statistical analysis on massive datasets
  • Building highly efficient analytics workflows with Python and Pandas
  • Establishing sensible purchasing strategies: when to build, buy, or outsource
  • Previewing emerging trends and convergences in scalable data technologies and the evolving role of the Data Scientist 
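
For a taste of the Redis-based collection pattern covered in Chapter 3, here is a minimal sketch of the write path, assuming the redis-py client and a local Redis server; the key names and the record_submission helper are illustrative assumptions, not code from the book.

    # Minimal sketch: collecting crowd-sourced records in Redis.
    # Assumes the redis-py client (pip install redis) and a local server;
    # key names and record_submission are illustrative, not from the book.
    import json
    import time

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def record_submission(user_id, payload):
        """Store one crowd-sourced record and bump a per-user counter."""
        record = {"user": user_id, "ts": time.time(), "data": payload}
        # RPUSH appends to a list, so each write is O(1) even at volume.
        r.rpush("submissions", json.dumps(record))
        # INCR is atomic, so concurrent writers never race on the tally.
        r.incr(f"submissions:count:{user_id}")

    record_submission("user42", {"lat": 37.77, "lng": -122.42})
    print(r.llen("submissions"))

Both commands are constant-time, in-memory operations, which is why Chapter 3 leans toward Redis for write-heavy workloads ("Leaning toward Write Performance: Redis").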

Sample Content

Table of Contents

Foreword xv
Preface xvii
Acknowledgments xxv
About the Author xxvii

Part I: Directives in the Big Data Era 1

Chapter 1: Four Rules for Data Success 3

When Data Became a BIG Deal 3
Data and the Single Server 4
The Big Data Trade-Off 5
Anatomy of a Big Data Pipeline 9
The Ultimate Database 10
Summary 10

Part II: Collecting and Sharing a Lot of Data 11

Chapter 2: Hosting and Sharing Terabytes of Raw Data 13

Suffering from Files 14
Storage: Infrastructure as a Service 15
Choosing the Right Data Format 16
Character Encoding 19
Data in Motion: Data Serialization Formats 21
Summary 23

Chapter 3: Building a NoSQL-Based Web App to Collect Crowd-Sourced Data 25

Relational Databases: Command and Control 25
Relational Databases versus the Internet 28
Nonrelational Database Models 31
Leaning toward Write Performance: Redis 35
Sharding across Many Redis Instances 38
NewSQL: The Return of Codd 41
Summary 42

Chapter 4: Strategies for Dealing with Data Silos 43

A Warehouse Full of Jargon 43
Hadoop: The Elephant in the Warehouse 48
Data Silos Can Be Good 49
Convergence: The End of the Data Silo 51
Summary 53

Part III: Asking Questions about Your Data 55

Chapter 5: Using Hadoop, Hive, and Shark to Ask Questions about Large Datasets 57

What Is a Data Warehouse? 57
Apache Hive: Interactive Querying for Hadoop 60
Shark: Queries at the Speed of RAM 65
Data Warehousing in the Cloud 66
Summary 67

Chapter 6: Building a Data Dashboard with Google BigQuery 69

Analytical Databases 69
Dremel: Spreading the Wealth 71
BigQuery: Data Analytics as a Service 73
Building a Custom Big Data Dashboard 75
The Future of Analytical Query Engines 82
Summary 83

Chapter 7: Visualization Strategies for Exploring Large Datasets 85

Cautionary Tales: Translating Data into Narrative 86
Human Scale versus Machine Scale 89
Building Applications for Data Interactivity 90
Summary 96

Part IV: Building Data Pipelines 97

Chapter 8: Putting It Together: MapReduce Data Pipelines 99

What Is a Data Pipeline? 99
Data Pipelines with Hadoop Streaming 101
A One-Step MapReduce Transformation 105
Managing Complexity: Python MapReduce Frameworks for Hadoop 110
Summary 114

Chapter 9: Building Data Transformation Workflows with Pig and Cascading 117

Large-Scale Data Workflows in Practice 118
It’s Complicated: Multistep MapReduce Transformations 118
Cascading: Building Robust Data-Workflow Applications 122
When to Choose Pig versus Cascading 128
Summary 128

Part V: Machine Learning for Large Datasets 129

Chapter 10: Building a Data Classification System with Mahout 131

Can Machines Predict the Future? 132
Challenges of Machine Learning 132
Apache Mahout: Scalable Machine Learning
