
Machine Learning in Production: Developing and Optimizing Data Science Workflows and Applications, Rough Cuts

Rough Cuts

  • Available to Safari Subscribers
  • About Rough Cuts: Rough Cuts are manuscripts that are developed but not yet published, available through Safari. Rough Cuts provide you access to the very latest information on a given topic and offer you the opportunity to interact with the author to influence the final publication.

Also available in other formats.

Description

  • Copyright 2019
  • Dimensions: 7" x 9-1/8"
  • Edition: 1st
  • Rough Cuts
  • ISBN-10: 0-13-411658-5
  • ISBN-13: 978-0-13-411658-7

This is the Rough Cut version of the printed book.


Foundational Hands-On Skills for Succeeding with Real Data Science Projects

Machine Learning in Production is a crash course in data science and machine learning for people who need to solve real-world problems and don’t have extensive formal training. Written for “accidental data scientists” with curiosity, ambition, and technical ability, this complete and rigorous introduction stresses practice, not theory.

Building on agile principles, Andrew and Adam Kelleher show how to deliver significant value quickly, resisting overhyped tools and unnecessary complexity. Drawing on their extensive experience, they help you ask useful questions and then execute typical projects from start to finish.

The authors show just how much information you can glean with straightforward queries, aggregations, and visualizations, and they teach indispensable error analysis methods to avoid costly mistakes. They then turn to workhorse machine learning techniques such as linear regression, classification, clustering, and Bayesian inference. They also explain the hardware and software underpinnings of data science and how to architect systems that maximize performance within real-world constraints.
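To give a flavor of what a straightforward aggregation plus a workhorse regression can look like in practice, here is a minimal Python sketch. It is not taken from the book; the dataset, column names, and effect size are invented purely for illustration.

    # Hypothetical example: summary statistics plus an ordinary least-squares fit.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Invented data: a year of daily ad spend and revenue for a small store.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({"ad_spend": rng.uniform(0, 100, size=365)})
    df["revenue"] = 3.0 * df["ad_spend"] + rng.normal(0, 10, size=365)

    # A straightforward aggregation already tells much of the story.
    print(df.describe())

    # A workhorse model: linear regression on a single feature.
    model = LinearRegression().fit(df[["ad_spend"]], df["revenue"])
    print("Estimated revenue per dollar of ad spend:", model.coef_[0])

Often the summary statistics alone answer the business question; the regression simply quantifies the relationship.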

The authors always focus on what matters: solving the problems that offer the highest return on investment, using the simplest, lowest-risk approaches that work.

  • Leverage agile principles to keep project scope small and development efficient
  • Start with simple heuristics and improve them as your data pipeline matures
  • Avoid bad conclusions by implementing foundational error analysis techniques
  • Communicate your results with basic data visualization techniques
  • Master basic machine learning techniques, starting with linear regression and random forests (a brief sketch follows this list)
  • Perform classification and clustering on both vector and graph data
  • Master Bayesian networks and use them to understand causal inference
  • Explore overfitting, model capacity, and other advanced machine learning concepts
  • Make informed architectural decisions about storage, data transfer, computation, and communication
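As a rough, hypothetical illustration of the baseline comparison mentioned in the list above (not an example from the book; the synthetic data and model choices are assumptions), a linear regression and a random forest can be compared in a few lines of Python:

    # Hypothetical example: linear regression vs. random forest on synthetic data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Invented data: a nonlinear signal the linear baseline will underfit.
    rng = np.random.default_rng(42)
    X = rng.uniform(-3, 3, size=(1000, 1))
    y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=1000)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
        model.fit(X_train, y_train)
        error = mean_absolute_error(y_test, model.predict(X_test))
        print(type(model).__name__, "test MAE:", round(error, 3))

On data like this, the random forest's lower error reflects the nonlinearity; on genuinely linear data the simpler model is usually the better, lower-risk choice.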

Sample Content

Table of Contents

Preface

Acknowledgments

About the Authors

Part I: Principles of Framing

Chapter 1: The Role of the Data Scientist

Chapter 2: Project Workflow

Chapter 3: Quantifying Error

Chapter 4: Data Encoding and Pre-Processing

Chapter 5: Hypothesis Testing

Chapter 6: Data Visualization

Part II: Algorithms and Architectures

Chapter 7: Algorithms and Architectures

Chapter 8: Comparison

Chapter 9: Regression

Chapter 10: Classification and Clustering

Chapter 11: Bayesian Networks

Chapter 12: Dimensional Reduction and Latent Variable Models

Chapter 13: Causal Inference

Chapter 14: Advanced Machine Learning

Part III: Bottlenecks and Optimizations

Chapter 15: Hardware Fundamentals

Chapter 16: Software Fundamentals

Chapter 17: Software Architecture

Chapter 18: The CAP Theorem

Chapter 19: Logical Network Topological Nodes

Bibliography

Index
