
Data Just Right Video Tutorials: How to Use Hadoop, Hive, Shark, R, Apache Pig, Mahout, and Google BigQuery

In Data Just Right LiveLessons Video Training, data engineer and former Googler Michael Manoochehri gives viewers an introduction to implementing practical solutions for common data problems, with 7+ hours of hands-on video tutorials. The player below contains three sample video excerpts: 1. Loading Data into Hive; 2. Writing a Multistep MapReduce Job Using the mrjob Python Library; and 3. Using the Pandas Library for Analyzing Time Series Data.
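To give a flavor of the second excerpt, here is a minimal plain-Python sketch of the same multistep MapReduce pattern: a word-count step feeding a "find the most frequent word" step. All names and the sample input are illustrative; in the lesson itself, mrjob's MRJob and MRStep classes chain the steps and run them on Hadoop rather than through this in-process shuffle.

```python
from itertools import groupby
from operator import itemgetter

def map_words(line):
    # Step 1 mapper: emit (word, 1) for every word in the line
    for word in line.split():
        yield word.lower(), 1

def reduce_counts(word, ones):
    # Step 1 reducer: total the counts; use a single None key so the
    # next step sees every (count, word) pair in one group
    yield None, (sum(ones), word)

def reduce_max(_, pairs):
    # Step 2 reducer: pick the (count, word) pair with the highest count
    yield max(pairs)

def run_step(mapper, reducer, records):
    # Simulate the shuffle/sort phase: group mapper output by key,
    # then hand each group to the reducer
    mapped = sorted(kv for rec in records for kv in mapper(rec))
    grouped = groupby(mapped, key=itemgetter(0))
    return [out for key, vals in grouped
                for out in reducer(key, (v for _, v in vals))]

lines = ["big data big ideas", "data just right"]
step1 = run_step(map_words, reduce_counts, lines)
# Step 2 needs no mapping, so an identity mapper passes records through
result = run_step(lambda kv: [kv], reduce_max, step1)
```

With the sample lines above, `result` holds the single most frequent (count, word) pair, ties broken alphabetically by the tuple comparison in `max`.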

We recommend clicking the Full Screen option in the bottom right corner of the video window for best viewing.

Data Just Right LiveLessons provides a practical introduction to solving common data challenges, such as managing massive datasets, visualizing data, building data pipelines and dashboards, and choosing tools for statistical analysis. You will learn how to use many of today's leading data analysis tools, including Hadoop, Hive, Shark, R, Apache Pig, Mahout, and Google BigQuery. The course assumes no previous experience with large-scale data analytics technology and includes detailed, practical examples. Manoochehri is also the author of Data Just Right: Introduction to Large-Scale Data & Analytics.

Data Just Right LiveLessons shows how to address each of today's key Big Data use cases cost-effectively by combining technologies into hybrid solutions. You'll find expert approaches to managing massive datasets, visualizing data, building data pipelines and dashboards, choosing tools for statistical analysis, and more.

What You Will Learn:

  • Mastering the four guiding principles of Big Data success, and avoiding common pitfalls
  • Emphasizing collaboration and avoiding problems with siloed data
  • Hosting and sharing multi-terabyte datasets efficiently and economically
  • "Building for infinity" to support rapid growth
  • Developing a NoSQL Web app with Redis to collect crowd-sourced data
  • Running distributed queries over massive datasets with Hadoop and Hive
  • Building a data dashboard with Google BigQuery
  • Exploring large datasets with advanced visualization
  • Implementing efficient pipelines for transforming immense amounts of data
  • Automating complex processing with Apache Pig and the Cascading Java library
  • Applying machine learning to classify, recommend, and predict incoming information
  • Using R to perform statistical analysis on massive datasets
  • Building highly efficient analytics workflows with Python and Pandas
  • Establishing sensible purchasing strategies: when to build, buy, or outsource
  • Previewing emerging trends and convergences in scalable data technologies and the evolving role of the "Data Scientist"
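As a small taste of the Python and Pandas workflow mentioned above, the sketch below resamples a daily time series to weekly totals and computes a rolling mean. The data is invented for illustration; the course's own examples and datasets differ.

```python
import numpy as np
import pandas as pd

# Hypothetical example: one month of daily page-view counts
idx = pd.date_range("2014-01-01", periods=31, freq="D")
views = pd.Series(np.arange(31), index=idx)

# Aggregate the daily series into weekly totals
weekly = views.resample("W").sum()

# Smooth short-term noise with a 7-day rolling mean
rolling = views.rolling(window=7).mean()
```

The same resample/rolling pattern scales from toy series like this one to the large time-series datasets the lessons work with.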
