What you’ll learn
- Understand the Big Data problem in terms of storage and computation
- Understand how Hadoop approaches the Big Data problem and provides a solution to it
- Understand the need for another file system like HDFS
- Work with HDFS
- Understand the architecture of HDFS
- Understand the MapReduce programming model
- Understand the phases in MapReduce
- Frame a problem in terms of MapReduce
- Write a MapReduce program with complete understanding of program constructs
- Write Pig Latin instructions
- Create and query Hive tables
Requirements
Basic Linux commands
Basic Java knowledge is needed only to follow the MapReduce programming lessons in Java; the Pig, Hive and other lessons do not require Java knowledge
Description
The objective of this course is to walk you step by step through all the core components of Hadoop and, more importantly, to make the Hadoop learning experience easy and fun.
By enrolling in this course, you also get free access to our multi-node Hadoop training cluster, so you can try out what you learn right away in a real distributed environment.
ABOUT INSTRUCTOR(S)
We are a group of Hadoop consultants who are passionate about Hadoop and Big Data technologies. Four years ago, when we were looking for Big Data consultants for our own projects, we could not find qualified candidates because the Big Data industry was still very new. So we set out to train qualified candidates in Big Data ourselves, giving them deep, real-world insight into Hadoop.
WHAT YOU WILL LEARN IN THIS COURSE
In the first section you will learn what Big Data is, with examples. We will discuss the factors to consider when deciding whether a problem is a Big Data problem, and the challenges existing technologies face with Big Data computation. We will break the Big Data problem down in terms of storage and computation and understand how Hadoop approaches the problem and provides a solution.
In the HDFS section you will learn about the need for another file system like HDFS. We will compare HDFS with traditional file systems and discuss its benefits. We will also work with HDFS and go through the architecture of HDFS.
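To give you a feel for what working with HDFS looks like, here is a minimal sketch that copies a file into HDFS and lists a directory using Hadoop's Java FileSystem API. The class name and paths are placeholders, not the exact files used in the lessons.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsTour {
  public static void main(String[] args) throws Exception {
    // Picks up fs.defaultFS and the rest of the cluster settings from the classpath
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Copy a local file into HDFS (both paths are placeholders)
    fs.copyFromLocalFile(new Path("stocks.csv"), new Path("/user/student/input/stocks.csv"));

    // List the directory contents along with file sizes in bytes
    for (FileStatus status : fs.listStatus(new Path("/user/student/input"))) {
      System.out.println(status.getPath() + "  " + status.getLen());
    }
  }
}
```

The same operations are available from the command line with `hdfs dfs -put` and `hdfs dfs -ls`, which is usually the quickest way to explore the file system when you are getting started.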
In the MapReduce section you will learn the basics of MapReduce and the phases involved in a MapReduce job. We will go over each phase in detail and understand what happens in each one. Then we will write a MapReduce program in Java to calculate the maximum closing price for each stock symbol from a stock dataset.
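As a preview, here is a condensed sketch of what such a job can look like. It is not the exact program built in the lectures, and the CSV column layout assumed below (symbol in the second column, closing price in the seventh) would need to be adjusted to the actual dataset.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MaxClosePrice {

  // Mapper: emit (symbol, closing price) for every input line
  public static class MaxClosePriceMapper
      extends Mapper<LongWritable, Text, Text, FloatWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // Assumed layout: exchange,symbol,date,open,high,low,close,volume
      String[] fields = value.toString().split(",");
      if (fields.length > 6) {
        context.write(new Text(fields[1]), new FloatWritable(Float.parseFloat(fields[6])));
      }
    }
  }

  // Reducer: keep the highest closing price seen for each symbol
  public static class MaxClosePriceReducer
      extends Reducer<Text, FloatWritable, Text, FloatWritable> {
    @Override
    protected void reduce(Text key, Iterable<FloatWritable> values, Context context)
        throws IOException, InterruptedException {
      float max = Float.NEGATIVE_INFINITY;
      for (FloatWritable value : values) {
        max = Math.max(max, value.get());
      }
      context.write(key, new FloatWritable(max));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "max close price");
    job.setJarByClass(MaxClosePrice.class);
    job.setMapperClass(MaxClosePriceMapper.class);
    job.setReducerClass(MaxClosePriceReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(FloatWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The whole idea of the aggregation is visible here: the mapper emits one (symbol, close) pair per line, the framework groups the pairs by symbol, and the reducer simply keeps the largest value it sees for each group.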
In the next two sections we will introduce you to Apache Pig and Apache Hive. We will calculate the same maximum closing price per stock symbol from the stock dataset, this time using Pig and then Hive.
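As a rough preview of how much shorter this gets in Pig, the same aggregation is only a few lines of Pig Latin. The input path and schema below are assumptions, not the exact dataset used in the course.

```pig
-- Load the stock dataset (path and schema are placeholders)
stocks    = LOAD '/user/student/input/stocks' USING PigStorage(',')
            AS (exchange:chararray, symbol:chararray, date:chararray,
                open:float, high:float, low:float, close:float, volume:long);

-- Group all records for a symbol together and keep the highest close
by_symbol = GROUP stocks BY symbol;
max_close = FOREACH by_symbol GENERATE group AS symbol, MAX(stocks.close) AS max_close;
DUMP max_close;
```

In Hive, once a table is defined over the same files, the equivalent is a familiar SQL-style query along the lines of `SELECT symbol, MAX(close) FROM stocks GROUP BY symbol`.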
Who this course is for:
This course is for anyone who wants to learn about Big Data technologies.
No advanced programming knowledge is needed
This course is for anyone who wants to learn about distributed computing and Hadoop.
Course content
8 sections • 15 lectures • 3h 19m total length
Welcome & Let’s Get Started • 1 lecture • 3min
- Course Introduction
02:55
Introduction to Big Data • 2 lectures • 33min
- What Is Big Data?
17:54
- Understanding Big Data Problem
14:46
- Test your understanding of Big Data
7 questions
HDFS • 3 lectures • 44min
- HDFS – Why Another Filesystem?
13:29
- Working With HDFS
17:26
- HDFS Architecture
12:50
- Test your understanding of HDFS
6 questions
MapReduce • 4 lectures • 56min
- Introduction To MapReduce
08:51
- Dissecting MapReduce Components
18:05
- Dissecting MapReduce Program (Part 1)
12:05
- Dissecting MapReduce Program (Part 2)
17:13
- Test your understanding of MapReduce
6 questions
Apache Pig • 1 lecture • 12min
- Introduction to Apache Pig
12:05
Apache Hive • 1 lecture • 8min
- Introduction to Apache Hive
08:28
- Test your understanding of Pig & Hive
3 questions
Hadoop Administrator In Real World (Upcoming Course) • 2 lectures • 37min
- Cloudera Manager – Introduction
13:08
- Cloudera Manager – Installation
24:07
Our Hadoop Developer Course • 1 lecture • 6min
- BONUS: Hadoop In Real World Course: Become an Expert Hadoop Developer
06:28