About This Course
Published 1/2016 English
Course Description
The main objective of this course is to help you understand the complex architecture of Hadoop and its components, point you in the right direction to get started, and enable you to begin working with Hadoop and its components quickly.
It covers everything you need as a Big Data beginner. Learn about the Big Data market, different job roles, technology trends, the history of Hadoop, HDFS, the Hadoop Ecosystem, Hive, and Pig. In this course, we will see how a beginner should start with Hadoop. The course comes with many hands-on examples that will help you learn Hadoop quickly.
The course has 5 sections and focuses on the following topics:
Big Data at a Glance: Learn about Big Data and the different job roles in demand in the Big Data market. Know big data salary trends around the globe. Learn about the hottest technologies and their trends in the market.
Getting Started with Hadoop: Understand Hadoop and its complex architecture. Learn the Hadoop Ecosystem with simple examples. Know the different versions of Hadoop (Hadoop 1.x vs Hadoop 2.x), the different Hadoop vendors in the market, and Hadoop on the cloud. Understand how Hadoop uses the ELT approach. Learn how to install Hadoop on your machine. We will see how to run HDFS commands from the command line to manage HDFS.
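To give a taste of what managing HDFS from the command line looks like, here is a minimal sketch of a typical session; the directory and file names (such as /user/student/input and sales.csv) are hypothetical examples for illustration, not files shipped with the course.

    # Create a directory in HDFS and copy a local file into it
    hdfs dfs -mkdir -p /user/student/input
    hdfs dfs -put sales.csv /user/student/input/

    # List the directory and peek at the file's contents
    hdfs dfs -ls /user/student/input
    hdfs dfs -cat /user/student/input/sales.csv | head

    # Check space usage, then remove the file when done
    hdfs dfs -du -h /user/student/input
    hdfs dfs -rm /user/student/input/sales.csv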
Getting Started with Hive: Understand what kind of problem Hive solves in Big Data. Learn its architectural design and working mechanism. Know the data models in Hive, the different file formats supported by Hive, Hive queries, and more. We will see how to run queries in Hive.
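As a preview of what Hive queries look like, here is a small HiveQL sketch; the table name, columns, and HDFS location are made-up examples and are not taken from the course materials.

    -- Define an external table over comma-delimited files in HDFS
    CREATE EXTERNAL TABLE IF NOT EXISTS sales (
      order_id   INT,
      product    STRING,
      amount     DOUBLE,
      order_date STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/user/student/input';

    -- A simple aggregation: total revenue per product
    SELECT product, SUM(amount) AS total_revenue
    FROM sales
    GROUP BY product
    ORDER BY total_revenue DESC;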
Getting Started with Pig: Understand how Pig solves problems in Big Data. Learn its architectural design and working mechanism. Understand how Pig Latin works in Pig and the differences between SQL and Pig Latin. The course includes demos of running different queries in Pig.
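To illustrate how Pig Latin differs from SQL, here is the same product-revenue question from the Hive sketch written as a Pig Latin data flow; the schema and path are hypothetical. Where SQL states the result in one declarative query, Pig Latin builds it step by step through named intermediate relations.

    -- Load comma-delimited data from HDFS with an explicit schema
    sales = LOAD '/user/student/input/sales.csv'
            USING PigStorage(',')
            AS (order_id:int, product:chararray, amount:double, order_date:chararray);

    -- Group, aggregate, and sort as separate steps in the data flow
    by_product = GROUP sales BY product;
    revenue    = FOREACH by_product GENERATE group AS product,
                                             SUM(sales.amount) AS total_revenue;
    ordered    = ORDER revenue BY total_revenue DESC;

    DUMP ordered;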
Use Cases: Real-life applications of Hadoop are really important for understanding Hadoop and its components, so we will learn by designing a sample data pipeline in Hadoop to process big data. You will also understand how companies are adopting a modern data architecture, the Data Lake, in their data infrastructure.
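As a rough idea of what such a pipeline can look like end to end, here is a hedged sketch that strings the pieces together from the shell; the paths, the clean_sales.pig script, and the table definition are invented placeholders, not artifacts from the course.

    # 1. Ingest: land raw files in the HDFS "raw" zone of the data lake
    hdfs dfs -mkdir -p /data/raw/sales
    hdfs dfs -put sales_2016-01-01.csv /data/raw/sales/

    # 2. Transform: clean and aggregate the raw data with a Pig script
    #    (assumed to write its output to /data/curated/sales)
    pig -f clean_sales.pig

    # 3. Serve: expose the curated data to analysts through a Hive table
    hive -e "CREATE EXTERNAL TABLE IF NOT EXISTS curated_sales (
               product STRING, total_revenue DOUBLE)
             ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
             LOCATION '/data/curated/sales';"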
What are the requirements?
- Basic knowledge of SQL and RDBMS is required
What am I going to get from this course?
- Understand technology trends, salary trends, the Big Data market, and the different job roles in Big Data
- Understand what Hadoop is for, and how it works
- Understand the complex architecture of Hadoop and its components
- Install Hadoop on your machine
- Understand how MapReduce, Hive and Pig can be used to analyze big data sets
- High-quality documents
- Demos: Running HDFS commands, Hive queries, Pig queries
- Sample data sets and scripts (HDFS commands, Hive sample queries, Pig sample queries, Data Pipeline sample queries)
- Start writing your own code in Hive and Pig to process huge volumes of data
- Design your own data pipeline using Pig and Hive
- Understand modern data architecture: Data Lake
What is the target audience?
- This course can be taken by anyone (students, developers, managers) who is interested in learning big data. The course assumes every learner is a beginner and teaches all the fundamentals of Big Data, Hadoop, and its complex architecture.
About Mustapha
I am an online instructor at Udemy. My passions are mobile and web development, entrepreneurship, and management. You can read my full biography on My Udemy Page. Feel free to follow me on social media to learn more about me and the topics and courses I teach.