Overview

In this AWS Big Data course, students learn how to use Amazon EMR to process data with the broad ecosystem of Hadoop tools such as Hive and Hue. Students will be taught how to create big data environments; work with Amazon DynamoDB, Amazon Redshift, Amazon QuickSight, Amazon Athena, and Amazon Kinesis; and leverage best practices to design big data environments for security and cost-effectiveness.

Learning Outcomes

  • Fit AWS solutions inside of a big data ecosystem.
  • Leverage Apache Hadoop in the context of Amazon EMR.
  • Identify the components of an Amazon EMR cluster.
  • Launch and configure an Amazon EMR cluster.
  • Leverage common programming frameworks available for Amazon EMR including Hive, Pig, and Streaming.
  • Leverage Hue to improve the ease-of-use of Amazon EMR.
  • Use in-memory analytics with Spark on Amazon EMR.
  • Choose appropriate AWS data storage options.
  • Identify the benefits of using Amazon Kinesis for near real-time big data processing.
  • Leverage Amazon Redshift to efficiently store and analyze data.
  • Comprehend and manage costs and security for a big data solution.
  • Secure a big data solution.
  • Identify options for ingesting, transferring, and compressing data.
  • Leverage Amazon Athena for ad hoc query analytics.
  • Visualize data and queries using Amazon QuickSight.
  • Orchestrate big data workflows using AWS Data Pipeline.

Who Should Attend?

  • Data Engineers
  • Data Analysts
  • Senior Data Engineers
  • Data Scientists
  • Business Intelligence Managers

Eligibility Criteria

Participants need to have prior experience working on the AWS platform (more than 6 months) and experience in data management.
This course is endorsed under the Critical Infocomm Technology Resource Programme Plus (CITREP+) Programme.
To find out more about CITREP+ funding, please refer to Programme Support under the CITREP+ page.


Information as accurate as of 29 January 2020