In this Specialization, you will develop a robust set of skills for processing, analyzing, and extracting meaningful information from large amounts of complex data. You will gain both conceptual and practical working knowledge of the Hadoop platform, its architecture, and the major elements of its ecosystem. Through hands-on instruction and assignments, you will develop working knowledge of tools such as Spark, Pig, and Hive, along with strategies for processing massive datasets using the map/reduce framework, with a particular focus on how these tools and strategies apply to analyzing big data. You will become proficient in carrying out scalable basic analysis and comfortable applying advanced analytics, predictive modeling, or graph analysis to problems in your domain. In the final Capstone Project, developed in partnership with data software company Splunk, you’ll apply the skills you have learned, whether by tuning and scaling your own analysis, building your own model, or applying these tools in new ways, to analyze big data in the area of your choice.

This class is designed for non-programmers who are familiar with SQL and want big data skills, and for programmers who are new to big data or to big data analytics.
Incentives & Benefits
When you complete this Specialization, you will have a robust set of skills that are in high demand across dozens of modern industries. You will be able to extract meaningful information from large amounts of complex data, using powerful tools such as Splunk and Hadoop, and you’ll be able to build customized applications to analyze and visualize data from various sources.
What You'll Learn
- Process, analyze, and interpret massive and complex data
- Use common Big Data technologies, including Splunk and Apache Hadoop
- Build data tools, visualizations, and dashboards