Learn to implement distributed data management and machine learning in
Spark using the PySpark package.
In this course, you'll learn how to use Spark from Python! Spark is a tool for doing parallel computation with large datasets, and it integrates well with Python. PySpark is the Python package that makes the magic happen. You'll use this package to work with data about flights departing from Portland and Seattle. You'll learn to wrangle this data and build a whole machine learning pipeline to predict whether flights will be delayed. Get ready to put some Spark in your Python code and dive into the world of high-performance machine learning!
Getting to know PySpark
-In this chapter, you'll learn how Spark manages data and how you can read and write tables from Python.
-In this chapter, you'll learn about the pyspark.sql module, which provides optimized data queries to your Spark session.
Getting started with machine learning pipelines
-PySpark has built-in, cutting-edge machine learning routines, along with utilities to create full machine learning pipelines. You'll learn about them in this chapter.
Model tuning and selection
-In this last chapter, you'll apply what you've learned to create a model that predicts which flights will be delayed.