Overview

This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher on what Apache Beam is and how it relates to Dataflow. Next, we discuss the Apache Beam vision and the benefits of the Beam Portability framework, which realizes that vision by letting developers use their preferred programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity and access management (IAM) tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
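To make the portability idea concrete, here is a minimal, illustrative Apache Beam pipeline in Python (a sketch, not taken from the course materials): the runner option is the main thing that changes between running locally and running the same code on Dataflow, which would additionally require project, region, and staging options.

```python
# Minimal sketch: the same Beam pipeline can target different execution
# backends by changing the runner option -- DirectRunner runs locally,
# DataflowRunner runs on Google Cloud Dataflow.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Swapping in "--runner=DataflowRunner" (plus --project, --region, and
# --temp_location) would execute this same pipeline on Dataflow.
options = PipelineOptions(["--runner=DirectRunner"])

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["hello", "beam", "dataflow"])
        | "Uppercase" >> beam.Map(str.upper)
        | "Print" >> beam.Map(print)
    )
```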
Syllabus
- Introduction
  - Course Introduction
  - Beam and Dataflow Refresher
- Beam Portability
  - Beam Portability
  - Runner v2
  - Container Environments
  - Cross-Language Transforms
  - Quiz
- Separating Compute and Storage with Dataflow
  - Separating Storage and Compute with Dataflow
  - Dataflow Shuffle Service
  - Dataflow Streaming Engine
  - Flexible Resource Scheduling
  - Quiz
- IAM, Quotas, and Permissions
  - IAM
  - Quotas
  - Quiz
- Security
  - Data Locality
  - Shared VPC
  - Private IPs
  - CMEK
  - Setup IAM and Networking for your Dataflow Jobs
  - Quiz
- Summary
  - Course Summary
- Additional Resources
  - Your Next Steps
  - Course Badge