Overview
Learn how to use Apache Flink's relational APIs, the Table API and SQL, for batch and real-time exploratory data analytics.
Syllabus
Introduction
- Apache Flink for exploratory analysis
- What is Apache Flink?
- Flink relational APIs
- Integrations and connectors
- Course prerequisites
- Setting up the exercise files
- Creating a table environment
- Creating tables from a CSV
- Selecting table data
- Filtering data in tables
- Writing tables to files
- Aggregations on tables
- Ordering and limiting data
- Adding new columns
- Joining tables
- Working with datasets
- Challenges with streaming SQL
- Dynamic tables
- Appending and retracting data
- Consuming Kafka sources
- Running continuous queries
- Windowing on streams
- Using tumbling and sliding windows
- Writing tables to Kafka
- Working with data streams
- Using event time
- Use case problem definition
- Read source data into a Flink table
- Compute total scores
- Compute aggregations
- Next steps
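The kinds of queries the syllabus covers can be sketched in Flink SQL. The table names, fields, file path, and Kafka settings below are illustrative assumptions, not taken from the course materials:

```sql
-- Batch side: a hypothetical CSV-backed table (path and schema are assumptions)
CREATE TABLE course_scores (
  player_id STRING,
  game      STRING,
  score     INT
) WITH (
  'connector' = 'filesystem',
  'path'      = '/tmp/scores.csv',
  'format'    = 'csv'
);

-- Select, filter, aggregate, order, and limit, as in the batch exercises
SELECT player_id, SUM(score) AS total_score
FROM course_scores
WHERE game = 'chess'
GROUP BY player_id
ORDER BY total_score DESC
LIMIT 10;

-- Streaming side: a hypothetical Kafka-backed table with an event-time
-- watermark (topic and broker address are assumptions)
CREATE TABLE player_events (
  player_id  STRING,
  score      INT,
  event_time TIMESTAMP(3),
  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic'     = 'scores',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format'    = 'json'
);

-- A continuous query over one-minute tumbling windows
SELECT
  player_id,
  TUMBLE_START(event_time, INTERVAL '1' MINUTE) AS window_start,
  SUM(score) AS total_score
FROM player_events
GROUP BY player_id, TUMBLE(event_time, INTERVAL '1' MINUTE);
```

On a bounded table the aggregate produces a final result once; over the Kafka source the same style of query runs continuously, emitting a row per player per window as event time advances past each window boundary.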
Taught by
Kumaran Ponnambalam