R offers powerful tools for working with big data, including parallel programming and interfacing with Spark. In this track, you'll learn to write scalable, efficient R code and to visualize big data effectively.
Overview
Syllabus
- Writing Efficient R Code
- Learn to write faster R code, discover benchmarking and profiling, and unlock the secrets of parallel programming.
- Visualizing Big Data with Trelliscope in R
- Learn how to visualize big data in R using ggplot2 and trelliscopejs.
- Scalable Data Processing in R
- Learn how to write scalable code for working with big data in R using the bigmemory and iotools packages.
- Introduction to Spark with sparklyr in R
- Learn how to run big data analyses using Spark and the sparklyr package in R, and explore Spark MLlib in just 4 hours.
Taught by
Michael Kane, Simon Urbanek, Colin Gillespie, Richie Cotton, and Ryan Hafen