Distributed TensorFlow Training - Google I/O 2018


TensorFlow via YouTube



Classroom Contents


  1. Intro
  2. Training can take a long time
  3. Scaling with Distributed Training
  4. Data parallelism
  5. Async Parameter Server
  6. Sync Allreduce Architecture
  7. Ring Allreduce Architecture
  8. Model parallelism
  9. Distribution Strategy API: a high-level API to distribute your training
  10. Training with the Estimator API
  11. Training on multiple GPUs with Distribution Strategy
  12. Mirrored Strategy
  13. Demo Setup on Google Cloud
  14. Performance Benchmarks
  15. A simple input pipeline for ResNet50
  16. Input pipeline as an ETL Process
  17. Input pipeline bottleneck
  18. Parallelize file reading
  19. Parallelize map transformations
  20. Pipelining with prefetching
  21. Using fused transformation ops
  22. Work In Progress
  23. TensorFlow Resources
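Among the chapters above, the Ring Allreduce Architecture is the most algorithmic: each worker exchanges gradient chunks only with its ring neighbors, yet after 2(n-1) steps every worker holds the full sum. The talk covers this inside TensorFlow; as a rough illustration only, here is a pure-Python simulation of the chunked scatter-reduce and allgather phases (the function name, worker count, and chunking are illustrative, not the TensorFlow API):

```python
def ring_allreduce(tensors):
    """Simulate ring allreduce across n workers (illustrative sketch).

    Each worker starts with one gradient vector; after the two phases,
    every worker holds the elementwise sum, having sent and received
    only 2 * (n - 1) chunks instead of the whole vector n times.
    """
    n = len(tensors)                      # number of workers in the ring
    size = len(tensors[0])
    assert size % n == 0, "vector length must divide evenly into n chunks"
    chunk = size // n
    buf = [list(t) for t in tensors]      # each worker's local copy

    def seg(i):
        i %= n
        return slice(i * chunk, (i + 1) * chunk)

    # Phase 1: scatter-reduce. At step t, worker w sends chunk (w - t) % n
    # to its right neighbor, which accumulates it elementwise. After
    # n - 1 steps, worker w holds the fully summed chunk (w + 1) % n.
    for t in range(n - 1):
        sends = [((w + 1) % n, (w - t) % n, buf[w][seg(w - t)])
                 for w in range(n)]       # snapshot before applying
        for dest, idx, data in sends:
            s = seg(idx)
            buf[dest][s] = [a + b for a, b in zip(buf[dest][s], data)]

    # Phase 2: allgather. The summed chunks circulate around the ring so
    # every worker ends up with the complete reduced vector.
    for t in range(n - 1):
        sends = [((w + 1) % n, (w + 1 - t) % n, buf[w][seg(w + 1 - t)])
                 for w in range(n)]
        for dest, idx, data in sends:
            buf[dest][seg(idx)] = data

    return buf


grads = ring_allreduce([[1, 2, 3, 4, 5, 6],
                        [1, 1, 1, 1, 1, 1],
                        [0, 0, 0, 0, 0, 0]])
# every worker now holds the elementwise sum [2, 3, 4, 5, 6, 7]
```

The bandwidth advantage over a naive all-to-all exchange is the point of the chapter: per worker, traffic is proportional to the vector size times 2(n-1)/n, nearly independent of the number of workers.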
