Inside TensorFlow - Parameter Server Training

TensorFlow, via YouTube

Classroom Contents

  1. Intro
  2. Parameter Server Training Overview
  3. Adaptive Learning Rate
  4. Synchronous Parameter Server Training
  5. Evaluation by Estimator
  6. Problems with Multi-Client Setup
  7. Benefits of Single-Client Setup
  8. Problems of Single-Client Setup
  9. Schedule/Join APIs
  10. Custom Training Loop with PS
  11. Current Limitations of the APIs
  12. Benefits of Inline Evaluation
  13. Current Limitations of Inline Evaluation
  14. Variable Sharding
  15. Ongoing and Future Work
  16. Runtime, Performance, and Scalability
  17. Parameter server training in runtime
  18. Invoke model func with async schedule API
  19. Distributed functions in PS training
  20. Large embedding model
  21. Performance compared with Estimator
  22. Worker profiles with multi-step packing
  23. Multi-step packing: pros and cons
  24. Preemptions and failures
  25. Fault tolerance: worker failures
  26. Large-scale fault tolerance testing
  27. Run jobs with preemptible resources
  28. Multi-worker testing framework
  29. MLCompass dashboard

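Chapters 9, 10, and 14 cover the coordinator's schedule/join APIs, a custom training loop with the parameter server strategy, and variable sharding. As orientation before watching, here is a minimal sketch of such a loop using TensorFlow 2's ParameterServerStrategy and ClusterCoordinator. The cluster resolver, model, dataset, and shard count are illustrative placeholders, exact module paths (experimental vs. stable) vary by TensorFlow version, and this is not code from the talk itself.

```python
import tensorflow as tf

# Assumption: a TF_CONFIG-style cluster with "worker" and "ps" tasks is already
# running; the resolver, model, dataset, and shard count are placeholders.
cluster_resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()

strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver,
    # Variable sharding (chapter 14): split large variables across parameter servers.
    variable_partitioner=tf.distribute.experimental.partitioners.FixedShardsPartitioner(
        num_shards=2))
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(strategy)

with strategy.scope():
    # Variables created here are placed on the parameter servers.
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    optimizer = tf.keras.optimizers.SGD(0.1)

def dataset_fn(input_context):
    # Each worker builds its own input pipeline; toy random data stands in for real inputs.
    features = tf.random.uniform([256, 8])
    labels = tf.random.uniform([256], maxval=10, dtype=tf.int64)
    return tf.data.Dataset.from_tensor_slices((features, labels)).repeat().batch(32)

@tf.function
def per_worker_dataset_fn():
    return strategy.distribute_datasets_from_function(dataset_fn)

per_worker_dataset = coordinator.create_per_worker_dataset(per_worker_dataset_fn)
per_worker_iterator = iter(per_worker_dataset)

@tf.function
def train_step(iterator):
    def replica_fn(features, labels):
        with tf.GradientTape() as tape:
            logits = model(features, training=True)
            loss = tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(
                labels, logits, from_logits=True))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss

    features, labels = next(iterator)
    losses = strategy.run(replica_fn, args=(features, labels))
    return strategy.reduce(tf.distribute.ReduceOp.MEAN, losses, axis=None)

# schedule() is asynchronous: it queues train_step on any available worker and
# returns a RemoteValue immediately; join() blocks until every scheduled step
# has finished (chapters 9-10).
for _ in range(100):
    coordinator.schedule(train_step, args=(per_worker_iterator,))
coordinator.join()
```

On the fault-tolerance chapters (24-27): with this kind of setup, worker preemptions are generally absorbed by the coordinator, which re-runs the affected functions on live workers, while parameter server or coordinator failures surface as errors from schedule/join and are typically handled by checkpointing and restarting the job; that is what makes training on preemptible resources practical.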