AI Workload Preemption in a Multi-Cluster Scheduling System at Bloomberg
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Explore how Bloomberg implemented Karmada's Priority and Preemption feature to manage machine learning workloads across multiple clusters in this 28-minute conference talk from CNCF. Leon Zhou and Wei-Cheng Lai from Bloomberg discuss their approach to ensuring that high-impact AI workloads receive priority access to GPU resources as the company's AI usage grows rapidly. Learn about the challenges of balancing resource allocation between high-priority and lower-priority ML batch jobs, and how Karmada helps prevent business-critical workloads from being starved of resources during periods of high contention. Gain practical insight into configuring priority and preemption in multi-cluster environments while keeping ML jobs running efficiently. This presentation is particularly valuable for Kubernetes administrators and engineers responsible for managing large-scale machine learning workloads.
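The preemption behavior described above is expressed in Karmada through the `priority` and `preemption` fields of a PropagationPolicy. The sketch below is illustrative, not Bloomberg's actual configuration: the deployment names, cluster names, and priority values are hypothetical, and it assumes Karmada v1.7+ (where PropagationPolicy preemption was introduced).

```yaml
# Hypothetical example: a high-priority policy that may preempt
# lower-priority policies claiming the same resources.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: critical-training-policy        # illustrative name
spec:
  priority: 100                         # higher number wins contention
  preemption: Always                    # allow preempting lower-priority policies
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: critical-training-job       # hypothetical workload
  placement:
    clusterAffinity:
      clusterNames:
        - gpu-cluster-a                 # hypothetical member clusters
        - gpu-cluster-b
---
# A lower-priority batch policy that can be preempted under contention.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: batch-experiment-policy
spec:
  priority: 10
  preemption: Never                     # this policy never preempts others
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: batch-experiment-job
  placement:
    clusterAffinity:
      clusterNames:
        - gpu-cluster-a
```

When both policies match a resource, Karmada prefers the higher-priority policy; with `preemption: Always`, the high-priority policy can also take over resources already claimed by a lower-priority one.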
Syllabus
AI Workload Preemption in a Multi-Cluster Scheduling System at Bloomberg - Leon Zhou & Wei-Cheng Lai
Taught by
CNCF [Cloud Native Computing Foundation]