USENIX ATC ’22 - PilotFish: Harvesting Free Cycles of Cloud Gaming with Deep Learning Training
Wei Zhang and Binghao Chen, Shanghai Jiao Tong University; Zhenhua Han, Microsoft Research; Quan Chen, Shanghai Jiao Tong University; Peng Cheng, Fan Yang, Ran Shu, and Yuqing Yang, Microsoft Research; Minyi Guo, Shanghai Jiao Tong University
Cloud gaming services have become important workloads in cloud datacenters. However, our investigation shows that a cloud gaming service cannot saturate modern cloud GPUs. One way to improve GPU utilization is to co-locate multiple workloads on one GPU, which is challenging for cloud gaming due to its highly fluctuating and unpredictable GPU usage pattern. In this paper, we present PilotFish, a high-performance system that harvests the free GPU cycles of cloud gaming with deep learning (DL) training, while incurring almost zero interference to cloud gaming. We co-locate DL training jobs with cloud gaming because they have stable and predictable workloads and no strict latency requirement. In more detail, PilotFish captures the idle periods of the game’s GPU usage with low-overhead instrumentation of graphics libraries at sub-millisecond granularity. To avoid potential interference with cloud gaming, PilotFish schedules training computation kernels only when they can finish within the idle GPU periods, and preempts straggler kernels running longer than expected. Our evaluation on popular cloud games and DL models shows PilotFish can harvest up to 85.1% of the idle GPU time from cloud gaming with no interference.
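The scheduling idea in the abstract can be illustrated with a small sketch. The Python snippet below is a hypothetical rendition, not PilotFish's actual implementation: the callables `launch`, `preempt`, and `predict`, the `handle.done()` interface, the safety margin, and the 1.5x straggler threshold are all assumptions made for illustration. It only shows the core policy: launch a training kernel into a detected idle window if its predicted duration fits before the window closes, and preempt it if it overruns its prediction.

```python
import time

SAFETY_MARGIN = 0.0005  # seconds; hypothetical guard band before the game needs the GPU again


def harvest_idle_window(idle_window_s, pending_kernels, launch, preempt, predict):
    """Run as many training kernels as fit inside one idle GPU window.

    idle_window_s   -- length of the idle period reported by the (assumed)
                       graphics-library instrumentation, in seconds
    pending_kernels -- list of training kernels awaiting execution
    launch(k)       -- asynchronously launches kernel k, returns a handle (assumed API)
    preempt(h)      -- stops a straggler kernel via its handle (assumed API)
    predict(k)      -- predicted duration of kernel k, e.g. from past iterations
    """
    deadline = time.monotonic() + idle_window_s - SAFETY_MARGIN
    while pending_kernels:
        kernel = pending_kernels[0]
        expected = predict(kernel)
        remaining = deadline - time.monotonic()
        # Launch only if the kernel is expected to finish before the idle
        # window ends; otherwise stop harvesting this window.
        if expected > remaining:
            break
        pending_kernels.pop(0)
        start = time.monotonic()
        handle = launch(kernel)
        # Preempt stragglers that run noticeably longer than predicted,
        # so the game never waits on a training kernel.
        while not handle.done():
            if time.monotonic() - start > expected * 1.5:
                preempt(handle)
                break
            time.sleep(0.0001)
```

In this sketch the conservative choice is to skip a kernel rather than risk overlapping with the game's next GPU burst, which mirrors the paper's goal of near-zero interference at the cost of leaving some idle time unharvested.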
View the full USENIX ATC ’22 program at