TY - GEN
T1 - Scheduler technologies in support of high performance data analysis
AU - Reuther, Albert
AU - Byun, Chansup
AU - Arcand, William
AU - Bestor, David
AU - Bergeron, Bill
AU - Hubbell, Matthew
AU - Jones, Michael
AU - Michaleas, Peter
AU - Prout, Andrew
AU - Rosa, Antonio
AU - Kepner, Jeremy
N1 - Funding Information:
This material is based upon work supported by the National Science Foundation under Grant No. DMS-1312831.
Publisher Copyright:
© 2016 IEEE.
PY - 2016/11/28
Y1 - 2016/11/28
N2 - Job schedulers are a key component of scalable computing infrastructures. They orchestrate all of the work executed on the computing infrastructure and directly impact the effectiveness of the system. Recently, job workloads have diversified from long-running, synchronously-parallel simulations to include short-duration, independently parallel high performance data analysis (HPDA) jobs. Each of these job types requires different features and scheduler tuning to run efficiently. A number of schedulers have been developed to address both job workload and computing system heterogeneity. High performance computing (HPC) schedulers were designed to schedule large-scale scientific modeling and simulations on supercomputers. Big Data schedulers were designed to schedule data processing and analytic jobs on clusters. This paper compares and contrasts the features of HPC and Big Data schedulers with a focus on accommodating both scientific computing and high performance data analytic workloads. Job latency is critical for the efficient utilization of scalable computing infrastructures, and this paper presents the results of job launch benchmarking of several current schedulers: Slurm, Son of Grid Engine, Mesos, and Yarn. We find that all of these schedulers have low utilization for short-running jobs. Furthermore, employing multilevel scheduling significantly improves the utilization across all schedulers.
KW - Scheduler
KW - data analytics
KW - high performance computing
KW - job scheduler
KW - resource manager
UR - http://www.scopus.com/inward/record.url?scp=85007049697&partnerID=8YFLogxK
U2 - 10.1109/HPEC.2016.7761604
DO - 10.1109/HPEC.2016.7761604
M3 - Conference contribution
AN - SCOPUS:85007049697
T3 - 2016 IEEE High Performance Extreme Computing Conference, HPEC 2016
BT - 2016 IEEE High Performance Extreme Computing Conference, HPEC 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2016 IEEE High Performance Extreme Computing Conference, HPEC 2016
Y2 - 13 September 2016 through 15 September 2016
ER -