
Mary, Hugo, and Hugo*

  • Distributed data-parallel processing systems like MapReduce, Spark, and Flink are popular for analyzing large datasets using cluster resources. Resource management systems like YARN or Mesos in turn allow multiple data-parallel processing jobs to share cluster resources in temporary containers. Often, the containers do not isolate resource usage to achieve high degrees of overall resource utilization despite overprovisioning and the often fluctuating utilization of specific jobs. However, some combinations of jobs utilize resources better and interfere less with each other when running on the same shared nodes than others. This article presents an approach for improving the resource utilization and job throughput when scheduling recurring distributed data-parallel processing jobs in shared clusters. The approach is based on reinforcement learning and a measure of co-location goodness to have cluster schedulers learn over time which jobs are best executed together on shared resources. We evaluated this approach over the last years with three prototype schedulers that build on each other: Mary, Hugo, and Hugo*. For the evaluation we used exemplary Flink and Spark jobs from different application domains and clusters of commodity nodes managed by YARN. The results of these experiments show that our approach can increase resource utilization and job throughput significantly.
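The core idea in the abstract — a scheduler that learns over time which recurring jobs co-locate well — can be illustrated with a minimal sketch. The class below is a hypothetical epsilon-greedy bandit over job pairs, not the paper's actual Mary/Hugo/Hugo* implementation; the class name, method names, and the way goodness feedback is obtained are all assumptions for illustration.

```python
import random
from collections import defaultdict


class CoLocationScheduler:
    """Illustrative sketch: learn which recurring jobs run well together
    on shared nodes, using a measured co-location goodness as reward.
    Epsilon-greedy exploration over unordered job pairs (an assumption;
    the paper's schedulers may use a different learning strategy)."""

    def __init__(self, epsilon=0.1, seed=42):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.total = defaultdict(float)  # cumulative goodness per pair
        self.count = defaultdict(int)    # number of observations per pair

    def _key(self, a, b):
        # Co-location is symmetric, so store pairs in canonical order.
        return tuple(sorted((a, b)))

    def choose_partner(self, job, candidates):
        """Pick a co-location partner for `job` from `candidates`."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(candidates)  # explore a random pairing
        # Exploit: candidate with the highest mean goodness seen so far.
        def mean(c):
            k = self._key(job, c)
            return self.total[k] / self.count[k] if self.count[k] else 0.0
        return max(candidates, key=mean)

    def report_goodness(self, a, b, goodness):
        """Feed back a measured co-location goodness for a finished run,
        e.g. derived from job runtimes when sharing nodes vs. alone."""
        k = self._key(a, b)
        self.total[k] += goodness
        self.count[k] += 1
```

With repeated feedback, the scheduler increasingly prefers pairs that interfered little in past runs, which is the self-learning behavior the abstract describes at a high level.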

Metadata
Author details:Lauritz Thamsen, Jossekin Jakob Beilharz, Vinh Thuy Tran, Sasho Nedelkoski, Odej Kao
DOI:https://doi.org/10.1002/cpe.5823
ISSN:1532-0626
ISSN:1532-0634
Title of parent work (English):Concurrency and Computation: Practice and Experience
Subtitle (English):learning to schedule distributed data-parallel processing jobs on shared clusters
Publisher:Wiley
Place of publishing:Hoboken
Publication type:Article
Language:English
Date of first publication:2020/05/21
Publication year:2020
Release date:2023/04/25
Tag:cluster resource management; co-location; distributed data-parallel processing; job; reinforcement learning; self-learning scheduler
Volume:33
Issue:18
Article number:e5823
Number of pages:12
Funding institution:Federal Ministry of Education and Research (BMBF) [01IS14013A, 01IS18025A]
Organizational units:An-Institute / Hasso-Plattner-Institut für Digital Engineering gGmbH
DDC classification:0 Computer science, information, general works / 00 Computer science, knowledge, systems / 000 Computer science, information, general works
Peer review:Peer-reviewed
Publishing method:Open Access / Hybrid Open-Access
License:CC BY - Attribution 4.0 International