TY - JOUR
A1 - Thamsen, Lauritz
A1 - Beilharz, Jossekin Jakob
A1 - Tran, Vinh Thuy
A1 - Nedelkoski, Sasho
A1 - Kao, Odej
T1 - Mary, Hugo, and Hugo*
BT - learning to schedule distributed data-parallel processing jobs on shared clusters
JF - Concurrency and Computation: Practice & Experience
N2 - Distributed data-parallel processing systems like MapReduce, Spark, and Flink are popular for analyzing large datasets using cluster resources. Resource management systems like YARN or Mesos in turn allow multiple data-parallel processing jobs to share cluster resources in temporary containers. Often, the containers do not isolate resource usage to achieve high degrees of overall resource utilization despite overprovisioning and the often fluctuating utilization of specific jobs. However, some combinations of jobs utilize resources better and interfere less with each other when running on the same shared nodes than others. This article presents an approach for improving the resource utilization and job throughput when scheduling recurring distributed data-parallel processing jobs in shared clusters. The approach is based on reinforcement learning and a measure of co-location goodness to have cluster schedulers learn over time which jobs are best executed together on shared resources. We evaluated this approach over the last years with three prototype schedulers that build on each other: Mary, Hugo, and Hugo*. For the evaluation we used exemplary Flink and Spark jobs from different application domains and clusters of commodity nodes managed by YARN. The results of these experiments show that our approach can increase resource utilization and job throughput significantly.
KW - cluster resource management
KW - distributed data-parallel processing
KW - job co-location
KW - reinforcement learning
KW - self-learning scheduler
Y1 - 2020
U6 - https://doi.org/10.1002/cpe.5823
SN - 1532-0626
SN - 1532-0634
VL - 33
IS - 18
PB - Wiley
CY - Hoboken
ER -
TY - JOUR
A1 - Panzer, Marcel
A1 - Bender, Benedict
T1 - Deep reinforcement learning in production systems
BT - a systematic literature review
JF - International Journal of Production Research
N2 - Shortening product development cycles and fully customizable products pose major challenges for production systems. These not only have to cope with an increased product diversity but also enable high throughputs and provide a high adaptability and robustness to process variations and unforeseen incidents. To overcome these challenges, deep Reinforcement Learning (RL) has been increasingly applied for the optimization of production systems. Unlike other machine learning methods, deep RL operates on recently collected sensor-data in direct interaction with its environment and enables real-time responses to system changes. Although deep RL is already being deployed in production systems, a systematic review of the results has not yet been established. The main contribution of this paper is to provide researchers and practitioners an overview of applications and to motivate further implementations and research of deep RL supported production systems. Findings reveal that deep RL is applied in a variety of production domains, contributing to data-driven and flexible processes. In most applications, conventional methods were outperformed and implementation efforts or dependence on human experience were reduced. Nevertheless, future research must focus more on transferring the findings to real-world systems to analyze safety aspects and demonstrate reliability under prevailing conditions.
KW - machine learning
KW - reinforcement learning
KW - production control
KW - production planning
KW - manufacturing processes
KW - systematic literature review
Y1 - 2021
U6 - https://doi.org/10.1080/00207543.2021.1973138
SN - 1366-588X
SN - 0020-7543
VL - 60
IS - 13
PB - Taylor & Francis
CY - London
ER -
TY - JOUR
A1 - Nebe, Stephan
A1 - Kroemer, Nils B.
A1 - Schad, Daniel
A1 - Bernhardt, Nadine
A1 - Sebold, Miriam Hannah
A1 - Mueller, Dirk K.
A1 - Scholl, Lucie
A1 - Kuitunen-Paul, Sören
A1 - Heinz, Andreas
A1 - Rapp, Michael Armin
A1 - Huys, Quentin J. M.
A1 - Smolka, Michael N.
T1 - No association of goal-directed and habitual control with alcohol consumption in young adults
JF - Addiction Biology
N2 - Alcohol dependence is a mental disorder that has been associated with an imbalance in behavioral control favoring model-free habitual over model-based goal-directed strategies. It is as yet unknown, however, whether such an imbalance reflects a predisposing vulnerability or results as a consequence of repeated and/or excessive alcohol exposure. We, therefore, examined the association of alcohol consumption with model-based goal-directed and model-free habitual control in 188 18-year-old social drinkers in a two-step sequential decision-making task while undergoing functional magnetic resonance imaging, before prolonged alcohol misuse could have led to severe neurobiological adaptations. Behaviorally, participants showed a mixture of model-free and model-based decision-making as observed previously. Measures of impulsivity were positively related to alcohol consumption. In contrast, neither model-free nor model-based decision weights nor the trade-off between them were associated with alcohol consumption. There were also no significant associations between alcohol consumption and neural correlates of model-free or model-based decision quantities in either ventral striatum or ventromedial prefrontal cortex.
Exploratory whole-brain functional magnetic resonance imaging analyses with a lenient threshold revealed early onset of drinking to be associated with an enhanced representation of model-free reward prediction errors in the posterior putamen. These results suggest that an imbalance between model-based goal-directed and model-free habitual control might not be a trait marker of alcohol intake per se.
KW - alcohol
KW - goal-directed
KW - reinforcement learning
Y1 - 2017
U6 - https://doi.org/10.1111/adb.12490
SN - 1355-6215
SN - 1369-1600
VL - 23
IS - 1
SP - 379
EP - 393
PB - Wiley
CY - Hoboken
ER -