TY - JOUR
A1 - Schlosser, Rainer
T1 - Risk-sensitive control of Markov decision processes
BT - a moment-based approach with target distributions
JF - Computers & operations research : and their applications to problems of world concern
N2 - In many revenue management applications, risk-averse decision-making is crucial. In dynamic settings, however, it is challenging to find the right balance between maximizing expected rewards and minimizing various kinds of risk. In existing approaches, utility functions, chance constraints, or (conditional) value-at-risk considerations are used to influence the distribution of rewards in a preferred way. Nevertheless, common techniques are not flexible enough and are typically numerically complex. In our model, we exploit the fact that a distribution is characterized by its mean and higher moments. We present a multi-valued dynamic programming heuristic to compute risk-sensitive feedback policies that are able to directly control the moments of future rewards. Our approach is based on recursive formulations of higher moments and does not require an extension of the state space. Finally, we propose a self-tuning algorithm, which allows one to identify feedback policies that approximate predetermined (risk-sensitive) target distributions. We illustrate the effectiveness and the flexibility of our approach for different dynamic pricing scenarios. (C) 2020 Elsevier Ltd. All rights reserved.
KW - risk aversion
KW - Markov decision process
KW - dynamic programming
KW - dynamic pricing
KW - heuristics
Y1 - 2020
U6 - https://doi.org/10.1016/j.cor.2020.104997
SN - 0305-0548
VL - 123
PB - Elsevier
CY - Oxford
ER -
TY - JOUR
A1 - Schlosser, Rainer
T1 - Heuristic mean-variance optimization in Markov decision processes using state-dependent risk aversion
JF - IMA journal of management mathematics / Institute of Mathematics and Its Applications
N2 - In dynamic decision problems, it is challenging to find the right balance between maximizing expected rewards and minimizing risks. In this paper, we consider NP-hard mean-variance (MV) optimization problems in Markov decision processes with a finite time horizon. We present a heuristic approach to solve MV problems, which is based on state-dependent risk aversion and efficient dynamic programming techniques. Our approach can also be applied to mean-semivariance (MSV) problems, which particularly focus on the downside risk. We demonstrate the applicability and the effectiveness of our heuristic for dynamic pricing applications. Using reproducible examples, we show that our approach outperforms existing state-of-the-art benchmark models for MV and MSV problems while also providing competitive runtimes. Further, compared to models based on constant risk levels, we find that state-dependent risk aversion allows one to intervene more effectively when sales processes deviate from their planned paths. Our concepts are domain independent, easy to implement, and of low computational complexity.
KW - risk aversion
KW - mean-variance optimization
KW - Markov decision process
KW - dynamic programming
KW - dynamic pricing
KW - heuristics
Y1 - 2021
U6 - https://doi.org/10.1093/imaman/dpab009
SN - 1471-678X
SN - 1471-6798
VL - 33
IS - 2
SP - 181
EP - 199
PB - Oxford Univ. Press
CY - Oxford
ER -