
Risk-sensitive control of Markov decision processes

In many revenue management applications, risk-averse decision-making is crucial. In dynamic settings, however, it is challenging to find the right balance between maximizing expected rewards and minimizing various kinds of risk. Existing approaches use utility functions, chance constraints, or (conditional) value-at-risk considerations to influence the distribution of rewards in a preferred way. Nevertheless, common techniques are not flexible enough and are typically numerically complex. In our model, we exploit the fact that a distribution is characterized by its mean and higher moments. We present a multi-valued dynamic programming heuristic to compute risk-sensitive feedback policies that directly control the moments of future rewards. Our approach is based on recursive formulations of higher moments and does not require an extension of the state space. Finally, we propose a self-tuning algorithm that identifies feedback policies approximating predetermined (risk-sensitive) target distributions. We illustrate the effectiveness and the flexibility of our approach for different dynamic pricing scenarios. (C) 2020 Elsevier Ltd. All rights reserved.
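The abstract's key step, recursive formulations of higher moments on the unextended state space, can be sketched as follows. This is a standard moment recursion stated under our own illustrative assumptions (finite horizon T, feedback policy \pi, transition probabilities p(x' | x, a), deterministic one-step rewards r(x, a, x')); it is not necessarily the paper's exact formulation. Writing V_t^{(m)}(x) for the m-th moment of the remaining reward, the binomial theorem yields a backward recursion:

V_t^{(m)}(x) = \mathbb{E}_\pi\Big[\Big(\textstyle\sum_{s=t}^{T} r_s\Big)^{m} \,\Big|\, x_t = x\Big]
             = \sum_{x'} p\big(x' \mid x, \pi_t(x)\big) \sum_{k=0}^{m} \binom{m}{k}\, r\big(x, \pi_t(x), x'\big)^{m-k}\, V_{t+1}^{(k)}(x'),

with terminal conditions V_{T+1}^{(0)} \equiv 1 and V_{T+1}^{(k)} \equiv 0 for k \ge 1. The mean is V^{(1)}, and the variance of future rewards follows as V^{(2)} - (V^{(1)})^2, so moment-based risk measures can be evaluated, and traded off against expected reward, without extending the state space.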

Metadata
Author details:Rainer Schlosser
DOI:https://doi.org/10.1016/j.cor.2020.104997
ISSN:0305-0548
Title of parent work (English):Computers & Operations Research: and their applications to problems of world concern
Subtitle (English):a moment-based approach with target distributions
Publisher:Elsevier
Place of publishing:Oxford
Publication type:Article
Language:English
Date of first publication:2020/06/02
Publication year:2020
Release date:2023/01/19
Tag:Markov decision process; dynamic; dynamic programming; heuristics; pricing; risk aversion
Volume:123
Article number:104997
Number of pages:14
Organizational units:An-Institute / Hasso-Plattner-Institut für Digital Engineering gGmbH
DDC classification:6 Technology, medicine, applied sciences / 65 Management, public relations / 650 Management and auxiliary services
Peer review:Refereed