Risk-sensitive control of Markov decision processes

In many revenue management applications, risk-averse decision-making is crucial. In dynamic settings, however, it is challenging to find the right balance between maximizing expected rewards and minimizing various kinds of risk. Existing approaches use utility functions, chance constraints, or (conditional) value-at-risk considerations to influence the distribution of rewards in a preferred way. These techniques, however, are often not flexible enough and tend to be numerically complex. In our model, we exploit the fact that a distribution is characterized by its mean and higher moments. We present a multi-valued dynamic programming heuristic to compute risk-sensitive feedback policies that directly control the moments of future rewards. Our approach is based on recursive formulations of higher moments and does not require an extension of the state space. Finally, we propose a self-tuning algorithm that identifies feedback policies approximating predetermined (risk-sensitive) target distributions. We illustrate the effectiveness and the flexibility of our approach for different dynamic pricing scenarios. (C) 2020 Elsevier Ltd. All rights reserved.
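To make the moment-based idea concrete: for a finite-horizon MDP, the k-th moment of the reward-to-go satisfies the recursion M_k(s, t) = Σ_{s'} p(s'|s, a) Σ_{j=0}^{k} (k choose j) r(s, a, s')^j M_{k-j}(s', t+1), with M_0 ≡ 1, so mean and variance propagate backwards without enlarging the state space. The sketch below illustrates this for a toy dynamic-pricing problem; the demand model, the parameters, and the simple mean-std action score are illustrative assumptions, not the paper's specification.

```python
import math

# Toy dynamic-pricing MDP (illustrative assumptions, not the paper's model):
# state = remaining inventory n, action = posted price, one potential buyer
# per period who purchases with a price-dependent probability.
PRICES = [50.0, 80.0, 110.0]   # admissible prices (actions)
HORIZON = 20                   # number of selling periods
CAPACITY = 5                   # initial inventory
K = 2                          # track the first K moments (mean + 2nd moment)
LAM = 0.5                      # risk-aversion weight in the mean-std score

def sale_prob(price):
    """Assumed linear demand model: higher prices sell less often."""
    return max(0.0, 1.0 - price / 150.0)

# moments[t][n] = (M_1, ..., M_K) of the reward-to-go from period t with
# n units left, under the greedy risk-sensitive policy constructed below.
moments = [[(0.0,) * K for _ in range(CAPACITY + 1)] for _ in range(HORIZON + 1)]
policy = [[PRICES[0]] * (CAPACITY + 1) for _ in range(HORIZON)]

for t in range(HORIZON - 1, -1, -1):
    for n in range(CAPACITY + 1):
        best_score, best_price, best_moments = None, None, None
        for price in PRICES:
            p = sale_prob(price) if n > 0 else 0.0
            # Transition branches: (probability, immediate reward, next state).
            branches = [(1.0 - p, 0.0, n)]
            if n > 0:
                branches.append((p, price, n - 1))
            # Moment recursion: E[(r + R_{t+1})^k] expanded binomially.
            cand = []
            for k in range(1, K + 1):
                m_k = 0.0
                for prob, r, n_next in branches:
                    nxt = (1.0,) + moments[t + 1][n_next]  # prepend M_0 = 1
                    m_k += prob * sum(math.comb(k, j) * r**j * nxt[k - j]
                                      for j in range(k + 1))
                cand.append(m_k)
            mean, second = cand[0], cand[1]
            std = math.sqrt(max(0.0, second - mean**2))
            score = mean - LAM * std   # simple risk-adjusted action score
            if best_score is None or score > best_score:
                best_score, best_price, best_moments = score, price, tuple(cand)
        policy[t][n] = best_price
        moments[t][n] = best_moments

m1, m2 = moments[0][CAPACITY]
print(f"initial price: {policy[0][CAPACITY]}")
print(f"mean reward: {m1:.2f}, std: {math.sqrt(max(0.0, m2 - m1**2)):.2f}")
```

Because the stored moments are those induced by the greedy risk-sensitive rule itself rather than by an exact optimization over all policies, this backward pass is a heuristic, which matches the paper's own characterization of its multi-valued dynamic programming approach.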

Metadata
Author: Rainer Schlosser
DOI: https://doi.org/10.1016/j.cor.2020.104997
ISSN: 0305-0548
Title of parent work (English): Computers & Operations Research : and their applications to problems of world concern
Subtitle (English): A moment-based approach with target distributions
Publisher: Elsevier
Place of publication: Oxford
Publication type: Scientific article
Language: English
Date of first publication: June 2, 2020
Year of publication: 2020
Release date: January 19, 2023
Keywords / tags: Markov decision process; dynamic pricing; dynamic programming; heuristics; risk aversion
Volume: 123
Article number: 104997
Number of pages: 14
Organizational units: Affiliated institutes / Hasso-Plattner-Institut für Digital Engineering gGmbH
DDC classification: 6 Technology, medicine, applied sciences / 65 Management, public relations / 650 Management and auxiliary services
Peer review: Refereed