Dynamic pricing under competition using reinforcement learning
Dynamic pricing is widely regarded as a way to gain an advantage over competitors in modern online markets. Recent advances in Reinforcement Learning (RL) have produced more capable algorithms that can be applied to pricing problems. In this paper, we study the performance of Deep Q-Networks (DQN) and Soft Actor-Critic (SAC) in different market models. We consider tractable duopoly settings, where optimal solutions derived by dynamic programming can be used for verification, as well as oligopoly settings, which are usually intractable due to the curse of dimensionality. We find that both algorithms provide reasonable results, with SAC outperforming DQN. Moreover, we show that under certain conditions, RL algorithms can be forced into collusion by their competitors without direct communication.
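The value-based pricing setup the abstract describes can be illustrated with a deliberately simplified sketch: tabular Q-learning (a stand-in for DQN) against a competitor with a fixed price, with a discount factor of zero so the problem reduces to a single-state bandit. The price grid, demand rule, and competitor price below are illustrative assumptions, not the market models analyzed in the paper.

```python
import random

# Toy duopoly: a learning seller prices against a competitor posting a
# fixed price. Demand is all-or-nothing: the cheaper seller sells one
# unit, a tie splits the sale.
PRICES = list(range(1, 11))   # discrete price grid (assumption)
COMP_PRICE = 7                # competitor's fixed price (assumption)

def profit(my_price: int) -> float:
    """Per-period profit under the winner-takes-all demand rule."""
    if my_price < COMP_PRICE:
        return float(my_price)      # we win the sale
    if my_price == COMP_PRICE:
        return my_price / 2         # split the market on a tie
    return 0.0                      # competitor wins

def train(steps: int = 5000, alpha: float = 0.1, eps: float = 0.2,
          seed: int = 0) -> dict:
    """Epsilon-greedy tabular Q-learning with gamma = 0 (a bandit)."""
    rng = random.Random(seed)
    q = {p: 0.0 for p in PRICES}
    for _ in range(steps):
        if rng.random() < eps:
            p = rng.choice(PRICES)            # explore
        else:
            p = max(q, key=q.get)             # exploit current estimate
        q[p] += alpha * (profit(p) - q[p])    # tabular update toward reward
    return q

q = train()
best_price = max(q, key=q.get)
# The greedy policy learns to undercut the competitor by one grid step.
```

In this toy model the learned policy simply undercuts the rival; the paper's point is that in richer, repeated settings such learners can instead be steered into tacitly collusive (supra-competitive) prices.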
| Author details: | Alexander Kastius, Rainer Schlosser |
|---|---|
| DOI: | https://doi.org/10.1057/s41272-021-00285-3 |
| ISSN: | 1476-6930 |
| ISSN: | 1477-657X |
| Title of parent work (English): | Journal of Revenue and Pricing Management |
| Publisher: | Springer Nature Switzerland AG |
| Place of publishing: | Cham |
| Publication type: | Article |
| Language: | English |
| Date of first publication: | 2021/02/27 |
| Publication year: | 2022 |
| Release date: | 2023/02/09 |
| Tags: | Competition; Dynamic pricing; E-commerce; Price collusion; Reinforcement learning |
| Volume: | 21 |
| Issue: | 1 |
| Number of pages: | 14 |
| First page: | 50 |
| Last page: | 63 |
| Funding institution: | Projekt DEAL |
| Organizational units: | An-Institute / Hasso-Plattner-Institut für Digital Engineering gGmbH |
| DDC classification: | 3 Social sciences / 33 Economics / 330 Economics |
| Peer review: | Refereed |
| Publishing method: | Open Access / Hybrid Open Access |
| License: | CC BY 4.0 International (Attribution) |