{"acknowledgement":"Krishnendu Chatterjee is supported by the Austrian Science Fund (FWF) NFN Grant No. S11407-N23 (RiSE/SHiNE), and COST Action GAMENET. Tomáš Brázdil is supported by the Grant Agency of Masaryk University grant no. MUNI/G/0739/2017 and by the Czech Science Foundation grant No. 18-11193S. Petr Novotný and Jiří Vahala are supported by the Czech Science Foundation grant No. GJ19-15134Y.","page":"9794-9801","department":[{"_id":"KrCh"}],"author":[{"first_name":"Tomáš","full_name":"Brázdil, Tomáš","last_name":"Brázdil"},{"first_name":"Krishnendu","last_name":"Chatterjee","full_name":"Chatterjee, Krishnendu","id":"2E5DCA20-F248-11E8-B48F-1D18A9856A87","orcid":"0000-0002-4561-241X"},{"full_name":"Novotný, Petr","last_name":"Novotný","first_name":"Petr"},{"last_name":"Vahala","full_name":"Vahala, Jiří","first_name":"Jiří"}],"publication":"Proceedings of the 34th AAAI Conference on Artificial Intelligence","status":"public","_id":"15055","oa_version":"Preprint","month":"04","project":[{"grant_number":"S11407","name":"Game Theory","call_identifier":"FWF","_id":"25863FF4-B435-11E9-9278-68D0E5697425"}],"publication_status":"published","conference":{"end_date":"2020-02-12","location":"New York, NY, United States","start_date":"2020-02-07","name":"AAAI: Conference on Artificial Intelligence"},"date_updated":"2024-03-04T08:30:16Z","date_created":"2024-03-04T08:07:22Z","language":[{"iso":"eng"}],"issue":"06","publication_identifier":{"issn":["2374-3468"]},"year":"2020","volume":34,"main_file_link":[{"open_access":"1","url":"https://doi.org/10.48550/arXiv.2002.12086"}],"day":"03","type":"journal_article","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","publisher":"Association for the Advancement of Artificial Intelligence","abstract":[{"text":"Markov decision processes (MDPs) are the de facto framework for sequential decision making in the presence of stochastic uncertainty. 
A classical optimization criterion for MDPs is to maximize the expected discounted-sum payoff, which ignores low-probability catastrophic events with highly negative impact on the system. On the other hand, risk-averse policies require the probability of undesirable events to be below a given threshold, but they do not account for optimization of the expected payoff. We consider MDPs with discounted-sum payoff with failure states which represent catastrophic outcomes. The objective of risk-constrained planning is to maximize the expected discounted-sum payoff among risk-averse policies that ensure the probability to encounter a failure state is below a desired threshold. Our main contribution is an efficient risk-constrained planning algorithm that combines UCT-like search with a predictor learned through interaction with the MDP (in the style of AlphaZero) and with a risk-constrained action selection via linear programming. We demonstrate the effectiveness of our approach with experiments on classical MDPs from the literature, including benchmarks with an order of 10^6 states.","lang":"eng"}],"keyword":["General Medicine"],"quality_controlled":"1","article_type":"original","article_processing_charge":"No","citation":{"apa":"Brázdil, T., Chatterjee, K., Novotný, P., & Vahala, J. (2020). Reinforcement learning of risk-constrained policies in Markov decision processes. Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, NY, United States: Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v34i06.6531","chicago":"Brázdil, Tomáš, Krishnendu Chatterjee, Petr Novotný, and Jiří Vahala. “Reinforcement Learning of Risk-Constrained Policies in Markov Decision Processes.” Proceedings of the 34th AAAI Conference on Artificial Intelligence. Association for the Advancement of Artificial Intelligence, 2020. https://doi.org/10.1609/aaai.v34i06.6531.","ieee":"T. Brázdil, K. Chatterjee, P. Novotný, and J. 
Vahala, “Reinforcement learning of risk-constrained policies in Markov decision processes,” Proceedings of the 34th AAAI Conference on Artificial Intelligence, vol. 34, no. 06. Association for the Advancement of Artificial Intelligence, pp. 9794–9801, 2020.","ama":"Brázdil T, Chatterjee K, Novotný P, Vahala J. Reinforcement learning of risk-constrained policies in Markov decision processes. Proceedings of the 34th AAAI Conference on Artificial Intelligence. 2020;34(06):9794-9801. doi:10.1609/aaai.v34i06.6531","short":"T. Brázdil, K. Chatterjee, P. Novotný, J. Vahala, Proceedings of the 34th AAAI Conference on Artificial Intelligence 34 (2020) 9794–9801.","mla":"Brázdil, Tomáš, et al. “Reinforcement Learning of Risk-Constrained Policies in Markov Decision Processes.” Proceedings of the 34th AAAI Conference on Artificial Intelligence, vol. 34, no. 06, Association for the Advancement of Artificial Intelligence, 2020, pp. 9794–801, doi:10.1609/aaai.v34i06.6531.","ista":"Brázdil T, Chatterjee K, Novotný P, Vahala J. 2020. Reinforcement learning of risk-constrained policies in Markov decision processes. Proceedings of the 34th AAAI Conference on Artificial Intelligence. 34(06), 9794–9801."},"doi":"10.1609/aaai.v34i06.6531","external_id":{"arxiv":["2002.12086"]},"date_published":"2020-04-03T00:00:00Z","intvolume":" 34","oa":1,"title":"Reinforcement learning of risk-constrained policies in Markov decision processes"}