Reinforcement learning (RL) algorithms output policies that specify which action an agent should take in a given state. However, internal faults in the agent can sometimes arise during policy execution, causing actions to have unexpected effects. In this work, we aim to diagnose such faults and infer their root cause. We consider two types of diagnosis problems. In the first, which we call RLDXw, we assume we only know what a normal execution looks like. In the second, called RLDXs, we assume we have models of the faulty behavior of a component, which we call fault modes. The solution to RLDXw is the time step at which a fault occurred for the first time. The solution to RLDXs is more informative: it is the fault mode according to which the RL task was executed. Solving these problems is useful in practice because it facilitates efficient repair of faulty agents by focusing repair efforts on specific actions. We formally define RLDXw and RLDXs and design two algorithms, WFMa and SFMa, for solving them. We evaluate our algorithms on a benchmark of RL domains and discuss their strengths and limitations. As the number of observed states increases, both WFMa and SFMa run faster (up to 6.5 times faster). Additionally, the runtime of SFMa increases linearly with the number of candidate fault modes.
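To make the two problem outputs concrete, the following Python sketch contrasts them under strong simplifying assumptions (deterministic transitions, fully observed states, and a fault that affects every action from the start). The helper names (`first_fault_step`, `consistent_fault_modes`, `transition`) and the trajectory representation are hypothetical illustrations of the problem definitions, not the WFMa or SFMa algorithms from the paper.

```python
# Illustrative sketch only: naive framings of the RLDXw and RLDXs outputs.
# All names and the fault/transition interfaces are assumptions, not the
# paper's WFMa/SFMa algorithms.

from typing import Callable, List, Optional, Sequence, TypeVar

State = TypeVar("State")
Action = TypeVar("Action")


def first_fault_step(
    observed_states: Sequence[State],
    policy: Callable[[State], Action],
    transition: Callable[[State, Action], State],
) -> Optional[int]:
    """RLDXw-style output: the first time step where the observed trajectory
    deviates from what a normal (fault-free) execution would produce."""
    for t in range(len(observed_states) - 1):
        expected_next = transition(observed_states[t], policy(observed_states[t]))
        if expected_next != observed_states[t + 1]:
            return t + 1  # fault first manifested between step t and t+1
    return None  # observations are consistent with normal behavior


def consistent_fault_modes(
    observed_states: Sequence[State],
    policy: Callable[[State], Action],
    transition: Callable[[State, Action], State],
    fault_modes: List[Callable[[Action], Action]],
) -> List[int]:
    """RLDXs-style output: indices of candidate fault modes whose effect on
    the policy's actions reproduces the observed trajectory."""
    consistent = []
    for i, fault in enumerate(fault_modes):
        ok = True
        for t in range(len(observed_states) - 1):
            faulty_action = fault(policy(observed_states[t]))
            if transition(observed_states[t], faulty_action) != observed_states[t + 1]:
                ok = False
                break
        if ok:
            consistent.append(i)
    return consistent
```

The consistency check above deliberately assumes the fault mode alters every action; handling non-intermittent faults that begin partway through an execution, and doing so efficiently, is what the paper's algorithms address.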
@InProceedings{natan_et_al:OASIcs.DX.2024.23,
  author    = {Natan, Avraham and Stern, Roni and Kalech, Meir},
  title     = {{Diagnosing Non-Intermittent Anomalies in Reinforcement Learning Policy Executions}},
  booktitle = {35th International Conference on Principles of Diagnosis and Resilient Systems (DX 2024)},
  pages     = {23:1--23:13},
  series    = {Open Access Series in Informatics (OASIcs)},
  ISBN      = {978-3-95977-356-0},
  ISSN      = {2190-6807},
  year      = {2024},
  volume    = {125},
  editor    = {Pill, Ingo and Natan, Avraham and Wotawa, Franz},
  publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address   = {Dagstuhl, Germany},
  URL       = {https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.DX.2024.23},
  URN       = {urn:nbn:de:0030-drops-221151},
  doi       = {10.4230/OASIcs.DX.2024.23},
  annote    = {Keywords: Diagnosis, Reinforcement Learning, Autonomous Systems}
}