{"isi":1,"acknowledgement":"M.L. and T.A.H. are supported in part by the Austrian Science Fund (FWF) under grant Z211-N23 (Wittgenstein Award). R.H. and D.R. are supported by Boeing and R.G. by Horizon-2020 ECSEL Project grant no. 783163 (iDev40).","project":[{"call_identifier":"FWF","grant_number":"Z211","name":"The Wittgenstein Prize","_id":"25F42A32-B435-11E9-9278-68D0E5697425"}],"ddc":["000"],"year":"2021","conference":{"end_date":"2021-06-05","location":"Xi'an, China","start_date":"2021-05-30","name":"ICRA: International Conference on Robotics and Automation"},"citation":{"apa":"Lechner, M., Hasani, R., Grosu, R., Rus, D., & Henzinger, T. A. (2021). Adversarial training is not ready for robot learning. In 2021 IEEE International Conference on Robotics and Automation (pp. 4140–4147). Xi’an, China. https://doi.org/10.1109/ICRA48506.2021.9561036","ista":"Lechner M, Hasani R, Grosu R, Rus D, Henzinger TA. 2021. Adversarial training is not ready for robot learning. 2021 IEEE International Conference on Robotics and Automation. ICRA: International Conference on Robotics and Automation, ICRA, 4140–4147.","chicago":"Lechner, Mathias, Ramin Hasani, Radu Grosu, Daniela Rus, and Thomas A Henzinger. “Adversarial Training Is Not Ready for Robot Learning.” In 2021 IEEE International Conference on Robotics and Automation, 4140–47. ICRA, 2021. https://doi.org/10.1109/ICRA48506.2021.9561036.","ama":"Lechner M, Hasani R, Grosu R, Rus D, Henzinger TA. Adversarial training is not ready for robot learning. In: 2021 IEEE International Conference on Robotics and Automation. ICRA. ; 2021:4140-4147. doi:10.1109/ICRA48506.2021.9561036","short":"M. Lechner, R. Hasani, R. Grosu, D. Rus, T.A. Henzinger, in:, 2021 IEEE International Conference on Robotics and Automation, 2021, pp. 4140–4147.","mla":"Lechner, Mathias, et al. “Adversarial Training Is Not Ready for Robot Learning.” 2021 IEEE International Conference on Robotics and Automation, 2021, pp. 
4140–47, doi:10.1109/ICRA48506.2021.9561036.","ieee":"M. Lechner, R. Hasani, R. Grosu, D. Rus, and T. A. Henzinger, “Adversarial training is not ready for robot learning,” in 2021 IEEE International Conference on Robotics and Automation, Xi’an, China, 2021, pp. 4140–4147."},"_id":"10666","language":[{"iso":"eng"}],"author":[{"first_name":"Mathias","last_name":"Lechner","full_name":"Lechner, Mathias","id":"3DC22916-F248-11E8-B48F-1D18A9856A87"},{"first_name":"Ramin","last_name":"Hasani","full_name":"Hasani, Ramin"},{"last_name":"Grosu","first_name":"Radu","full_name":"Grosu, Radu"},{"full_name":"Rus, Daniela","first_name":"Daniela","last_name":"Rus"},{"orcid":"0000-0002-2985-7724","first_name":"Thomas A","last_name":"Henzinger","id":"40876CD8-F248-11E8-B48F-1D18A9856A87","full_name":"Henzinger, Thomas A"}],"has_accepted_license":"1","publication":"2021 IEEE International Conference on Robotics and Automation","doi":"10.1109/ICRA48506.2021.9561036","page":"4140-4147","external_id":{"isi":["000765738803040"],"arxiv":["2103.08187"]},"publication_identifier":{"issn":["1050-4729"],"eisbn":["978-1-7281-9077-8"],"isbn":["978-1-7281-9078-5"],"eissn":["2577-087X"]},"quality_controlled":"1","title":"Adversarial training is not ready for robot learning","status":"public","related_material":{"record":[{"status":"public","relation":"dissertation_contains","id":"11362"}]},"oa_version":"None","oa":1,"date_created":"2022-01-25T15:44:54Z","user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","series_title":"ICRA","date_published":"2021-01-01T00:00:00Z","publication_status":"published","department":[{"_id":"GradSch"},{"_id":"ToHe"}],"main_file_link":[{"url":"https://arxiv.org/abs/2103.08187","open_access":"1"}],"abstract":[{"text":"Adversarial training is an effective method to train deep learning models that are resilient to norm-bounded perturbations, at the cost of a nominal performance drop. 
While adversarial training appears to enhance the robustness and safety of a deep model deployed in open-world decision-critical applications, counterintuitively, it induces undesired behaviors in robot learning settings. In this paper, we show theoretically and experimentally that neural controllers obtained via adversarial training are subject to three types of defects, namely transient, systematic, and conditional errors. We first generalize adversarial training to a safety-domain optimization scheme that allows for more generic specifications. We then prove that such a learning process tends to cause certain error profiles. We support our theoretical results with a thorough experimental safety analysis in a robot-learning task. Our results suggest that adversarial training is not yet ready for robot learning.","lang":"eng"}],"license":"https://creativecommons.org/licenses/by-nc-nd/3.0/","type":"conference","article_processing_charge":"No","date_updated":"2023-08-17T06:58:38Z","tmp":{"short":"CC BY-NC-ND (3.0)","image":"/images/cc_by_nc_nd.png","name":"Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported (CC BY-NC-ND 3.0)","legal_code_url":"https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode"}}