{"related_material":{"link":[{"url":"https://ist.ac.at/en/news/new-deep-learning-models/","description":"News on IST Homepage","relation":"press_release"}]},"oa_version":"None","project":[{"name":"The Wittgenstein Prize","grant_number":"Z211","_id":"25F42A32-B435-11E9-9278-68D0E5697425","call_identifier":"FWF"}],"status":"public","scopus_import":"1","year":"2020","date_published":"2020-10-01T00:00:00Z","citation":{"apa":"Lechner, M., Hasani, R., Amini, A., Henzinger, T. A., Rus, D., & Grosu, R. (2020). Neural circuit policies enabling auditable autonomy. Nature Machine Intelligence. Springer Nature. https://doi.org/10.1038/s42256-020-00237-3","ama":"Lechner M, Hasani R, Amini A, Henzinger TA, Rus D, Grosu R. Neural circuit policies enabling auditable autonomy. Nature Machine Intelligence. 2020;2:642-652. doi:10.1038/s42256-020-00237-3","ista":"Lechner M, Hasani R, Amini A, Henzinger TA, Rus D, Grosu R. 2020. Neural circuit policies enabling auditable autonomy. Nature Machine Intelligence. 2, 642–652.","mla":"Lechner, Mathias, et al. “Neural Circuit Policies Enabling Auditable Autonomy.” Nature Machine Intelligence, vol. 2, Springer Nature, 2020, pp. 642–52, doi:10.1038/s42256-020-00237-3.","short":"M. Lechner, R. Hasani, A. Amini, T.A. Henzinger, D. Rus, R. Grosu, Nature Machine Intelligence 2 (2020) 642–652.","chicago":"Lechner, Mathias, Ramin Hasani, Alexander Amini, Thomas A Henzinger, Daniela Rus, and Radu Grosu. “Neural Circuit Policies Enabling Auditable Autonomy.” Nature Machine Intelligence. Springer Nature, 2020. https://doi.org/10.1038/s42256-020-00237-3.","ieee":"M. Lechner, R. Hasani, A. Amini, T. A. Henzinger, D. Rus, and R. Grosu, “Neural circuit policies enabling auditable autonomy,” Nature Machine Intelligence, vol. 2. Springer Nature, pp. 642–652, 2020."},"volume":2,"publication":"Nature Machine Intelligence","type":"journal_article","publication_identifier":{"eissn":["2522-5839"]},"date_created":"2020-10-19T13:46:06Z","abstract":[{"lang":"eng","text":"A central goal of artificial intelligence in high-stakes decision-making applications is to design a single algorithm that simultaneously expresses generalizability by learning coherent representations of their world and interpretable explanations of its dynamics. Here, we combine brain-inspired neural computation principles and scalable deep learning architectures to design compact neural controllers for task-specific compartments of a full-stack autonomous vehicle control system. We discover that a single algorithm with 19 control neurons, connecting 32 encapsulated input features to outputs by 253 synapses, learns to map high-dimensional inputs into steering commands. This system shows superior generalizability, interpretability and robustness compared with orders-of-magnitude larger black-box learning systems. 
The obtained neural agents enable high-fidelity autonomy for task-specific parts of a complex autonomous system."}],"author":[{"last_name":"Lechner","full_name":"Lechner, Mathias","id":"3DC22916-F248-11E8-B48F-1D18A9856A87","first_name":"Mathias"},{"full_name":"Hasani, Ramin","last_name":"Hasani","first_name":"Ramin"},{"last_name":"Amini","full_name":"Amini, Alexander","first_name":"Alexander"},{"first_name":"Thomas A","id":"40876CD8-F248-11E8-B48F-1D18A9856A87","full_name":"Henzinger, Thomas A","orcid":"0000-0002-2985-7724","last_name":"Henzinger"},{"first_name":"Daniela","full_name":"Rus, Daniela","last_name":"Rus"},{"first_name":"Radu","last_name":"Grosu","full_name":"Grosu, Radu"}],"publisher":"Springer Nature","intvolume":" 2","department":[{"_id":"ToHe"}],"external_id":{"isi":["000583337200011"]},"article_processing_charge":"No","doi":"10.1038/s42256-020-00237-3","article_type":"original","title":"Neural circuit policies enabling auditable autonomy","publication_status":"published","date_updated":"2023-08-22T10:36:06Z","page":"642-652","_id":"8679","day":"01","user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","quality_controlled":"1","month":"10","language":[{"iso":"eng"}],"isi":1}