Revisiting the adversarial robustness-accuracy tradeoff in robot learning
Lechner M, Amini A, Rus D, Henzinger TA. Revisiting the adversarial robustness-accuracy tradeoff in robot learning. arXiv, 2204.07373.
Download (ext.): https://doi.org/10.48550/arXiv.2204.07373
Preprint | Submitted | English
Author
Mathias Lechner, Alexander Amini, Daniela Rus, Thomas A. Henzinger
Department
Abstract
Adversarial training (i.e., training on adversarially perturbed input data) is a well-studied method for making neural networks robust to potential adversarial attacks during inference. However, the improved robustness does not come for free but rather is accompanied by a decrease in overall model accuracy and performance. Recent work has shown that, in practical robot learning applications, the effects of adversarial training do not pose a fair trade-off but inflict a net loss when measured in holistic robot performance. This work revisits the robustness-accuracy trade-off in robot learning by systematically analyzing if recent advances in robust training methods and theory in conjunction with adversarial robot learning can make adversarial training suitable for real-world robot applications. We evaluate a wide variety of robot learning tasks ranging from autonomous driving in a high-fidelity environment amenable to sim-to-real deployment, to mobile robot gesture recognition. Our results demonstrate that, while these techniques make incremental improvements on the trade-off on a relative scale, the negative side-effects caused by adversarial training still outweigh the improvements by an order of magnitude. We conclude that more substantial advances in robust learning methods are necessary before they can benefit robot learning tasks in practice.
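For readers unfamiliar with the technique the abstract refers to, adversarial training means generating perturbed inputs during training and optimizing the model on those instead of (or alongside) the clean inputs. The sketch below shows one generic FGSM-style training step in PyTorch; it is an illustration of the general idea only, not the paper's specific setup, and the model, optimizer, batch, and epsilon are placeholder assumptions.

```python
# Minimal FGSM-style adversarial training step (illustrative sketch only;
# model, optimizer, x, y, and epsilon are placeholders, not the paper's setup).
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    # Craft an adversarial example with a single gradient-sign step:
    # perturb the input in the direction that increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

    # Update the model on the perturbed inputs instead of the clean ones.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```

Because every update is computed on perturbed data, robustness to similar perturbations improves, but clean-input accuracy typically drops; that loss is the trade-off the paper quantifies for robot learning tasks.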
Publishing Year
2022
Date Published
2022-04-15
Journal Title
arXiv
Acknowledgement
This work was supported in part by the ERC-2020-AdG 101020093, the National Science Foundation (NSF), and JP Morgan Graduate Fellowships. We thank Christoph Lampert for inspiring this work.
Article Number
2204.07373
IST-REx-ID
Cite this
Lechner M, Amini A, Rus D, Henzinger TA. Revisiting the adversarial robustness-accuracy tradeoff in robot learning. arXiv. doi:10.48550/arXiv.2204.07373
Lechner, M., Amini, A., Rus, D., & Henzinger, T. A. (n.d.). Revisiting the adversarial robustness-accuracy tradeoff in robot learning. arXiv. https://doi.org/10.48550/arXiv.2204.07373
Lechner, Mathias, Alexander Amini, Daniela Rus, and Thomas A Henzinger. “Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning.” ArXiv, n.d. https://doi.org/10.48550/arXiv.2204.07373.
M. Lechner, A. Amini, D. Rus, and T. A. Henzinger, “Revisiting the adversarial robustness-accuracy tradeoff in robot learning,” arXiv, 2204.07373, doi: 10.48550/arXiv.2204.07373.
Lechner M, Amini A, Rus D, Henzinger TA. Revisiting the adversarial robustness-accuracy tradeoff in robot learning. arXiv, 2204.07373.
Lechner, Mathias, et al. “Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning.” ArXiv, 2204.07373, doi:10.48550/arXiv.2204.07373.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]
Link(s) to Main File(s)
Access Level
Open Access
Material in ISTA:
Dissertation containing ISTA record
Later Version
Sources
arXiv 2204.07373