[{"file_date_updated":"2022-03-10T12:11:48Z","ec_funded":1,"page":"176","publisher":"Institute of Science and Technology Austria","author":[{"full_name":"Konstantinov, Nikola H","first_name":"Nikola H","last_name":"Konstantinov","id":"4B9D76E4-F248-11E8-B48F-1D18A9856A87"}],"_id":"10799","title":"Robustness and fairness in machine learning","alternative_title":["ISTA Thesis"],"date_created":"2022-02-28T13:03:49Z","department":[{"_id":"GradSch"},{"_id":"ChLa"}],"article_processing_charge":"No","publication_status":"published","ddc":["000"],"year":"2022","citation":{"ista":"Konstantinov NH. 2022. Robustness and fairness in machine learning. Institute of Science and Technology Austria.","short":"N.H. Konstantinov, Robustness and Fairness in Machine Learning, Institute of Science and Technology Austria, 2022.","mla":"Konstantinov, Nikola H. <i>Robustness and Fairness in Machine Learning</i>. Institute of Science and Technology Austria, 2022, doi:<a href=\"https://doi.org/10.15479/at:ista:10799\">10.15479/at:ista:10799</a>.","ieee":"N. H. Konstantinov, “Robustness and fairness in machine learning,” Institute of Science and Technology Austria, 2022.","chicago":"Konstantinov, Nikola H. “Robustness and Fairness in Machine Learning.” Institute of Science and Technology Austria, 2022. <a href=\"https://doi.org/10.15479/at:ista:10799\">https://doi.org/10.15479/at:ista:10799</a>.","apa":"Konstantinov, N. H. (2022). <i>Robustness and fairness in machine learning</i>. Institute of Science and Technology Austria. <a href=\"https://doi.org/10.15479/at:ista:10799\">https://doi.org/10.15479/at:ista:10799</a>","ama":"Konstantinov NH. Robustness and fairness in machine learning. 2022. doi:<a href=\"https://doi.org/10.15479/at:ista:10799\">10.15479/at:ista:10799</a>"},"date_updated":"2023-10-17T12:31:54Z","abstract":[{"text":"Because of the increasing popularity of machine learning methods, it is becoming important to understand the impact of learned components on automated decision-making systems and to guarantee that their consequences are beneficial to society. In other words, it is necessary to ensure that machine learning is sufficiently trustworthy to be used in real-world applications. This thesis studies two properties of machine learning models that are highly desirable for the\r\nsake of reliability: robustness and fairness. In the first part of the thesis we study the robustness of learning algorithms to training data corruption. Previous work has shown that machine learning models are vulnerable to a range\r\nof training set issues, varying from label noise through systematic biases to worst-case data manipulations. This is an especially relevant problem from a present perspective, since modern machine learning methods are particularly data hungry and therefore practitioners often have to rely on data collected from various external sources, e.g. from the Internet, from app users or via crowdsourcing. Naturally, such sources vary greatly in the quality and reliability of the\r\ndata they provide. With these considerations in mind, we study the problem of designing machine learning algorithms that are robust to corruptions in data coming from multiple sources. We show that, in contrast to the case of a single dataset with outliers, successful learning within this model is possible both theoretically and practically, even under worst-case data corruptions. The second part of this thesis deals with fairness-aware machine learning. 
There are multiple areas where machine learning models have shown promising results, but where careful considerations are required, in order to avoid discrimanative decisions taken by such learned components. Ensuring fairness can be particularly challenging, because real-world training datasets are expected to contain various forms of historical bias that may affect the learning process. In this thesis we show that data corruption can indeed render the problem of achieving fairness impossible, by tightly characterizing the theoretical limits of fair learning under worst-case data manipulations. However, assuming access to clean data, we also show how fairness-aware learning can be made practical in contexts beyond binary classification, in particular in the challenging learning to rank setting.","lang":"eng"}],"day":"08","degree_awarded":"PhD","doi":"10.15479/at:ista:10799","keyword":["robustness","fairness","machine learning","PAC learning","adversarial learning"],"language":[{"iso":"eng"}],"has_accepted_license":"1","month":"03","project":[{"name":"International IST Doctoral Program","grant_number":"665385","_id":"2564DBCA-B435-11E9-9278-68D0E5697425","call_identifier":"H2020"}],"oa_version":"Published Version","user_id":"c635000d-4b10-11ee-a964-aac5a93f6ac1","related_material":{"record":[{"status":"public","relation":"part_of_dissertation","id":"8724"},{"id":"10803","relation":"part_of_dissertation","status":"public"},{"id":"10802","relation":"part_of_dissertation","status":"public"},{"relation":"part_of_dissertation","id":"6590","status":"public"}]},"status":"public","file":[{"checksum":"626bc523ae8822d20e635d0e2d95182e","file_size":4204905,"date_created":"2022-03-06T11:42:54Z","file_name":"thesis.pdf","content_type":"application/pdf","date_updated":"2022-03-06T11:42:54Z","access_level":"open_access","relation":"main_file","success":1,"creator":"nkonstan","file_id":"10823"},{"file_id":"10824","creator":"nkonstan","access_level":"closed","relation":"source_file","date_updated":"2022-03-10T12:11:48Z","file_name":"thesis.zip","content_type":"application/x-zip-compressed","date_created":"2022-03-06T11:42:57Z","file_size":22841103,"checksum":"e2ca2b88350ac8ea1515b948885cbcb1"}],"type":"dissertation","date_published":"2022-03-08T00:00:00Z","oa":1,"supervisor":[{"id":"40C20FD2-F248-11E8-B48F-1D18A9856A87","orcid":"0000-0001-8622-7887","full_name":"Lampert, Christoph","first_name":"Christoph","last_name":"Lampert"}],"publication_identifier":{"isbn":["978-3-99078-015-2"],"issn":["2663-337X"]}},{"author":[{"full_name":"Konstantinov, Nikola H","first_name":"Nikola H","last_name":"Konstantinov","id":"4B9D76E4-F248-11E8-B48F-1D18A9856A87"},{"full_name":"Lampert, Christoph","orcid":"0000-0002-4561-241X","last_name":"Lampert","first_name":"Christoph","id":"40C20FD2-F248-11E8-B48F-1D18A9856A87"}],"_id":"10802","scopus_import":"1","title":"Fairness-aware PAC learning from corrupted data","intvolume":"        23","publication_status":"published","date_created":"2022-02-28T14:05:42Z","department":[{"_id":"ChLa"}],"article_processing_charge":"No","file_date_updated":"2022-07-12T15:08:28Z","page":"1-60","quality_controlled":"1","article_type":"original","publisher":"ML Research Press","external_id":{"arxiv":["2102.06004"]},"date_updated":"2023-09-26T10:44:37Z","year":"2022","citation":{"apa":"Konstantinov, N. H., &#38; Lampert, C. (2022). Fairness-aware PAC learning from corrupted data. <i>Journal of Machine Learning Research</i>. ML Research Press.","ama":"Konstantinov NH, Lampert C. 
Fairness-aware PAC learning from corrupted data. <i>Journal of Machine Learning Research</i>. 2022;23:1-60.","ieee":"N. H. Konstantinov and C. Lampert, “Fairness-aware PAC learning from corrupted data,” <i>Journal of Machine Learning Research</i>, vol. 23. ML Research Press, pp. 1–60, 2022.","chicago":"Konstantinov, Nikola H, and Christoph Lampert. “Fairness-Aware PAC Learning from Corrupted Data.” <i>Journal of Machine Learning Research</i>. ML Research Press, 2022.","mla":"Konstantinov, Nikola H., and Christoph Lampert. “Fairness-Aware PAC Learning from Corrupted Data.” <i>Journal of Machine Learning Research</i>, vol. 23, ML Research Press, 2022, pp. 1–60.","short":"N.H. Konstantinov, C. Lampert, Journal of Machine Learning Research 23 (2022) 1–60.","ista":"Konstantinov NH, Lampert C. 2022. Fairness-aware PAC learning from corrupted data. Journal of Machine Learning Research. 23, 1–60."},"abstract":[{"lang":"eng","text":"Addressing fairness concerns about machine learning models is a crucial step towards their long-term adoption in real-world automated systems. While many approaches have been developed for training fair models from data, little is known about the robustness of these methods to data corruption. In this work we consider fairness-aware learning under worst-case data manipulations. We show that an adversary can in some situations force any learner to return an overly biased classifier, regardless of the sample size and with or without degrading\r\naccuracy, and that the strength of the excess bias increases for learning problems with underrepresented protected groups in the data. We also prove that our hardness results are tight up to constant factors. To this end, we study two natural learning algorithms that optimize for both accuracy and fairness and show that these algorithms enjoy guarantees that are order-optimal in terms of the corruption ratio and the protected groups frequencies in the large data\r\nlimit."}],"arxiv":1,"day":"01","ddc":["004"],"volume":23,"acknowledgement":"The authors thank Eugenia Iofinova and Bernd Prach for providing feedback on early versions of this paper. This publication was made possible by an ETH AI Center postdoctoral fellowship to Nikola Konstantinov.","publication":"Journal of Machine Learning Research","has_accepted_license":"1","month":"05","oa_version":"Published Version","language":[{"iso":"eng"}],"keyword":["Fairness","robustness","data poisoning","trustworthy machine learning","PAC learning"],"date_published":"2022-05-01T00:00:00Z","type":"journal_article","tmp":{"legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode","short":"CC BY (4.0)","image":"/images/cc_by.png","name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)"},"oa":1,"publication_identifier":{"issn":["1532-4435"],"eissn":["1533-7928"]},"status":"public","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","related_material":{"record":[{"relation":"dissertation_contains","id":"10799","status":"public"},{"status":"public","id":"13241","relation":"shorter_version"}]},"file":[{"success":1,"relation":"main_file","access_level":"open_access","creator":"kschuh","file_id":"11570","checksum":"9cac897b54a0ddf3a553a2c33e88cfda","file_size":551862,"date_created":"2022-07-12T15:08:28Z","file_name":"2022_JournalMachineLearningResearch_Konstantinov.pdf","content_type":"application/pdf","date_updated":"2022-07-12T15:08:28Z"}]}]
