{"date_published":"2022-03-08T00:00:00Z","publication_status":"published","abstract":[{"lang":"eng","text":"Because of the increasing popularity of machine learning methods, it is becoming important to understand the impact of learned components on automated decision-making systems and to guarantee that their consequences are beneficial to society. In other words, it is necessary to ensure that machine learning is sufficiently trustworthy to be used in real-world applications. This thesis studies two properties of machine learning models that are highly desirable for the\r\nsake of reliability: robustness and fairness. In the first part of the thesis we study the robustness of learning algorithms to training data corruption. Previous work has shown that machine learning models are vulnerable to a range\r\nof training set issues, varying from label noise through systematic biases to worst-case data manipulations. This is an especially relevant problem from a present perspective, since modern machine learning methods are particularly data hungry and therefore practitioners often have to rely on data collected from various external sources, e.g. from the Internet, from app users or via crowdsourcing. Naturally, such sources vary greatly in the quality and reliability of the\r\ndata they provide. With these considerations in mind, we study the problem of designing machine learning algorithms that are robust to corruptions in data coming from multiple sources. We show that, in contrast to the case of a single dataset with outliers, successful learning within this model is possible both theoretically and practically, even under worst-case data corruptions. The second part of this thesis deals with fairness-aware machine learning. There are multiple areas where machine learning models have shown promising results, but where careful considerations are required, in order to avoid discrimanative decisions taken by such learned components. Ensuring fairness can be particularly challenging, because real-world training datasets are expected to contain various forms of historical bias that may affect the learning process. In this thesis we show that data corruption can indeed render the problem of achieving fairness impossible, by tightly characterizing the theoretical limits of fair learning under worst-case data manipulations. 
However, assuming access to clean data, we also show how fairness-aware learning can be made practical in contexts beyond binary classification, in particular in the challenging learning to rank setting."}],"department":[{"_id":"GradSch"},{"_id":"ChLa"}],"degree_awarded":"PhD","day":"08","file":[{"content_type":"application/pdf","file_name":"thesis.pdf","creator":"nkonstan","access_level":"open_access","relation":"main_file","checksum":"626bc523ae8822d20e635d0e2d95182e","file_id":"10823","success":1,"file_size":4204905,"date_created":"2022-03-06T11:42:54Z","date_updated":"2022-03-06T11:42:54Z"},{"file_id":"10824","file_size":22841103,"date_created":"2022-03-06T11:42:57Z","date_updated":"2022-03-10T12:11:48Z","content_type":"application/x-zip-compressed","file_name":"thesis.zip","access_level":"closed","creator":"nkonstan","relation":"source_file","checksum":"e2ca2b88350ac8ea1515b948885cbcb1"}],"type":"dissertation","date_updated":"2023-10-17T12:31:54Z","supervisor":[{"id":"40C20FD2-F248-11E8-B48F-1D18A9856A87","full_name":"Lampert, Christoph","last_name":"Lampert","first_name":"Christoph","orcid":"0000-0001-8622-7887"}],"month":"03","article_processing_charge":"No","related_material":{"record":[{"status":"public","id":"8724","relation":"part_of_dissertation"},{"id":"10803","relation":"part_of_dissertation","status":"public"},{"relation":"part_of_dissertation","id":"10802","status":"public"},{"status":"public","id":"6590","relation":"part_of_dissertation"}]},"status":"public","title":"Robustness and fairness in machine learning","alternative_title":["ISTA Thesis"],"user_id":"c635000d-4b10-11ee-a964-aac5a93f6ac1","date_created":"2022-02-28T13:03:49Z","oa":1,"oa_version":"Published Version","publisher":"Institute of Science and Technology Austria","ec_funded":1,"citation":{"short":"N.H. Konstantinov, Robustness and Fairness in Machine Learning, Institute of Science and Technology Austria, 2022.","apa":"Konstantinov, N. H. (2022). Robustness and fairness in machine learning. Institute of Science and Technology Austria. https://doi.org/10.15479/at:ista:10799","ista":"Konstantinov NH. 2022. Robustness and fairness in machine learning. Institute of Science and Technology Austria.","chicago":"Konstantinov, Nikola H. “Robustness and Fairness in Machine Learning.” Institute of Science and Technology Austria, 2022. https://doi.org/10.15479/at:ista:10799.","ama":"Konstantinov NH. Robustness and fairness in machine learning. 2022. doi:10.15479/at:ista:10799","ieee":"N. H. Konstantinov, “Robustness and fairness in machine learning,” Institute of Science and Technology Austria, 2022.","mla":"Konstantinov, Nikola H. Robustness and Fairness in Machine Learning. Institute of Science and Technology Austria, 2022, doi:10.15479/at:ista:10799."},"language":[{"iso":"eng"}],"_id":"10799","doi":"10.15479/at:ista:10799","page":"176","author":[{"first_name":"Nikola H","last_name":"Konstantinov","id":"4B9D76E4-F248-11E8-B48F-1D18A9856A87","full_name":"Konstantinov, Nikola H"}],"has_accepted_license":"1","publication_identifier":{"isbn":["978-3-99078-015-2"],"issn":["2663-337X"]},"keyword":["robustness","fairness","machine learning","PAC learning","adversarial learning"],"file_date_updated":"2022-03-10T12:11:48Z","ddc":["000"],"project":[{"_id":"2564DBCA-B435-11E9-9278-68D0E5697425","name":"International IST Doctoral Program","grant_number":"665385","call_identifier":"H2020"}],"year":"2022"}