---
_id: '14454'
abstract:
- lang: eng
  text: As AI and machine-learned software are used increasingly for making decisions
    that affect humans, it is imperative that they remain fair and unbiased in their
    decisions. To complement design-time bias mitigation measures, runtime verification
    techniques have been introduced recently to monitor the algorithmic fairness of
    deployed systems. Previous monitoring techniques assume full observability of
    the states of the (unknown) monitored system. Moreover, they can monitor only
    fairness properties that are specified as arithmetic expressions over the probabilities
    of different events. In this work, we extend fairness monitoring to systems modeled
    as partially observed Markov chains (POMC), and to specifications containing arithmetic
    expressions over the expected values of numerical functions on event sequences.
    The only assumptions we make are that the underlying POMC is aperiodic and starts
    in the stationary distribution, with a bound on its mixing time being known. These
    assumptions enable us to estimate a given property for the entire distribution
    of possible executions of the monitored POMC, by observing only a single execution.
    Our monitors observe a long run of the system and, after each new observation,
    output updated PAC-estimates of how fair or biased the system is. The monitors
    are computationally lightweight and, using a prototype implementation, we demonstrate
    their effectiveness on several real-world examples.
acknowledgement: 'This work is supported by the European Research Council under Grant
  No.: ERC-2020-AdG 101020093.'
alternative_title:
- LNCS
article_processing_charge: No
arxiv: 1
author:
- first_name: Thomas A
  full_name: Henzinger, Thomas A
  id: 40876CD8-F248-11E8-B48F-1D18A9856A87
  last_name: Henzinger
  orcid: 0000-0002-2985-7724
- first_name: Konstantin
  full_name: Kueffner, Konstantin
  id: 8121a2d0-dc85-11ea-9058-af578f3b4515
  last_name: Kueffner
  orcid: 0000-0001-8974-2542
- first_name: Kaushik
  full_name: Mallik, Kaushik
  id: 0834ff3c-6d72-11ec-94e0-b5b0a4fb8598
  last_name: Mallik
  orcid: 0000-0001-9864-7475
citation:
  ama: 'Henzinger TA, Kueffner K, Mallik K. Monitoring algorithmic fairness under
    partial observations. In: <i>23rd International Conference on Runtime Verification</i>.
    Vol 14245. Springer Nature; 2023:291-311. doi:<a href="https://doi.org/10.1007/978-3-031-44267-4_15">10.1007/978-3-031-44267-4_15</a>'
  apa: 'Henzinger, T. A., Kueffner, K., &#38; Mallik, K. (2023). Monitoring algorithmic
    fairness under partial observations. In <i>23rd International Conference on Runtime
    Verification</i> (Vol. 14245, pp. 291–311). Thessaloniki, Greece: Springer Nature.
    <a href="https://doi.org/10.1007/978-3-031-44267-4_15">https://doi.org/10.1007/978-3-031-44267-4_15</a>'
  chicago: Henzinger, Thomas A, Konstantin Kueffner, and Kaushik Mallik. “Monitoring
    Algorithmic Fairness under Partial Observations.” In <i>23rd International Conference
    on Runtime Verification</i>, 14245:291–311. Springer Nature, 2023. <a href="https://doi.org/10.1007/978-3-031-44267-4_15">https://doi.org/10.1007/978-3-031-44267-4_15</a>.
  ieee: T. A. Henzinger, K. Kueffner, and K. Mallik, “Monitoring algorithmic fairness
    under partial observations,” in <i>23rd International Conference on Runtime Verification</i>,
    Thessaloniki, Greece, 2023, vol. 14245, pp. 291–311.
  ista: 'Henzinger TA, Kueffner K, Mallik K. 2023. Monitoring algorithmic fairness
    under partial observations. 23rd International Conference on Runtime Verification.
    RV: Conference on Runtime Verification, LNCS, vol. 14245, 291–311.'
  mla: Henzinger, Thomas A., et al. “Monitoring Algorithmic Fairness under Partial
    Observations.” <i>23rd International Conference on Runtime Verification</i>, vol.
    14245, Springer Nature, 2023, pp. 291–311, doi:<a href="https://doi.org/10.1007/978-3-031-44267-4_15">10.1007/978-3-031-44267-4_15</a>.
  short: T.A. Henzinger, K. Kueffner, K. Mallik, in:, 23rd International Conference
    on Runtime Verification, Springer Nature, 2023, pp. 291–311.
conference:
  end_date: 2023-10-06
  location: Thessaloniki, Greece
  name: 'RV: Conference on Runtime Verification'
  start_date: 2023-10-03
date_created: 2023-10-29T23:01:15Z
date_published: 2023-10-01T00:00:00Z
date_updated: 2023-10-31T11:48:20Z
day: '01'
department:
- _id: ToHe
doi: 10.1007/978-3-031-44267-4_15
ec_funded: 1
external_id:
  arxiv:
  - '2308.00341'
intvolume: '14245'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2308.00341
month: '10'
oa: 1
oa_version: Preprint
page: 291-311
project:
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
  call_identifier: H2020
  grant_number: '101020093'
  name: Vigilant Algorithmic Monitoring of Software
publication: 23rd International Conference on Runtime Verification
publication_identifier:
  eissn:
  - 1611-3349
  isbn:
  - '9783031442667'
  issn:
  - 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: Monitoring algorithmic fairness under partial observations
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 14245
year: '2023'
...
---
_id: '13228'
abstract:
- lang: eng
  text: A machine-learned system that is fair in static decision-making tasks may
    have biased societal impacts in the long run. This may happen when the system
    interacts with humans and feedback patterns emerge, reinforcing old biases in
    the system and creating new biases. While existing works try to identify and mitigate
    long-run biases through smart system design, we introduce techniques for monitoring
    fairness in real time. Our goal is to build and deploy a monitor that will continuously
    observe a long sequence of events generated by the system in the wild, and will
    output, with each event, a verdict on how fair the system is at the current point
    in time. The advantages of monitoring are two-fold. Firstly, fairness is evaluated
    at run-time, which is important because unfair behaviors may not be eliminated
    a priori, at design-time, due to partial knowledge about the system and the environment,
    as well as uncertainties and dynamic changes in the system and the environment,
    such as the unpredictability of human behavior. Secondly, monitors are by design
    oblivious to how the monitored system is constructed, which makes them suitable
    to be used as trusted third-party fairness watchdogs. They function as computationally
    lightweight statistical estimators, and their correctness proofs rely on the rigorous
    analysis of the stochastic process that models the assumptions about the underlying
    dynamics of the system. We show, both in theory and experiments, how monitors
    can warn us (1) if a bank’s credit policy over time has created an unfair distribution
    of credit scores among the population, and (2) if a resource allocator’s allocation
    policy over time has made unfair allocations. Our experiments demonstrate that
    the monitors introduce very low overhead. We believe that runtime monitoring is
    an important and mathematically rigorous new addition to the fairness toolbox.
acknowledgement: 'The authors would like to thank the anonymous reviewers for their
  valuable comments and helpful suggestions. This work is supported by the European
  Research Council under Grant No.: ERC-2020-AdG 101020093.'
article_processing_charge: No
arxiv: 1
author:
- first_name: Thomas A
  full_name: Henzinger, Thomas A
  id: 40876CD8-F248-11E8-B48F-1D18A9856A87
  last_name: Henzinger
  orcid: 0000-0002-2985-7724
- first_name: Mahyar
  full_name: Karimi, Mahyar
  last_name: Karimi
- first_name: Konstantin
  full_name: Kueffner, Konstantin
  id: 8121a2d0-dc85-11ea-9058-af578f3b4515
  last_name: Kueffner
  orcid: 0000-0001-8974-2542
- first_name: Kaushik
  full_name: Mallik, Kaushik
  id: 0834ff3c-6d72-11ec-94e0-b5b0a4fb8598
  last_name: Mallik
  orcid: 0000-0001-9864-7475
citation:
  ama: 'Henzinger TA, Karimi M, Kueffner K, Mallik K. Runtime monitoring of dynamic
    fairness properties. In: <i>FAccT ’23: Proceedings of the 2023 ACM Conference
    on Fairness, Accountability, and Transparency</i>. Association for Computing Machinery;
    2023:604-614. doi:<a href="https://doi.org/10.1145/3593013.3594028">10.1145/3593013.3594028</a>'
  apa: 'Henzinger, T. A., Karimi, M., Kueffner, K., &#38; Mallik, K. (2023). Runtime
    monitoring of dynamic fairness properties. In <i>FAccT ’23: Proceedings of the
    2023 ACM Conference on Fairness, Accountability, and Transparency</i> (pp. 604–614).
    Chicago, IL, United States: Association for Computing Machinery. <a href="https://doi.org/10.1145/3593013.3594028">https://doi.org/10.1145/3593013.3594028</a>'
  chicago: 'Henzinger, Thomas A, Mahyar Karimi, Konstantin Kueffner, and Kaushik Mallik.
    “Runtime Monitoring of Dynamic Fairness Properties.” In <i>FAccT ’23: Proceedings
    of the 2023 ACM Conference on Fairness, Accountability, and Transparency</i>,
    604–14. Association for Computing Machinery, 2023. <a href="https://doi.org/10.1145/3593013.3594028">https://doi.org/10.1145/3593013.3594028</a>.'
  ieee: 'T. A. Henzinger, M. Karimi, K. Kueffner, and K. Mallik, “Runtime monitoring
    of dynamic fairness properties,” in <i>FAccT ’23: Proceedings of the 2023 ACM
    Conference on Fairness, Accountability, and Transparency</i>, Chicago, IL, United
    States, 2023, pp. 604–614.'
  ista: 'Henzinger TA, Karimi M, Kueffner K, Mallik K. 2023. Runtime monitoring of
    dynamic fairness properties. FAccT ’23: Proceedings of the 2023 ACM Conference
    on Fairness, Accountability, and Transparency. FAccT: Conference on Fairness,
    Accountability and Transparency, 604–614.'
  mla: 'Henzinger, Thomas A., et al. “Runtime Monitoring of Dynamic Fairness Properties.”
    <i>FAccT ’23: Proceedings of the 2023 ACM Conference on Fairness, Accountability,
    and Transparency</i>, Association for Computing Machinery, 2023, pp. 604–14, doi:<a
    href="https://doi.org/10.1145/3593013.3594028">10.1145/3593013.3594028</a>.'
  short: 'T.A. Henzinger, M. Karimi, K. Kueffner, K. Mallik, in:, FAccT ’23: Proceedings
    of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Association
    for Computing Machinery, 2023, pp. 604–614.'
conference:
  end_date: 2023-06-15
  location: Chicago, IL, United States
  name: 'FAccT: Conference on Fairness, Accountability and Transparency'
  start_date: 2023-06-12
date_created: 2023-07-16T22:01:09Z
date_published: 2023-06-12T00:00:00Z
date_updated: 2023-12-13T11:30:31Z
day: '12'
ddc:
- '000'
department:
- _id: ToHe
doi: 10.1145/3593013.3594028
ec_funded: 1
external_id:
  arxiv:
  - '2305.04699'
  isi:
  - '001062819300057'
file:
- access_level: open_access
  checksum: 96c759db9cdf94b81e37871a66a6ff48
  content_type: application/pdf
  creator: dernst
  date_created: 2023-07-18T07:43:10Z
  date_updated: 2023-07-18T07:43:10Z
  file_id: '13245'
  file_name: 2023_ACM_HenzingerT.pdf
  file_size: 4100596
  relation: main_file
  success: 1
file_date_updated: 2023-07-18T07:43:10Z
has_accepted_license: '1'
isi: 1
language:
- iso: eng
license: https://creativecommons.org/licenses/by/4.0/
month: '06'
oa: 1
oa_version: Published Version
page: 604-614
project:
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
  call_identifier: H2020
  grant_number: '101020093'
  name: Vigilant Algorithmic Monitoring of Software
publication: 'FAccT ''23: Proceedings of the 2023 ACM Conference on Fairness, Accountability,
  and Transparency'
publication_identifier:
  isbn:
  - '9781450372527'
publication_status: published
publisher: Association for Computing Machinery
quality_controlled: '1'
scopus_import: '1'
status: public
title: Runtime monitoring of dynamic fairness properties
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '13234'
abstract:
- lang: eng
  text: Neural-network classifiers achieve high accuracy when predicting the class
    of an input that they were trained to identify. Maintaining this accuracy in dynamic
    environments, where inputs frequently fall outside the fixed set of initially
    known classes, remains a challenge. We consider the problem of monitoring the
    classification decisions of neural networks in the presence of novel classes.
    For this purpose, we generalize our recently proposed abstraction-based monitor
    from binary output to real-valued quantitative output. This quantitative output
    enables new applications, two of which we investigate in the paper. As our first
    application, we introduce an algorithmic framework for active monitoring of a
    neural network, which allows us to learn new classes dynamically and yet maintain
    high monitoring performance. As our second application, we present an offline
    procedure to retrain the neural network to improve the monitor’s detection performance
    without deteriorating the network’s classification accuracy. Our experimental
    evaluation demonstrates both the benefits of our active monitoring framework in
    dynamic scenarios and the effectiveness of the retraining procedure.
acknowledgement: This work was supported in part by the ERC-2020-AdG 101020093, by
  DIREC - Digital Research Centre Denmark, and by the Villum Investigator Grant S4OS.
article_processing_charge: Yes (in subscription journal)
article_type: original
arxiv: 1
author:
- first_name: Konstantin
  full_name: Kueffner, Konstantin
  id: 8121a2d0-dc85-11ea-9058-af578f3b4515
  last_name: Kueffner
  orcid: 0000-0001-8974-2542
- first_name: Anna
  full_name: Lukina, Anna
  id: CBA4D1A8-0FE8-11E9-BDE6-07BFE5697425
  last_name: Lukina
- first_name: Christian
  full_name: Schilling, Christian
  id: 3A2F4DCE-F248-11E8-B48F-1D18A9856A87
  last_name: Schilling
  orcid: 0000-0003-3658-1065
- first_name: Thomas A
  full_name: Henzinger, Thomas A
  id: 40876CD8-F248-11E8-B48F-1D18A9856A87
  last_name: Henzinger
  orcid: 0000-0002-2985-7724
citation:
  ama: 'Kueffner K, Lukina A, Schilling C, Henzinger TA. Into the unknown: Active
    monitoring of neural networks (extended version). <i>International Journal on
    Software Tools for Technology Transfer</i>. 2023;25:575-592. doi:<a href="https://doi.org/10.1007/s10009-023-00711-4">10.1007/s10009-023-00711-4</a>'
  apa: 'Kueffner, K., Lukina, A., Schilling, C., &#38; Henzinger, T. A. (2023). Into
    the unknown: Active monitoring of neural networks (extended version). <i>International
    Journal on Software Tools for Technology Transfer</i>. Springer Nature. <a href="https://doi.org/10.1007/s10009-023-00711-4">https://doi.org/10.1007/s10009-023-00711-4</a>'
  chicago: 'Kueffner, Konstantin, Anna Lukina, Christian Schilling, and Thomas A Henzinger.
    “Into the Unknown: Active Monitoring of Neural Networks (Extended Version).” <i>International
    Journal on Software Tools for Technology Transfer</i>. Springer Nature, 2023.
    <a href="https://doi.org/10.1007/s10009-023-00711-4">https://doi.org/10.1007/s10009-023-00711-4</a>.'
  ieee: 'K. Kueffner, A. Lukina, C. Schilling, and T. A. Henzinger, “Into the unknown:
    Active monitoring of neural networks (extended version),” <i>International Journal
    on Software Tools for Technology Transfer</i>, vol. 25. Springer Nature, pp. 575–592,
    2023.'
  ista: 'Kueffner K, Lukina A, Schilling C, Henzinger TA. 2023. Into the unknown:
    Active monitoring of neural networks (extended version). International Journal
    on Software Tools for Technology Transfer. 25, 575–592.'
  mla: 'Kueffner, Konstantin, et al. “Into the Unknown: Active Monitoring of Neural
    Networks (Extended Version).” <i>International Journal on Software Tools for Technology
    Transfer</i>, vol. 25, Springer Nature, 2023, pp. 575–92, doi:<a href="https://doi.org/10.1007/s10009-023-00711-4">10.1007/s10009-023-00711-4</a>.'
  short: K. Kueffner, A. Lukina, C. Schilling, T.A. Henzinger, International Journal
    on Software Tools for Technology Transfer 25 (2023) 575–592.
date_created: 2023-07-16T22:01:11Z
date_published: 2023-08-01T00:00:00Z
date_updated: 2024-01-30T12:06:57Z
day: '01'
ddc:
- '000'
department:
- _id: ToHe
doi: 10.1007/s10009-023-00711-4
ec_funded: 1
external_id:
  arxiv:
  - '2009.06429'
  isi:
  - '001020160000001'
file:
- access_level: open_access
  checksum: 3c4b347f39412a76872f9a6f30101f94
  content_type: application/pdf
  creator: dernst
  date_created: 2024-01-30T12:06:07Z
  date_updated: 2024-01-30T12:06:07Z
  file_id: '14903'
  file_name: 2023_JourSoftwareTools_Kueffner.pdf
  file_size: 13387667
  relation: main_file
  success: 1
file_date_updated: 2024-01-30T12:06:07Z
has_accepted_license: '1'
intvolume: '25'
isi: 1
language:
- iso: eng
month: '08'
oa: 1
oa_version: Published Version
page: 575-592
project:
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
  call_identifier: H2020
  grant_number: '101020093'
  name: Vigilant Algorithmic Monitoring of Software
publication: International Journal on Software Tools for Technology Transfer
publication_identifier:
  eissn:
  - 1433-2787
  issn:
  - 1433-2779
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
  record:
  - id: '10206'
    relation: shorter_version
    status: public
scopus_import: '1'
status: public
title: 'Into the unknown: Active monitoring of neural networks (extended version)'
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 25
year: '2023'
...
---
_id: '13310'
abstract:
- lang: eng
  text: Machine-learned systems are in widespread use for making decisions about humans,
    and it is important that they are fair, i.e., not biased against individuals based
    on sensitive attributes. We present runtime verification of algorithmic fairness
    for systems whose models are unknown, but are assumed to have a Markov chain structure.
    We introduce a specification language that can model many common algorithmic fairness
    properties, such as demographic parity, equal opportunity, and social burden.
    We build monitors that observe a long sequence of events as generated by a given
    system, and output, after each observation, a quantitative estimate of how fair
    or biased the system was on that run until that point in time. The estimate is
    proven to be correct modulo a variable error bound and a given confidence level,
    where the error bound gets tighter as the observed sequence gets longer. Our monitors
    are of two types, and use, respectively, frequentist and Bayesian statistical
    inference techniques. While the frequentist monitors compute estimates that are
    objectively correct with respect to the ground truth, the Bayesian monitors compute
    estimates that are correct subject to a given prior belief about the system’s
    model. Using a prototype implementation, we show how we can monitor if a bank
    is fair in giving loans to applicants from different social backgrounds, and if
    a college is fair in admitting students while maintaining a reasonable financial
    burden on the society. Although they exhibit different theoretical complexities
    in certain cases, in our experiments, both frequentist and Bayesian monitors took
    less than a millisecond to update their verdicts after each observation.
acknowledgement: 'This work is supported by the European Research Council under Grant
  No.: ERC-2020-AdG 101020093.'
alternative_title:
- LNCS
article_processing_charge: Yes (in subscription journal)
arxiv: 1
author:
- first_name: Thomas A
  full_name: Henzinger, Thomas A
  id: 40876CD8-F248-11E8-B48F-1D18A9856A87
  last_name: Henzinger
  orcid: 0000-0002-2985-7724
- first_name: Mahyar
  full_name: Karimi, Mahyar
  id: f1dedef5-2f78-11ee-989a-c4c97bccf506
  last_name: Karimi
  orcid: 0009-0005-0820-1696
- first_name: Konstantin
  full_name: Kueffner, Konstantin
  id: 8121a2d0-dc85-11ea-9058-af578f3b4515
  last_name: Kueffner
  orcid: 0000-0001-8974-2542
- first_name: Kaushik
  full_name: Mallik, Kaushik
  id: 0834ff3c-6d72-11ec-94e0-b5b0a4fb8598
  last_name: Mallik
  orcid: 0000-0001-9864-7475
citation:
  ama: 'Henzinger TA, Karimi M, Kueffner K, Mallik K. Monitoring algorithmic fairness.
    In: <i>Computer Aided Verification</i>. Vol 13965. Springer Nature; 2023:358–382.
    doi:<a href="https://doi.org/10.1007/978-3-031-37703-7_17">10.1007/978-3-031-37703-7_17</a>'
  apa: 'Henzinger, T. A., Karimi, M., Kueffner, K., &#38; Mallik, K. (2023). Monitoring
    algorithmic fairness. In <i>Computer Aided Verification</i> (Vol. 13965, pp. 358–382).
    Paris, France: Springer Nature. <a href="https://doi.org/10.1007/978-3-031-37703-7_17">https://doi.org/10.1007/978-3-031-37703-7_17</a>'
  chicago: Henzinger, Thomas A, Mahyar Karimi, Konstantin Kueffner, and Kaushik Mallik.
    “Monitoring Algorithmic Fairness.” In <i>Computer Aided Verification</i>, 13965:358–382.
    Springer Nature, 2023. <a href="https://doi.org/10.1007/978-3-031-37703-7_17">https://doi.org/10.1007/978-3-031-37703-7_17</a>.
  ieee: T. A. Henzinger, M. Karimi, K. Kueffner, and K. Mallik, “Monitoring algorithmic
    fairness,” in <i>Computer Aided Verification</i>, Paris, France, 2023, vol. 13965,
    pp. 358–382.
  ista: 'Henzinger TA, Karimi M, Kueffner K, Mallik K. 2023. Monitoring algorithmic
    fairness. Computer Aided Verification. CAV: Computer Aided Verification, LNCS,
    vol. 13965, 358–382.'
  mla: Henzinger, Thomas A., et al. “Monitoring Algorithmic Fairness.” <i>Computer
    Aided Verification</i>, vol. 13965, Springer Nature, 2023, pp. 358–382, doi:<a
    href="https://doi.org/10.1007/978-3-031-37703-7_17">10.1007/978-3-031-37703-7_17</a>.
  short: T.A. Henzinger, M. Karimi, K. Kueffner, K. Mallik, in:, Computer Aided Verification,
    Springer Nature, 2023, pp. 358–382.
conference:
  end_date: 2023-07-22
  location: Paris, France
  name: 'CAV: Computer Aided Verification'
  start_date: 2023-07-17
date_created: 2023-07-25T18:32:40Z
date_published: 2023-07-18T00:00:00Z
date_updated: 2023-09-05T15:14:00Z
day: '18'
ddc:
- '000'
department:
- _id: GradSch
- _id: ToHe
doi: 10.1007/978-3-031-37703-7_17
ec_funded: 1
external_id:
  arxiv:
  - '2305.15979'
file:
- access_level: open_access
  checksum: ccaf94bf7d658ba012c016e11869b54c
  content_type: application/pdf
  creator: dernst
  date_created: 2023-07-31T08:11:20Z
  date_updated: 2023-07-31T08:11:20Z
  file_id: '13327'
  file_name: 2023_LNCS_CAV_HenzingerT.pdf
  file_size: 647760
  relation: main_file
  success: 1
file_date_updated: 2023-07-31T08:11:20Z
has_accepted_license: '1'
intvolume: '13965'
language:
- iso: eng
month: '07'
oa: 1
oa_version: Published Version
page: 358–382
project:
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
  call_identifier: H2020
  grant_number: '101020093'
  name: Vigilant Algorithmic Monitoring of Software
publication: Computer Aided Verification
publication_identifier:
  eisbn:
  - '9783031377037'
  eissn:
  - 1611-3349
  isbn:
  - '9783031377020'
  issn:
  - 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
status: public
title: Monitoring algorithmic fairness
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 13965
year: '2023'
...
