---
_id: '14601'
abstract:
- lang: eng
  text: "In this work, we address the problem of learning provably stable neural
    network policies for stochastic control systems. While recent work has demonstrated
    the feasibility of certifying given policies using martingale theory, the problem
    of how to learn such policies is little explored. Here, we study the effectiveness
    of jointly learning a policy together with a martingale certificate that proves
    its stability using a single learning algorithm. We observe that the joint optimization
    problem becomes easily stuck in local minima when starting from a randomly initialized
    policy. Our results suggest that some form of pre-training of the policy is required
    for the joint optimization to repair and verify the policy successfully."
article_processing_charge: No
arxiv: 1
author:
- first_name: Dorde
  full_name: Zikelic, Dorde
  id: 294AA7A6-F248-11E8-B48F-1D18A9856A87
  last_name: Zikelic
  orcid: 0000-0002-4681-1699
- first_name: Mathias
  full_name: Lechner, Mathias
  id: 3DC22916-F248-11E8-B48F-1D18A9856A87
  last_name: Lechner
- first_name: Krishnendu
  full_name: Chatterjee, Krishnendu
  id: 2E5DCA20-F248-11E8-B48F-1D18A9856A87
  last_name: Chatterjee
  orcid: 0000-0002-4561-241X
- first_name: Thomas A
  full_name: Henzinger, Thomas A
  id: 40876CD8-F248-11E8-B48F-1D18A9856A87
  last_name: Henzinger
  orcid: 0000-0002-2985-7724
citation:
  ama: Zikelic D, Lechner M, Chatterjee K, Henzinger TA. Learning stabilizing policies
    in stochastic control systems. <i>arXiv</i>. doi:<a href="https://doi.org/10.48550/arXiv.2205.11991">10.48550/arXiv.2205.11991</a>
  apa: Zikelic, D., Lechner, M., Chatterjee, K., &#38; Henzinger, T. A. (n.d.). Learning
    stabilizing policies in stochastic control systems. <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2205.11991">https://doi.org/10.48550/arXiv.2205.11991</a>
  chicago: Zikelic, Dorde, Mathias Lechner, Krishnendu Chatterjee, and Thomas A Henzinger.
    “Learning Stabilizing Policies in Stochastic Control Systems.” <i>ArXiv</i>, n.d.
    <a href="https://doi.org/10.48550/arXiv.2205.11991">https://doi.org/10.48550/arXiv.2205.11991</a>.
  ieee: D. Zikelic, M. Lechner, K. Chatterjee, and T. A. Henzinger, “Learning stabilizing
    policies in stochastic control systems,” <i>arXiv</i>.
  ista: Zikelic D, Lechner M, Chatterjee K, Henzinger TA. Learning stabilizing policies
    in stochastic control systems. arXiv, <a href="https://doi.org/10.48550/arXiv.2205.11991">10.48550/arXiv.2205.11991</a>.
  mla: Zikelic, Dorde, et al. “Learning Stabilizing Policies in Stochastic Control
    Systems.” <i>ArXiv</i>, doi:<a href="https://doi.org/10.48550/arXiv.2205.11991">10.48550/arXiv.2205.11991</a>.
  short: D. Zikelic, M. Lechner, K. Chatterjee, T.A. Henzinger, ArXiv (n.d.).
date_created: 2023-11-24T13:22:30Z
date_published: 2022-05-24T00:00:00Z
date_updated: 2025-07-14T09:10:00Z
day: '24'
department:
- _id: KrCh
- _id: ToHe
doi: 10.48550/arXiv.2205.11991
ec_funded: 1
external_id:
  arxiv:
  - '2205.11991'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2205.11991
month: '05'
oa: 1
oa_version: Preprint
project:
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
  call_identifier: H2020
  grant_number: '101020093'
  name: Vigilant Algorithmic Monitoring of Software
- _id: 0599E47C-7A3F-11EA-A408-12923DDC885E
  call_identifier: H2020
  grant_number: '863818'
  name: 'Formal Methods for Stochastic Models: Algorithms and Applications'
- _id: 2564DBCA-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '665385'
  name: International IST Doctoral Program
publication: arXiv
publication_status: submitted
related_material:
  record:
  - id: '14539'
    relation: dissertation_contains
    status: public
status: public
title: Learning stabilizing policies in stochastic control systems
type: preprint
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
year: '2022'
...
---
_id: '13064'
abstract:
- lang: eng
  text: Genetically informed, deep-phenotyped biobanks are an important research resource
    and it is imperative that the most powerful, versatile, and efficient analysis
    approaches are used. Here, we apply our recently developed Bayesian grouped mixture
    of regressions model (GMRM) in the UK and Estonian Biobanks and obtain the highest
    genomic prediction accuracy reported to date across 21 heritable traits. When
    compared to other approaches, GMRM accuracy was greater than annotation prediction
    models run in the LDAK or LDPred-funct software by 15% (SE 7%) and 14% (SE 2%),
    respectively, and was 18% (SE 3%) greater than a baseline BayesR model without
    single-nucleotide polymorphism (SNP) markers grouped into minor allele frequency–linkage
    disequilibrium (MAF-LD) annotation categories. For height, the prediction accuracy
    R2 was 47% in a UK Biobank holdout sample, which was 76% of the estimated h2SNP.
    We then extend our GMRM prediction model to provide mixed-linear model association
    (MLMA) SNP marker estimates for genome-wide association (GWAS) discovery, which
    increased the independent loci detected to 16,162 in unrelated UK Biobank individuals,
    compared to 10,550 from BoltLMM and 10,095 from Regenie, a 62 and 65% increase,
    respectively. The average χ2 value of the leading markers increased by 15.24 (SE
    0.41) for every 1% increase in prediction accuracy gained over a baseline BayesR
    model across the traits. Thus, we show that modeling genetic associations accounting
    for MAF and LD differences among SNP markers, and incorporating prior knowledge
    of genomic function, is important for both genomic prediction and discovery in
    large-scale individual-level studies.
article_processing_charge: No
author:
- first_name: Etienne
  full_name: Orliac, Etienne
  last_name: Orliac
- first_name: Daniel
  full_name: Trejo Banos, Daniel
  last_name: Trejo Banos
- first_name: Sven
  full_name: Ojavee, Sven
  last_name: Ojavee
- first_name: Kristi
  full_name: Läll, Kristi
  last_name: Läll
- first_name: Reedik
  full_name: Mägi, Reedik
  last_name: Mägi
- first_name: Peter
  full_name: Visscher, Peter
  last_name: Visscher
- first_name: Matthew Richard
  full_name: Robinson, Matthew Richard
  id: E5D42276-F5DA-11E9-8E24-6303E6697425
  last_name: Robinson
  orcid: 0000-0001-8982-8813
citation:
  ama: Orliac E, Trejo Banos D, Ojavee S, et al. Improving genome-wide association
    discovery and genomic prediction accuracy in biobank data. 2022. doi:<a href="https://doi.org/10.5061/DRYAD.GTHT76HMZ">10.5061/DRYAD.GTHT76HMZ</a>
  apa: Orliac, E., Trejo Banos, D., Ojavee, S., Läll, K., Mägi, R., Visscher, P.,
    &#38; Robinson, M. R. (2022). Improving genome-wide association discovery and
    genomic prediction accuracy in biobank data. Dryad. <a href="https://doi.org/10.5061/DRYAD.GTHT76HMZ">https://doi.org/10.5061/DRYAD.GTHT76HMZ</a>
  chicago: Orliac, Etienne, Daniel Trejo Banos, Sven Ojavee, Kristi Läll, Reedik Mägi,
    Peter Visscher, and Matthew Richard Robinson. “Improving Genome-Wide Association
    Discovery and Genomic Prediction Accuracy in Biobank Data.” Dryad, 2022. <a href="https://doi.org/10.5061/DRYAD.GTHT76HMZ">https://doi.org/10.5061/DRYAD.GTHT76HMZ</a>.
  ieee: E. Orliac <i>et al.</i>, “Improving genome-wide association discovery and
    genomic prediction accuracy in biobank data.” Dryad, 2022.
  ista: Orliac E, Trejo Banos D, Ojavee S, Läll K, Mägi R, Visscher P, Robinson MR.
    2022. Improving genome-wide association discovery and genomic prediction accuracy
    in biobank data, Dryad, <a href="https://doi.org/10.5061/DRYAD.GTHT76HMZ">10.5061/DRYAD.GTHT76HMZ</a>.
  mla: Orliac, Etienne, et al. <i>Improving Genome-Wide Association Discovery and
    Genomic Prediction Accuracy in Biobank Data</i>. Dryad, 2022, doi:<a href="https://doi.org/10.5061/DRYAD.GTHT76HMZ">10.5061/DRYAD.GTHT76HMZ</a>.
  short: E. Orliac, D. Trejo Banos, S. Ojavee, K. Läll, R. Mägi, P. Visscher, M.R.
    Robinson, (2022).
date_created: 2023-05-23T16:28:13Z
date_published: 2022-09-02T00:00:00Z
date_updated: 2023-08-03T12:40:37Z
day: '02'
ddc:
- '570'
department:
- _id: MaRo
doi: 10.5061/DRYAD.GTHT76HMZ
main_file_link:
- open_access: '1'
  url: https://doi.org/10.5061/dryad.gtht76hmz
month: '09'
oa: 1
oa_version: Published Version
publisher: Dryad
related_material:
  record:
  - id: '11733'
    relation: used_in_publication
    status: public
status: public
title: Improving genome-wide association discovery and genomic prediction accuracy
  in biobank data
tmp:
  image: /images/cc_0.png
  legal_code_url: https://creativecommons.org/publicdomain/zero/1.0/legalcode
  name: Creative Commons Public Domain Dedication (CC0 1.0)
  short: CC0 (1.0)
type: research_data_reference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '13066'
abstract:
- lang: eng
  text: Chromosomal inversions have been shown to play a major role in local adaptation
    by suppressing recombination between alternative arrangements and maintaining
    beneficial allele combinations. However, so far, their importance relative to
    the remaining genome remains largely unknown. Understanding the genetic architecture
    of adaptation requires better estimates of how loci of different effect sizes
    contribute to phenotypic variation. Here, we used three Swedish islands where
    the marine snail Littorina saxatilis has repeatedly evolved into two distinct
    ecotypes along a habitat transition. We estimated the contribution of inversion
    polymorphisms to phenotypic divergence while controlling for polygenic effects
    in the remaining genome using a quantitative genetics framework. We confirmed
    the importance of inversions but showed that contributions of loci outside inversions
    are of similar magnitude, with variable proportions dependent on the trait and
    the population. Some inversions showed consistent effects across all sites, whereas
    others exhibited site-specific effects, indicating that the genomic basis for
    replicated phenotypic divergence is only partly shared. The contributions of sexual
    dimorphism as well as environmental factors to phenotypic variation were significant
    but minor compared to inversions and polygenic background. Overall, this integrated
    approach provides insight into the multiple mechanisms contributing to parallel
    phenotypic divergence.
article_processing_charge: No
author:
- first_name: Eva
  full_name: Koch, Eva
  last_name: Koch
- first_name: Mark
  full_name: Ravinet, Mark
  last_name: Ravinet
- first_name: Anja M
  full_name: Westram, Anja M
  id: 3C147470-F248-11E8-B48F-1D18A9856A87
  last_name: Westram
  orcid: 0000-0003-1050-4969
- first_name: Kerstin
  full_name: Johannesson, Kerstin
  last_name: Johannesson
- first_name: Roger
  full_name: Butlin, Roger
  last_name: Butlin
citation:
  ama: 'Koch E, Ravinet M, Westram AM, Johannesson K, Butlin R. Data from: Genetic
    architecture of repeated phenotypic divergence in Littorina saxatilis ecotype
    evolution. 2022. doi:<a href="https://doi.org/10.5061/DRYAD.M905QFV4B">10.5061/DRYAD.M905QFV4B</a>'
  apa: 'Koch, E., Ravinet, M., Westram, A. M., Johannesson, K., &#38; Butlin, R. (2022).
    Data from: Genetic architecture of repeated phenotypic divergence in Littorina
    saxatilis ecotype evolution. Dryad. <a href="https://doi.org/10.5061/DRYAD.M905QFV4B">https://doi.org/10.5061/DRYAD.M905QFV4B</a>'
  chicago: 'Koch, Eva, Mark Ravinet, Anja M Westram, Kerstin Johannesson, and Roger
    Butlin. “Data from: Genetic Architecture of Repeated Phenotypic Divergence in
    Littorina Saxatilis Ecotype Evolution.” Dryad, 2022. <a href="https://doi.org/10.5061/DRYAD.M905QFV4B">https://doi.org/10.5061/DRYAD.M905QFV4B</a>.'
  ieee: 'E. Koch, M. Ravinet, A. M. Westram, K. Johannesson, and R. Butlin, “Data
    from: Genetic architecture of repeated phenotypic divergence in Littorina saxatilis
    ecotype evolution.” Dryad, 2022.'
  ista: 'Koch E, Ravinet M, Westram AM, Johannesson K, Butlin R. 2022. Data from:
    Genetic architecture of repeated phenotypic divergence in Littorina saxatilis
    ecotype evolution, Dryad, <a href="https://doi.org/10.5061/DRYAD.M905QFV4B">10.5061/DRYAD.M905QFV4B</a>.'
  mla: 'Koch, Eva, et al. <i>Data from: Genetic Architecture of Repeated Phenotypic
    Divergence in Littorina Saxatilis Ecotype Evolution</i>. Dryad, 2022, doi:<a href="https://doi.org/10.5061/DRYAD.M905QFV4B">10.5061/DRYAD.M905QFV4B</a>.'
  short: E. Koch, M. Ravinet, A.M. Westram, K. Johannesson, R. Butlin, (2022).
date_created: 2023-05-23T16:33:12Z
date_published: 2022-07-28T00:00:00Z
date_updated: 2023-08-04T09:42:10Z
day: '28'
ddc:
- '570'
department:
- _id: NiBa
doi: 10.5061/DRYAD.M905QFV4B
main_file_link:
- open_access: '1'
  url: https://doi.org/10.5061/dryad.m905qfv4b
month: '07'
oa: 1
oa_version: Published Version
publisher: Dryad
related_material:
  record:
  - id: '12247'
    relation: used_in_publication
    status: public
status: public
title: 'Data from: Genetic architecture of repeated phenotypic divergence in Littorina
  saxatilis ecotype evolution'
tmp:
  image: /images/cc_0.png
  legal_code_url: https://creativecommons.org/publicdomain/zero/1.0/legalcode
  name: Creative Commons Public Domain Dedication (CC0 1.0)
  short: CC0 (1.0)
type: research_data_reference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '13076'
abstract:
- lang: eng
  text: "The source code for replicating experiments presented in the paper. The
    implementation of the designed priority schedulers can be found in Galois-2.2.1/include/Galois/WorkList/:
    StealingMultiQueue.h is the StealingMultiQueue, and MQOptimized/ contains the
    MQ Optimized variants. We provide images that contain all the dependencies and
    datasets. Images can be pulled from the npostnikova/mq-based-schedulers repository,
    or downloaded from Zenodo. See the readme for more detail."
article_processing_charge: No
author:
- first_name: Anastasiia
  full_name: Postnikova, Anastasiia
  last_name: Postnikova
- first_name: Nikita
  full_name: Koval, Nikita
  id: 2F4DB10C-F248-11E8-B48F-1D18A9856A87
  last_name: Koval
- first_name: Giorgi
  full_name: Nadiradze, Giorgi
  id: 3279A00C-F248-11E8-B48F-1D18A9856A87
  last_name: Nadiradze
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
citation:
  ama: Postnikova A, Koval N, Nadiradze G, Alistarh D-A. Multi-queues can be state-of-the-art
    priority schedulers. 2022. doi:<a href="https://doi.org/10.5281/ZENODO.5733408">10.5281/ZENODO.5733408</a>
  apa: Postnikova, A., Koval, N., Nadiradze, G., &#38; Alistarh, D.-A. (2022). Multi-queues
    can be state-of-the-art priority schedulers. Zenodo. <a href="https://doi.org/10.5281/ZENODO.5733408">https://doi.org/10.5281/ZENODO.5733408</a>
  chicago: Postnikova, Anastasiia, Nikita Koval, Giorgi Nadiradze, and Dan-Adrian
    Alistarh. “Multi-Queues Can Be State-of-the-Art Priority Schedulers.” Zenodo,
    2022. <a href="https://doi.org/10.5281/ZENODO.5733408">https://doi.org/10.5281/ZENODO.5733408</a>.
  ieee: A. Postnikova, N. Koval, G. Nadiradze, and D.-A. Alistarh, “Multi-queues can
    be state-of-the-art priority schedulers.” Zenodo, 2022.
  ista: Postnikova A, Koval N, Nadiradze G, Alistarh D-A. 2022. Multi-queues can be
    state-of-the-art priority schedulers, Zenodo, <a href="https://doi.org/10.5281/ZENODO.5733408">10.5281/ZENODO.5733408</a>.
  mla: Postnikova, Anastasiia, et al. <i>Multi-Queues Can Be State-of-the-Art Priority
    Schedulers</i>. Zenodo, 2022, doi:<a href="https://doi.org/10.5281/ZENODO.5733408">10.5281/ZENODO.5733408</a>.
  short: A. Postnikova, N. Koval, G. Nadiradze, D.-A. Alistarh, (2022).
date_created: 2023-05-23T17:05:40Z
date_published: 2022-01-03T00:00:00Z
date_updated: 2023-08-03T06:48:34Z
day: '03'
ddc:
- '510'
department:
- _id: DaAl
doi: 10.5281/ZENODO.5733408
main_file_link:
- open_access: '1'
  url: https://doi.org/10.5281/zenodo.5813846
month: '01'
oa: 1
oa_version: Published Version
publisher: Zenodo
related_material:
  link:
  - relation: software
    url: https://github.com/npostnikova/mq-based-schedulers/tree/v1.1
  record:
  - id: '11180'
    relation: used_in_publication
    status: public
status: public
title: Multi-queues can be state-of-the-art priority schedulers
type: research_data_reference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '13239'
abstract:
- lang: eng
  text: Brains are thought to engage in predictive learning - learning to predict
    upcoming stimuli - to construct an internal model of their environment. This is
    especially notable for spatial navigation, as first described by Tolman’s latent
    learning tasks. However, predictive learning has also been observed in sensory
    cortex, in settings unrelated to spatial navigation. Apart from normative frameworks
    such as active inference or efficient coding, what could be the utility of learning
    to predict the patterns of occurrence of correlated stimuli? Here we show that
    prediction, and thereby the construction of an internal model of sequential stimuli,
    can bootstrap the learning process of a working memory task in a recurrent neural
    network. We implemented predictive learning alongside working memory match-tasks,
    and networks emerged to solve the prediction task first by encoding information
    across time to predict upcoming stimuli, and then eavesdropped on this solution
    to solve the matching task. Eavesdropping was most beneficial when neural resources
    were limited. Hence, predictive learning acts as a general neural mechanism to
    learn to store sensory information that can later be essential for working memory
    tasks.
acknowledgement: "The authors would like to thank members of the Vogels lab and Manohar
  lab, as well as Adam Packer, Andrew Saxe, Stefano Sarao Mannelli and Jacob Bakermans
  for fruitful discussions and comments on earlier versions of the manuscript. TLvdP
  was supported by funding from the Biotechnology and Biological Sciences Research
  Council (BBSRC) [grant number BB/M011224/1]. TPV was supported by an ERC Consolidator
  Grant (SYNAPSEEK). SGM was funded by a MRC Clinician Scientist Fellowship MR/P00878X
  and Leverhulme Grant RPG-2018-310."
article_processing_charge: No
author:
- first_name: Thijs L.
  full_name: Van Der Plas, Thijs L.
  last_name: Van Der Plas
- first_name: Tim P
  full_name: Vogels, Tim P
  id: CB6FF8D2-008F-11EA-8E08-2637E6697425
  last_name: Vogels
  orcid: 0000-0003-3295-6181
- first_name: Sanjay G.
  full_name: Manohar, Sanjay G.
  last_name: Manohar
citation:
  ama: 'Van Der Plas TL, Vogels TP, Manohar SG. Predictive learning enables neural
    networks to learn complex working memory tasks. In: <i>Proceedings of Machine
    Learning Research</i>. Vol 199. ML Research Press; 2022:518-531.'
  apa: Van Der Plas, T. L., Vogels, T. P., &#38; Manohar, S. G. (2022). Predictive
    learning enables neural networks to learn complex working memory tasks. In <i>Proceedings
    of Machine Learning Research</i> (Vol. 199, pp. 518–531). ML Research Press.
  chicago: Van Der Plas, Thijs L., Tim P Vogels, and Sanjay G. Manohar. “Predictive
    Learning Enables Neural Networks to Learn Complex Working Memory Tasks.” In <i>Proceedings
    of Machine Learning Research</i>, 199:518–31. ML Research Press, 2022.
  ieee: T. L. Van Der Plas, T. P. Vogels, and S. G. Manohar, “Predictive learning
    enables neural networks to learn complex working memory tasks,” in <i>Proceedings
    of Machine Learning Research</i>, 2022, vol. 199, pp. 518–531.
  ista: Van Der Plas TL, Vogels TP, Manohar SG. 2022. Predictive learning enables
    neural networks to learn complex working memory tasks. Proceedings of Machine
    Learning Research. vol. 199, 518–531.
  mla: Van Der Plas, Thijs L., et al. “Predictive Learning Enables Neural Networks
    to Learn Complex Working Memory Tasks.” <i>Proceedings of Machine Learning Research</i>,
    vol. 199, ML Research Press, 2022, pp. 518–31.
  short: T.L. Van Der Plas, T.P. Vogels, S.G. Manohar, in:, Proceedings of Machine
    Learning Research, ML Research Press, 2022, pp. 518–531.
date_created: 2023-07-16T22:01:12Z
date_published: 2022-12-01T00:00:00Z
date_updated: 2023-07-18T06:36:28Z
day: '01'
ddc:
- '000'
department:
- _id: TiVo
ec_funded: 1
file:
- access_level: open_access
  checksum: 7530a93ef42e10b4db1e5e4b69796e93
  content_type: application/pdf
  creator: dernst
  date_created: 2023-07-18T06:32:38Z
  date_updated: 2023-07-18T06:32:38Z
  file_id: '13243'
  file_name: 2022_PMLR_vanderPlas.pdf
  file_size: 585135
  relation: main_file
  success: 1
file_date_updated: 2023-07-18T06:32:38Z
has_accepted_license: '1'
intvolume: '199'
language:
- iso: eng
month: '12'
oa: 1
oa_version: Published Version
page: 518-531
project:
- _id: 0aacfa84-070f-11eb-9043-d7eb2c709234
  call_identifier: H2020
  grant_number: '819603'
  name: Learning the shape of synaptic plasticity rules for neuronal architectures
    and function through machine learning.
publication: Proceedings of Machine Learning Research
publication_identifier:
  eissn:
  - 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
scopus_import: '1'
status: public
title: Predictive learning enables neural networks to learn complex working memory
  tasks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 199
year: '2022'
...
---
_id: '13240'
abstract:
- lang: eng
  text: Ustilago maydis is a biotrophic phytopathogenic fungus that causes corn smut
    disease. As a well-established model system, U. maydis is genetically fully accessible
    with large omics datasets available and subject to various biological questions
    ranging from DNA-repair, RNA-transport, and protein secretion to disease biology.
    For many genetic approaches, tight control of transgene regulation is important.
    Here we established an optimised version of the Tetracycline-ON (TetON) system
    for U. maydis. We demonstrate the Tetracycline concentration-dependent expression
    of fluorescent protein transgenes and the system’s suitability for the induced
    expression of the toxic protein BCL2 Associated X-1 (Bax1). The Golden Gate compatible
    vector system contains a native minimal promoter from the mating factor a-1 encoding
    gene, mfa, with ten copies of the tet-regulated operator (tetO) and a codon-optimised
    Tet-repressor (tetR*) which is translationally fused to the native transcriptional
    corepressor Mql1 (UMAG_05501). The metabolism-independent transcriptional regulator
    system is functional both in liquid culture and on solid media in the presence
    of the inducer and can become a useful tool for toxin-antitoxin studies,
    identification of antifungal proteins, and to study functions of toxic gene products
    in Ustilago maydis.
acknowledgement: "The research leading to these results received funding from the
  European Research Council under the European Union’s Seventh Framework Programme
  ERC-2013-STG (grant agreement: 335691), the Austrian Science Fund (I 3033-B22),
  the Austrian Academy of Sciences, and the Deutsche Forschungsgemeinschaft (DFG,
  German Research Foundation) under Germany's Excellence Strategy EXC-2070-390732324
  (PhenoRob) and DFG grant (DJ 64/5-1). We would like to thank the GMI/IMBA/IMP
  core facilities for their excellent technical support. We would like to acknowledge
  Dr. Sinéad A. O’Sullivan from DZNE, University of Bonn for providing anti-GFP antibodies.
  The authors are thankful to the Excellence University of Bonn for providing infrastructure
  and instrumentation facilities at the INRES-Plant Pathology department."
article_number: '1029114'
article_processing_charge: Yes
article_type: original
author:
- first_name: Kishor D.
  full_name: Ingole, Kishor D.
  last_name: Ingole
- first_name: Nithya
  full_name: Nagarajan, Nithya
  last_name: Nagarajan
- first_name: Simon
  full_name: Uhse, Simon
  last_name: Uhse
- first_name: Caterina
  full_name: Giannini, Caterina
  id: e3fdddd5-f6e0-11ea-865d-ca99ee6367f4
  last_name: Giannini
- first_name: Armin
  full_name: Djamei, Armin
  last_name: Djamei
citation:
  ama: Ingole KD, Nagarajan N, Uhse S, Giannini C, Djamei A. Tetracycline-controlled
    (TetON) gene expression system for the smut fungus Ustilago maydis. <i>Frontiers
    in Fungal Biology</i>. 2022;3. doi:<a href="https://doi.org/10.3389/ffunb.2022.1029114">10.3389/ffunb.2022.1029114</a>
  apa: Ingole, K. D., Nagarajan, N., Uhse, S., Giannini, C., &#38; Djamei, A. (2022).
    Tetracycline-controlled (TetON) gene expression system for the smut fungus Ustilago
    maydis. <i>Frontiers in Fungal Biology</i>. Frontiers Media. <a href="https://doi.org/10.3389/ffunb.2022.1029114">https://doi.org/10.3389/ffunb.2022.1029114</a>
  chicago: Ingole, Kishor D., Nithya Nagarajan, Simon Uhse, Caterina Giannini, and
    Armin Djamei. “Tetracycline-Controlled (TetON) Gene Expression System for the
    Smut Fungus Ustilago Maydis.” <i>Frontiers in Fungal Biology</i>. Frontiers Media,
    2022. <a href="https://doi.org/10.3389/ffunb.2022.1029114">https://doi.org/10.3389/ffunb.2022.1029114</a>.
  ieee: K. D. Ingole, N. Nagarajan, S. Uhse, C. Giannini, and A. Djamei, “Tetracycline-controlled
    (TetON) gene expression system for the smut fungus Ustilago maydis,” <i>Frontiers
    in Fungal Biology</i>, vol. 3. Frontiers Media, 2022.
  ista: Ingole KD, Nagarajan N, Uhse S, Giannini C, Djamei A. 2022. Tetracycline-controlled
    (TetON) gene expression system for the smut fungus Ustilago maydis. Frontiers
    in Fungal Biology. 3, 1029114.
  mla: Ingole, Kishor D., et al. “Tetracycline-Controlled (TetON) Gene Expression
    System for the Smut Fungus Ustilago Maydis.” <i>Frontiers in Fungal Biology</i>,
    vol. 3, 1029114, Frontiers Media, 2022, doi:<a href="https://doi.org/10.3389/ffunb.2022.1029114">10.3389/ffunb.2022.1029114</a>.
  short: K.D. Ingole, N. Nagarajan, S. Uhse, C. Giannini, A. Djamei, Frontiers in
    Fungal Biology 3 (2022).
date_created: 2023-07-16T22:01:12Z
date_published: 2022-10-19T00:00:00Z
date_updated: 2024-03-06T14:01:57Z
day: '19'
ddc:
- '579'
department:
- _id: JiFr
doi: 10.3389/ffunb.2022.1029114
file:
- access_level: open_access
  checksum: 2254e0119c0749d6f7237084fefcece6
  content_type: application/pdf
  creator: dernst
  date_created: 2023-07-17T11:46:34Z
  date_updated: 2023-07-17T11:46:34Z
  file_id: '13242'
  file_name: 2023_FrontiersFungalBio_Ingole.pdf
  file_size: 27966699
  relation: main_file
  success: 1
file_date_updated: 2023-07-17T11:46:34Z
has_accepted_license: '1'
intvolume: '3'
language:
- iso: eng
month: '10'
oa: 1
oa_version: Published Version
publication: Frontiers in Fungal Biology
publication_identifier:
  eissn:
  - 2673-6128
publication_status: published
publisher: Frontiers Media
quality_controlled: '1'
scopus_import: '1'
status: public
title: Tetracycline-controlled (TetON) gene expression system for the smut fungus
  Ustilago maydis
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 3
year: '2022'
...
---
_id: '13241'
abstract:
- lang: eng
  text: Addressing fairness concerns about machine learning models is a crucial step
    towards their long-term adoption in real-world automated systems. Many approaches
    for training fair models from data have been developed and an implicit assumption
    about such algorithms is that they are able to recover a fair model, despite potential
    historical biases in the data. In this work we show a number of impossibility
    results that indicate that there is no learning algorithm that can recover a fair
    model when a proportion of the dataset is subject to arbitrary manipulations.
    Specifically, we prove that there are situations in which an adversary can force
    any learner to return a biased classifier, with or without degrading accuracy,
    and that the strength of this bias increases for learning problems with underrepresented
    protected groups in the data. Our results emphasize the importance of studying
    further data corruption models of various strengths and of establishing stricter
    data collection practices for fairness-aware learning.
acknowledgement: "This paper is a shortened, workshop version of Konstantinov and
  Lampert (2021), https://arxiv.org/abs/2102.06004. For further results, including
  an analysis of algorithms achieving the lower bounds from this paper, we refer to
  the full version."
article_processing_charge: No
arxiv: 1
author:
- first_name: Nikola H
  full_name: Konstantinov, Nikola H
  id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
  last_name: Konstantinov
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Konstantinov NH, Lampert C. On the impossibility of fairness-aware learning
    from corrupted data. In: <i>Proceedings of Machine Learning Research</i>. Vol
    171. ML Research Press; 2022:59-83.'
  apa: Konstantinov, N. H., &#38; Lampert, C. (2022). On the impossibility of fairness-aware
    learning from corrupted data. In <i>Proceedings of Machine Learning Research</i>
    (Vol. 171, pp. 59–83). ML Research Press.
  chicago: Konstantinov, Nikola H, and Christoph Lampert. “On the Impossibility of
    Fairness-Aware Learning from Corrupted Data.” In <i>Proceedings of Machine Learning
    Research</i>, 171:59–83. ML Research Press, 2022.
  ieee: N. H. Konstantinov and C. Lampert, “On the impossibility of fairness-aware
    learning from corrupted data,” in <i>Proceedings of Machine Learning Research</i>,
    2022, vol. 171, pp. 59–83.
  ista: Konstantinov NH, Lampert C. 2022. On the impossibility of fairness-aware learning
    from corrupted data. Proceedings of Machine Learning Research. vol. 171, 59–83.
  mla: Konstantinov, Nikola H., and Christoph Lampert. “On the Impossibility of Fairness-Aware
    Learning from Corrupted Data.” <i>Proceedings of Machine Learning Research</i>,
    vol. 171, ML Research Press, 2022, pp. 59–83.
  short: N.H. Konstantinov, C. Lampert, in:, Proceedings of Machine Learning Research,
    ML Research Press, 2022, pp. 59–83.
date_created: 2023-07-16T22:01:13Z
date_published: 2022-12-01T00:00:00Z
date_updated: 2023-09-26T10:44:37Z
day: '01'
department:
- _id: ChLa
external_id:
  arxiv:
  - '2102.06004'
intvolume: '171'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2102.06004
month: '12'
oa: 1
oa_version: Preprint
page: 59-83
publication: Proceedings of Machine Learning Research
publication_identifier:
  eissn:
  - 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
  record:
  - id: '10802'
    relation: extended_version
    status: public
scopus_import: '1'
status: public
title: On the impossibility of fairness-aware learning from corrupted data
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 171
year: '2022'
...
---
_id: '14093'
abstract:
- lang: eng
  text: ' We propose a stochastic conditional gradient method (CGM) for minimizing
    convex finite-sum objectives formed as a sum of smooth and non-smooth terms. Existing
    CGM variants for this template either suffer from slow convergence rates, or require
    carefully increasing the batch size over the course of the algorithm’s execution,
    which leads to computing full gradients. In contrast, the proposed method, equipped
    with a stochastic average gradient (SAG) estimator, requires only one sample per
    iteration. Nevertheless, it guarantees fast convergence rates on par with more
    sophisticated variance reduction techniques. In applications we put special emphasis
    on problems with a large number of separable constraints. Such problems are prevalent
    among semidefinite programming (SDP) formulations arising in machine learning
    and theoretical computer science. We provide numerical experiments on matrix completion,
    unsupervised clustering, and sparsest-cut SDPs.'
alternative_title:
- PMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Gideon
  full_name: Dresdner, Gideon
  last_name: Dresdner
- first_name: Maria-Luiza
  full_name: Vladarean, Maria-Luiza
  last_name: Vladarean
- first_name: Gunnar
  full_name: Rätsch, Gunnar
  last_name: Rätsch
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Volkan
  full_name: Cevher, Volkan
  last_name: Cevher
- first_name: Alp
  full_name: Yurtsever, Alp
  last_name: Yurtsever
citation:
  ama: 'Dresdner G, Vladarean M-L, Rätsch G, Locatello F, Cevher V, Yurtsever A. Faster
    one-sample stochastic conditional gradient method for composite convex minimization.
    In: <i>Proceedings of the 25th International Conference on Artificial Intelligence
    and Statistics</i>. Vol 151. ML Research Press; 2022:8439-8457.'
  apa: 'Dresdner, G., Vladarean, M.-L., Rätsch, G., Locatello, F., Cevher, V., &#38;
    Yurtsever, A. (2022). Faster one-sample stochastic conditional gradient method
    for composite convex minimization. In <i>Proceedings of the 25th International
    Conference on Artificial Intelligence and Statistics</i> (Vol. 151, pp. 8439–8457).
    Virtual: ML Research Press.'
  chicago: Dresdner, Gideon, Maria-Luiza Vladarean, Gunnar Rätsch, Francesco Locatello,
    Volkan Cevher, and Alp Yurtsever. “Faster One-Sample Stochastic Conditional Gradient
    Method for Composite Convex Minimization.” In <i>Proceedings of the 25th International
    Conference on Artificial Intelligence and Statistics</i>, 151:8439–57. ML Research
    Press, 2022.
  ieee: G. Dresdner, M.-L. Vladarean, G. Rätsch, F. Locatello, V. Cevher, and A. Yurtsever,
    “Faster one-sample stochastic conditional gradient method for composite convex
    minimization,” in <i>Proceedings of the 25th International Conference on Artificial
    Intelligence and Statistics</i>, Virtual, 2022, vol. 151, pp. 8439–8457.
  ista: 'Dresdner G, Vladarean M-L, Rätsch G, Locatello F, Cevher V, Yurtsever A.
    2022. Faster one-sample stochastic conditional gradient method for composite
    convex minimization. Proceedings of the 25th International Conference on Artificial
    Intelligence and Statistics. AISTATS: Conference on Artificial Intelligence and
    Statistics, PMLR, vol. 151, 8439–8457.'
  mla: Dresdner, Gideon, et al. “Faster One-Sample Stochastic Conditional Gradient
    Method for Composite Convex Minimization.” <i>Proceedings of the 25th International
    Conference on Artificial Intelligence and Statistics</i>, vol. 151, ML Research
    Press, 2022, pp. 8439–57.
  short: G. Dresdner, M.-L. Vladarean, G. Rätsch, F. Locatello, V. Cevher, A. Yurtsever,
    in:, Proceedings of the 25th International Conference on Artificial Intelligence
    and Statistics, ML Research Press, 2022, pp. 8439–8457.
conference:
  end_date: 2022-03-30
  location: Virtual
  name: 'AISTATS: Conference on Artificial Intelligence and Statistics'
  start_date: 2022-03-28
date_created: 2023-08-21T09:27:43Z
date_published: 2022-04-01T00:00:00Z
date_updated: 2023-09-06T10:28:17Z
day: '01'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2202.13212'
intvolume: '151'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2202.13212
month: '04'
oa: 1
oa_version: Preprint
page: 8439-8457
publication: Proceedings of the 25th International Conference on Artificial Intelligence
  and Statistics
publication_identifier:
  issn:
  - 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Faster one-sample stochastic conditional gradient method for composite convex
  minimization'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 151
year: '2022'
...
---
_id: '14106'
abstract:
- lang: eng
  text: "We show that deep networks trained to satisfy demographic parity often do
    so\r\nthrough a form of race or gender awareness, and that the more we force a
    network\r\nto be fair, the more accurately we can recover race or gender from
    the internal state\r\nof the network. Based on this observation, we investigate
    an alternative fairness\r\napproach: we add a second classification head to the
    network to explicitly predict\r\nthe protected attribute (such as race or gender)
    alongside the original task. After\r\ntraining the two-headed network, we enforce
    demographic parity by merging the\r\ntwo heads, creating a network with the same
    architecture as the original network.\r\nWe establish a close relationship between
    existing approaches and our approach\r\nby showing (1) that the decisions of a
    fair classifier are well-approximated by our\r\napproach, and (2) that an unfair
    and optimally accurate classifier can be recovered\r\nfrom a fair classifier and
    our second head predicting the protected attribute. We use\r\nour explicit formulation
    to argue that the existing fairness approaches, just as ours,\r\ndemonstrate disparate
    treatment and that they are likely to be unlawful in a wide\r\nrange of scenarios
    under US law."
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
arxiv: 1
author:
- first_name: Michael
  full_name: Lohaus, Michael
  last_name: Lohaus
- first_name: Matthäus
  full_name: Kleindessner, Matthäus
  last_name: Kleindessner
- first_name: Krishnaram
  full_name: Kenthapadi, Krishnaram
  last_name: Kenthapadi
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
citation:
  ama: 'Lohaus M, Kleindessner M, Kenthapadi K, Locatello F, Russell C. Are two heads
    the same as one? Identifying disparate treatment in fair neural networks. In:
    <i>36th Conference on Neural Information Processing Systems</i>. Vol 35. Neural
    Information Processing Systems Foundation; 2022:16548-16562.'
  apa: 'Lohaus, M., Kleindessner, M., Kenthapadi, K., Locatello, F., &#38; Russell,
    C. (2022). Are two heads the same as one? Identifying disparate treatment in fair
    neural networks. In <i>36th Conference on Neural Information Processing Systems</i>
    (Vol. 35, pp. 16548–16562). New Orleans, LA, United States: Neural Information
    Processing Systems Foundation.'
  chicago: Lohaus, Michael, Matthäus Kleindessner, Krishnaram Kenthapadi, Francesco
    Locatello, and Chris Russell. “Are Two Heads the Same as One? Identifying Disparate
    Treatment in Fair Neural Networks.” In <i>36th Conference on Neural Information
    Processing Systems</i>, 35:16548–62. Neural Information Processing Systems Foundation,
    2022.
  ieee: M. Lohaus, M. Kleindessner, K. Kenthapadi, F. Locatello, and C. Russell, “Are
    two heads the same as one? Identifying disparate treatment in fair neural networks,”
    in <i>36th Conference on Neural Information Processing Systems</i>, New Orleans,
    LA, United States, 2022, vol. 35, pp. 16548–16562.
  ista: 'Lohaus M, Kleindessner M, Kenthapadi K, Locatello F, Russell C. 2022. Are
    two heads the same as one? Identifying disparate treatment in fair neural networks.
    36th Conference on Neural Information Processing Systems. NeurIPS: Neural Information
    Processing Systems, Advances in Neural Information Processing Systems, vol. 35,
    16548–16562.'
  mla: Lohaus, Michael, et al. “Are Two Heads the Same as One? Identifying Disparate
    Treatment in Fair Neural Networks.” <i>36th Conference on Neural Information Processing
    Systems</i>, vol. 35, Neural Information Processing Systems Foundation, 2022,
    pp. 16548–62.
  short: M. Lohaus, M. Kleindessner, K. Kenthapadi, F. Locatello, C. Russell, in:,
    36th Conference on Neural Information Processing Systems, Neural Information Processing
    Systems Foundation, 2022, pp. 16548–16562.
conference:
  end_date: 2022-12-09
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-28
date_created: 2023-08-21T12:12:42Z
date_published: 2022-12-15T00:00:00Z
date_updated: 2023-09-06T10:29:42Z
day: '15'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2204.04440'
intvolume: '35'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2204.04440
month: '12'
oa: 1
oa_version: Preprint
page: 16548-16562
publication: 36th Conference on Neural Information Processing Systems
publication_identifier:
  isbn:
  - '9781713871088'
publication_status: published
publisher: Neural Information Processing Systems Foundation
quality_controlled: '1'
scopus_import: '1'
status: public
title: Are two heads the same as one? Identifying disparate treatment in fair neural
  networks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2022'
...
---
_id: '14107'
abstract:
- lang: eng
  text: "Amodal perception requires inferring the full shape of an object that is
    partially occluded. This task is particularly challenging on two levels: (1) it
    requires more information than what is contained in the instant retina or imaging
    sensor, (2) it is difficult to obtain enough well-annotated amodal labels for
    supervision. To this end, this paper develops a new framework of\r\nSelf-supervised
    amodal Video object segmentation (SaVos). Our method efficiently leverages the
    visual information of video temporal sequences to infer the amodal mask of objects.
    The key intuition is that the occluded part of an object can be explained away
    if that part is visible in other frames, possibly deformed as long as the deformation
    can be reasonably learned.\r\nAccordingly, we derive a novel self-supervised learning
    paradigm that efficiently utilizes the visible object parts as the supervision
    to guide the training on videos. In addition to learning type prior to complete
    masks for known types, SaVos also learns the spatiotemporal prior, which is also
    useful for the amodal task and could generalize to unseen types. The proposed\r\nframework
    achieves the state-of-the-art performance on the synthetic amodal segmentation
    benchmark FISHBOWL and the real world benchmark KINS-Video-Car. Further, it lends
    itself well to being transferred to novel distributions using test-time adaptation,
    outperforming existing models even after the transfer to a new distribution."
article_processing_charge: No
arxiv: 1
author:
- first_name: Jian
  full_name: Yao, Jian
  last_name: Yao
- first_name: Yuxin
  full_name: Hong, Yuxin
  last_name: Hong
- first_name: Chiyu
  full_name: Wang, Chiyu
  last_name: Wang
- first_name: Tianjun
  full_name: Xiao, Tianjun
  last_name: Xiao
- first_name: Tong
  full_name: He, Tong
  last_name: He
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: David
  full_name: Wipf, David
  last_name: Wipf
- first_name: Yanwei
  full_name: Fu, Yanwei
  last_name: Fu
- first_name: Zheng
  full_name: Zhang, Zheng
  last_name: Zhang
citation:
  ama: 'Yao J, Hong Y, Wang C, et al. Self-supervised amodal video object segmentation.
    In: <i>36th Conference on Neural Information Processing Systems</i>. ; 2022. doi:<a
    href="https://doi.org/10.48550/arXiv.2210.12733">10.48550/arXiv.2210.12733</a>'
  apa: Yao, J., Hong, Y., Wang, C., Xiao, T., He, T., Locatello, F., … Zhang, Z. (2022).
    Self-supervised amodal video object segmentation. In <i>36th Conference on Neural
    Information Processing Systems</i>. New Orleans, LA, United States. <a href="https://doi.org/10.48550/arXiv.2210.12733">https://doi.org/10.48550/arXiv.2210.12733</a>
  chicago: Yao, Jian, Yuxin Hong, Chiyu Wang, Tianjun Xiao, Tong He, Francesco Locatello,
    David Wipf, Yanwei Fu, and Zheng Zhang. “Self-Supervised Amodal Video Object Segmentation.”
    In <i>36th Conference on Neural Information Processing Systems</i>, 2022. <a href="https://doi.org/10.48550/arXiv.2210.12733">https://doi.org/10.48550/arXiv.2210.12733</a>.
  ieee: J. Yao <i>et al.</i>, “Self-supervised amodal video object segmentation,”
    in <i>36th Conference on Neural Information Processing Systems</i>, New Orleans,
    LA, United States, 2022.
  ista: 'Yao J, Hong Y, Wang C, Xiao T, He T, Locatello F, Wipf D, Fu Y, Zhang Z.
    2022. Self-supervised amodal video object segmentation. 36th Conference on Neural
    Information Processing Systems. NeurIPS: Neural Information Processing Systems.'
  mla: Yao, Jian, et al. “Self-Supervised Amodal Video Object Segmentation.” <i>36th
    Conference on Neural Information Processing Systems</i>, 2022, doi:<a href="https://doi.org/10.48550/arXiv.2210.12733">10.48550/arXiv.2210.12733</a>.
  short: J. Yao, Y. Hong, C. Wang, T. Xiao, T. He, F. Locatello, D. Wipf, Y. Fu, Z.
    Zhang, in:, 36th Conference on Neural Information Processing Systems, 2022.
conference:
  end_date: 2022-12-01
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-28
date_created: 2023-08-21T12:13:25Z
date_published: 2022-10-23T00:00:00Z
date_updated: 2023-09-11T09:34:17Z
day: '23'
department:
- _id: FrLo
doi: 10.48550/arXiv.2210.12733
extern: '1'
external_id:
  arxiv:
  - '2210.12733'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2210.12733
month: '10'
oa: 1
oa_version: Preprint
publication: 36th Conference on Neural Information Processing Systems
publication_status: published
status: public
title: Self-supervised amodal video object segmentation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14114'
abstract:
- lang: eng
  text: Algorithmic fairness is frequently motivated in terms of a trade-off in which
    overall performance is decreased so as to improve performance on disadvantaged
    groups where the algorithm would otherwise be less accurate. Contrary to this,
    we find that applying existing fairness approaches to computer vision improves
    fairness by degrading the performance of classifiers across all groups (with increased
    degradation on the best performing groups). Extending the bias-variance decomposition
    for classification to fairness, we theoretically explain why the majority of fairness
    methods designed for low capacity models should not be used in settings involving
    high-capacity models, a scenario common to computer vision. We corroborate this
    analysis with extensive experimental support that shows that many of the fairness
    heuristics used in computer vision also degrade performance on the most disadvantaged
    groups. Building on these insights, we propose an adaptive augmentation strategy
    that, uniquely, of all methods tested, improves performance for the disadvantaged
    groups.
article_processing_charge: No
arxiv: 1
author:
- first_name: Dominik
  full_name: Zietlow, Dominik
  last_name: Zietlow
- first_name: Michael
  full_name: Lohaus, Michael
  last_name: Lohaus
- first_name: Guha
  full_name: Balakrishnan, Guha
  last_name: Balakrishnan
- first_name: Matthaus
  full_name: Kleindessner, Matthaus
  last_name: Kleindessner
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Bernhard
  full_name: Scholkopf, Bernhard
  last_name: Scholkopf
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
citation:
  ama: 'Zietlow D, Lohaus M, Balakrishnan G, et al. Leveling down in computer vision:
    Pareto inefficiencies in fair deep classifiers. In: <i>2022 IEEE/CVF Conference
    on Computer Vision and Pattern Recognition</i>. Institute of Electrical and Electronics
    Engineers; 2022:10400-10411. doi:<a href="https://doi.org/10.1109/cvpr52688.2022.01016">10.1109/cvpr52688.2022.01016</a>'
  apa: 'Zietlow, D., Lohaus, M., Balakrishnan, G., Kleindessner, M., Locatello, F.,
    Scholkopf, B., &#38; Russell, C. (2022). Leveling down in computer vision: Pareto
    inefficiencies in fair deep classifiers. In <i>2022 IEEE/CVF Conference on Computer
    Vision and Pattern Recognition</i> (pp. 10400–10411). New Orleans, LA, United
    States: Institute of Electrical and Electronics Engineers. <a href="https://doi.org/10.1109/cvpr52688.2022.01016">https://doi.org/10.1109/cvpr52688.2022.01016</a>'
  chicago: 'Zietlow, Dominik, Michael Lohaus, Guha Balakrishnan, Matthaus Kleindessner,
    Francesco Locatello, Bernhard Scholkopf, and Chris Russell. “Leveling down in
    Computer Vision: Pareto Inefficiencies in Fair Deep Classifiers.” In <i>2022 IEEE/CVF
    Conference on Computer Vision and Pattern Recognition</i>, 10400–411. Institute
    of Electrical and Electronics Engineers, 2022. <a href="https://doi.org/10.1109/cvpr52688.2022.01016">https://doi.org/10.1109/cvpr52688.2022.01016</a>.'
  ieee: 'D. Zietlow <i>et al.</i>, “Leveling down in computer vision: Pareto inefficiencies
    in fair deep classifiers,” in <i>2022 IEEE/CVF Conference on Computer Vision and
    Pattern Recognition</i>, New Orleans, LA, United States, 2022, pp. 10400–10411.'
  ista: 'Zietlow D, Lohaus M, Balakrishnan G, Kleindessner M, Locatello F, Scholkopf
    B, Russell C. 2022. Leveling down in computer vision: Pareto inefficiencies in
    fair deep classifiers. 2022 IEEE/CVF Conference on Computer Vision and Pattern
    Recognition. CVPR: Conference on Computer Vision and Pattern Recognition, 10400–10411.'
  mla: 'Zietlow, Dominik, et al. “Leveling down in Computer Vision: Pareto Inefficiencies
    in Fair Deep Classifiers.” <i>2022 IEEE/CVF Conference on Computer Vision and
    Pattern Recognition</i>, Institute of Electrical and Electronics Engineers, 2022,
    pp. 10400–11, doi:<a href="https://doi.org/10.1109/cvpr52688.2022.01016">10.1109/cvpr52688.2022.01016</a>.'
  short: D. Zietlow, M. Lohaus, G. Balakrishnan, M. Kleindessner, F. Locatello, B.
    Scholkopf, C. Russell, in:, 2022 IEEE/CVF Conference on Computer Vision and Pattern
    Recognition, Institute of Electrical and Electronics Engineers, 2022, pp. 10400–10411.
conference:
  end_date: 2022-06-24
  location: New Orleans, LA, United States
  name: 'CVPR: Conference on Computer Vision and Pattern Recognition'
  start_date: 2022-06-18
date_created: 2023-08-21T12:18:00Z
date_published: 2022-07-01T00:00:00Z
date_updated: 2023-09-11T09:19:14Z
day: '01'
department:
- _id: FrLo
doi: 10.1109/cvpr52688.2022.01016
extern: '1'
external_id:
  arxiv:
  - '2203.04913'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2203.04913
month: '07'
oa: 1
oa_version: Preprint
page: 10400-10411
publication: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition
publication_identifier:
  eissn:
  - 2575-7075
  isbn:
  - '9781665469470'
  issn:
  - 1063-6919
publication_status: published
publisher: Institute of Electrical and Electronics Engineers
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Leveling down in computer vision: Pareto inefficiencies in fair deep classifiers'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14168'
abstract:
- lang: eng
  text: "Recent work has seen the development of general purpose neural architectures\r\nthat
    can be trained to perform tasks across diverse data modalities. General\r\npurpose
    models typically make few assumptions about the underlying\r\ndata-structure and
    are known to perform well in the large-data regime. At the\r\nsame time, there
    has been growing interest in modular neural architectures that\r\nrepresent the
    data using sparsely interacting modules. These models can be more\r\nrobust out-of-distribution,
    computationally efficient, and capable of\r\nsample-efficient adaptation to new
    data. However, they tend to make\r\ndomain-specific assumptions about the data,
    and present challenges in how\r\nmodule behavior (i.e., parameterization) and
    connectivity (i.e., their layout)\r\ncan be jointly learned. In this work, we
    introduce a general purpose, yet\r\nmodular neural architecture called Neural
    Attentive Circuits (NACs) that\r\njointly learns the parameterization and a sparse
    connectivity of neural modules\r\nwithout using domain knowledge. NACs are best
    understood as the combination of\r\ntwo systems that are jointly trained end-to-end:
    one that determines the module\r\nconfiguration and the other that executes it
    on an input. We demonstrate\r\nqualitatively that NACs learn diverse and meaningful
    module configurations on\r\nthe NLVR2 dataset without additional supervision.
    Quantitatively, we show that\r\nby incorporating modularity in this way, NACs
    improve upon a strong non-modular\r\nbaseline in terms of low-shot adaptation
    on CIFAR and CUBs dataset by about\r\n10%, and OOD robustness on Tiny ImageNet-R
    by about 2.5%. Further, we find that\r\nNACs can achieve an 8x speedup at inference
    time while losing less than 3%\r\nperformance. Finally, we find NACs to yield
    competitive results on diverse data\r\nmodalities spanning point-cloud classification,
    symbolic processing and\r\ntext-classification from ASCII bytes, thereby confirming
    its general purpose\r\nnature."
alternative_title:
- 'Advances in Neural Information Processing Systems'
article_processing_charge: No
arxiv: 1
author:
- first_name: Nasim
  full_name: Rahaman, Nasim
  last_name: Rahaman
- first_name: Martin
  full_name: Weiss, Martin
  last_name: Weiss
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Chris
  full_name: Pal, Chris
  last_name: Pal
- first_name: Yoshua
  full_name: Bengio, Yoshua
  last_name: Bengio
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Li Erran
  full_name: Li, Li Erran
  last_name: Li
- first_name: Nicolas
  full_name: Ballas, Nicolas
  last_name: Ballas
citation:
  ama: 'Rahaman N, Weiss M, Locatello F, et al. Neural attentive circuits. In: <i>36th
    Conference on Neural Information Processing Systems</i>. Vol 35. ; 2022.'
  apa: Rahaman, N., Weiss, M., Locatello, F., Pal, C., Bengio, Y., Schölkopf, B.,
    … Ballas, N. (2022). Neural attentive circuits. In <i>36th Conference on Neural
    Information Processing Systems</i> (Vol. 35). New Orleans, United States.
  chicago: Rahaman, Nasim, Martin Weiss, Francesco Locatello, Chris Pal, Yoshua Bengio,
    Bernhard Schölkopf, Li Erran Li, and Nicolas Ballas. “Neural Attentive Circuits.”
    In <i>36th Conference on Neural Information Processing Systems</i>, Vol. 35, 2022.
  ieee: N. Rahaman <i>et al.</i>, “Neural attentive circuits,” in <i>36th Conference
    on Neural Information Processing Systems</i>, New Orleans, United States, 2022,
    vol. 35.
  ista: 'Rahaman N, Weiss M, Locatello F, Pal C, Bengio Y, Schölkopf B, Li LE, Ballas
    N. 2022. Neural attentive circuits. 36th Conference on Neural Information Processing
    Systems. NeurIPS: Neural Information Processing Systems, Advances in Neural Information
    Processing Systems, vol. 35.'
  mla: Rahaman, Nasim, et al. “Neural Attentive Circuits.” <i>36th Conference on Neural
    Information Processing Systems</i>, vol. 35, 2022.
  short: N. Rahaman, M. Weiss, F. Locatello, C. Pal, Y. Bengio, B. Schölkopf, L.E.
    Li, N. Ballas, in:, 36th Conference on Neural Information Processing Systems,
    2022.
conference:
  end_date: 2022-12-01
  location: New Orleans, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-29
date_created: 2023-08-22T13:57:27Z
date_published: 2022-10-14T00:00:00Z
date_updated: 2023-09-11T09:29:09Z
day: '14'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2210.08031'
intvolume: '35'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2210.08031
month: '10'
oa: 1
oa_version: Preprint
publication: 36th Conference on Neural Information Processing Systems
publication_status: published
status: public
title: Neural attentive circuits
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2022'
...
---
_id: '14170'
abstract:
- lang: eng
  text: "The idea behind object-centric representation learning is that natural scenes
    can better be modeled as compositions of objects and their relations as opposed
    to distributed representations. This inductive bias can be injected into neural
    networks to potentially improve systematic generalization and performance of downstream
    tasks in scenes with multiple objects. In this paper, we train state-of-the-art
    unsupervised models on five common multi-object datasets and evaluate segmentation
    metrics and downstream object property prediction. In addition, we study generalization
    and robustness by investigating the settings where either a single object is out
    of distribution -- e.g., having an unseen color, texture, or shape -- or global
    properties of the scene are altered -- e.g., by occlusions, cropping, or increasing
    the number of objects. From our experimental study, we find object-centric representations
    to be useful for\r\ndownstream tasks and generally robust to most distribution
    shifts affecting objects. However, when the distribution shift affects the input
    in a less structured manner, robustness in terms of segmentation and downstream
    task performance may vary significantly across models and distribution shifts."
alternative_title:
- PMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Andrea
  full_name: Dittadi, Andrea
  last_name: Dittadi
- first_name: Samuele
  full_name: Papa, Samuele
  last_name: Papa
- first_name: Michele De
  full_name: Vita, Michele De
  last_name: Vita
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Ole
  full_name: Winther, Ole
  last_name: Winther
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: 'Dittadi A, Papa S, Vita MD, Schölkopf B, Winther O, Locatello F. Generalization
    and robustness implications in object-centric learning. In: <i>Proceedings of
    the 39th International Conference on Machine Learning</i>. Vol 2022. ML Research
    Press; :5221-5285.'
  apa: 'Dittadi, A., Papa, S., Vita, M. D., Schölkopf, B., Winther, O., &#38; Locatello,
    F. (n.d.). Generalization and robustness implications in object-centric learning.
    In <i>Proceedings of the 39th International Conference on Machine Learning</i>
    (Vol. 2022, pp. 5221–5285). Baltimore, MD, United States: ML Research Press.'
  chicago: Dittadi, Andrea, Samuele Papa, Michele De Vita, Bernhard Schölkopf, Ole
    Winther, and Francesco Locatello. “Generalization and Robustness Implications
    in Object-Centric Learning.” In <i>Proceedings of the 39th International Conference
    on Machine Learning</i>, 2022:5221–85. ML Research Press, n.d.
  ieee: A. Dittadi, S. Papa, M. D. Vita, B. Schölkopf, O. Winther, and F. Locatello,
    “Generalization and robustness implications in object-centric learning,” in <i>Proceedings
    of the 39th International Conference on Machine Learning</i>, Baltimore, MD, United
    States, vol. 2022, pp. 5221–5285.
  ista: Dittadi A, Papa S, Vita MD, Schölkopf B, Winther O, Locatello F. Generalization
    and robustness implications in object-centric learning. Proceedings of the 39th
    International Conference on Machine Learning. International Conference on Machine
    Learning, PMLR, vol. 2022, 5221–5285.
  mla: Dittadi, Andrea, et al. “Generalization and Robustness Implications in Object-Centric
    Learning.” <i>Proceedings of the 39th International Conference on Machine Learning</i>,
    vol. 2022, ML Research Press, pp. 5221–85.
  short: A. Dittadi, S. Papa, M.D. Vita, B. Schölkopf, O. Winther, F. Locatello, in:,
    Proceedings of the 39th International Conference on Machine Learning, ML Research
    Press, n.d., pp. 5221–5285.
conference:
  end_date: 2022-07-23
  location: Baltimore, MD, United States
  name: International Conference on Machine Learning
  start_date: 2022-07-17
date_created: 2023-08-22T13:59:55Z
date_published: 2022-07-22T00:00:00Z
date_updated: 2023-09-11T10:08:14Z
day: '22'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2107.00637'
intvolume: '2022'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2107.00637
month: '07'
oa: 1
oa_version: Preprint
page: 5221-5285
publication: Proceedings of the 39th International Conference on Machine Learning
publication_status: submitted
publisher: ML Research Press
quality_controlled: '1'
status: public
title: Generalization and robustness implications in object-centric learning
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 2022
year: '2022'
...
---
_id: '14171'
abstract:
- lang: eng
  text: "This paper demonstrates how to recover causal graphs from the score of the\r\ndata
    distribution in non-linear additive (Gaussian) noise models. Using score\r\nmatching
    algorithms as a building block, we show how to design a new generation\r\nof scalable
    causal discovery methods. To showcase our approach, we also propose\r\na new efficient
    method for approximating the score's Jacobian, enabling us to\r\nrecover the causal
    graph. Empirically, we find that the new algorithm, called\r\nSCORE, is competitive
    with state-of-the-art causal discovery methods while\r\nbeing significantly faster."
alternative_title:
- PMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Paul
  full_name: Rolland, Paul
  last_name: Rolland
- first_name: Volkan
  full_name: Cevher, Volkan
  last_name: Cevher
- first_name: Matthäus
  full_name: Kleindessner, Matthäus
  last_name: Kleindessner
- first_name: Chris
  full_name: Russel, Chris
  last_name: Russel
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Dominik
  full_name: Janzing, Dominik
  last_name: Janzing
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: 'Rolland P, Cevher V, Kleindessner M, et al. Score matching enables causal
    discovery of nonlinear additive noise models. In: <i>Proceedings of the 39th
    International Conference on Machine Learning</i>. Vol 162. ML Research Press;
    2022:18741-18753.'
  apa: 'Rolland, P., Cevher, V., Kleindessner, M., Russel, C., Schölkopf, B., Janzing,
    D., &#38; Locatello, F. (2022). Score matching enables causal discovery of nonlinear
    additive noise models. In <i>Proceedings of the 39th International Conference
    on Machine Learning</i> (Vol. 162, pp. 18741–18753). Baltimore, MD, United States:
    ML Research Press.'
  chicago: Rolland, Paul, Volkan Cevher, Matthäus Kleindessner, Chris Russel, Bernhard
    Schölkopf, Dominik Janzing, and Francesco Locatello. “Score Matching Enables Causal
    Discovery of Nonlinear Additive Noise  Models.” In <i>Proceedings of the 39th
    International Conference on Machine Learning</i>, 162:18741–53. ML Research Press,
    2022.
  ieee: P. Rolland <i>et al.</i>, “Score matching enables causal discovery of nonlinear
    additive noise  models,” in <i>Proceedings of the 39th International Conference
    on Machine Learning</i>, Baltimore, MD, United States, 2022, vol. 162, pp. 18741–18753.
  ista: Rolland P, Cevher V, Kleindessner M, Russel C, Schölkopf B, Janzing D, Locatello
    F. 2022. Score matching enables causal discovery of nonlinear additive noise 
    models. Proceedings of the 39th International Conference on Machine Learning.
    International Conference on Machine Learning, PMLR, vol. 162, 18741–18753.
  mla: Rolland, Paul, et al. “Score Matching Enables Causal Discovery of Nonlinear
    Additive Noise  Models.” <i>Proceedings of the 39th International Conference on
    Machine Learning</i>, vol. 162, ML Research Press, 2022, pp. 18741–53.
  short: P. Rolland, V. Cevher, M. Kleindessner, C. Russel, B. Schölkopf, D. Janzing,
    F. Locatello, in:, Proceedings of the 39th International Conference on Machine
    Learning, ML Research Press, 2022, pp. 18741–18753.
conference:
  end_date: 2022-07-23
  location: Baltimore, MD, United States
  name: International Conference on Machine Learning
  start_date: 2022-07-17
date_created: 2023-08-22T14:00:18Z
date_published: 2022-07-22T00:00:00Z
date_updated: 2023-09-11T10:14:20Z
day: '22'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2203.04413'
intvolume: '162'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2203.04413
month: '07'
oa: 1
oa_version: Preprint
page: 18741-18753
publication: Proceedings of the 39th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
status: public
title: Score matching enables causal discovery of nonlinear additive noise models
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 162
year: '2022'
...
---
_id: '14172'
abstract:
- lang: eng
  text: An important component for generalization in machine learning is to uncover
    underlying latent factors of variation as well as the mechanism through which
    each factor acts in the world. In this paper, we test whether 17 unsupervised,
    weakly supervised, and fully supervised representation learning approaches correctly
    infer the generative factors of variation in simple datasets (dSprites, Shapes3D,
    MPI3D) from controlled environments, and on our contributed CelebGlow dataset.
    In contrast to prior robustness work that introduces novel factors of variation
    during test time, such as blur or other (un)structured noise, we here recompose,
    interpolate, or extrapolate only existing factors of variation from the training
    dataset (e.g., small and medium-sized objects during training and large objects
    during testing). Models that learn the correct mechanism should be able to generalize
    to this benchmark. In total, we train and test 2000+ models and observe that
    all of them struggle to learn the underlying mechanism regardless of supervision
    signal and architectural bias. Moreover, the generalization capabilities of all
    tested models drop significantly as we move from artificial datasets towards
    more realistic real-world datasets. Despite their inability to identify the correct
    mechanism, the models are quite modular as their ability to infer other in-distribution
    factors remains fairly stable, provided only a single factor is out-of-distribution.
    These results point to an important yet understudied problem of learning mechanistic
    models of observations that can facilitate generalization.
article_processing_charge: No
arxiv: 1
author:
- first_name: Lukas
  full_name: Schott, Lukas
  last_name: Schott
- first_name: Julius von
  full_name: Kügelgen, Julius von
  last_name: Kügelgen
- first_name: Frederik
  full_name: Träuble, Frederik
  last_name: Träuble
- first_name: Peter
  full_name: Gehler, Peter
  last_name: Gehler
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
- first_name: Matthias
  full_name: Bethge, Matthias
  last_name: Bethge
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Wieland
  full_name: Brendel, Wieland
  last_name: Brendel
citation:
  ama: 'Schott L, Kügelgen J von, Träuble F, et al. Visual representation learning
    does not generalize strongly within the same domain. In: <i>10th International
    Conference on Learning Representations</i>. ; 2022.'
  apa: Schott, L., Kügelgen, J. von, Träuble, F., Gehler, P., Russell, C., Bethge,
    M., … Brendel, W. (2022). Visual representation learning does not generalize strongly
    within the same domain. In <i>10th International Conference on Learning Representations</i>.
    Virtual.
  chicago: Schott, Lukas, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris
    Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, and Wieland
    Brendel. “Visual Representation Learning Does Not Generalize Strongly within the
    Same Domain.” In <i>10th International Conference on Learning Representations</i>,
    2022.
  ieee: L. Schott <i>et al.</i>, “Visual representation learning does not generalize
    strongly within the same domain,” in <i>10th International Conference on Learning
    Representations</i>, Virtual, 2022.
  ista: 'Schott L, Kügelgen J von, Träuble F, Gehler P, Russell C, Bethge M, Schölkopf
    B, Locatello F, Brendel W. 2022. Visual representation learning does not generalize
    strongly within the same domain. 10th International Conference on Learning Representations.
    ICLR: International Conference on Learning Representations.'
  mla: Schott, Lukas, et al. “Visual Representation Learning Does Not Generalize Strongly
    within the Same Domain.” <i>10th International Conference on Learning Representations</i>,
    2022.
  short: L. Schott, J. von Kügelgen, F. Träuble, P. Gehler, C. Russell, M. Bethge,
    B. Schölkopf, F. Locatello, W. Brendel, in:, 10th International Conference on
    Learning Representations, 2022.
conference:
  end_date: 2022-04-29
  location: Virtual
  name: 'ICLR: International Conference on Learning Representations'
  start_date: 2022-04-25
date_created: 2023-08-22T14:00:50Z
date_published: 2022-04-25T00:00:00Z
date_updated: 2023-09-11T09:40:52Z
day: '25'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2107.08221'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2107.08221
month: '04'
oa: 1
oa_version: Preprint
publication: 10th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
status: public
title: Visual representation learning does not generalize strongly within the same
  domain
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14173'
abstract:
- lang: eng
  text: Since out-of-distribution generalization is a generally ill-posed problem,
    various proxy targets (e.g., calibration, adversarial robustness, algorithmic
    corruptions, invariance across shifts) were studied across different research
    programs resulting in different recommendations. While sharing the same aspirational
    goal, these approaches have never been tested under the same experimental conditions
    on real data. In this paper, we take a unified view of previous work, highlighting
    message discrepancies that we address empirically, and providing recommendations
    on how to measure the robustness of a model and how to improve it. To this end,
    we collect 172 publicly available dataset pairs for training and out-of-distribution
    evaluation of accuracy, calibration error, adversarial attacks, environment invariance,
    and synthetic corruptions. We fine-tune over 31k networks, from nine different
    architectures in the many- and few-shot setting. Our findings confirm that in-
    and out-of-distribution accuracies tend to increase jointly, but show that their
    relation is largely dataset-dependent, and in general more nuanced and more complex
    than posited by previous, smaller scale studies.
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
arxiv: 1
author:
- first_name: Florian
  full_name: Wenzel, Florian
  last_name: Wenzel
- first_name: Andrea
  full_name: Dittadi, Andrea
  last_name: Dittadi
- first_name: Peter Vincent
  full_name: Gehler, Peter Vincent
  last_name: Gehler
- first_name: Carl-Johann
  full_name: Simon-Gabriel, Carl-Johann
  last_name: Simon-Gabriel
- first_name: Max
  full_name: Horn, Max
  last_name: Horn
- first_name: Dominik
  full_name: Zietlow, Dominik
  last_name: Zietlow
- first_name: David
  full_name: Kernert, David
  last_name: Kernert
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
- first_name: Thomas
  full_name: Brox, Thomas
  last_name: Brox
- first_name: Bernt
  full_name: Schiele, Bernt
  last_name: Schiele
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: 'Wenzel F, Dittadi A, Gehler PV, et al. Assaying out-of-distribution generalization
    in transfer learning. In: <i>36th Conference on Neural Information Processing
    Systems</i>. Vol 35. Neural Information Processing Systems Foundation; 2022:7181-7198.'
  apa: 'Wenzel, F., Dittadi, A., Gehler, P. V., Simon-Gabriel, C.-J., Horn, M., Zietlow,
    D., … Locatello, F. (2022). Assaying out-of-distribution generalization in transfer
    learning. In <i>36th Conference on Neural Information Processing Systems</i>
    (Vol. 35, pp. 7181–7198). New Orleans, LA, United States: Neural Information Processing
    Systems Foundation.'
  chicago: Wenzel, Florian, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel,
    Max Horn, Dominik Zietlow, David Kernert, et al. “Assaying Out-of-Distribution
    Generalization in Transfer Learning.” In <i>36th Conference on Neural Information
    Processing Systems</i>, 35:7181–98. Neural Information Processing Systems Foundation,
    2022.
  ieee: F. Wenzel <i>et al.</i>, “Assaying out-of-distribution generalization in transfer
    learning,” in <i>36th Conference on Neural Information Processing Systems</i>,
    New Orleans, LA, United States, 2022, vol. 35, pp. 7181–7198.
  ista: 'Wenzel F, Dittadi A, Gehler PV, Simon-Gabriel C-J, Horn M, Zietlow D, Kernert
    D, Russell C, Brox T, Schiele B, Schölkopf B, Locatello F. 2022. Assaying out-of-distribution
    generalization in transfer learning. 36th Conference on Neural Information Processing
    Systems. NeurIPS: Neural Information Processing Systems, Advances in Neural Information
    Processing Systems, vol. 35, 7181–7198.'
  mla: Wenzel, Florian, et al. “Assaying Out-of-Distribution Generalization in Transfer
    Learning.” <i>36th Conference on Neural Information Processing Systems</i>, vol.
    35, Neural Information Processing Systems Foundation, 2022, pp. 7181–98.
  short: F. Wenzel, A. Dittadi, P.V. Gehler, C.-J. Simon-Gabriel, M. Horn, D. Zietlow,
    D. Kernert, C. Russell, T. Brox, B. Schiele, B. Schölkopf, F. Locatello, in:,
    36th Conference on Neural Information Processing Systems, Neural Information Processing
    Systems Foundation, 2022, pp. 7181–7198.
conference:
  end_date: 2022-12-09
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-28
date_created: 2023-08-22T14:01:13Z
date_published: 2022-12-15T00:00:00Z
date_updated: 2023-09-06T10:34:43Z
day: '15'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2207.09239'
intvolume: '35'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2207.09239
month: '12'
oa: 1
oa_version: Preprint
page: 7181-7198
publication: 36th Conference on Neural Information Processing Systems
publication_identifier:
  isbn:
  - '9781713871088'
publication_status: published
publisher: Neural Information Processing Systems Foundation
quality_controlled: '1'
scopus_import: '1'
status: public
title: Assaying out-of-distribution generalization in transfer learning
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2022'
...
---
_id: '14174'
abstract:
- lang: eng
  text: Building sample-efficient agents that generalize out-of-distribution (OOD)
    in real-world settings remains a fundamental unsolved problem on the path towards
    achieving higher-level cognition. One particularly promising approach is to begin
    with low-dimensional, pretrained representations of our world, which should facilitate
    efficient downstream learning and generalization. By training 240 representations
    and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup,
    we evaluate to what extent different properties of pretrained VAE-based representations
    affect the OOD generalization of downstream agents. We observe that many agents
    are surprisingly robust to realistic distribution shifts, including the challenging
    sim-to-real case. In addition, we find that the generalization performance of
    a simple downstream proxy task reliably predicts the generalization performance
    of our RL agents under a wide range of OOD settings. Such proxy tasks can thus
    be used to select pretrained representations that will lead to agents that generalize.
article_processing_charge: No
arxiv: 1
author:
- first_name: Andrea
  full_name: Dittadi, Andrea
  last_name: Dittadi
- first_name: Frederik
  full_name: Träuble, Frederik
  last_name: Träuble
- first_name: Manuel
  full_name: Wüthrich, Manuel
  last_name: Wüthrich
- first_name: Felix
  full_name: Widmaier, Felix
  last_name: Widmaier
- first_name: Peter
  full_name: Gehler, Peter
  last_name: Gehler
- first_name: Ole
  full_name: Winther, Ole
  last_name: Winther
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Olivier
  full_name: Bachem, Olivier
  last_name: Bachem
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Stefan
  full_name: Bauer, Stefan
  last_name: Bauer
citation:
  ama: 'Dittadi A, Träuble F, Wüthrich M, et al. The role of pretrained representations
    for the OOD generalization of reinforcement learning agents. In: <i>10th International
    Conference on Learning Representations</i>. ; 2022.'
  apa: Dittadi, A., Träuble, F., Wüthrich, M., Widmaier, F., Gehler, P., Winther,
    O., … Bauer, S. (2022). The role of pretrained representations for the OOD generalization
    of reinforcement learning agents. In <i>10th International Conference on Learning
    Representations</i>. Virtual.
  chicago: Dittadi, Andrea, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter
    Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf,
    and Stefan Bauer. “The Role of Pretrained Representations for the OOD Generalization
    of Reinforcement Learning Agents.” In <i>10th International Conference on Learning
    Representations</i>, 2022.
  ieee: A. Dittadi <i>et al.</i>, “The role of pretrained representations for the
    OOD generalization of reinforcement learning agents,” in <i>10th International
    Conference on Learning Representations</i>, Virtual, 2022.
  ista: 'Dittadi A, Träuble F, Wüthrich M, Widmaier F, Gehler P, Winther O, Locatello
    F, Bachem O, Schölkopf B, Bauer S. 2022. The role of pretrained representations
    for the OOD generalization of reinforcement learning agents. 10th International
    Conference on Learning Representations. ICLR: International Conference on Learning
    Representations.'
  mla: Dittadi, Andrea, et al. “The Role of Pretrained Representations for the OOD
    Generalization of Reinforcement Learning Agents.” <i>10th International Conference
    on Learning Representations</i>, 2022.
  short: A. Dittadi, F. Träuble, M. Wüthrich, F. Widmaier, P. Gehler, O. Winther,
    F. Locatello, O. Bachem, B. Schölkopf, S. Bauer, in:, 10th International Conference
    on Learning Representations, 2022.
conference:
  end_date: 2022-04-29
  location: Virtual
  name: 'ICLR: International Conference on Learning Representations'
  start_date: 2022-04-25
date_created: 2023-08-22T14:02:13Z
date_published: 2022-04-25T00:00:00Z
date_updated: 2023-09-11T09:48:36Z
day: '25'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2107.05686'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2107.05686
month: '04'
oa: 1
oa_version: Preprint
publication: 10th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
status: public
title: The role of pretrained representations for the OOD generalization of reinforcement
  learning agents
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14175'
abstract:
- lang: eng
  text: Predicting the future trajectory of a moving agent can be easy when the past
    trajectory continues smoothly but is challenging when complex interactions with
    other agents are involved. Recent deep learning approaches for trajectory prediction
    show promising performance and partially attribute this to successful reasoning
    about agent-agent interactions. However, it remains unclear which features such
    black-box models actually learn to use for making predictions. This paper proposes
    a procedure that quantifies the contributions of different cues to model performance
    based on a variant of Shapley values. Applying this procedure to state-of-the-art
    trajectory prediction methods on standard benchmark datasets shows that they are,
    in fact, unable to reason about interactions. Instead, the past trajectory of
    the target is the only feature used for predicting its future. For a task with
    richer social interaction patterns, on the other hand, the tested models do pick
    up such interactions to a certain extent, as quantified by our feature attribution
    method. We discuss the limits of the proposed method and its links to causality.
article_processing_charge: No
arxiv: 1
author:
- first_name: Osama
  full_name: Makansi, Osama
  last_name: Makansi
- first_name: Julius von
  full_name: Kügelgen, Julius von
  last_name: Kügelgen
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Peter
  full_name: Gehler, Peter
  last_name: Gehler
- first_name: Dominik
  full_name: Janzing, Dominik
  last_name: Janzing
- first_name: Thomas
  full_name: Brox, Thomas
  last_name: Brox
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
citation:
  ama: 'Makansi O, Kügelgen J von, Locatello F, et al. You mostly walk alone: Analyzing
    feature attribution in trajectory prediction. In: <i>10th International Conference
    on Learning Representations</i>. ; 2022.'
  apa: 'Makansi, O., Kügelgen, J. von, Locatello, F., Gehler, P., Janzing, D., Brox,
    T., &#38; Schölkopf, B. (2022). You mostly walk alone: Analyzing feature attribution
    in trajectory prediction. In <i>10th International Conference on Learning Representations</i>.
    Virtual.'
  chicago: 'Makansi, Osama, Julius von Kügelgen, Francesco Locatello, Peter Gehler,
    Dominik Janzing, Thomas Brox, and Bernhard Schölkopf. “You Mostly Walk Alone:
    Analyzing Feature Attribution in Trajectory Prediction.” In <i>10th International
    Conference on Learning Representations</i>, 2022.'
  ieee: 'O. Makansi <i>et al.</i>, “You mostly walk alone: Analyzing feature attribution
    in trajectory prediction,” in <i>10th International Conference on Learning Representations</i>,
    Virtual, 2022.'
  ista: 'Makansi O, Kügelgen J von, Locatello F, Gehler P, Janzing D, Brox T, Schölkopf
    B. 2022. You mostly walk alone: Analyzing feature attribution in trajectory prediction.
    10th International Conference on Learning Representations. ICLR: International
    Conference on Learning Representations.'
  mla: 'Makansi, Osama, et al. “You Mostly Walk Alone: Analyzing Feature Attribution
    in Trajectory Prediction.” <i>10th International Conference on Learning Representations</i>,
    2022.'
  short: O. Makansi, J. von Kügelgen, F. Locatello, P. Gehler, D. Janzing, T. Brox,
    B. Schölkopf, in:, 10th International Conference on Learning Representations,
    2022.
conference:
  end_date: 2022-04-29
  location: Virtual
  name: 'ICLR: International Conference on Learning Representations'
  start_date: 2022-04-25
date_created: 2023-08-22T14:02:34Z
date_published: 2022-04-25T00:00:00Z
date_updated: 2023-09-11T09:52:20Z
day: '25'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2110.05304'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2110.05304
month: '04'
oa: 1
oa_version: Preprint
publication: 10th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
status: public
title: 'You mostly walk alone: Analyzing feature attribution in trajectory prediction'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14215'
abstract:
- lang: eng
  text: Geospatial Information Systems are used by researchers and Humanitarian Assistance
    and Disaster Response (HADR) practitioners to support a wide variety of important
    applications. However, collaboration between these actors is difficult due to
    the heterogeneous nature of geospatial data modalities (e.g., multi-spectral images
    of various resolutions, timeseries, weather data) and diversity of tasks (e.g.,
    regression of human activity indicators or detecting forest fires). In this work,
    we present a roadmap towards the construction of a general-purpose neural architecture
    (GPNA) with a geospatial inductive bias, pre-trained on large amounts of unlabelled
    earth observation data in a self-supervised manner. We envision how such a model
    may facilitate cooperation between members of the community. We show preliminary
    results on the first step of the roadmap, where we instantiate an architecture
    that can process a wide variety of geospatial data modalities and demonstrate
    that it can achieve competitive performance with domain-specific architectures
    on tasks relating to the U.N.'s Sustainable Development Goals.
article_processing_charge: No
arxiv: 1
author:
- first_name: Nasim
  full_name: Rahaman, Nasim
  last_name: Rahaman
- first_name: Martin
  full_name: Weiss, Martin
  last_name: Weiss
- first_name: Frederik
  full_name: Träuble, Frederik
  last_name: Träuble
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Alexandre
  full_name: Lacoste, Alexandre
  last_name: Lacoste
- first_name: Yoshua
  full_name: Bengio, Yoshua
  last_name: Bengio
- first_name: Chris
  full_name: Pal, Chris
  last_name: Pal
- first_name: Li Erran
  full_name: Li, Li Erran
  last_name: Li
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
citation:
  ama: 'Rahaman N, Weiss M, Träuble F, et al. A general purpose neural architecture
    for geospatial systems. In: <i>36th Conference on Neural Information Processing
    Systems</i>.'
  apa: Rahaman, N., Weiss, M., Träuble, F., Locatello, F., Lacoste, A., Bengio, Y.,
    … Schölkopf, B. (n.d.). A general purpose neural architecture for geospatial systems.
    In <i>36th Conference on Neural Information Processing Systems</i>. New Orleans,
    LA, United States.
  chicago: Rahaman, Nasim, Martin Weiss, Frederik Träuble, Francesco Locatello, Alexandre
    Lacoste, Yoshua Bengio, Chris Pal, Li Erran Li, and Bernhard Schölkopf. “A General
    Purpose Neural Architecture for Geospatial Systems.” In <i>36th Conference on
    Neural Information Processing Systems</i>, n.d.
  ieee: N. Rahaman <i>et al.</i>, “A general purpose neural architecture for geospatial
    systems,” in <i>36th Conference on Neural Information Processing Systems</i>,
    New Orleans, LA, United States.
  ista: 'Rahaman N, Weiss M, Träuble F, Locatello F, Lacoste A, Bengio Y, Pal C, Li
    LE, Schölkopf B. A general purpose neural architecture for geospatial systems.
    36th Conference on Neural Information Processing Systems. NeurIPS: Neural Information
    Processing Systems.'
  mla: Rahaman, Nasim, et al. “A General Purpose Neural Architecture for Geospatial
    Systems.” <i>36th Conference on Neural Information Processing Systems</i>.
  short: N. Rahaman, M. Weiss, F. Träuble, F. Locatello, A. Lacoste, Y. Bengio, C.
    Pal, L.E. Li, B. Schölkopf, in:, 36th Conference on Neural Information Processing
    Systems, n.d.
conference:
  end_date: 2022-12-09
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-28
date_created: 2023-08-22T14:21:47Z
date_published: 2022-11-04T00:00:00Z
date_updated: 2023-09-13T09:35:59Z
day: '04'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2211.02348'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2211.02348
month: '11'
oa: 1
oa_version: Preprint
publication: 36th Conference on Neural Information Processing Systems
publication_status: submitted
quality_controlled: '1'
status: public
title: A general purpose neural architecture for geospatial systems
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14216'
abstract:
- lang: eng
  text: CLIP proved that aligning visual and language spaces is key to solving many
    vision tasks without explicit training, but required training image and text encoders
    from scratch on a huge dataset. LiT improved this by training only the text encoder
    and using a pre-trained vision network. In this paper, we show that a common space
    can be created without any training at all, using single-domain encoders (trained
    with or without supervision) and a much smaller amount of image-text pairs. Furthermore,
    our model has unique properties. Most notably, deploying a new version with updated
    training samples can be done in a matter of seconds. Additionally, the representations
    in the common space are easily interpretable as every dimension corresponds to
    the similarity of the input to a unique entry in the multimodal dataset. Experiments
    on standard zero-shot visual benchmarks demonstrate the typical transfer ability
    of image-text models. Overall, our method represents a simple yet surprisingly
    strong baseline for foundation multi-modal models, raising important questions
    on their data efficiency and on the role of retrieval in machine learning.
article_number: '2210.01738'
article_processing_charge: No
arxiv: 1
author:
- first_name: Antonio
  full_name: Norelli, Antonio
  last_name: Norelli
- first_name: Marco
  full_name: Fumero, Marco
  last_name: Fumero
- first_name: Valentino
  full_name: Maiorca, Valentino
  last_name: Maiorca
- first_name: Luca
  full_name: Moschella, Luca
  last_name: Moschella
- first_name: Emanuele
  full_name: Rodolà, Emanuele
  last_name: Rodolà
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: 'Norelli A, Fumero M, Maiorca V, Moschella L, Rodolà E, Locatello F. ASIF:
    Coupled data turns unimodal models to multimodal without training. <i>arXiv</i>.
    doi:<a href="https://doi.org/10.48550/arXiv.2210.01738">10.48550/arXiv.2210.01738</a>'
  apa: 'Norelli, A., Fumero, M., Maiorca, V., Moschella, L., Rodolà, E., &#38; Locatello,
    F. (n.d.). ASIF: Coupled data turns unimodal models to multimodal without training.
    <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2210.01738">https://doi.org/10.48550/arXiv.2210.01738</a>'
  chicago: 'Norelli, Antonio, Marco Fumero, Valentino Maiorca, Luca Moschella, Emanuele
    Rodolà, and Francesco Locatello. “ASIF: Coupled Data Turns Unimodal Models to
    Multimodal without Training.” <i>ArXiv</i>, n.d. <a href="https://doi.org/10.48550/arXiv.2210.01738">https://doi.org/10.48550/arXiv.2210.01738</a>.'
  ieee: 'A. Norelli, M. Fumero, V. Maiorca, L. Moschella, E. Rodolà, and F. Locatello,
    “ASIF: Coupled data turns unimodal models to multimodal without training,” <i>arXiv</i>.
    .'
  ista: 'Norelli A, Fumero M, Maiorca V, Moschella L, Rodolà E, Locatello F. ASIF:
    Coupled data turns unimodal models to multimodal without training. arXiv, 2210.01738.'
  mla: 'Norelli, Antonio, et al. “ASIF: Coupled Data Turns Unimodal Models to Multimodal
    without Training.” <i>ArXiv</i>, 2210.01738, doi:<a href="https://doi.org/10.48550/arXiv.2210.01738">10.48550/arXiv.2210.01738</a>.'
  short: A. Norelli, M. Fumero, V. Maiorca, L. Moschella, E. Rodolà, F. Locatello,
    ArXiv (n.d.).
date_created: 2023-08-22T14:22:04Z
date_published: 2022-10-04T00:00:00Z
date_updated: 2024-02-12T09:57:14Z
day: '04'
department:
- _id: FrLo
doi: 10.48550/arXiv.2210.01738
external_id:
  arxiv:
  - '2210.01738'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2210.01738
month: '10'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: 'ASIF: Coupled data turns unimodal models to multimodal without training'
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
