---
_id: '14946'
abstract:
- lang: eng
  text: "We present a unified framework for studying the identifiability of representations
    learned from simultaneously observed views, such as different data modalities.
    We allow a partially observed setting in which each view constitutes a nonlinear
    mixture of a subset of underlying latent variables, which can be causally related.
    We prove that the information shared across all subsets of any number of views
    can be learned up to a smooth bijection using contrastive learning and a single
    encoder per view. We also provide graphical criteria indicating which latent
    variables can be identified through a simple set of rules, which we refer to
    as identifiability algebra. Our general framework and theoretical results unify
    and extend several previous works on multi-view nonlinear ICA, disentanglement,
    and causal representation learning. We experimentally validate our claims on
    numerical, image, and multi-modal data sets. Further, we demonstrate that the
    performance of prior methods is recovered in different special cases of our
    setup. Overall, we find that access to multiple partial views enables us to
    identify a more fine-grained representation, under the generally milder assumption
    of partial observability."
acknowledgement: "This work was initiated at the Second Bellairs Workshop on Causality
  held at the Bellairs Research Institute, January 6–13, 2022; we thank all workshop
  participants for providing a stimulating research environment. Further, we thank
  Cian Eastwood, Luigi Gresele, Stefano Soatto, Marco Bagatella, and A. René Geist
  for helpful discussion. GM is a member of the Machine Learning Cluster of Excellence,
  EXC number 2064/1 – Project number 390727645. JvK and GM acknowledge support from
  the German Federal Ministry of Education and Research (BMBF) through the Tübingen
  AI Center (FKZ: 01IS18039B). The research of DX and SM was supported by the Air
  Force Office of Scientific Research under award number FA8655-22-1-7155. Any opinions,
  findings, and conclusions or recommendations expressed in this material are those
  of the author(s) and do not necessarily reflect the views of the United States Air
  Force. We also thank SURF for the support in using the Dutch National Supercomputer
  Snellius. DY was supported by an Amazon fellowship and the International Max Planck
  Research School for Intelligent Systems (IMPRS-IS). Work done outside of Amazon.
  SL was supported by an IVADO excellence PhD scholarship and by Samsung Electronics
  Co., Ltd."
article_number: '2311.04056'
article_processing_charge: No
arxiv: 1
author:
- first_name: Dingling
  full_name: Yao, Dingling
  id: d3e02e50-48a8-11ee-8f62-c108061797fa
  last_name: Yao
- first_name: Danru
  full_name: Xu, Danru
  last_name: Xu
- first_name: Sébastien
  full_name: Lachapelle, Sébastien
  last_name: Lachapelle
- first_name: Sara
  full_name: Magliacane, Sara
  last_name: Magliacane
- first_name: Perouz
  full_name: Taslakian, Perouz
  last_name: Taslakian
- first_name: Georg
  full_name: Martius, Georg
  last_name: Martius
- first_name: Julius von
  full_name: Kügelgen, Julius von
  last_name: Kügelgen
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: Yao D, Xu D, Lachapelle S, et al. Multi-view causal representation learning
    with partial observability. <i>arXiv</i>. doi:<a href="https://doi.org/10.48550/arXiv.2311.04056">10.48550/arXiv.2311.04056</a>
  apa: Yao, D., Xu, D., Lachapelle, S., Magliacane, S., Taslakian, P., Martius, G.,
    … Locatello, F. (n.d.). Multi-view causal representation learning with partial
    observability. <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2311.04056">https://doi.org/10.48550/arXiv.2311.04056</a>
  chicago: Yao, Dingling, Danru Xu, Sébastien Lachapelle, Sara Magliacane, Perouz
    Taslakian, Georg Martius, Julius von Kügelgen, and Francesco Locatello. “Multi-View
    Causal Representation Learning with Partial Observability.” <i>ArXiv</i>, n.d.
    <a href="https://doi.org/10.48550/arXiv.2311.04056">https://doi.org/10.48550/arXiv.2311.04056</a>.
  ieee: D. Yao <i>et al.</i>, “Multi-view causal representation learning with partial
    observability,” <i>arXiv</i>.
  ista: Yao D, Xu D, Lachapelle S, Magliacane S, Taslakian P, Martius G, Kügelgen
    J von, Locatello F. Multi-view causal representation learning with partial observability.
    arXiv, 2311.04056.
  mla: Yao, Dingling, et al. “Multi-View Causal Representation Learning with Partial
    Observability.” <i>ArXiv</i>, 2311.04056, doi:<a href="https://doi.org/10.48550/arXiv.2311.04056">10.48550/arXiv.2311.04056</a>.
  short: D. Yao, D. Xu, S. Lachapelle, S. Magliacane, P. Taslakian, G. Martius, J.
    von Kügelgen, F. Locatello, ArXiv (n.d.).
date_created: 2024-02-07T14:28:34Z
date_published: 2023-11-07T00:00:00Z
date_updated: 2024-02-12T08:07:33Z
day: '07'
department:
- _id: FrLo
doi: 10.48550/arXiv.2311.04056
external_id:
  arxiv:
  - '2311.04056'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2311.04056
month: '11'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Multi-view causal representation learning with partial observability
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14958'
abstract:
- lang: eng
  text: Causal representation learning (CRL) aims at identifying high-level causal
    variables from low-level data, e.g. images. Current methods usually assume that
    all causal variables are captured in the high-dimensional observations. In this
    work, we focus on learning causal representations from data under partial observability,
    i.e., when some of the causal variables are not observed in the measurements,
    and the set of masked variables changes across the different samples. We introduce
    some initial theoretical results for identifying causal variables under partial
    observability by exploiting a sparsity regularizer, focusing in particular on
    the linear and piecewise linear mixing function case. We provide a theorem that
    allows us to identify the causal variables up to permutation and element-wise
    linear transformations in the linear case and a lemma that allows us to identify
    causal variables up to linear transformation in the piecewise case. Finally, we
    provide a conjecture that would allow us to identify the causal variables up to
    permutation and element-wise linear transformations also in the piecewise linear
    case. We test the theorem and conjecture on simulated data, showing the effectiveness
    of our method.
acknowledgement: "This work was initiated at the Second Bellairs Workshop on Causality
  held at the Bellairs Research Institute, January 6–13, 2022; we thank all workshop
  participants for providing a stimulating research environment. The research of DX
  and SM was supported by the Air Force Office of Scientific Research under award
  number FA8655-22-1-7155. Any opinions, findings, and conclusions or recommendations
  expressed in this material are those of the author(s) and do not necessarily reflect
  the views of the United States Air Force. We also thank SURF for the support in
  using the Dutch National Supercomputer Snellius. DY was supported by an Amazon fellowship
  and the International Max Planck Research School for Intelligent Systems (IMPRS-IS).
  Work done outside of Amazon. SL was supported by an IVADO excellence PhD scholarship
  and by Samsung Electronics Co., Ltd. JvK acknowledges support from the German Federal
  Ministry of Education and Research (BMBF) through the Tübingen AI Center (FKZ:
  01IS18039B)."
article_number: '54'
article_processing_charge: No
author:
- first_name: Danru
  full_name: Xu, Danru
  last_name: Xu
- first_name: Dingling
  full_name: Yao, Dingling
  id: d3e02e50-48a8-11ee-8f62-c108061797fa
  last_name: Yao
- first_name: Sebastien
  full_name: Lachapelle, Sebastien
  last_name: Lachapelle
- first_name: Perouz
  full_name: Taslakian, Perouz
  last_name: Taslakian
- first_name: Julius
  full_name: von Kügelgen, Julius
  last_name: von Kügelgen
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Sara
  full_name: Magliacane, Sara
  last_name: Magliacane
citation:
  ama: 'Xu D, Yao D, Lachapelle S, et al. A sparsity principle for partially observable
    causal representation learning. In: <i>Causal Representation Learning Workshop
    at NeurIPS 2023</i>. OpenReview; 2023.'
  apa: 'Xu, D., Yao, D., Lachapelle, S., Taslakian, P., von Kügelgen, J., Locatello,
    F., &#38; Magliacane, S. (2023). A sparsity principle for partially observable
    causal representation learning. In <i>Causal Representation Learning Workshop
    at NeurIPS 2023</i>. New Orleans, LA, United States: OpenReview.'
  chicago: Xu, Danru, Dingling Yao, Sebastien Lachapelle, Perouz Taslakian, Julius
    von Kügelgen, Francesco Locatello, and Sara Magliacane. “A Sparsity Principle
    for Partially Observable Causal Representation Learning.” In <i>Causal Representation
    Learning Workshop at NeurIPS 2023</i>. OpenReview, 2023.
  ieee: D. Xu <i>et al.</i>, “A sparsity principle for partially observable causal
    representation learning,” in <i>Causal Representation Learning Workshop at NeurIPS
    2023</i>, New Orleans, LA, United States, 2023.
  ista: 'Xu D, Yao D, Lachapelle S, Taslakian P, von Kügelgen J, Locatello F, Magliacane
    S. 2023. A sparsity principle for partially observable causal representation learning.
    Causal Representation Learning Workshop at NeurIPS 2023. CRL: Causal Representation
    Learning Workshop at NeurIPS, 54.'
  mla: Xu, Danru, et al. “A Sparsity Principle for Partially Observable Causal Representation
    Learning.” <i>Causal Representation Learning Workshop at NeurIPS 2023</i>, 54,
    OpenReview, 2023.
  short: D. Xu, D. Yao, S. Lachapelle, P. Taslakian, J. von Kügelgen, F. Locatello,
    S. Magliacane, in:, Causal Representation Learning Workshop at NeurIPS 2023, OpenReview,
    2023.
conference:
  end_date: 2023-12-15
  location: New Orleans, LA, United States
  name: 'CRL: Causal Representation Learning Workshop at NeurIPS'
  start_date: 2023-12-15
date_created: 2024-02-07T15:17:51Z
date_published: 2023-12-05T00:00:00Z
date_updated: 2024-02-13T08:59:27Z
day: '05'
ddc:
- '000'
department:
- _id: FrLo
file:
- access_level: open_access
  checksum: 484efc27bda75ed6666044989695d9b6
  content_type: application/pdf
  creator: dernst
  date_created: 2024-02-13T08:50:53Z
  date_updated: 2024-02-13T08:50:53Z
  file_id: '14982'
  file_name: 2023_CRL_Xu.pdf
  file_size: 552357
  relation: main_file
  success: 1
file_date_updated: 2024-02-13T08:50:53Z
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://openreview.net/forum?id=Whr6uobelR
month: '12'
oa: 1
oa_version: Published Version
publication: Causal Representation Learning Workshop at NeurIPS 2023
publication_status: published
publisher: OpenReview
quality_controlled: '1'
status: public
title: A sparsity principle for partially observable causal representation learning
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
