---
_id: '9362'
abstract:
- lang: eng
  text: A central goal in systems neuroscience is to understand the functions performed
    by neural circuits. Previous top-down models addressed this question by comparing
    the behaviour of an ideal model circuit, optimised to perform a given function,
    with neural recordings. However, this requires guessing in advance what function
    is being performed, which may not be possible for many neural systems. To address
    this, we propose an inverse reinforcement learning (RL) framework for inferring
    the function performed by a neural network from data. We assume that the responses
    of each neuron in a network are optimised so as to drive the network towards ‘rewarded’
    states that are desirable for performing a given function. We then show how one
    can use inverse RL to infer the reward function optimised by the network from
    observing its responses. This inferred reward function can be used to predict
    how the neural network should adapt its dynamics to perform the same function
    when the external environment or network structure changes. This could lead to
    theoretical predictions about how neural network dynamics adapt to deal with cell
    death and/or varying sensory stimulus statistics.
acknowledgement: The authors would like to thank Ulisse Ferrari for useful discussions
  and feedback.
article_number: e0248940
article_processing_charge: No
article_type: original
author:
- first_name: Matthew J
  full_name: Chalk, Matthew J
  id: 2BAAC544-F248-11E8-B48F-1D18A9856A87
  last_name: Chalk
  orcid: 0000-0001-7782-4436
- first_name: Gašper
  full_name: Tkačik, Gašper
  id: 3D494DCA-F248-11E8-B48F-1D18A9856A87
  last_name: Tkačik
  orcid: 0000-0002-6699-1455
- first_name: Olivier
  full_name: Marre, Olivier
  last_name: Marre
citation:
  ama: Chalk MJ, Tkačik G, Marre O. Inferring the function performed by a recurrent
    neural network. <i>PLoS ONE</i>. 2021;16(4). doi:<a href="https://doi.org/10.1371/journal.pone.0248940">10.1371/journal.pone.0248940</a>
  apa: Chalk, M. J., Tkačik, G., &#38; Marre, O. (2021). Inferring the function performed
    by a recurrent neural network. <i>PLoS ONE</i>. Public Library of Science. <a
    href="https://doi.org/10.1371/journal.pone.0248940">https://doi.org/10.1371/journal.pone.0248940</a>
  chicago: Chalk, Matthew J, Gašper Tkačik, and Olivier Marre. “Inferring the Function
    Performed by a Recurrent Neural Network.” <i>PLoS ONE</i>. Public Library of Science,
    2021. <a href="https://doi.org/10.1371/journal.pone.0248940">https://doi.org/10.1371/journal.pone.0248940</a>.
  ieee: M. J. Chalk, G. Tkačik, and O. Marre, “Inferring the function performed by
    a recurrent neural network,” <i>PLoS ONE</i>, vol. 16, no. 4. Public Library of
    Science, 2021.
  ista: Chalk MJ, Tkačik G, Marre O. 2021. Inferring the function performed by a recurrent
    neural network. PLoS ONE. 16(4), e0248940.
  mla: Chalk, Matthew J., et al. “Inferring the Function Performed by a Recurrent
    Neural Network.” <i>PLoS ONE</i>, vol. 16, no. 4, e0248940, Public Library of
    Science, 2021, doi:<a href="https://doi.org/10.1371/journal.pone.0248940">10.1371/journal.pone.0248940</a>.
  short: M.J. Chalk, G. Tkačik, O. Marre, PLoS ONE 16 (2021).
date_created: 2021-05-02T22:01:28Z
date_published: 2021-04-15T00:00:00Z
date_updated: 2023-10-18T08:17:42Z
day: '15'
ddc:
- '570'
department:
- _id: GaTk
doi: 10.1371/journal.pone.0248940
external_id:
  isi:
  - '000641474900072'
  pmid:
  - '33857170'
file:
- access_level: open_access
  checksum: c52da133850307d2031f552d998f00e8
  content_type: application/pdf
  creator: kschuh
  date_created: 2021-05-04T13:22:19Z
  date_updated: 2021-05-04T13:22:19Z
  file_id: '9371'
  file_name: 2021_pone_Chalk.pdf
  file_size: 2768282
  relation: main_file
  success: 1
file_date_updated: 2021-05-04T13:22:19Z
has_accepted_license: '1'
intvolume: '16'
isi: 1
issue: '4'
language:
- iso: eng
month: '04'
oa: 1
oa_version: Published Version
pmid: 1
publication: PLoS ONE
publication_identifier:
  eissn:
  - '19326203'
publication_status: published
publisher: Public Library of Science
quality_controlled: '1'
scopus_import: '1'
status: public
title: Inferring the function performed by a recurrent neural network
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 16
year: '2021'
...
---
_id: '543'
abstract:
- lang: eng
  text: A central goal in theoretical neuroscience is to predict the response properties
    of sensory neurons from first principles. To this end, “efficient coding” posits
    that sensory neurons encode maximal information about their inputs given internal
    constraints. There exist, however, many variants of efficient coding (e.g., redundancy
    reduction, different formulations of predictive coding, robust coding, sparse
    coding, etc.), differing in their regimes of applicability, in the relevance of
    signals to be encoded, and in the choice of constraints. It is unclear how these
    types of efficient coding relate or what is expected when different coding objectives
    are combined. Here we present a unified framework that encompasses previously
    proposed efficient coding models and extends to unique regimes. We show that optimizing
    neural responses to encode predictive information can lead them to either correlate
    or decorrelate their inputs, depending on the stimulus statistics; in contrast,
    at low noise, efficiently encoding the past always predicts decorrelation. We
    further investigate coding of naturalistic movies and show that qualitatively different
    types of visual motion tuning and levels of response sparsity are predicted, depending
    on whether the objective is to recover the past or predict the future. Our approach
    promises a way to explain the observed diversity of sensory neural responses,
    as due to multiple functional goals and constraints fulfilled by different cell
    types and/or circuits.
article_processing_charge: No
author:
- first_name: Matthew J
  full_name: Chalk, Matthew J
  id: 2BAAC544-F248-11E8-B48F-1D18A9856A87
  last_name: Chalk
  orcid: 0000-0001-7782-4436
- first_name: Olivier
  full_name: Marre, Olivier
  last_name: Marre
- first_name: Gašper
  full_name: Tkačik, Gašper
  id: 3D494DCA-F248-11E8-B48F-1D18A9856A87
  last_name: Tkačik
  orcid: 0000-0002-6699-1455
citation:
  ama: Chalk MJ, Marre O, Tkačik G. Toward a unified theory of efficient, predictive,
    and sparse coding. <i>PNAS</i>. 2018;115(1):186-191. doi:<a href="https://doi.org/10.1073/pnas.1711114115">10.1073/pnas.1711114115</a>
  apa: Chalk, M. J., Marre, O., &#38; Tkačik, G. (2018). Toward a unified theory of
    efficient, predictive, and sparse coding. <i>PNAS</i>. National Academy of Sciences.
    <a href="https://doi.org/10.1073/pnas.1711114115">https://doi.org/10.1073/pnas.1711114115</a>
  chicago: Chalk, Matthew J, Olivier Marre, and Gašper Tkačik. “Toward a Unified Theory
    of Efficient, Predictive, and Sparse Coding.” <i>PNAS</i>. National Academy of
    Sciences, 2018. <a href="https://doi.org/10.1073/pnas.1711114115">https://doi.org/10.1073/pnas.1711114115</a>.
  ieee: M. J. Chalk, O. Marre, and G. Tkačik, “Toward a unified theory of efficient,
    predictive, and sparse coding,” <i>PNAS</i>, vol. 115, no. 1. National Academy
    of Sciences, pp. 186–191, 2018.
  ista: Chalk MJ, Marre O, Tkačik G. 2018. Toward a unified theory of efficient, predictive,
    and sparse coding. PNAS. 115(1), 186–191.
  mla: Chalk, Matthew J., et al. “Toward a Unified Theory of Efficient, Predictive,
    and Sparse Coding.” <i>PNAS</i>, vol. 115, no. 1, National Academy of Sciences,
    2018, pp. 186–91, doi:<a href="https://doi.org/10.1073/pnas.1711114115">10.1073/pnas.1711114115</a>.
  short: M.J. Chalk, O. Marre, G. Tkačik, PNAS 115 (2018) 186–191.
date_created: 2018-12-11T11:47:04Z
date_published: 2018-01-02T00:00:00Z
date_updated: 2023-09-19T10:16:35Z
day: '02'
department:
- _id: GaTk
doi: 10.1073/pnas.1711114115
external_id:
  isi:
  - '000419128700049'
intvolume: '115'
isi: 1
issue: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.1101/152660
month: '01'
oa: 1
oa_version: Submitted Version
page: 186 - 191
project:
- _id: 254D1A94-B435-11E9-9278-68D0E5697425
  call_identifier: FWF
  grant_number: P 25651-N26
  name: Sensitivity to higher-order statistics in natural scenes
publication: PNAS
publication_status: published
publisher: National Academy of Sciences
publist_id: '7273'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Toward a unified theory of efficient, predictive, and sparse coding
type: journal_article
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 115
year: '2018'
...
---
_id: '680'
abstract:
- lang: eng
  text: In order to respond reliably to specific features of their environment, sensory
    neurons need to integrate multiple incoming noisy signals. Crucially, they also
    need to compete for the interpretation of those signals with other neurons representing
    similar features. The form that this competition should take depends critically
    on the noise corrupting these signals. In this study we show that for the type
    of noise commonly observed in sensory systems, whose variance scales with the
    mean signal, sensory neurons should selectively divide their input signals by
    their predictions, suppressing ambiguous cues while amplifying others. Any change
    in the stimulus context alters which inputs are suppressed, leading to a deep
    dynamic reshaping of neural receptive fields going far beyond simple surround
    suppression. Paradoxically, these highly variable receptive fields go alongside
    and are in fact required for an invariant representation of external sensory features.
    In addition to offering a normative account of context-dependent changes in sensory
    responses, perceptual inference in the presence of signal-dependent noise accounts
    for ubiquitous features of sensory neurons such as divisive normalization, gain
    control and contrast dependent temporal dynamics.
article_number: e1005582
author:
- first_name: Matthew J
  full_name: Chalk, Matthew J
  id: 2BAAC544-F248-11E8-B48F-1D18A9856A87
  last_name: Chalk
  orcid: 0000-0001-7782-4436
- first_name: Paul
  full_name: Masset, Paul
  last_name: Masset
- first_name: Boris
  full_name: Gutkin, Boris
  last_name: Gutkin
- first_name: Sophie
  full_name: Denève, Sophie
  last_name: Denève
citation:
  ama: Chalk MJ, Masset P, Gutkin B, Denève S. Sensory noise predicts divisive reshaping
    of receptive fields. <i>PLoS Computational Biology</i>. 2017;13(6). doi:<a href="https://doi.org/10.1371/journal.pcbi.1005582">10.1371/journal.pcbi.1005582</a>
  apa: Chalk, M. J., Masset, P., Gutkin, B., &#38; Denève, S. (2017). Sensory noise
    predicts divisive reshaping of receptive fields. <i>PLoS Computational Biology</i>.
    Public Library of Science. <a href="https://doi.org/10.1371/journal.pcbi.1005582">https://doi.org/10.1371/journal.pcbi.1005582</a>
  chicago: Chalk, Matthew J, Paul Masset, Boris Gutkin, and Sophie Denève. “Sensory
    Noise Predicts Divisive Reshaping of Receptive Fields.” <i>PLoS Computational
    Biology</i>. Public Library of Science, 2017. <a href="https://doi.org/10.1371/journal.pcbi.1005582">https://doi.org/10.1371/journal.pcbi.1005582</a>.
  ieee: M. J. Chalk, P. Masset, B. Gutkin, and S. Denève, “Sensory noise predicts
    divisive reshaping of receptive fields,” <i>PLoS Computational Biology</i>, vol.
    13, no. 6. Public Library of Science, 2017.
  ista: Chalk MJ, Masset P, Gutkin B, Denève S. 2017. Sensory noise predicts divisive
    reshaping of receptive fields. PLoS Computational Biology. 13(6), e1005582.
  mla: Chalk, Matthew J., et al. “Sensory Noise Predicts Divisive Reshaping of Receptive
    Fields.” <i>PLoS Computational Biology</i>, vol. 13, no. 6, e1005582, Public Library
    of Science, 2017, doi:<a href="https://doi.org/10.1371/journal.pcbi.1005582">10.1371/journal.pcbi.1005582</a>.
  short: M.J. Chalk, P. Masset, B. Gutkin, S. Denève, PLoS Computational Biology 13
    (2017).
date_created: 2018-12-11T11:47:53Z
date_published: 2017-06-01T00:00:00Z
date_updated: 2023-02-23T14:10:54Z
day: '01'
ddc:
- '571'
department:
- _id: GaTk
doi: 10.1371/journal.pcbi.1005582
file:
- access_level: open_access
  checksum: 796a1026076af6f4405a47d985bc7b68
  content_type: application/pdf
  creator: system
  date_created: 2018-12-12T10:07:47Z
  date_updated: 2020-07-14T12:47:40Z
  file_id: '4645'
  file_name: IST-2017-898-v1+1_journal.pcbi.1005582.pdf
  file_size: 14555676
  relation: main_file
file_date_updated: 2020-07-14T12:47:40Z
has_accepted_license: '1'
intvolume: '13'
issue: '6'
language:
- iso: eng
month: '06'
oa: 1
oa_version: Published Version
publication: PLoS Computational Biology
publication_identifier:
  issn:
  - 1553734X
publication_status: published
publisher: Public Library of Science
publist_id: '7035'
pubrep_id: '898'
quality_controlled: '1'
related_material:
  record:
  - id: '9855'
    relation: research_data
    status: public
scopus_import: 1
status: public
title: Sensory noise predicts divisive reshaping of receptive fields
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 13
year: '2017'
...
---
_id: '9855'
abstract:
- lang: eng
  text: Includes derivation of the optimal estimation algorithm, generalisation to
    non-Poisson noise statistics, correlated input noise, and implementation in a
    multi-layer neural network.
article_processing_charge: No
author:
- first_name: Matthew J
  full_name: Chalk, Matthew J
  id: 2BAAC544-F248-11E8-B48F-1D18A9856A87
  last_name: Chalk
  orcid: 0000-0001-7782-4436
- first_name: Paul
  full_name: Masset, Paul
  last_name: Masset
- first_name: Boris
  full_name: Gutkin, Boris
  last_name: Gutkin
- first_name: Sophie
  full_name: Denève, Sophie
  last_name: Denève
citation:
  ama: Chalk MJ, Masset P, Gutkin B, Denève S. Supplementary appendix. 2017. doi:<a
    href="https://doi.org/10.1371/journal.pcbi.1005582.s001">10.1371/journal.pcbi.1005582.s001</a>
  apa: Chalk, M. J., Masset, P., Gutkin, B., &#38; Denève, S. (2017). Supplementary
    appendix. Public Library of Science. <a href="https://doi.org/10.1371/journal.pcbi.1005582.s001">https://doi.org/10.1371/journal.pcbi.1005582.s001</a>
  chicago: Chalk, Matthew J, Paul Masset, Boris Gutkin, and Sophie Denève. “Supplementary
    Appendix.” Public Library of Science, 2017. <a href="https://doi.org/10.1371/journal.pcbi.1005582.s001">https://doi.org/10.1371/journal.pcbi.1005582.s001</a>.
  ieee: M. J. Chalk, P. Masset, B. Gutkin, and S. Denève, “Supplementary appendix.”
    Public Library of Science, 2017.
  ista: Chalk MJ, Masset P, Gutkin B, Denève S. 2017. Supplementary appendix, Public
    Library of Science, <a href="https://doi.org/10.1371/journal.pcbi.1005582.s001">10.1371/journal.pcbi.1005582.s001</a>.
  mla: Chalk, Matthew J., et al. <i>Supplementary Appendix</i>. Public Library of
    Science, 2017, doi:<a href="https://doi.org/10.1371/journal.pcbi.1005582.s001">10.1371/journal.pcbi.1005582.s001</a>.
  short: M.J. Chalk, P. Masset, B. Gutkin, S. Denève, (2017).
date_created: 2021-08-10T07:05:10Z
date_published: 2017-06-01T00:00:00Z
date_updated: 2023-02-23T12:52:17Z
day: '01'
department:
- _id: GaTk
doi: 10.1371/journal.pcbi.1005582.s001
month: '06'
oa_version: Published Version
publisher: Public Library of Science
related_material:
  record:
  - id: '680'
    relation: used_in_publication
    status: public
status: public
title: Supplementary appendix
type: research_data_reference
user_id: 6785fbc1-c503-11eb-8a32-93094b40e1cf
year: '2017'
...
---
_id: '1082'
abstract:
- lang: eng
  text: In many applications, it is desirable to extract only the relevant aspects
    of data. A principled way to do this is the information bottleneck (IB) method,
    where one seeks a code that maximises information about a relevance variable,
    Y, while constraining the information encoded about the original data, X. Unfortunately
    however, the IB method is computationally demanding when data are high-dimensional
    and/or non-gaussian. Here we propose an approximate variational scheme for maximising
    a lower bound on the IB objective, analogous to variational EM. Using this method,
    we derive an IB algorithm to recover features that are both relevant and sparse.
    Finally, we demonstrate how kernelised versions of the algorithm can be used to
    address a broad range of problems with non-linear relation between X and Y.
alternative_title:
- Advances in Neural Information Processing Systems
author:
- first_name: Matthew J
  full_name: Chalk, Matthew J
  id: 2BAAC544-F248-11E8-B48F-1D18A9856A87
  last_name: Chalk
  orcid: 0000-0001-7782-4436
- first_name: Olivier
  full_name: Marre, Olivier
  last_name: Marre
- first_name: Gašper
  full_name: Tkačik, Gašper
  id: 3D494DCA-F248-11E8-B48F-1D18A9856A87
  last_name: Tkačik
  orcid: 0000-0002-6699-1455
citation:
  ama: 'Chalk MJ, Marre O, Tkačik G. Relevant sparse codes with variational information
    bottleneck. In: Vol 29. Neural Information Processing Systems; 2016:1965-1973.'
  apa: 'Chalk, M. J., Marre, O., &#38; Tkačik, G. (2016). Relevant sparse codes with
    variational information bottleneck (Vol. 29, pp. 1965–1973). Presented at the
    NIPS: Neural Information Processing Systems, Barcelona, Spain: Neural Information
    Processing Systems.'
  chicago: Chalk, Matthew J, Olivier Marre, and Gašper Tkačik. “Relevant Sparse Codes
    with Variational Information Bottleneck,” 29:1965–73. Neural Information Processing
    Systems, 2016.
  ieee: 'M. J. Chalk, O. Marre, and G. Tkačik, “Relevant sparse codes with variational
    information bottleneck,” presented at the NIPS: Neural Information Processing
    Systems, Barcelona, Spain, 2016, vol. 29, pp. 1965–1973.'
  ista: 'Chalk MJ, Marre O, Tkačik G. 2016. Relevant sparse codes with variational
    information bottleneck. NIPS: Neural Information Processing Systems, Advances
    in Neural Information Processing Systems, vol. 29, 1965–1973.'
  mla: Chalk, Matthew J., et al. <i>Relevant Sparse Codes with Variational Information
    Bottleneck</i>. Vol. 29, Neural Information Processing Systems, 2016, pp. 1965–73.
  short: M.J. Chalk, O. Marre, G. Tkačik, in:, Neural Information Processing Systems,
    2016, pp. 1965–1973.
conference:
  end_date: 2016-12-10
  location: Barcelona, Spain
  name: 'NIPS: Neural Information Processing Systems'
  start_date: 2016-12-05
date_created: 2018-12-11T11:50:03Z
date_published: 2016-12-01T00:00:00Z
date_updated: 2021-01-12T06:48:09Z
day: '01'
department:
- _id: GaTk
intvolume: '29'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/1605.07332
month: '12'
oa: 1
oa_version: Preprint
page: 1965-1973
publication_status: published
publisher: Neural Information Processing Systems
publist_id: '6298'
quality_controlled: '1'
related_material:
  link:
  - relation: other
    url: https://papers.nips.cc/paper/6101-relevant-sparse-codes-with-variational-information-bottleneck
scopus_import: 1
status: public
title: Relevant sparse codes with variational information bottleneck
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 29
year: '2016'
...
---
_id: '1266'
abstract:
- lang: eng
  text: 'Cortical networks exhibit ‘global oscillations’, in which neural spike times
    are entrained to an underlying oscillatory rhythm, but where individual neurons
    fire irregularly, on only a fraction of cycles. While the network dynamics underlying
    global oscillations have been well characterised, their function is debated. Here,
    we show that such global oscillations are a direct consequence of optimal efficient
    coding in spiking networks with synaptic delays and noise. To avoid firing unnecessary
    spikes, neurons need to share information about the network state. Ideally, membrane
    potentials should be strongly correlated and reflect a ‘prediction error’ while
    the spikes themselves are uncorrelated and occur rarely. We show that the most
    efficient representation is when: (i) spike times are entrained to a global Gamma
    rhythm (implying a consistent representation of the error); but (ii) few neurons
    fire on each cycle (implying high efficiency), while (iii) excitation and inhibition
    are tightly balanced. This suggests that cortical networks exhibiting such dynamics
    are tuned to achieve a maximally efficient population code.'
acknowledgement: Boris Gutkin acknowledges funding by the Russian Academic Excellence
  Project ‘5-100’.
article_number: e13824
author:
- first_name: Matthew J
  full_name: Chalk, Matthew J
  id: 2BAAC544-F248-11E8-B48F-1D18A9856A87
  last_name: Chalk
  orcid: 0000-0001-7782-4436
- first_name: Boris
  full_name: Gutkin, Boris
  last_name: Gutkin
- first_name: Sophie
  full_name: Denève, Sophie
  last_name: Denève
citation:
  ama: Chalk MJ, Gutkin B, Denève S. Neural oscillations as a signature of efficient
    coding in the presence of synaptic delays. <i>eLife</i>. 2016;5(2016JULY). doi:<a
    href="https://doi.org/10.7554/eLife.13824">10.7554/eLife.13824</a>
  apa: Chalk, M. J., Gutkin, B., &#38; Denève, S. (2016). Neural oscillations as a
    signature of efficient coding in the presence of synaptic delays. <i>ELife</i>.
    eLife Sciences Publications. <a href="https://doi.org/10.7554/eLife.13824">https://doi.org/10.7554/eLife.13824</a>
  chicago: Chalk, Matthew J, Boris Gutkin, and Sophie Denève. “Neural Oscillations
    as a Signature of Efficient Coding in the Presence of Synaptic Delays.” <i>ELife</i>.
    eLife Sciences Publications, 2016. <a href="https://doi.org/10.7554/eLife.13824">https://doi.org/10.7554/eLife.13824</a>.
  ieee: M. J. Chalk, B. Gutkin, and S. Denève, “Neural oscillations as a signature
    of efficient coding in the presence of synaptic delays,” <i>eLife</i>, vol. 5,
    no. 2016JULY. eLife Sciences Publications, 2016.
  ista: Chalk MJ, Gutkin B, Denève S. 2016. Neural oscillations as a signature of
    efficient coding in the presence of synaptic delays. eLife. 5(2016JULY), e13824.
  mla: Chalk, Matthew J., et al. “Neural Oscillations as a Signature of Efficient
    Coding in the Presence of Synaptic Delays.” <i>ELife</i>, vol. 5, no. 2016JULY,
    e13824, eLife Sciences Publications, 2016, doi:<a href="https://doi.org/10.7554/eLife.13824">10.7554/eLife.13824</a>.
  short: M.J. Chalk, B. Gutkin, S. Denève, ELife 5 (2016).
date_created: 2018-12-11T11:51:02Z
date_published: 2016-07-01T00:00:00Z
date_updated: 2021-01-12T06:49:30Z
day: '01'
ddc:
- '571'
department:
- _id: GaTk
doi: 10.7554/eLife.13824
file:
- access_level: open_access
  checksum: dc52d967dc76174477bb258d84be2899
  content_type: application/pdf
  creator: system
  date_created: 2018-12-12T10:11:20Z
  date_updated: 2020-07-14T12:44:42Z
  file_id: '4874'
  file_name: IST-2016-700-v1+1_e13824-download.pdf
  file_size: 2819055
  relation: main_file
file_date_updated: 2020-07-14T12:44:42Z
has_accepted_license: '1'
intvolume: '5'
issue: 2016JULY
language:
- iso: eng
month: '07'
oa: 1
oa_version: Published Version
publication: eLife
publication_status: published
publisher: eLife Sciences Publications
publist_id: '6056'
pubrep_id: '700'
quality_controlled: '1'
scopus_import: 1
status: public
title: Neural oscillations as a signature of efficient coding in the presence of synaptic
  delays
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 5
year: '2016'
...
