---
_id: '10665'
abstract:
- lang: eng
  text: "Formal verification of neural networks is an active topic of research, and
    recent advances have significantly increased the size of the networks that verification
    tools can handle. However, most methods are designed for verification of an idealized
    model of the actual network which works over real arithmetic and ignores rounding
    imprecisions. This idealization is in stark contrast to network quantization,
    which is a technique that trades numerical precision for computational efficiency
    and is, therefore, often applied in practice. Neglecting rounding errors of such
    low-bit quantized neural networks has been shown to lead to wrong conclusions
    about the network’s correctness. Thus, the desired approach for verifying quantized
    neural networks would be one that takes these rounding errors into account.
    In this paper, we show that verifying the bit-exact implementation of quantized
    neural networks with bit-vector specifications is PSPACE-hard, even though verifying
    idealized real-valued networks and satisfiability of bit-vector specifications
    alone are each in NP. Furthermore, we explore several practical heuristics toward
    closing the complexity gap between idealized and bit-exact verification. In particular,
    we propose three techniques for making SMT-based verification of quantized neural
    networks more scalable. Our experiments demonstrate that our proposed methods
    allow a speedup of up to three orders of magnitude over existing approaches."
acknowledgement: "This research was supported in part by the Austrian Science Fund
  (FWF) under grant Z211-N23 (Wittgenstein Award), ERC CoG 863818 (FoRM-SMArt),
  and the European Union’s Horizon 2020 research and innovation programme under the
  Marie Skłodowska-Curie Grant Agreement No. 665385."
alternative_title:
- Technical Tracks
article_processing_charge: No
arxiv: 1
author:
- first_name: Thomas A
  full_name: Henzinger, Thomas A
  id: 40876CD8-F248-11E8-B48F-1D18A9856A87
  last_name: Henzinger
  orcid: 0000-0002-2985-7724
- first_name: Mathias
  full_name: Lechner, Mathias
  id: 3DC22916-F248-11E8-B48F-1D18A9856A87
  last_name: Lechner
- first_name: Dorde
  full_name: Zikelic, Dorde
  id: 294AA7A6-F248-11E8-B48F-1D18A9856A87
  last_name: Zikelic
  orcid: 0000-0002-4681-1699
citation:
  ama: 'Henzinger TA, Lechner M, Zikelic D. Scalable verification of quantized neural
    networks. In: <i>Proceedings of the AAAI Conference on Artificial Intelligence</i>.
    Vol 35. AAAI Press; 2021:3787-3795.'
  apa: 'Henzinger, T. A., Lechner, M., &#38; Zikelic, D. (2021). Scalable verification
    of quantized neural networks. In <i>Proceedings of the AAAI Conference on Artificial
    Intelligence</i> (Vol. 35, pp. 3787–3795). Virtual: AAAI Press.'
  chicago: Henzinger, Thomas A, Mathias Lechner, and Dorde Zikelic. “Scalable Verification
    of Quantized Neural Networks.” In <i>Proceedings of the AAAI Conference on Artificial
    Intelligence</i>, 35:3787–95. AAAI Press, 2021.
  ieee: T. A. Henzinger, M. Lechner, and D. Zikelic, “Scalable verification of quantized
    neural networks,” in <i>Proceedings of the AAAI Conference on Artificial Intelligence</i>,
    Virtual, 2021, vol. 35, no. 5A, pp. 3787–3795.
  ista: 'Henzinger TA, Lechner M, Zikelic D. 2021. Scalable verification of quantized
    neural networks. Proceedings of the AAAI Conference on Artificial Intelligence.
    AAAI: Association for the Advancement of Artificial Intelligence, Technical Tracks,
    vol. 35, 3787–3795.'
  mla: Henzinger, Thomas A., et al. “Scalable Verification of Quantized Neural Networks.”
    <i>Proceedings of the AAAI Conference on Artificial Intelligence</i>, vol. 35,
    no. 5A, AAAI Press, 2021, pp. 3787–95.
  short: T.A. Henzinger, M. Lechner, D. Zikelic, in:, Proceedings of the AAAI Conference
    on Artificial Intelligence, AAAI Press, 2021, pp. 3787–3795.
conference:
  end_date: 2021-02-09
  location: Virtual
  name: 'AAAI: Association for the Advancement of Artificial Intelligence'
  start_date: 2021-02-02
date_created: 2022-01-25T15:15:02Z
date_published: 2021-05-28T00:00:00Z
date_updated: 2025-07-14T09:10:11Z
day: '28'
ddc:
- '000'
department:
- _id: GradSch
- _id: ToHe
ec_funded: 1
external_id:
  arxiv:
  - '2012.08185'
file:
- access_level: open_access
  checksum: 2bc8155b2526a70fba5b7301bc89dbd1
  content_type: application/pdf
  creator: mlechner
  date_created: 2022-01-26T07:41:16Z
  date_updated: 2022-01-26T07:41:16Z
  file_id: '10684'
  file_name: 16496-Article Text-19990-1-2-20210518 (1).pdf
  file_size: 137235
  relation: main_file
  success: 1
file_date_updated: 2022-01-26T07:41:16Z
has_accepted_license: '1'
intvolume: '35'
issue: 5A
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://ojs.aaai.org/index.php/AAAI/article/view/16496
month: '05'
oa: 1
oa_version: Published Version
page: 3787-3795
project:
- _id: 2564DBCA-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '665385'
  name: International IST Doctoral Program
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
  call_identifier: FWF
  grant_number: Z211
  name: The Wittgenstein Prize
- _id: 0599E47C-7A3F-11EA-A408-12923DDC885E
  call_identifier: H2020
  grant_number: '863818'
  name: 'Formal Methods for Stochastic Models: Algorithms and Applications'
publication: Proceedings of the AAAI Conference on Artificial Intelligence
publication_identifier:
  eissn:
  - 2374-3468
  isbn:
  - 978-1-57735-866-4
  issn:
  - 2159-5399
publication_status: published
publisher: AAAI Press
quality_controlled: '1'
related_material:
  record:
  - id: '11362'
    relation: dissertation_contains
    status: public
scopus_import: '1'
status: public
title: Scalable verification of quantized neural networks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2021'
...
---
_id: '10669'
abstract:
- lang: eng
  text: "We show that Neural ODEs, an emerging class of time-continuous neural networks,
    can be verified by solving a set of global-optimization problems. For this purpose,
    we introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based
    technique for constructing a tight Reachtube (an over-approximation of the set
    of reachable states over a given time-horizon), and provide stochastic guarantees
    in the form of confidence intervals for the Reachtube bounds. SLR inherently avoids
    the infamous wrapping effect (accumulation of over-approximation errors) by performing
    local optimization steps to expand safe regions instead of repeatedly forward-propagating
    them as is done by deterministic reachability methods. To enable fast local optimizations,
    we introduce a novel forward-mode adjoint sensitivity method to compute gradients
    without the need for backpropagation. Finally, we establish asymptotic and non-asymptotic
    convergence rates for SLR."
acknowledgement: "The authors would like to thank the reviewers for their insightful
  comments. RH and RG were partially supported by Horizon-2020 ECSEL Project grant
  No. 783163 (iDev40). RH was partially supported by Boeing. ML was supported in
  part by the Austrian Science Fund (FWF) under grant Z211-N23 (Wittgenstein Award).
  SG was funded by FWF project W1255-N23. JC was partially supported by NAWA Polish
  Returns grant PPN/PPO/2018/1/00029. SS was supported by NSF awards DCL-2040599,
  CCF-1918225, and CPS-1446832."
alternative_title:
- Technical Tracks
article_processing_charge: No
arxiv: 1
author:
- first_name: Sophie
  full_name: Grunbacher, Sophie
  last_name: Grunbacher
- first_name: Ramin
  full_name: Hasani, Ramin
  last_name: Hasani
- first_name: Mathias
  full_name: Lechner, Mathias
  id: 3DC22916-F248-11E8-B48F-1D18A9856A87
  last_name: Lechner
- first_name: Jacek
  full_name: Cyranka, Jacek
  last_name: Cyranka
- first_name: Scott A
  full_name: Smolka, Scott A
  last_name: Smolka
- first_name: Radu
  full_name: Grosu, Radu
  last_name: Grosu
citation:
  ama: 'Grunbacher S, Hasani R, Lechner M, Cyranka J, Smolka SA, Grosu R. On the verification
    of neural ODEs with stochastic guarantees. In: <i>Proceedings of the AAAI Conference
    on Artificial Intelligence</i>. Vol 35. AAAI Press; 2021:11525-11535.'
  apa: 'Grunbacher, S., Hasani, R., Lechner, M., Cyranka, J., Smolka, S. A., &#38;
    Grosu, R. (2021). On the verification of neural ODEs with stochastic guarantees.
    In <i>Proceedings of the AAAI Conference on Artificial Intelligence</i> (Vol.
    35, pp. 11525–11535). Virtual: AAAI Press.'
  chicago: Grunbacher, Sophie, Ramin Hasani, Mathias Lechner, Jacek Cyranka, Scott
    A Smolka, and Radu Grosu. “On the Verification of Neural ODEs with Stochastic
    Guarantees.” In <i>Proceedings of the AAAI Conference on Artificial Intelligence</i>,
    35:11525–35. AAAI Press, 2021.
  ieee: S. Grunbacher, R. Hasani, M. Lechner, J. Cyranka, S. A. Smolka, and R. Grosu,
    “On the verification of neural ODEs with stochastic guarantees,” in <i>Proceedings
    of the AAAI Conference on Artificial Intelligence</i>, Virtual, 2021, vol. 35,
    no. 13, pp. 11525–11535.
  ista: 'Grunbacher S, Hasani R, Lechner M, Cyranka J, Smolka SA, Grosu R. 2021. On
    the verification of neural ODEs with stochastic guarantees. Proceedings of the
    AAAI Conference on Artificial Intelligence. AAAI: Association for the Advancement
    of Artificial Intelligence, Technical Tracks, vol. 35, 11525–11535.'
  mla: Grunbacher, Sophie, et al. “On the Verification of Neural ODEs with Stochastic
    Guarantees.” <i>Proceedings of the AAAI Conference on Artificial Intelligence</i>,
    vol. 35, no. 13, AAAI Press, 2021, pp. 11525–35.
  short: S. Grunbacher, R. Hasani, M. Lechner, J. Cyranka, S.A. Smolka, R. Grosu,
    in:, Proceedings of the AAAI Conference on Artificial Intelligence, AAAI Press,
    2021, pp. 11525–11535.
conference:
  end_date: 2021-02-09
  location: Virtual
  name: 'AAAI: Association for the Advancement of Artificial Intelligence'
  start_date: 2021-02-02
date_created: 2022-01-25T15:47:20Z
date_published: 2021-05-28T00:00:00Z
date_updated: 2022-05-24T06:33:14Z
day: '28'
ddc:
- '000'
department:
- _id: GradSch
- _id: ToHe
external_id:
  arxiv:
  - '2012.08863'
file:
- access_level: open_access
  checksum: 468d07041e282a1d46ffdae92f709630
  content_type: application/pdf
  creator: mlechner
  date_created: 2022-01-26T07:38:08Z
  date_updated: 2022-01-26T07:38:08Z
  file_id: '10680'
  file_name: 17372-Article Text-20866-1-2-20210518.pdf
  file_size: 286906
  relation: main_file
  success: 1
file_date_updated: 2022-01-26T07:38:08Z
has_accepted_license: '1'
intvolume: '35'
issue: '13'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://ojs.aaai.org/index.php/AAAI/article/view/17372
month: '05'
oa: 1
oa_version: Published Version
page: 11525-11535
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
  call_identifier: FWF
  grant_number: Z211
  name: The Wittgenstein Prize
publication: Proceedings of the AAAI Conference on Artificial Intelligence
publication_identifier:
  eissn:
  - 2374-3468
  isbn:
  - 978-1-57735-866-4
  issn:
  - 2159-5399
publication_status: published
publisher: AAAI Press
quality_controlled: '1'
status: public
title: On the verification of neural ODEs with stochastic guarantees
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2021'
...
---
_id: '10671'
abstract:
- lang: eng
  text: We introduce a new class of time-continuous recurrent neural network models.
    Instead of declaring a learning system’s dynamics by implicit nonlinearities,
    we construct networks of linear first-order dynamical systems modulated via nonlinear
    interlinked gates. The resulting models represent dynamical systems with varying
    (i.e., liquid) time-constants coupled to their hidden state, with outputs being
    computed by numerical differential equation solvers. These neural networks exhibit
    stable and bounded behavior, yield superior expressivity within the family of
    neural ordinary differential equations, and give rise to improved performance
    on time-series prediction tasks. To demonstrate these properties, we first take
    a theoretical approach to find bounds over their dynamics, and compute their expressive
    power by the trajectory length measure in a latent trajectory space. We then conduct
    a series of time-series prediction experiments to manifest the approximation capability
    of Liquid Time-Constant Networks (LTCs) compared to classical and modern RNNs.
acknowledgement: "R.H. and D.R. are partially supported by Boeing. R.H. and R.G. were
  partially supported by the Horizon-2020 ECSEL Project grant No. 783163 (iDev40).
  M.L. was supported in part by the Austrian Science Fund (FWF) under grant Z211-N23
  (Wittgenstein Award). A.A. is supported by the National Science Foundation (NSF)
  Graduate Research Fellowship Program. This research work is partially drawn from
  the PhD dissertation of R.H."
alternative_title:
- Technical Tracks
article_processing_charge: No
arxiv: 1
author:
- first_name: Ramin
  full_name: Hasani, Ramin
  last_name: Hasani
- first_name: Mathias
  full_name: Lechner, Mathias
  id: 3DC22916-F248-11E8-B48F-1D18A9856A87
  last_name: Lechner
- first_name: Alexander
  full_name: Amini, Alexander
  last_name: Amini
- first_name: Daniela
  full_name: Rus, Daniela
  last_name: Rus
- first_name: Radu
  full_name: Grosu, Radu
  last_name: Grosu
citation:
  ama: 'Hasani R, Lechner M, Amini A, Rus D, Grosu R. Liquid time-constant networks.
    In: <i>Proceedings of the AAAI Conference on Artificial Intelligence</i>. Vol
    35. AAAI Press; 2021:7657-7666.'
  apa: 'Hasani, R., Lechner, M., Amini, A., Rus, D., &#38; Grosu, R. (2021). Liquid
    time-constant networks. In <i>Proceedings of the AAAI Conference on Artificial
    Intelligence</i> (Vol. 35, pp. 7657–7666). Virtual: AAAI Press.'
  chicago: Hasani, Ramin, Mathias Lechner, Alexander Amini, Daniela Rus, and Radu
    Grosu. “Liquid Time-Constant Networks.” In <i>Proceedings of the AAAI Conference
    on Artificial Intelligence</i>, 35:7657–66. AAAI Press, 2021.
  ieee: R. Hasani, M. Lechner, A. Amini, D. Rus, and R. Grosu, “Liquid time-constant
    networks,” in <i>Proceedings of the AAAI Conference on Artificial Intelligence</i>,
    Virtual, 2021, vol. 35, no. 9, pp. 7657–7666.
  ista: 'Hasani R, Lechner M, Amini A, Rus D, Grosu R. 2021. Liquid time-constant
    networks. Proceedings of the AAAI Conference on Artificial Intelligence. AAAI:
    Association for the Advancement of Artificial Intelligence, Technical Tracks,
    vol. 35, 7657–7666.'
  mla: Hasani, Ramin, et al. “Liquid Time-Constant Networks.” <i>Proceedings of the
    AAAI Conference on Artificial Intelligence</i>, vol. 35, no. 9, AAAI Press, 2021,
    pp. 7657–66.
  short: R. Hasani, M. Lechner, A. Amini, D. Rus, R. Grosu, in:, Proceedings of the
    AAAI Conference on Artificial Intelligence, AAAI Press, 2021, pp. 7657–7666.
conference:
  end_date: 2021-02-09
  location: Virtual
  name: 'AAAI: Association for the Advancement of Artificial Intelligence'
  start_date: 2021-02-02
date_created: 2022-01-25T15:48:36Z
date_published: 2021-05-28T00:00:00Z
date_updated: 2022-05-24T06:36:54Z
day: '28'
ddc:
- '000'
department:
- _id: GradSch
- _id: ToHe
external_id:
  arxiv:
  - '2006.04439'
file:
- access_level: open_access
  checksum: 0f06995fba06dbcfa7ed965fc66027ff
  content_type: application/pdf
  creator: mlechner
  date_created: 2022-01-26T07:36:03Z
  date_updated: 2022-01-26T07:36:03Z
  file_id: '10678'
  file_name: 16936-Article Text-20430-1-2-20210518 (1).pdf
  file_size: 4302669
  relation: main_file
  success: 1
file_date_updated: 2022-01-26T07:36:03Z
has_accepted_license: '1'
intvolume: '35'
issue: '9'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://ojs.aaai.org/index.php/AAAI/article/view/16936
month: '05'
oa: 1
oa_version: Published Version
page: 7657-7666
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
  call_identifier: FWF
  grant_number: Z211
  name: The Wittgenstein Prize
publication: Proceedings of the AAAI Conference on Artificial Intelligence
publication_identifier:
  eissn:
  - 2374-3468
  isbn:
  - 978-1-57735-866-4
  issn:
  - 2159-5399
publication_status: published
publisher: AAAI Press
quality_controlled: '1'
status: public
title: Liquid time-constant networks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2021'
...
