[{"main_file_link":[{"url":"https://doi.org/10.48550/arXiv.2210.08031","open_access":"1"}],"intvolume":"        35","arxiv":1,"volume":35,"article_processing_charge":"No","language":[{"iso":"eng"}],"type":"conference","citation":{"mla":"Rahaman, Nasim, et al. “Neural Attentive Circuits.” <i>36th Conference on Neural Information Processing Systems</i>, vol. 35, 2022.","chicago":"Rahaman, Nasim, Martin Weiss, Francesco Locatello, Chris Pal, Yoshua Bengio, Bernhard Schölkopf, Li Erran Li, and Nicolas Ballas. “Neural Attentive Circuits.” In <i>36th Conference on Neural Information Processing Systems</i>, Vol. 35, 2022.","ama":"Rahaman N, Weiss M, Locatello F, et al. Neural attentive circuits. In: <i>36th Conference on Neural Information Processing Systems</i>. Vol 35. ; 2022.","short":"N. Rahaman, M. Weiss, F. Locatello, C. Pal, Y. Bengio, B. Schölkopf, L.E. Li, N. Ballas, in:, 36th Conference on Neural Information Processing Systems, 2022.","ista":"Rahaman N, Weiss M, Locatello F, Pal C, Bengio Y, Schölkopf B, Li LE, Ballas N. 2022. Neural attentive circuits. 36th Conference on Neural Information Processing Systems. NeurIPS: Neural Information Processing Systems,  Advances in Neural Information Processing Systems, vol. 35.","apa":"Rahaman, N., Weiss, M., Locatello, F., Pal, C., Bengio, Y., Schölkopf, B., … Ballas, N. (2022). Neural attentive circuits. In <i>36th Conference on Neural Information Processing Systems</i> (Vol. 35). New Orleans, United States.","ieee":"N. Rahaman <i>et al.</i>, “Neural attentive circuits,” in <i>36th Conference on Neural Information Processing Systems</i>, New Orleans, United States, 2022, vol. 
35."},"day":"14","department":[{"_id":"FrLo"}],"month":"10","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","extern":"1","external_id":{"arxiv":["2210.08031"]},"date_published":"2022-10-14T00:00:00Z","oa_version":"Preprint","date_created":"2023-08-22T13:57:27Z","conference":{"name":"NeurIPS: Neural Information Processing Systems","end_date":"2022-12-01","location":"New Orleans, United States","start_date":"2022-11-29"},"publication":"36th Conference on Neural Information Processing Systems","date_updated":"2023-09-11T09:29:09Z","publication_status":"published","status":"public","oa":1,"title":"Neural attentive circuits","author":[{"first_name":"Nasim","last_name":"Rahaman","full_name":"Rahaman, Nasim"},{"full_name":"Weiss, Martin","last_name":"Weiss","first_name":"Martin"},{"orcid":"0000-0002-4850-0683","full_name":"Locatello, Francesco","last_name":"Locatello","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","first_name":"Francesco"},{"full_name":"Pal, Chris","last_name":"Pal","first_name":"Chris"},{"first_name":"Yoshua","last_name":"Bengio","full_name":"Bengio, Yoshua"},{"first_name":"Bernhard","last_name":"Schölkopf","full_name":"Schölkopf, Bernhard"},{"first_name":"Li Erran","last_name":"Li","full_name":"Li, Li Erran"},{"last_name":"Ballas","full_name":"Ballas, Nicolas","first_name":"Nicolas"}],"_id":"14168","abstract":[{"text":"Recent work has seen the development of general purpose neural architectures\r\nthat can be trained to perform tasks across diverse data modalities. General\r\npurpose models typically make few assumptions about the underlying\r\ndata-structure and are known to perform well in the large-data regime. At the\r\nsame time, there has been growing interest in modular neural architectures that\r\nrepresent the data using sparsely interacting modules. These models can be more\r\nrobust out-of-distribution, computationally efficient, and capable of\r\nsample-efficient adaptation to new data. 
However, they tend to make\r\ndomain-specific assumptions about the data, and present challenges in how\r\nmodule behavior (i.e., parameterization) and connectivity (i.e., their layout)\r\ncan be jointly learned. In this work, we introduce a general purpose, yet\r\nmodular neural architecture called Neural Attentive Circuits (NACs) that\r\njointly learns the parameterization and a sparse connectivity of neural modules\r\nwithout using domain knowledge. NACs are best understood as the combination of\r\ntwo systems that are jointly trained end-to-end: one that determines the module\r\nconfiguration and the other that executes it on an input. We demonstrate\r\nqualitatively that NACs learn diverse and meaningful module configurations on\r\nthe NLVR2 dataset without additional supervision. Quantitatively, we show that\r\nby incorporating modularity in this way, NACs improve upon a strong non-modular\r\nbaseline in terms of low-shot adaptation on CIFAR and CUBs dataset by about\r\n10%, and OOD robustness on Tiny ImageNet-R by about 2.5%. Further, we find that\r\nNACs can achieve an 8x speedup at inference time while losing less than 3%\r\nperformance. Finally, we find NACs to yield competitive results on diverse data\r\nmodalities spanning point-cloud classification, symbolic processing and\r\ntext-classification from ASCII bytes, thereby confirming its general purpose\r\nnature.","lang":"eng"}],"alternative_title":[" Advances in Neural Information Processing Systems"],"year":"2022"},{"oa_version":"Preprint","conference":{"end_date":"2022-07-23","start_date":"2022-07-17","location":"Baltimore, MD, United States","name":"International Conference on Machine Learning"},"month":"07","citation":{"ista":"Dittadi A, Papa S, Vita MD, Schölkopf B, Winther O, Locatello F. Generalization and robustness implications in object-centric learning. Proceedings of the 39th International Conference on Machine Learning. International Conference on Machine Learning, PMLR, vol. 
2022, 5221–5285.","ama":"Dittadi A, Papa S, Vita MD, Schölkopf B, Winther O, Locatello F. Generalization and robustness implications in object-centric learning. In: <i>Proceedings of the 39th International Conference on Machine Learning</i>. Vol 2022. ML Research Press; :5221-5285.","short":"A. Dittadi, S. Papa, M.D. Vita, B. Schölkopf, O. Winther, F. Locatello, in:, Proceedings of the 39th International Conference on Machine Learning, ML Research Press, n.d., pp. 5221–5285.","apa":"Dittadi, A., Papa, S., Vita, M. D., Schölkopf, B., Winther, O., &#38; Locatello, F. (n.d.). Generalization and robustness implications in object-centric learning. In <i>Proceedings of the 39th International Conference on Machine Learning</i> (Vol. 2022, pp. 5221–5285). Baltimore, MD, United States: ML Research Press.","ieee":"A. Dittadi, S. Papa, M. D. Vita, B. Schölkopf, O. Winther, and F. Locatello, “Generalization and robustness implications in object-centric learning,” in <i>Proceedings of the 39th International Conference on Machine Learning</i>, Baltimore, MD, United States, vol. 2022, pp. 5221–5285.","chicago":"Dittadi, Andrea, Samuele Papa, Michele De Vita, Bernhard Schölkopf, Ole Winther, and Francesco Locatello. “Generalization and Robustness Implications in Object-Centric Learning.” In <i>Proceedings of the 39th International Conference on Machine Learning</i>, 2022:5221–85. ML Research Press, n.d.","mla":"Dittadi, Andrea, et al. “Generalization and Robustness Implications in Object-Centric Learning.” <i>Proceedings of the 39th International Conference on Machine Learning</i>, vol. 2022, ML Research Press, pp. 5221–85."},"day":"22","volume":2022,"article_processing_charge":"No","arxiv":1,"abstract":[{"lang":"eng","text":"The idea behind object-centric representation learning is that natural scenes can better be modeled as compositions of objects and their relations as opposed to distributed representations. 
This inductive bias can be injected into neural networks to potentially improve systematic generalization and performance of downstream tasks in scenes with multiple objects. In this paper, we train state-of-the-art unsupervised models on five common multi-object datasets and evaluate segmentation metrics and downstream object property prediction. In addition, we study generalization and robustness by investigating the settings where either a single object is out of distribution -- e.g., having an unseen color, texture, or shape -- or global properties of the scene are altered -- e.g., by occlusions, cropping, or increasing the number of objects. From our experimental study, we find object-centric representations to be useful for\r\ndownstream tasks and generally robust to most distribution shifts affecting objects. However, when the distribution shift affects the input in a less structured manner, robustness in terms of segmentation and downstream task performance may vary significantly across models and distribution shifts. 
"}],"_id":"14170","title":"Generalization and robustness implications in object-centric learning","author":[{"full_name":"Dittadi, Andrea","last_name":"Dittadi","first_name":"Andrea"},{"full_name":"Papa, Samuele","last_name":"Papa","first_name":"Samuele"},{"full_name":"Vita, Michele De","last_name":"Vita","first_name":"Michele De"},{"first_name":"Bernhard","full_name":"Schölkopf, Bernhard","last_name":"Schölkopf"},{"first_name":"Ole","full_name":"Winther, Ole","last_name":"Winther"},{"first_name":"Francesco","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","last_name":"Locatello","full_name":"Locatello, Francesco","orcid":"0000-0002-4850-0683"}],"status":"public","publication_status":"submitted","date_created":"2023-08-22T13:59:55Z","extern":"1","date_published":"2022-07-22T00:00:00Z","external_id":{"arxiv":["2107.00637"]},"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","department":[{"_id":"FrLo"}],"quality_controlled":"1","type":"conference","language":[{"iso":"eng"}],"publisher":"ML Research Press","main_file_link":[{"url":"https://arxiv.org/abs/2107.00637","open_access":"1"}],"intvolume":"      2022","year":"2022","alternative_title":["PMLR"],"oa":1,"page":"5221-5285","date_updated":"2023-09-11T10:08:14Z","publication":"Proceedings of the 39th International Conference on Machine Learning"},{"alternative_title":["PMLR"],"year":"2022","oa":1,"page":"18741-18753","date_updated":"2023-09-11T10:14:20Z","publication":"Proceedings of the 39th International Conference on Machine Learning","date_created":"2023-08-22T14:00:18Z","date_published":"2022-07-22T00:00:00Z","extern":"1","external_id":{"arxiv":["2203.04413"]},"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","department":[{"_id":"FrLo"}],"type":"conference","quality_controlled":"1","language":[{"iso":"eng"}],"main_file_link":[{"open_access":"1","url":"https://arxiv.org/abs/2203.04413"}],"intvolume":"       162","publisher":"ML Research Press","abstract":[{"lang":"eng","text":"This paper demonstrates how to recover 
causal graphs from the score of the\r\ndata distribution in non-linear additive (Gaussian) noise models. Using score\r\nmatching algorithms as a building block, we show how to design a new generation\r\nof scalable causal discovery methods. To showcase our approach, we also propose\r\na new efficient method for approximating the score's Jacobian, enabling us to\r\nrecover the causal graph. Empirically, we find that the new algorithm, called\r\nSCORE, is competitive with state-of-the-art causal discovery methods while\r\nbeing significantly faster."}],"_id":"14171","author":[{"first_name":"Paul","full_name":"Rolland, Paul","last_name":"Rolland"},{"first_name":"Volkan","full_name":"Cevher, Volkan","last_name":"Cevher"},{"first_name":"Matthäus","full_name":"Kleindessner, Matthäus","last_name":"Kleindessner"},{"first_name":"Chris","last_name":"Russell","full_name":"Russell, Chris"},{"first_name":"Bernhard","full_name":"Schölkopf, Bernhard","last_name":"Schölkopf"},{"first_name":"Dominik","full_name":"Janzing, Dominik","last_name":"Janzing"},{"orcid":"0000-0002-4850-0683","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","first_name":"Francesco","full_name":"Locatello, Francesco","last_name":"Locatello"}],"title":"Score matching enables causal discovery of nonlinear additive noise models","status":"public","publication_status":"published","conference":{"name":"International Conference on Machine Learning","location":"Baltimore, MD, United States","start_date":"2022-07-17","end_date":"2022-07-23"},"oa_version":"Preprint","month":"07","day":"22","citation":{"mla":"Rolland, Paul, et al. “Score Matching Enables Causal Discovery of Nonlinear Additive Noise Models.” <i>Proceedings of the 39th International Conference on Machine Learning</i>, vol. 162, ML Research Press, 2022, pp. 18741–53.","chicago":"Rolland, Paul, Volkan Cevher, Matthäus Kleindessner, Chris Russell, Bernhard Schölkopf, Dominik Janzing, and Francesco Locatello. 
“Score Matching Enables Causal Discovery of Nonlinear Additive Noise Models.” In <i>Proceedings of the 39th International Conference on Machine Learning</i>, 162:18741–53. ML Research Press, 2022.","ieee":"P. Rolland <i>et al.</i>, “Score matching enables causal discovery of nonlinear additive noise models,” in <i>Proceedings of the 39th International Conference on Machine Learning</i>, Baltimore, MD, United States, 2022, vol. 162, pp. 18741–18753.","apa":"Rolland, P., Cevher, V., Kleindessner, M., Russell, C., Schölkopf, B., Janzing, D., &#38; Locatello, F. (2022). Score matching enables causal discovery of nonlinear additive noise models. In <i>Proceedings of the 39th International Conference on Machine Learning</i> (Vol. 162, pp. 18741–18753). Baltimore, MD, United States: ML Research Press.","short":"P. Rolland, V. Cevher, M. Kleindessner, C. Russell, B. Schölkopf, D. Janzing, F. Locatello, in:, Proceedings of the 39th International Conference on Machine Learning, ML Research Press, 2022, pp. 18741–18753.","ama":"Rolland P, Cevher V, Kleindessner M, et al. Score matching enables causal discovery of nonlinear additive noise models. In: <i>Proceedings of the 39th International Conference on Machine Learning</i>. Vol 162. ML Research Press; 2022:18741-18753.","ista":"Rolland P, Cevher V, Kleindessner M, Russell C, Schölkopf B, Janzing D, Locatello F. 2022. Score matching enables causal discovery of nonlinear additive noise models. Proceedings of the 39th International Conference on Machine Learning. International Conference on Machine Learning, PMLR, vol. 
162, 18741–18753."},"volume":162,"article_processing_charge":"No","arxiv":1},{"publication_status":"published","status":"public","date_updated":"2023-09-11T09:40:52Z","publication":"10th International Conference on Learning Representations","abstract":[{"text":"An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world. In this paper, we test whether 17 unsupervised, weakly supervised, and fully supervised representation learning approaches correctly infer the generative factors of variation in simple datasets (dSprites, Shapes3D, MPI3D) from controlled environments, and on our contributed CelebGlow dataset. In contrast to prior robustness work that introduces novel factors of variation during test time, such as blur or other (un)structured noise, we here recompose, interpolate, or extrapolate only existing factors of variation from the training data set (e.g., small and medium-sized objects during training and large objects during testing). Models\r\nthat learn the correct mechanism should be able to generalize to this benchmark. In total, we train and test 2000+ models and observe that all of them struggle to learn the underlying mechanism regardless of supervision signal and architectural bias. Moreover, the generalization capabilities of all tested models drop significantly as we move from artificial datasets towards\r\nmore realistic real-world datasets. Despite their inability to identify the correct mechanism, the models are quite modular as their ability to infer other in-distribution factors remains fairly stable, provided only a single factor is out-of-distribution. 
These results point to an important yet understudied problem of learning mechanistic models of observations that can facilitate\r\ngeneralization.","lang":"eng"}],"year":"2022","_id":"14172","title":"Visual representation learning does not generalize strongly within the  same domain","oa":1,"author":[{"last_name":"Schott","full_name":"Schott, Lukas","first_name":"Lukas"},{"first_name":"Julius von","full_name":"Kügelgen, Julius von","last_name":"Kügelgen"},{"full_name":"Träuble, Frederik","last_name":"Träuble","first_name":"Frederik"},{"last_name":"Gehler","full_name":"Gehler, Peter","first_name":"Peter"},{"last_name":"Russell","full_name":"Russell, Chris","first_name":"Chris"},{"first_name":"Matthias","last_name":"Bethge","full_name":"Bethge, Matthias"},{"full_name":"Schölkopf, Bernhard","last_name":"Schölkopf","first_name":"Bernhard"},{"first_name":"Francesco","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","last_name":"Locatello","full_name":"Locatello, Francesco","orcid":"0000-0002-4850-0683"},{"first_name":"Wieland","last_name":"Brendel","full_name":"Brendel, Wieland"}],"article_processing_charge":"No","language":[{"iso":"eng"}],"quality_controlled":"1","type":"conference","arxiv":1,"main_file_link":[{"url":"https://doi.org/10.48550/arXiv.2107.08221","open_access":"1"}],"oa_version":"Preprint","date_created":"2023-08-22T14:00:50Z","conference":{"name":"ICLR: International Conference on Learning Representations","location":"Virtual","start_date":"2022-04-25","end_date":"2022-04-29"},"date_published":"2022-04-25T00:00:00Z","external_id":{"arxiv":["2107.08221"]},"extern":"1","month":"04","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","citation":{"chicago":"Schott, Lukas, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, and Wieland Brendel. 
“Visual Representation Learning Does Not Generalize Strongly within the  Same Domain.” In <i>10th International Conference on Learning Representations</i>, 2022.","mla":"Schott, Lukas, et al. “Visual Representation Learning Does Not Generalize Strongly within the  Same Domain.” <i>10th International Conference on Learning Representations</i>, 2022.","ama":"Schott L, Kügelgen J von, Träuble F, et al. Visual representation learning does not generalize strongly within the  same domain. In: <i>10th International Conference on Learning Representations</i>. ; 2022.","short":"L. Schott, J. von Kügelgen, F. Träuble, P. Gehler, C. Russell, M. Bethge, B. Schölkopf, F. Locatello, W. Brendel, in:, 10th International Conference on Learning Representations, 2022.","ista":"Schott L, Kügelgen J von, Träuble F, Gehler P, Russell C, Bethge M, Schölkopf B, Locatello F, Brendel W. 2022. Visual representation learning does not generalize strongly within the  same domain. 10th International Conference on Learning Representations. ICLR: International Conference on Learning Representations.","ieee":"L. Schott <i>et al.</i>, “Visual representation learning does not generalize strongly within the  same domain,” in <i>10th International Conference on Learning Representations</i>, Virtual, 2022.","apa":"Schott, L., Kügelgen, J. von, Träuble, F., Gehler, P., Russell, C., Bethge, M., … Brendel, W. (2022). Visual representation learning does not generalize strongly within the  same domain. In <i>10th International Conference on Learning Representations</i>. 
Virtual."},"day":"25","department":[{"_id":"FrLo"}]},{"oa":1,"alternative_title":["Advances in Neural Information Processing Systems"],"year":"2022","publication":"36th Conference on Neural Information Processing Systems","date_updated":"2023-09-06T10:34:43Z","page":"7181-7198","department":[{"_id":"FrLo"}],"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","extern":"1","external_id":{"arxiv":["2207.09239"]},"date_published":"2022-12-15T00:00:00Z","date_created":"2023-08-22T14:01:13Z","scopus_import":"1","publisher":"Neural Information Processing Systems Foundation","main_file_link":[{"url":"https://arxiv.org/abs/2207.09239","open_access":"1"}],"intvolume":"        35","type":"conference","quality_controlled":"1","language":[{"iso":"eng"}],"title":"Assaying out-of-distribution generalization in transfer learning","author":[{"first_name":"Florian","last_name":"Wenzel","full_name":"Wenzel, Florian"},{"last_name":"Dittadi","full_name":"Dittadi, Andrea","first_name":"Andrea"},{"first_name":"Peter Vincent","last_name":"Gehler","full_name":"Gehler, Peter Vincent"},{"first_name":"Carl-Johann","last_name":"Simon-Gabriel","full_name":"Simon-Gabriel, Carl-Johann"},{"first_name":"Max","full_name":"Horn, Max","last_name":"Horn"},{"first_name":"Dominik","last_name":"Zietlow","full_name":"Zietlow, Dominik"},{"full_name":"Kernert, David","last_name":"Kernert","first_name":"David"},{"last_name":"Russell","full_name":"Russell, Chris","first_name":"Chris"},{"first_name":"Thomas","last_name":"Brox","full_name":"Brox, Thomas"},{"full_name":"Schiele, Bernt","last_name":"Schiele","first_name":"Bernt"},{"first_name":"Bernhard","last_name":"Schölkopf","full_name":"Schölkopf, Bernhard"},{"orcid":"0000-0002-4850-0683","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","first_name":"Francesco","full_name":"Locatello, Francesco","last_name":"Locatello"}],"_id":"14173","abstract":[{"lang":"eng","text":"Since out-of-distribution generalization is a 
generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) were studied across different research programs resulting in different recommendations. While sharing the same aspirational goal, these approaches have never been tested under the same\r\nexperimental conditions on real data. In this paper, we take a unified view of previous work, highlighting message discrepancies that we address empirically, and providing recommendations on how to measure the robustness of a model and how to improve it. To this end, we collect 172 publicly available dataset pairs for training and out-of-distribution evaluation of accuracy, calibration error, adversarial attacks, environment invariance, and synthetic corruptions. We fine-tune over 31k networks, from nine different architectures in the many- and\r\nfew-shot setting. Our findings confirm that in- and out-of-distribution accuracies tend to increase jointly, but show that their relation is largely dataset-dependent, and in general more nuanced and more complex than posited by previous, smaller scale studies."}],"publication_identifier":{"isbn":["9781713871088"]},"status":"public","publication_status":"published","citation":{"ieee":"F. Wenzel <i>et al.</i>, “Assaying out-of-distribution generalization in transfer learning,” in <i>36th Conference on Neural Information Processing Systems</i>, New Orleans, LA, United States, 2022, vol. 35, pp. 7181–7198.","apa":"Wenzel, F., Dittadi, A., Gehler, P. V., Simon-Gabriel, C.-J., Horn, M., Zietlow, D., … Locatello, F. (2022). Assaying out-of-distribution generalization in transfer learning. In <i>36th Conference on Neural Information Processing Systems</i> (Vol. 35, pp. 7181–7198). New Orleans, LA, United States: Neural Information Processing Systems Foundation.","short":"F. Wenzel, A. Dittadi, P.V. Gehler, C.-J. Simon-Gabriel, M. Horn, D. Zietlow, D. Kernert, C. 
Russell, T. Brox, B. Schiele, B. Schölkopf, F. Locatello, in:, 36th Conference on Neural Information Processing Systems, Neural Information Processing Systems Foundation, 2022, pp. 7181–7198.","ama":"Wenzel F, Dittadi A, Gehler PV, et al. Assaying out-of-distribution generalization in transfer learning. In: <i>36th Conference on Neural Information Processing Systems</i>. Vol 35. Neural Information Processing Systems Foundation; 2022:7181-7198.","ista":"Wenzel F, Dittadi A, Gehler PV, Simon-Gabriel C-J, Horn M, Zietlow D, Kernert D, Russell C, Brox T, Schiele B, Schölkopf B, Locatello F. 2022. Assaying out-of-distribution generalization in transfer learning. 36th Conference on Neural Information Processing Systems. NeurIPS: Neural Information Processing Systems, Advances in Neural Information Processing Systems, vol. 35, 7181–7198.","mla":"Wenzel, Florian, et al. “Assaying Out-of-Distribution Generalization in Transfer Learning.” <i>36th Conference on Neural Information Processing Systems</i>, vol. 35, Neural Information Processing Systems Foundation, 2022, pp. 7181–98.","chicago":"Wenzel, Florian, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel, Max Horn, Dominik Zietlow, David Kernert, et al. “Assaying Out-of-Distribution Generalization in Transfer Learning.” In <i>36th Conference on Neural Information Processing Systems</i>, 35:7181–98. 
Neural Information Processing Systems Foundation, 2022."},"day":"15","month":"12","oa_version":"Preprint","conference":{"location":"New Orleans, LA, United States","start_date":"2022-11-28","end_date":"2022-12-09","name":"NeurIPS: Neural Information Processing Systems"},"arxiv":1,"volume":35,"article_processing_charge":"No"},{"_id":"14174","author":[{"full_name":"Dittadi, Andrea","last_name":"Dittadi","first_name":"Andrea"},{"first_name":"Frederik","full_name":"Träuble, Frederik","last_name":"Träuble"},{"last_name":"Wüthrich","full_name":"Wüthrich, Manuel","first_name":"Manuel"},{"full_name":"Widmaier, Felix","last_name":"Widmaier","first_name":"Felix"},{"first_name":"Peter","last_name":"Gehler","full_name":"Gehler, Peter"},{"first_name":"Ole","last_name":"Winther","full_name":"Winther, Ole"},{"orcid":"0000-0002-4850-0683","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","first_name":"Francesco","full_name":"Locatello, Francesco","last_name":"Locatello"},{"first_name":"Olivier","last_name":"Bachem","full_name":"Bachem, Olivier"},{"last_name":"Schölkopf","full_name":"Schölkopf, Bernhard","first_name":"Bernhard"},{"first_name":"Stefan","last_name":"Bauer","full_name":"Bauer, Stefan"}],"oa":1,"title":"The role of pretrained representations for the OOD generalization of  reinforcement learning agents","year":"2022","abstract":[{"lang":"eng","text":"Building sample-efficient agents that generalize out-of-distribution (OOD) in real-world settings remains a fundamental unsolved problem on the path towards achieving higher-level cognition. One particularly promising approach is to begin with low-dimensional, pretrained representations of our world, which should facilitate efficient downstream learning and generalization. By training 240 representations and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup, we evaluate to what extent different properties of\r\npretrained VAE-based representations affect the OOD generalization of downstream agents. 
We observe that many agents are surprisingly robust to realistic distribution shifts, including the challenging sim-to-real case. In addition, we find that the generalization performance of a simple downstream proxy task reliably predicts the generalization performance of our RL agents\r\nunder a wide range of OOD settings. Such proxy tasks can thus be used to select pretrained representations that will lead to agents that generalize."}],"date_updated":"2023-09-11T09:48:36Z","publication":"10th International Conference on Learning Representations","status":"public","publication_status":"published","month":"04","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","day":"25","department":[{"_id":"FrLo"}],"citation":{"chicago":"Dittadi, Andrea, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, and Stefan Bauer. “The Role of Pretrained Representations for the OOD Generalization of  Reinforcement Learning Agents.” In <i>10th International Conference on Learning Representations</i>, 2022.","mla":"Dittadi, Andrea, et al. “The Role of Pretrained Representations for the OOD Generalization of  Reinforcement Learning Agents.” <i>10th International Conference on Learning Representations</i>, 2022.","apa":"Dittadi, A., Träuble, F., Wüthrich, M., Widmaier, F., Gehler, P., Winther, O., … Bauer, S. (2022). The role of pretrained representations for the OOD generalization of  reinforcement learning agents. In <i>10th International Conference on Learning Representations</i>. Virtual.","ieee":"A. Dittadi <i>et al.</i>, “The role of pretrained representations for the OOD generalization of  reinforcement learning agents,” in <i>10th International Conference on Learning Representations</i>, Virtual, 2022.","short":"A. Dittadi, F. Träuble, M. Wüthrich, F. Widmaier, P. Gehler, O. Winther, F. Locatello, O. Bachem, B. Schölkopf, S. 
Bauer, in:, 10th International Conference on Learning Representations, 2022.","ista":"Dittadi A, Träuble F, Wüthrich M, Widmaier F, Gehler P, Winther O, Locatello F, Bachem O, Schölkopf B, Bauer S. 2022. The role of pretrained representations for the OOD generalization of  reinforcement learning agents. 10th International Conference on Learning Representations. ICLR: International Conference on Learning Representations.","ama":"Dittadi A, Träuble F, Wüthrich M, et al. The role of pretrained representations for the OOD generalization of  reinforcement learning agents. In: <i>10th International Conference on Learning Representations</i>. ; 2022."},"conference":{"start_date":"2022-04-25","location":"Virtual","end_date":"2022-04-29","name":"ICLR: International Conference on Learning Representations"},"oa_version":"Preprint","date_created":"2023-08-22T14:02:13Z","date_published":"2022-04-25T00:00:00Z","extern":"1","external_id":{"arxiv":["2107.05686"]},"arxiv":1,"main_file_link":[{"url":"https://doi.org/10.48550/arXiv.2107.05686","open_access":"1"}],"quality_controlled":"1","type":"conference","language":[{"iso":"eng"}],"article_processing_charge":"No"},{"citation":{"apa":"Makansi, O., Kügelgen, J. von, Locatello, F., Gehler, P., Janzing, D., Brox, T., &#38; Schölkopf, B. (2022). You mostly walk alone: Analyzing feature attribution in trajectory prediction. In <i>10th International Conference on Learning Representations</i>. Virtual.","ieee":"O. Makansi <i>et al.</i>, “You mostly walk alone: Analyzing feature attribution in trajectory prediction,” in <i>10th International Conference on Learning Representations</i>, Virtual, 2022.","ama":"Makansi O, Kügelgen J von, Locatello F, et al. You mostly walk alone: Analyzing feature attribution in trajectory prediction. In: <i>10th International Conference on Learning Representations</i>. ; 2022.","short":"O. Makansi, J. von Kügelgen, F. Locatello, P. Gehler, D. Janzing, T. Brox, B. 
Schölkopf, in:, 10th International Conference on Learning Representations, 2022.","ista":"Makansi O, Kügelgen J von, Locatello F, Gehler P, Janzing D, Brox T, Schölkopf B. 2022. You mostly walk alone: Analyzing feature attribution in trajectory prediction. 10th International Conference on Learning Representations. ICLR: International Conference on Learning Representations.","chicago":"Makansi, Osama, Julius von Kügelgen, Francesco Locatello, Peter Gehler, Dominik Janzing, Thomas Brox, and Bernhard Schölkopf. “You Mostly Walk Alone: Analyzing Feature Attribution in Trajectory Prediction.” In <i>10th International Conference on Learning Representations</i>, 2022.","mla":"Makansi, Osama, et al. “You Mostly Walk Alone: Analyzing Feature Attribution in Trajectory Prediction.” <i>10th International Conference on Learning Representations</i>, 2022."},"department":[{"_id":"FrLo"}],"day":"25","month":"04","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","extern":"1","external_id":{"arxiv":["2110.05304"]},"date_published":"2022-04-25T00:00:00Z","oa_version":"Preprint","date_created":"2023-08-22T14:02:34Z","conference":{"end_date":"2022-04-29","location":"Virtual","start_date":"2022-04-25","name":"ICLR: International Conference on Learning Representations"},"main_file_link":[{"url":"https://doi.org/10.48550/arXiv.2110.05304","open_access":"1"}],"arxiv":1,"article_processing_charge":"No","type":"conference","quality_controlled":"1","language":[{"iso":"eng"}],"title":"You mostly walk alone: Analyzing feature attribution in trajectory prediction","oa":1,"author":[{"first_name":"Osama","last_name":"Makansi","full_name":"Makansi, Osama"},{"first_name":"Julius von","last_name":"Kügelgen","full_name":"Kügelgen, Julius von"},{"orcid":"0000-0002-4850-0683","full_name":"Locatello, Francesco","last_name":"Locatello","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","first_name":"Francesco"},{"first_name":"Peter","full_name":"Gehler, 
Peter","last_name":"Gehler"},{"first_name":"Dominik","last_name":"Janzing","full_name":"Janzing, Dominik"},{"first_name":"Thomas","last_name":"Brox","full_name":"Brox, Thomas"},{"last_name":"Schölkopf","full_name":"Schölkopf, Bernhard","first_name":"Bernhard"}],"_id":"14175","abstract":[{"lang":"eng","text":"Predicting the future trajectory of a moving agent can be easy when the past trajectory continues smoothly but is challenging when complex interactions with other agents are involved. Recent deep learning approaches for trajectory prediction show promising performance and partially attribute this to successful reasoning about agent-agent interactions. However, it remains unclear which features such black-box models actually learn to use for making predictions. This paper proposes a procedure that quantifies the contributions\r\nof different cues to model performance based on a variant of Shapley values. Applying this procedure to state-of-the-art trajectory prediction methods on standard benchmark datasets shows that they are, in fact, unable to reason about interactions. Instead, the past trajectory of the target is the only feature used for predicting its future. For a task with richer social\r\ninteraction patterns, on the other hand, the tested models do pick up such interactions to a certain extent, as quantified by our feature attribution method. We discuss the limits of the proposed method and its links to causality."}],"year":"2022","publication":"10th International Conference on Learning Representations","date_updated":"2023-09-11T09:52:20Z","status":"public","publication_status":"published"},{"citation":{"ama":"Rahaman N, Weiss M, Träuble F, et al. A general purpose neural architecture for geospatial systems. In: <i>36th Conference on Neural Information Processing Systems</i>.","ista":"Rahaman N, Weiss M, Träuble F, Locatello F, Lacoste A, Bengio Y, Pal C, Li LE, Schölkopf B. A general purpose neural architecture for geospatial systems. 
36th Conference on Neural Information Processing Systems. NeurIPS: Neural Information Processing Systems.","short":"N. Rahaman, M. Weiss, F. Träuble, F. Locatello, A. Lacoste, Y. Bengio, C. Pal, L.E. Li, B. Schölkopf, in:, 36th Conference on Neural Information Processing Systems, n.d.","apa":"Rahaman, N., Weiss, M., Träuble, F., Locatello, F., Lacoste, A., Bengio, Y., … Schölkopf, B. (n.d.). A general purpose neural architecture for geospatial systems. In <i>36th Conference on Neural Information Processing Systems</i>. New Orleans, LA, United States.","ieee":"N. Rahaman <i>et al.</i>, “A general purpose neural architecture for geospatial systems,” in <i>36th Conference on Neural Information Processing Systems</i>, New Orleans, LA, United States.","chicago":"Rahaman, Nasim, Martin Weiss, Frederik Träuble, Francesco Locatello, Alexandre Lacoste, Yoshua Bengio, Chris Pal, Li Erran Li, and Bernhard Schölkopf. “A General Purpose Neural Architecture for Geospatial Systems.” In <i>36th Conference on Neural Information Processing Systems</i>, n.d.","mla":"Rahaman, Nasim, et al. 
“A General Purpose Neural Architecture for Geospatial Systems.” <i>36th Conference on Neural Information Processing Systems</i>."},"department":[{"_id":"FrLo"}],"day":"04","month":"11","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","date_published":"2022-11-04T00:00:00Z","extern":"1","external_id":{"arxiv":["2211.02348"]},"date_created":"2023-08-22T14:21:47Z","oa_version":"Preprint","conference":{"end_date":"2022-12-09","location":"New Orleans, LA, United States","start_date":"2022-11-28","name":"NeurIPS: Neural Information Processing Systems"},"main_file_link":[{"url":"https://doi.org/10.48550/arXiv.2211.02348","open_access":"1"}],"arxiv":1,"article_processing_charge":"No","quality_controlled":"1","type":"conference","language":[{"iso":"eng"}],"title":"A general purpose neural architecture for geospatial systems","oa":1,"author":[{"full_name":"Rahaman, Nasim","last_name":"Rahaman","first_name":"Nasim"},{"first_name":"Martin","last_name":"Weiss","full_name":"Weiss, Martin"},{"first_name":"Frederik","last_name":"Träuble","full_name":"Träuble, Frederik"},{"orcid":"0000-0002-4850-0683","last_name":"Locatello","full_name":"Locatello, Francesco","first_name":"Francesco","id":"26cfd52f-2483-11ee-8040-88983bcc06d4"},{"first_name":"Alexandre","last_name":"Lacoste","full_name":"Lacoste, Alexandre"},{"full_name":"Bengio, Yoshua","last_name":"Bengio","first_name":"Yoshua"},{"last_name":"Pal","full_name":"Pal, Chris","first_name":"Chris"},{"full_name":"Li, Li Erran","last_name":"Li","first_name":"Li Erran"},{"first_name":"Bernhard","full_name":"Schölkopf, Bernhard","last_name":"Schölkopf"}],"_id":"14215","abstract":[{"text":"Geospatial Information Systems are used by researchers and Humanitarian Assistance and Disaster Response (HADR) practitioners to support a wide variety of important applications. 
However, collaboration between these actors is difficult due to the heterogeneous nature of geospatial data modalities (e.g., multi-spectral images of various resolutions, timeseries, weather data) and diversity of tasks (e.g., regression of human activity indicators or detecting forest fires). In this work, we present a roadmap towards the construction of a general-purpose neural architecture (GPNA) with a geospatial inductive bias, pre-trained on large amounts of unlabelled earth observation data in a self-supervised manner. We envision how such a model may facilitate cooperation between members of the community. We show preliminary results on the first step of the roadmap, where we instantiate an architecture that can process a wide variety of geospatial data modalities and demonstrate that it can achieve competitive performance with domain-specific architectures on tasks relating to the U.N.'s Sustainable Development Goals.","lang":"eng"}],"year":"2022","publication":"36th Conference on Neural Information Processing Systems","date_updated":"2023-09-13T09:35:59Z","publication_status":"submitted","status":"public"},{"_id":"14216","oa":1,"title":"ASIF: Coupled data turns unimodal models to multimodal without training","author":[{"first_name":"Antonio","last_name":"Norelli","full_name":"Norelli, Antonio"},{"first_name":"Marco","full_name":"Fumero, Marco","last_name":"Fumero"},{"full_name":"Maiorca, Valentino","last_name":"Maiorca","first_name":"Valentino"},{"first_name":"Luca","last_name":"Moschella","full_name":"Moschella, Luca"},{"first_name":"Emanuele","last_name":"Rodolà","full_name":"Rodolà, Emanuele"},{"orcid":"0000-0002-4850-0683","first_name":"Francesco","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","last_name":"Locatello","full_name":"Locatello, Francesco"}],"doi":"10.48550/arXiv.2210.01738","abstract":[{"lang":"eng","text":"CLIP proved that aligning visual and language spaces is key to solving many vision tasks without explicit training, but required to 
train image and text encoders from scratch on a huge dataset. LiT improved this by only training the text encoder and using a pre-trained vision network. In this paper, we show that a common space can be created without any training at all, using single-domain encoders (trained with or without supervision) and a much smaller amount of image-text pairs. Furthermore, our model has unique properties. Most notably, deploying a new version with updated training samples can be done in a matter of seconds. Additionally, the representations in the common space are easily interpretable as every dimension corresponds to the similarity of the input to a unique entry in the multimodal dataset. Experiments on standard zero-shot visual benchmarks demonstrate the typical transfer ability of image-text models. Overall, our method represents a simple yet surprisingly strong baseline for foundation multi-modal models, raising important questions on their data efficiency and on the role of retrieval in machine learning."}],"year":"2022","date_updated":"2024-02-12T09:57:14Z","publication":"arXiv","article_number":"2210.01738","publication_status":"submitted","status":"public","month":"10","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","citation":{"short":"A. Norelli, M. Fumero, V. Maiorca, L. Moschella, E. Rodolà, F. Locatello, ArXiv (n.d.).","ama":"Norelli A, Fumero M, Maiorca V, Moschella L, Rodolà E, Locatello F. ASIF: Coupled data turns unimodal models to multimodal without training. <i>arXiv</i>. doi:<a href=\"https://doi.org/10.48550/arXiv.2210.01738\">10.48550/arXiv.2210.01738</a>","ista":"Norelli A, Fumero M, Maiorca V, Moschella L, Rodolà E, Locatello F. ASIF: Coupled data turns unimodal models to multimodal without training. arXiv, 2210.01738.","ieee":"A. Norelli, M. Fumero, V. Maiorca, L. Moschella, E. Rodolà, and F. Locatello, “ASIF: Coupled data turns unimodal models to multimodal without training,” <i>arXiv</i>. 
.","apa":"Norelli, A., Fumero, M., Maiorca, V., Moschella, L., Rodolà, E., &#38; Locatello, F. (n.d.). ASIF: Coupled data turns unimodal models to multimodal without training. <i>arXiv</i>. <a href=\"https://doi.org/10.48550/arXiv.2210.01738\">https://doi.org/10.48550/arXiv.2210.01738</a>","chicago":"Norelli, Antonio, Marco Fumero, Valentino Maiorca, Luca Moschella, Emanuele Rodolà, and Francesco Locatello. “ASIF: Coupled Data Turns Unimodal Models to Multimodal without Training.” <i>ArXiv</i>, n.d. <a href=\"https://doi.org/10.48550/arXiv.2210.01738\">https://doi.org/10.48550/arXiv.2210.01738</a>.","mla":"Norelli, Antonio, et al. “ASIF: Coupled Data Turns Unimodal Models to Multimodal without Training.” <i>ArXiv</i>, 2210.01738, doi:<a href=\"https://doi.org/10.48550/arXiv.2210.01738\">10.48550/arXiv.2210.01738</a>."},"day":"04","department":[{"_id":"FrLo"}],"date_created":"2023-08-22T14:22:04Z","oa_version":"Preprint","date_published":"2022-10-04T00:00:00Z","external_id":{"arxiv":["2210.01738"]},"arxiv":1,"main_file_link":[{"open_access":"1","url":"https://doi.org/10.48550/arXiv.2210.01738"}],"article_processing_charge":"No","language":[{"iso":"eng"}],"type":"preprint"},{"arxiv":1,"main_file_link":[{"url":"https://doi.org/10.48550/arXiv.2201.13388","open_access":"1"}],"type":"preprint","language":[{"iso":"eng"}],"article_processing_charge":"No","department":[{"_id":"FrLo"}],"day":"31","citation":{"apa":"Mambelli, D., Träuble, F., Bauer, S., Schölkopf, B., &#38; Locatello, F. (n.d.). Compositional multi-object reinforcement learning with linear relation networks. <i>arXiv</i>. <a href=\"https://doi.org/10.48550/arXiv.2201.13388\">https://doi.org/10.48550/arXiv.2201.13388</a>","ieee":"D. Mambelli, F. Träuble, S. Bauer, B. Schölkopf, and F. Locatello, “Compositional multi-object reinforcement learning with linear relation networks,” <i>arXiv</i>. .","ama":"Mambelli D, Träuble F, Bauer S, Schölkopf B, Locatello F. 
Compositional multi-object reinforcement learning with linear relation networks. <i>arXiv</i>. doi:<a href=\"https://doi.org/10.48550/arXiv.2201.13388\">10.48550/arXiv.2201.13388</a>","ista":"Mambelli D, Träuble F, Bauer S, Schölkopf B, Locatello F. Compositional multi-object reinforcement learning with linear relation networks. arXiv, 2201.13388.","short":"D. Mambelli, F. Träuble, S. Bauer, B. Schölkopf, F. Locatello, ArXiv (n.d.).","mla":"Mambelli, Davide, et al. “Compositional Multi-Object Reinforcement Learning with Linear Relation Networks.” <i>ArXiv</i>, 2201.13388, doi:<a href=\"https://doi.org/10.48550/arXiv.2201.13388\">10.48550/arXiv.2201.13388</a>.","chicago":"Mambelli, Davide, Frederik Träuble, Stefan Bauer, Bernhard Schölkopf, and Francesco Locatello. “Compositional Multi-Object Reinforcement Learning with Linear Relation Networks.” <i>ArXiv</i>, n.d. <a href=\"https://doi.org/10.48550/arXiv.2201.13388\">https://doi.org/10.48550/arXiv.2201.13388</a>."},"month":"01","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","extern":"1","external_id":{"arxiv":["2201.13388"]},"date_published":"2022-01-31T00:00:00Z","oa_version":"Preprint","date_created":"2023-08-22T14:23:16Z","publication":"arXiv","date_updated":"2023-09-11T11:49:40Z","publication_status":"submitted","status":"public","article_number":"2201.13388","author":[{"first_name":"Davide","full_name":"Mambelli, Davide","last_name":"Mambelli"},{"first_name":"Frederik","last_name":"Träuble","full_name":"Träuble, Frederik"},{"full_name":"Bauer, Stefan","last_name":"Bauer","first_name":"Stefan"},{"first_name":"Bernhard","last_name":"Schölkopf","full_name":"Schölkopf, Bernhard"},{"orcid":"0000-0002-4850-0683","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","first_name":"Francesco","full_name":"Locatello, Francesco","last_name":"Locatello"}],"oa":1,"title":"Compositional multi-object reinforcement learning with linear relation networks","_id":"14220","year":"2022","abstract":[{"text":"Although reinforcement learning 
has seen remarkable progress over the last years, solving robust dexterous object-manipulation tasks in multi-object settings remains a challenge. In this paper, we focus on models that can learn manipulation tasks in fixed multi-object settings and extrapolate this skill zero-shot without any drop in performance when the number of objects changes. We consider the generic task of bringing a specific cube out of a set to a goal position. We find that previous approaches, which primarily leverage attention and graph neural network-based architectures, do not generalize their skills when the number of input objects changes while scaling as K². We propose an alternative plug-and-play module based on relational inductive biases to overcome these limitations. Besides exceeding performances in their training environment, we show that our approach, which scales linearly in K, allows agents to extrapolate and generalize zero-shot to any new object number.","lang":"eng"}],"doi":"10.48550/arXiv.2201.13388"},{"quality_controlled":"1","type":"journal_article","language":[{"iso":"eng"}],"isi":1,"main_file_link":[{"url":"https://arxiv.org/abs/2102.11552","open_access":"1"}],"intvolume":"        16","publisher":"Mathematical Sciences Publishers","scopus_import":"1","date_created":"2021-02-25T09:56:57Z","date_published":"2022-12-01T00:00:00Z","external_id":{"isi":["000961514100004"],"arxiv":["2102.11552"]},"user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","department":[{"_id":"TiBr"}],"article_type":"original","page":"2385-2407","date_updated":"2023-08-02T06:46:38Z","publication":"Algebra & Number Theory","year":"2022","oa":1,"article_processing_charge":"No","volume":16,"arxiv":1,"oa_version":"Preprint","month":"12","day":"01","citation":{"chicago":"Browning, Timothy D, Tal Horesh, and Florian Alexander Wilsch. “Equidistribution and Freeness on Grassmannians.” <i>Algebra &#38; Number Theory</i>. Mathematical Sciences Publishers, 2022. 
<a href=\"https://doi.org/10.2140/ant.2022.16.2385\">https://doi.org/10.2140/ant.2022.16.2385</a>.","mla":"Browning, Timothy D., et al. “Equidistribution and Freeness on Grassmannians.” <i>Algebra &#38; Number Theory</i>, vol. 16, no. 10, Mathematical Sciences Publishers, 2022, pp. 2385–407, doi:<a href=\"https://doi.org/10.2140/ant.2022.16.2385\">10.2140/ant.2022.16.2385</a>.","ieee":"T. D. Browning, T. Horesh, and F. A. Wilsch, “Equidistribution and freeness on Grassmannians,” <i>Algebra &#38; Number Theory</i>, vol. 16, no. 10. Mathematical Sciences Publishers, pp. 2385–2407, 2022.","apa":"Browning, T. D., Horesh, T., &#38; Wilsch, F. A. (2022). Equidistribution and freeness on Grassmannians. <i>Algebra &#38; Number Theory</i>. Mathematical Sciences Publishers. <a href=\"https://doi.org/10.2140/ant.2022.16.2385\">https://doi.org/10.2140/ant.2022.16.2385</a>","short":"T.D. Browning, T. Horesh, F.A. Wilsch, Algebra &#38; Number Theory 16 (2022) 2385–2407.","ama":"Browning TD, Horesh T, Wilsch FA. Equidistribution and freeness on Grassmannians. <i>Algebra &#38; Number Theory</i>. 2022;16(10):2385-2407. doi:<a href=\"https://doi.org/10.2140/ant.2022.16.2385\">10.2140/ant.2022.16.2385</a>","ista":"Browning TD, Horesh T, Wilsch FA. 2022. Equidistribution and freeness on Grassmannians. Algebra &#38; Number Theory. 16(10), 2385–2407."},"acknowledgement":"The authors are very grateful to Will Sawin for useful remarks about this topic. 
While working on this paper the first two authors were supported by EPSRC grant EP/P026710/1, and the first and last authors by FWF grant P 32428-N35.","publication_status":"published","status":"public","project":[{"name":"Between rational and integral points","grant_number":"EP-P026710-2","_id":"26A8D266-B435-11E9-9278-68D0E5697425"},{"_id":"26AEDAB2-B435-11E9-9278-68D0E5697425","grant_number":"P32428","name":"New frontiers of the Manin conjecture","call_identifier":"FWF"}],"publication_identifier":{"issn":["1937-0652"],"eissn":["1944-7833"]},"doi":"10.2140/ant.2022.16.2385","issue":"10","abstract":[{"lang":"eng","text":"We associate a certain tensor product lattice to any primitive integer lattice and ask about its typical shape. These lattices are related to the tangent bundle of Grassmannians and their study is motivated by Peyre's programme on \"freeness\" for rational points of bounded height on Fano\r\nvarieties."}],"_id":"9199","author":[{"orcid":"0000-0002-8314-0177","full_name":"Browning, Timothy D","last_name":"Browning","id":"35827D50-F248-11E8-B48F-1D18A9856A87","first_name":"Timothy D"},{"full_name":"Horesh, Tal","last_name":"Horesh","id":"C8B7BF48-8D81-11E9-BCA9-F536E6697425","first_name":"Tal"},{"orcid":"0000-0001-7302-8256","first_name":"Florian Alexander","id":"560601DA-8D36-11E9-A136-7AC1E5697425","last_name":"Wilsch","full_name":"Wilsch, Florian Alexander"}],"title":"Equidistribution and freeness on Grassmannians"},{"oa":1,"year":"2022","keyword":["Management Science and Operations Research","General Mathematics","Computer Science Applications"],"publication":"Mathematics of Operations 
Research","date_updated":"2023-09-05T13:16:11Z","page":"100-119","article_type":"original","department":[{"_id":"GradSch"},{"_id":"KrCh"}],"user_id":"c635000d-4b10-11ee-a964-aac5a93f6ac1","date_published":"2022-02-01T00:00:00Z","external_id":{"isi":["000731918100001"],"arxiv":["1904.13360"]},"date_created":"2021-04-08T09:33:31Z","scopus_import":"1","publisher":"Institute for Operations Research and the Management Sciences","main_file_link":[{"open_access":"1","url":"https://arxiv.org/abs/1904.13360"}],"intvolume":"        47","isi":1,"quality_controlled":"1","language":[{"iso":"eng"}],"type":"journal_article","title":"Finite-memory strategies in POMDPs with long-run average objectives","author":[{"orcid":"0000-0002-4561-241X","last_name":"Chatterjee","full_name":"Chatterjee, Krishnendu","first_name":"Krishnendu","id":"2E5DCA20-F248-11E8-B48F-1D18A9856A87"},{"orcid":"0000-0001-5103-038X","first_name":"Raimundo J","id":"BD1DF4C4-D767-11E9-B658-BC13E6697425","last_name":"Saona Urmeneta","full_name":"Saona Urmeneta, Raimundo J"},{"full_name":"Ziliotto, Bruno","last_name":"Ziliotto","first_name":"Bruno"}],"_id":"9311","abstract":[{"lang":"eng","text":"Partially observable Markov decision processes (POMDPs) are standard models for dynamic systems with probabilistic and nondeterministic behaviour in uncertain environments. We prove that in POMDPs with long-run average objective, the decision maker has approximately optimal strategies with finite memory. This implies notably that approximating the long-run value is recursively enumerable, as well as a weak continuity property of the value with respect to the transition function. 
"}],"issue":"1","doi":"10.1287/moor.2020.1116","publication_identifier":{"eissn":["1526-5471"],"issn":["0364-765X"]},"project":[{"name":"Game Theory","call_identifier":"FWF","_id":"25863FF4-B435-11E9-9278-68D0E5697425","grant_number":"S11407"}],"publication_status":"published","acknowledgement":"Partially supported by Austrian Science Fund (FWF) NFN Grant No RiSE/SHiNE S11407, by CONICYT Chile through grant PII 20150140, and by ECOS-CONICYT through grant C15E03.\r\n","status":"public","citation":{"chicago":"Chatterjee, Krishnendu, Raimundo J Saona Urmeneta, and Bruno Ziliotto. “Finite-Memory Strategies in POMDPs with Long-Run Average Objectives.” <i>Mathematics of Operations Research</i>. Institute for Operations Research and the Management Sciences, 2022. <a href=\"https://doi.org/10.1287/moor.2020.1116\">https://doi.org/10.1287/moor.2020.1116</a>.","mla":"Chatterjee, Krishnendu, et al. “Finite-Memory Strategies in POMDPs with Long-Run Average Objectives.” <i>Mathematics of Operations Research</i>, vol. 47, no. 1, Institute for Operations Research and the Management Sciences, 2022, pp. 100–19, doi:<a href=\"https://doi.org/10.1287/moor.2020.1116\">10.1287/moor.2020.1116</a>.","ista":"Chatterjee K, Saona Urmeneta RJ, Ziliotto B. 2022. Finite-memory strategies in POMDPs with long-run average objectives. Mathematics of Operations Research. 47(1), 100–119.","ama":"Chatterjee K, Saona Urmeneta RJ, Ziliotto B. Finite-memory strategies in POMDPs with long-run average objectives. <i>Mathematics of Operations Research</i>. 2022;47(1):100-119. doi:<a href=\"https://doi.org/10.1287/moor.2020.1116\">10.1287/moor.2020.1116</a>","short":"K. Chatterjee, R.J. Saona Urmeneta, B. Ziliotto, Mathematics of Operations Research 47 (2022) 100–119.","apa":"Chatterjee, K., Saona Urmeneta, R. J., &#38; Ziliotto, B. (2022). Finite-memory strategies in POMDPs with long-run average objectives. <i>Mathematics of Operations Research</i>. 
Institute for Operations Research and the Management Sciences. <a href=\"https://doi.org/10.1287/moor.2020.1116\">https://doi.org/10.1287/moor.2020.1116</a>","ieee":"K. Chatterjee, R. J. Saona Urmeneta, and B. Ziliotto, “Finite-memory strategies in POMDPs with long-run average objectives,” <i>Mathematics of Operations Research</i>, vol. 47, no. 1. Institute for Operations Research and the Management Sciences, pp. 100–119, 2022."},"day":"01","month":"02","oa_version":"Preprint","arxiv":1,"volume":47,"article_processing_charge":"No"},{"external_id":{"isi":["000784421500001"],"arxiv":["1811.10563"]},"date_published":"2022-05-01T00:00:00Z","date_created":"2021-05-02T22:01:29Z","scopus_import":"1","department":[{"_id":"TiBr"}],"user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","isi":1,"quality_controlled":"1","language":[{"iso":"eng"}],"type":"journal_article","publisher":"Cambridge University Press","intvolume":"       172","year":"2022","oa":1,"page":"563 - 590","article_type":"original","publication":"Mathematical Proceedings of the Cambridge Philosophical Society","date_updated":"2023-08-02T06:47:48Z","license":"https://creativecommons.org/licenses/by/4.0/","oa_version":"Published Version","citation":{"ama":"Bonolis D. On the size of the maximum of incomplete Kloosterman sums. <i>Mathematical Proceedings of the Cambridge Philosophical Society</i>. 2022;172(3):563-590. doi:<a href=\"https://doi.org/10.1017/S030500412100030X\">10.1017/S030500412100030X</a>","ista":"Bonolis D. 2022. On the size of the maximum of incomplete Kloosterman sums. Mathematical Proceedings of the Cambridge Philosophical Society. 172(3), 563–590.","short":"D. Bonolis, Mathematical Proceedings of the Cambridge Philosophical Society 172 (2022) 563–590.","apa":"Bonolis, D. (2022). On the size of the maximum of incomplete Kloosterman sums. <i>Mathematical Proceedings of the Cambridge Philosophical Society</i>. Cambridge University Press. 
<a href=\"https://doi.org/10.1017/S030500412100030X\">https://doi.org/10.1017/S030500412100030X</a>","ieee":"D. Bonolis, “On the size of the maximum of incomplete Kloosterman sums,” <i>Mathematical Proceedings of the Cambridge Philosophical Society</i>, vol. 172, no. 3. Cambridge University Press, pp. 563–590, 2022.","mla":"Bonolis, Dante. “On the Size of the Maximum of Incomplete Kloosterman Sums.” <i>Mathematical Proceedings of the Cambridge Philosophical Society</i>, vol. 172, no. 3, Cambridge University Press, 2022, pp. 563–90, doi:<a href=\"https://doi.org/10.1017/S030500412100030X\">10.1017/S030500412100030X</a>.","chicago":"Bonolis, Dante. “On the Size of the Maximum of Incomplete Kloosterman Sums.” <i>Mathematical Proceedings of the Cambridge Philosophical Society</i>. Cambridge University Press, 2022. <a href=\"https://doi.org/10.1017/S030500412100030X\">https://doi.org/10.1017/S030500412100030X</a>."},"day":"01","file_date_updated":"2021-12-01T14:01:54Z","month":"05","ddc":["510"],"has_accepted_license":"1","article_processing_charge":"Yes (via OA deal)","volume":172,"tmp":{"name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","image":"/images/cc_by.png","short":"CC BY (4.0)","legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode"},"file":[{"creator":"cchlebak","relation":"main_file","file_size":334064,"file_name":"2021_MathProcCamPhilSoc_Bonolis.pdf","date_updated":"2021-12-01T14:01:54Z","success":1,"checksum":"614d2e9b83a78100408e4ee7752a80a8","content_type":"application/pdf","access_level":"open_access","date_created":"2021-12-01T14:01:54Z","file_id":"10395"}],"arxiv":1,"abstract":[{"text":"Let t : F_p → C be a complex valued function on F_p. A classical problem in analytic number theory is bounding the maximum M(t) := max_{0≤H<p} |(1/√p) ∑_{0≤n<H} t(n)| of the absolute value of the incomplete sums (1/√p) ∑_{0≤n<H} t(n). 
In this very general context one of the most important results is the Pólya–Vinogradov bound M(t) ≤ ‖t̂‖_∞ log 3p, where t̂ : F_p → C is the normalized Fourier transform of t. In this paper we provide a lower bound for certain incomplete Kloosterman sums, namely we prove that for any ε > 0 there exists a large subset of a ∈ F_p^× such that for kl_{a,1,p} : x → e((ax + x̄)/p) we have M(kl_{a,1,p}) ≥ ((1−ε)/√(2π) + o(1)) log log p, as p → ∞. Finally, we prove a result on the growth of the moments of {M(kl_{a,1,p})}_{a∈F_p^×}. 2020 Mathematics Subject Classification: 11L03, 11T23 (Primary); 14F20, 60F10 (Secondary).","lang":"eng"}],"issue":"3","doi":"10.1017/S030500412100030X","title":"On the size of the maximum of incomplete Kloosterman sums","author":[{"id":"6A459894-5FDD-11E9-AF35-BB24E6697425","first_name":"Dante","full_name":"Bonolis, Dante","last_name":"Bonolis"}],"_id":"9364","publication_status":"published","acknowledgement":"I am most thankful to my advisor, Emmanuel Kowalski, for suggesting this problem and for his guidance during these years. I also would like to thank Youness Lamzouri for informing me about his work on sum of incomplete Birch sums and Tal Horesh for her suggestions on a previous version of the paper. 
Finally, I am very grateful to the anonymous referee for their careful reading of the manuscript and their valuable comments.","status":"public","publication_identifier":{"eissn":["1469-8064"],"issn":["0305-0041"]}},{"has_accepted_license":"1","volume":22,"article_processing_charge":"Yes (via OA deal)","file":[{"file_id":"9650","date_created":"2021-07-14T06:44:36Z","file_name":"Boissonnat-Wintraecken2021_Article_TheTopologicalCorrectnessOfPLA.pdf","creator":"mwintrae","relation":"main_file","file_size":1455699,"date_updated":"2021-07-14T06:44:36Z","access_level":"open_access","checksum":"f1d372ec3c08ec22e84f8e93e1126b8c","content_type":"application/pdf"}],"tmp":{"name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","image":"/images/cc_by.png","short":"CC BY (4.0)","legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode"},"oa_version":"Published Version","day":"01","citation":{"short":"J.-D. Boissonnat, M. Wintraecken, Foundations of Computational Mathematics  22 (2022) 967–1012.","ama":"Boissonnat J-D, Wintraecken M. The topological correctness of PL approximations of isomanifolds. <i>Foundations of Computational Mathematics </i>. 2022;22:967-1012. doi:<a href=\"https://doi.org/10.1007/s10208-021-09520-0\">10.1007/s10208-021-09520-0</a>","ista":"Boissonnat J-D, Wintraecken M. 2022. The topological correctness of PL approximations of isomanifolds. Foundations of Computational Mathematics . 22, 967–1012.","apa":"Boissonnat, J.-D., &#38; Wintraecken, M. (2022). The topological correctness of PL approximations of isomanifolds. <i>Foundations of Computational Mathematics </i>. Springer Nature. <a href=\"https://doi.org/10.1007/s10208-021-09520-0\">https://doi.org/10.1007/s10208-021-09520-0</a>","ieee":"J.-D. Boissonnat and M. Wintraecken, “The topological correctness of PL approximations of isomanifolds,” <i>Foundations of Computational Mathematics </i>, vol. 22. Springer Nature, pp. 
967–1012, 2022.","chicago":"Boissonnat, Jean-Daniel, and Mathijs Wintraecken. “The Topological Correctness of PL Approximations of Isomanifolds.” <i>Foundations of Computational Mathematics </i>. Springer Nature, 2022. <a href=\"https://doi.org/10.1007/s10208-021-09520-0\">https://doi.org/10.1007/s10208-021-09520-0</a>.","mla":"Boissonnat, Jean-Daniel, and Mathijs Wintraecken. “The Topological Correctness of PL Approximations of Isomanifolds.” <i>Foundations of Computational Mathematics </i>, vol. 22, Springer Nature, 2022, pp. 967–1012, doi:<a href=\"https://doi.org/10.1007/s10208-021-09520-0\">10.1007/s10208-021-09520-0</a>."},"month":"0","ddc":["516"],"file_date_updated":"2021-07-14T06:44:36Z","status":"public","publication_status":"published","acknowledgement":"First and foremost, we acknowledge Siargey Kachanovich for discussions. We thank Herbert Edelsbrunner and all members of his group, all former and current members of the Datashape team (formerly known as Geometrica), and André Lieutier for encouragement. We further thank the reviewers of Foundations of Computational Mathematics and the reviewers and program committee of the Symposium on Computational Geometry for their feedback, which improved the exposition.\r\nThis work was funded by the European Research Council under the European Union’s ERC Grant Agreement number 339025 GUDHI (Algorithmic Foundations of Geometric Understanding in Higher Dimensions). This work was also supported by the French government, through the 3IA Côte d’Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002. Mathijs Wintraecken also received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 
754411.","ec_funded":1,"publication_identifier":{"eissn":["1615-3383"]},"project":[{"grant_number":"754411","_id":"260C2330-B435-11E9-9278-68D0E5697425","call_identifier":"H2020","name":"ISTplus - Postdoctoral Fellowships"}],"abstract":[{"text":"Isomanifolds are the generalization of isosurfaces to arbitrary dimension and codimension, i.e. manifolds defined as the zero set of some multivariate vector-valued smooth function f : R^d → R^{d−n}. A natural (and efficient) way to approximate an isomanifold is to consider its Piecewise-Linear (PL) approximation based on a triangulation T of the ambient space R^d. In this paper, we give conditions under which the PL-approximation of an isomanifold is topologically equivalent to the isomanifold. The conditions are easy to satisfy in the sense that they can always be met by taking a sufficiently\r\nfine triangulation T . This contrasts with previous results on the triangulation of manifolds where, in arbitrary dimensions, delicate perturbations are needed to guarantee topological correctness, which leads to strong limitations in practice. We further give a bound on the Fréchet distance between the original isomanifold and its PL-approximation. 
Finally we show analogous results for the PL-approximation of an isomanifold with boundary.","lang":"eng"}],"doi":"10.1007/s10208-021-09520-0","author":[{"first_name":"Jean-Daniel","last_name":"Boissonnat","full_name":"Boissonnat, Jean-Daniel"},{"last_name":"Wintraecken","full_name":"Wintraecken, Mathijs","first_name":"Mathijs","id":"307CFBC8-F248-11E8-B48F-1D18A9856A87","orcid":"0000-0002-7472-2220"}],"title":"The topological correctness of PL approximations of isomanifolds","_id":"9649","isi":1,"quality_controlled":"1","language":[{"iso":"eng"}],"type":"journal_article","intvolume":"        22","publisher":"Springer Nature","external_id":{"isi":["000673039600001"]},"date_published":"2022-01-01T00:00:00Z","scopus_import":"1","date_created":"2021-07-14T06:44:53Z","department":[{"_id":"HeEd"}],"user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","related_material":{"record":[{"status":"public","relation":"earlier_version","id":"7952"}]},"article_type":"original","page":"967-1012","publication":"Foundations of Computational Mathematics ","date_updated":"2023-08-02T06:49:17Z","year":"2022","oa":1},{"acknowledgement":"The author thanks the whole community of researchers consciously or unconsciously working on questions related to auxin, whose hard work and enthusiasm contributed to development of this exciting story. Particular thanks go to many\r\nbrilliant present and past members of the Friml group and our numerous excellent collaborators, without whom my own personal journey would not be possible. The way of the cross with its 14 stations is a popular devotion among Roman Catholics and inspires them to make a spiritual pilgrimage through contemplation of Christ on his last day. Its aspects of gradual progress, struggle, passion, and revelation served as an inspiration for the formal depiction of our journey to understanding auxin as described in this review. It is in no way intended to reflect the personal beliefs of the author and readers. 
I am grateful to Nick Barton, Eva Benková, Lenka Caisová, Matyáš Fendrych, Lukáš Fiedler, Monika Frátriková, Jarmila Frimlová, Michelle Gallei, Jakub Hajný, Lukas Hoermayer, Alexandra Mally, Ondřej Novák, Jan Petrášek, Aleš Pěnčík, Steffen Vanneste, Tongda Xu, and Zhenbiao Yang for their valuable comments. Special thanks go to Michelle Gallei for her invaluable assistance with the figures.","status":"public","publication_status":"published","publication_identifier":{"issn":["1943-0264"]},"issue":"5","abstract":[{"lang":"eng","text":"Auxin has always been at the forefront of research in plant physiology and development. Since the earliest contemplations by Julius von Sachs and Charles Darwin, more than a century-long struggle has been waged to understand its function. This largely reflects the failures, successes, and inevitable progress in the entire field of plant signaling and development. Here I present 14 stations on our long and sometimes mystical journey to understand auxin. These highlights were selected to give a flavor of the field and to show the scope and limits of our current knowledge. A special focus is put on features that make auxin unique among phytohormones, such as its dynamic, directional transport network, which integrates external and internal signals, including self-organizing feedback. Accented are persistent mysteries and controversies. The unexpected discoveries related to rapid auxin responses and growth regulation recently disturbed our contentment regarding understanding of the auxin signaling mechanism. These new revelations, along with advances in technology, usher us into a new, exciting era in auxin research. 
"}],"doi":"10.1101/cshperspect.a039859 ","author":[{"orcid":"0000-0002-8302-7596","full_name":"Friml, Jiří","last_name":"Friml","id":"4159519E-F248-11E8-B48F-1D18A9856A87","first_name":"Jiří"}],"title":"Fourteen stations of auxin","_id":"10016","article_processing_charge":"No","volume":14,"oa_version":"Published Version","day":"27","citation":{"apa":"Friml, J. (2022). Fourteen stations of auxin. <i>Cold Spring Harbor Perspectives in Biology</i>. Cold Spring Harbor Laboratory. <a href=\"https://doi.org/10.1101/cshperspect.a039859 \">https://doi.org/10.1101/cshperspect.a039859 </a>","ieee":"J. Friml, “Fourteen stations of auxin,” <i>Cold Spring Harbor Perspectives in Biology</i>, vol. 14, no. 5. Cold Spring Harbor Laboratory, 2022.","ama":"Friml J. Fourteen stations of auxin. <i>Cold Spring Harbor Perspectives in Biology</i>. 2022;14(5). doi:<a href=\"https://doi.org/10.1101/cshperspect.a039859 \">10.1101/cshperspect.a039859 </a>","ista":"Friml J. 2022. Fourteen stations of auxin. Cold Spring Harbor Perspectives in Biology. 14(5), a039859.","short":"J. Friml, Cold Spring Harbor Perspectives in Biology 14 (2022).","mla":"Friml, Jiří. “Fourteen Stations of Auxin.” <i>Cold Spring Harbor Perspectives in Biology</i>, vol. 14, no. 5, a039859, Cold Spring Harbor Laboratory, 2022, doi:<a href=\"https://doi.org/10.1101/cshperspect.a039859 \">10.1101/cshperspect.a039859 </a>.","chicago":"Friml, Jiří. “Fourteen Stations of Auxin.” <i>Cold Spring Harbor Perspectives in Biology</i>. Cold Spring Harbor Laboratory, 2022. 
<a href=\"https://doi.org/10.1101/cshperspect.a039859 \">https://doi.org/10.1101/cshperspect.a039859 </a>."},"pmid":1,"month":"05","article_type":"review","article_number":"a039859","publication":"Cold Spring Harbor Perspectives in Biology","date_updated":"2023-08-02T06:54:42Z","year":"2022","oa":1,"isi":1,"quality_controlled":"1","type":"journal_article","language":[{"iso":"eng"}],"main_file_link":[{"url":"https://doi.org/10.1101/cshperspect.a039859 ","open_access":"1"}],"intvolume":"        14","publisher":"Cold Spring Harbor Laboratory","external_id":{"pmid":["34400554"],"isi":["000806563000003"]},"date_published":"2022-05-27T00:00:00Z","scopus_import":"1","date_created":"2021-09-14T11:36:53Z","department":[{"_id":"JiFr"}],"user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8"},{"external_id":{"isi":["000881319200001"],"arxiv":["2109.06778"]},"date_published":"2022-11-10T00:00:00Z","scopus_import":"1","date_created":"2021-09-15T10:06:48Z","department":[{"_id":"TiBr"}],"user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","isi":1,"quality_controlled":"1","language":[{"iso":"eng"}],"type":"journal_article","main_file_link":[{"url":"https://doi.org/10.1017/S1474748022000482","open_access":"1"}],"publisher":"Cambridge University Press","year":"2022","keyword":["Integral points","del Pezzo surface","universal torsor","Manin’s conjecture"],"oa":1,"article_type":"original","publication":"Journal of the Institute of Mathematics of Jussieu","date_updated":"2023-08-02T06:55:10Z","oa_version":"Published Version","day":"10","citation":{"chicago":"Derenthal, Ulrich, and Florian Alexander Wilsch. “Integral Points on Singular Del Pezzo Surfaces.” <i>Journal of the Institute of Mathematics of Jussieu</i>. Cambridge University Press, 2022. <a href=\"https://doi.org/10.1017/S1474748022000482\">https://doi.org/10.1017/S1474748022000482</a>.","mla":"Derenthal, Ulrich, and Florian Alexander Wilsch. 
“Integral Points on Singular Del Pezzo Surfaces.” <i>Journal of the Institute of Mathematics of Jussieu</i>, Cambridge University Press, 2022, doi:<a href=\"https://doi.org/10.1017/S1474748022000482\">10.1017/S1474748022000482</a>.","apa":"Derenthal, U., &#38; Wilsch, F. A. (2022). Integral points on singular del Pezzo surfaces. <i>Journal of the Institute of Mathematics of Jussieu</i>. Cambridge University Press. <a href=\"https://doi.org/10.1017/S1474748022000482\">https://doi.org/10.1017/S1474748022000482</a>","ieee":"U. Derenthal and F. A. Wilsch, “Integral points on singular del Pezzo surfaces,” <i>Journal of the Institute of Mathematics of Jussieu</i>. Cambridge University Press, 2022.","ista":"Derenthal U, Wilsch FA. 2022. Integral points on singular del Pezzo surfaces. Journal of the Institute of Mathematics of Jussieu.","ama":"Derenthal U, Wilsch FA. Integral points on singular del Pezzo surfaces. <i>Journal of the Institute of Mathematics of Jussieu</i>. 2022. doi:<a href=\"https://doi.org/10.1017/S1474748022000482\">10.1017/S1474748022000482</a>","short":"U. Derenthal, F.A. Wilsch, Journal of the Institute of Mathematics of Jussieu (2022)."},"month":"11","article_processing_charge":"Yes (via OA deal)","arxiv":1,"abstract":[{"text":"In order to study integral points of bounded log-anticanonical height on weak del Pezzo surfaces, we classify weak del Pezzo pairs. 
As a representative example, we consider a quartic del Pezzo surface of singularity type A1 + A3 and prove an analogue of Manin's conjecture for integral points with respect to its singularities and its lines.","lang":"eng"}],"doi":"10.1017/S1474748022000482","author":[{"full_name":"Derenthal, Ulrich","last_name":"Derenthal","first_name":"Ulrich"},{"orcid":"0000-0001-7302-8256","last_name":"Wilsch","full_name":"Wilsch, Florian Alexander","first_name":"Florian Alexander","id":"560601DA-8D36-11E9-A136-7AC1E5697425"}],"title":"Integral points on singular del Pezzo surfaces","_id":"10018","acknowledgement":"The first author was partly supported by grant DE 1646/4-2 of the Deutsche Forschungsgemeinschaft. The second author was partly supported by FWF grant P 32428-N35 and conducted part of this work as a guest at the Institut de Mathématiques de Jussieu–Paris Rive Gauche invited by Antoine Chambert-Loir and funded by DAAD.","publication_status":"epub_ahead","status":"public","publication_identifier":{"issn":["1474-7480"],"eissn":["1475-3030 "]},"project":[{"name":"New frontiers of the Manin conjecture","call_identifier":"FWF","grant_number":"P32428","_id":"26AEDAB2-B435-11E9-9278-68D0E5697425"}]},{"oa_version":"Published Version","citation":{"short":"Y. Liu, M. Calcabrini, Y. Yu, S. Lee, C. Chang, J. David, T. Ghosh, M.C. Spadaro, C. Xie, O. Cojocaru-Mirédin, J. Arbiol, M. Ibáñez, ACS Nano 16 (2022) 78–88.","ista":"Liu Y, Calcabrini M, Yu Y, Lee S, Chang C, David J, Ghosh T, Spadaro MC, Xie C, Cojocaru-Mirédin O, Arbiol J, Ibáñez M. 2022. Defect engineering in solution-processed polycrystalline SnSe leads to high thermoelectric performance. ACS Nano. 16(1), 78–88.","ama":"Liu Y, Calcabrini M, Yu Y, et al. Defect engineering in solution-processed polycrystalline SnSe leads to high thermoelectric performance. <i>ACS Nano</i>. 2022;16(1):78-88. 
doi:<a href=\"https://doi.org/10.1021/acsnano.1c06720\">10.1021/acsnano.1c06720</a>","apa":"Liu, Y., Calcabrini, M., Yu, Y., Lee, S., Chang, C., David, J., … Ibáñez, M. (2022). Defect engineering in solution-processed polycrystalline SnSe leads to high thermoelectric performance. <i>ACS Nano</i>. American Chemical Society . <a href=\"https://doi.org/10.1021/acsnano.1c06720\">https://doi.org/10.1021/acsnano.1c06720</a>","ieee":"Y. Liu <i>et al.</i>, “Defect engineering in solution-processed polycrystalline SnSe leads to high thermoelectric performance,” <i>ACS Nano</i>, vol. 16, no. 1. American Chemical Society , pp. 78–88, 2022.","mla":"Liu, Yu, et al. “Defect Engineering in Solution-Processed Polycrystalline SnSe Leads to High Thermoelectric Performance.” <i>ACS Nano</i>, vol. 16, no. 1, American Chemical Society , 2022, pp. 78–88, doi:<a href=\"https://doi.org/10.1021/acsnano.1c06720\">10.1021/acsnano.1c06720</a>.","chicago":"Liu, Yu, Mariano Calcabrini, Yuan Yu, Seungho Lee, Cheng Chang, Jérémy David, Tanmoy Ghosh, et al. “Defect Engineering in Solution-Processed Polycrystalline SnSe Leads to High Thermoelectric Performance.” <i>ACS Nano</i>. American Chemical Society , 2022. 
<a href=\"https://doi.org/10.1021/acsnano.1c06720\">https://doi.org/10.1021/acsnano.1c06720</a>."},"day":"25","pmid":1,"ddc":["540"],"month":"01","file_date_updated":"2022-03-02T16:17:29Z","has_accepted_license":"1","volume":16,"article_processing_charge":"Yes (via OA deal)","file":[{"date_created":"2022-03-02T16:17:29Z","file_id":"10808","checksum":"74f9c1aa5f95c0b992a4328e8e0247b4","content_type":"application/pdf","access_level":"open_access","date_updated":"2022-03-02T16:17:29Z","success":1,"file_size":9050764,"creator":"cchlebak","relation":"main_file","file_name":"2022_ACSNano_Liu.pdf"}],"tmp":{"name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","image":"/images/cc_by.png","short":"CC BY (4.0)","legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode"},"abstract":[{"text":"SnSe has emerged as one of the most promising materials for thermoelectric energy conversion due to its extraordinary performance in its single-crystal form and its low-cost constituent elements. However, to achieve an economic impact, the polycrystalline counterpart needs to replicate the performance of the single crystal. Herein, we optimize the thermoelectric performance of polycrystalline SnSe produced by consolidating solution-processed and surface-engineered SnSe particles. In particular, the SnSe particles are coated with CdSe molecular complexes that crystallize during the sintering process, forming CdSe nanoparticles. The presence of CdSe nanoparticles inhibits SnSe grain growth during the consolidation step due to Zener pinning, yielding a material with a high density of grain boundaries. Moreover, the resulting SnSe–CdSe nanocomposites present a large number of defects at different length scales, which significantly reduce the thermal conductivity. 
The produced SnSe–CdSe nanocomposites exhibit thermoelectric figures of merit up to 2.2 at 786 K, which is among the highest reported for solution-processed SnSe.","lang":"eng"}],"issue":"1","doi":"10.1021/acsnano.1c06720","title":"Defect engineering in solution-processed polycrystalline SnSe leads to high thermoelectric performance","author":[{"first_name":"Yu","id":"2A70014E-F248-11E8-B48F-1D18A9856A87","last_name":"Liu","full_name":"Liu, Yu","orcid":"0000-0001-7313-6740"},{"id":"45D7531A-F248-11E8-B48F-1D18A9856A87","first_name":"Mariano","full_name":"Calcabrini, Mariano","last_name":"Calcabrini"},{"first_name":"Yuan","full_name":"Yu, Yuan","last_name":"Yu"},{"orcid":"0000-0002-6962-8598","full_name":"Lee, Seungho","last_name":"Lee","id":"BB243B88-D767-11E9-B658-BC13E6697425","first_name":"Seungho"},{"orcid":"0000-0002-9515-4277","id":"9E331C2E-9F27-11E9-AE48-5033E6697425","first_name":"Cheng","full_name":"Chang, Cheng","last_name":"Chang"},{"last_name":"David","full_name":"David, Jérémy","first_name":"Jérémy"},{"last_name":"Ghosh","full_name":"Ghosh, Tanmoy","first_name":"Tanmoy","id":"a5fc9bc3-feff-11ea-93fe-e8015a3c7e9d"},{"last_name":"Spadaro","full_name":"Spadaro, Maria Chiara","first_name":"Maria Chiara"},{"last_name":"Xie","full_name":"Xie, Chenyang","first_name":"Chenyang"},{"full_name":"Cojocaru-Mirédin, Oana","last_name":"Cojocaru-Mirédin","first_name":"Oana"},{"last_name":"Arbiol","full_name":"Arbiol, Jordi","first_name":"Jordi"},{"full_name":"Ibáñez, Maria","last_name":"Ibáñez","id":"43C61214-F248-11E8-B48F-1D18A9856A87","first_name":"Maria","orcid":"0000-0001-5013-2843"}],"_id":"10042","status":"public","acknowledgement":"This work was financially supported by IST Austria and the Werner Siemens Foundation. Y.L. acknowledges funding from the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 754411. S.L. and M.C. 
received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement No. 665385. J.D. acknowledges funding from the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement no. 665919 (P-SPHERE) cofunded by Severo Ochoa Programme. C.C. acknowledges funding from the FWF “Lise Meitner Fellowship” grant agreement M 2889-N. Y.Y. and O.C.-M. acknowledge the financial support from DFG within the project SFB 917: Nanoswitches. M.C.S. received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 754510 (PROBIST) and the Severo Ochoa programme. J.D. received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 665919 (P-SPHERE) cofunded by Severo Ochoa Programme. The ICN2 is funded by the CERCA Program/Generalitat de Catalunya and by the Severo Ochoa program of the Spanish Ministry of Economy, Industry, and Competitiveness (MINECO, grant no. SEV-2017-0706). ICN2 acknowledges funding from Generalitat de Catalunya 2017 SGR 327 and the Spanish MINECO project NANOGEN (PID2020-116093RB-C43). This project received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 823717-ESTEEM3. 
The FIB sample preparation was conducted in the LMA-INA-Universidad de Zaragoza.","publication_status":"published","ec_funded":1,"publication_identifier":{"eissn":["1936-086X"],"issn":["1936-0851"]},"project":[{"call_identifier":"H2020","name":"ISTplus - Postdoctoral Fellowships","grant_number":"754411","_id":"260C2330-B435-11E9-9278-68D0E5697425"},{"name":"International IST Doctoral Program","call_identifier":"H2020","grant_number":"665385","_id":"2564DBCA-B435-11E9-9278-68D0E5697425"},{"name":"HighTE: The Werner Siemens Laboratory for the High Throughput Discovery of Semiconductors for Waste Heat Recovery","_id":"9B8F7476-BA93-11EA-9121-9846C619BF3A"},{"grant_number":"M02889","_id":"9B8804FC-BA93-11EA-9121-9846C619BF3A","name":"Bottom-up Engineering for Thermoelectric Applications"}],"date_published":"2022-01-25T00:00:00Z","external_id":{"pmid":["34549956"],"isi":["000767223400008"]},"date_created":"2021-09-24T07:55:12Z","scopus_import":"1","department":[{"_id":"MaIb"}],"user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","isi":1,"language":[{"iso":"eng"}],"type":"journal_article","quality_controlled":"1","publisher":"American Chemical Society ","intvolume":"        16","year":"2022","keyword":["tin selenide","nanocomposite","grain growth","Zener pinning","thermoelectricity","annealing","solution processing"],"oa":1,"related_material":{"record":[{"status":"public","relation":"dissertation_contains","id":"12885"}]},"page":"78-88","article_type":"original","publication":"ACS Nano","date_updated":"2023-08-02T14:41:05Z"},{"oa_version":"None","day":"01","citation":{"apa":"Vercellino, I., &#38; Sazanov, L. A. (2022). The assembly, regulation and function of the mitochondrial respiratory chain. <i>Nature Reviews Molecular Cell Biology</i>. Springer Nature. <a href=\"https://doi.org/10.1038/s41580-021-00415-0\">https://doi.org/10.1038/s41580-021-00415-0</a>","ieee":"I. Vercellino and L. A. 
Sazanov, “The assembly, regulation and function of the mitochondrial respiratory chain,” <i>Nature Reviews Molecular Cell Biology</i>, vol. 23. Springer Nature, pp. 141–161, 2022.","ista":"Vercellino I, Sazanov LA. 2022. The assembly, regulation and function of the mitochondrial respiratory chain. Nature Reviews Molecular Cell Biology. 23, 141–161.","ama":"Vercellino I, Sazanov LA. The assembly, regulation and function of the mitochondrial respiratory chain. <i>Nature Reviews Molecular Cell Biology</i>. 2022;23:141–161. doi:<a href=\"https://doi.org/10.1038/s41580-021-00415-0\">10.1038/s41580-021-00415-0</a>","short":"I. Vercellino, L.A. Sazanov, Nature Reviews Molecular Cell Biology 23 (2022) 141–161.","mla":"Vercellino, Irene, and Leonid A. Sazanov. “The Assembly, Regulation and Function of the Mitochondrial Respiratory Chain.” <i>Nature Reviews Molecular Cell Biology</i>, vol. 23, Springer Nature, 2022, pp. 141–161, doi:<a href=\"https://doi.org/10.1038/s41580-021-00415-0\">10.1038/s41580-021-00415-0</a>.","chicago":"Vercellino, Irene, and Leonid A Sazanov. “The Assembly, Regulation and Function of the Mitochondrial Respiratory Chain.” <i>Nature Reviews Molecular Cell Biology</i>. Springer Nature, 2022. <a href=\"https://doi.org/10.1038/s41580-021-00415-0\">https://doi.org/10.1038/s41580-021-00415-0</a>."},"month":"02","pmid":1,"volume":23,"article_processing_charge":"No","abstract":[{"text":"The mitochondrial oxidative phosphorylation system is central to cellular metabolism. It comprises five enzymatic complexes and two mobile electron carriers that work in a mitochondrial respiratory chain. By coupling the oxidation of reducing equivalents coming into mitochondria to the generation and subsequent dissipation of a proton gradient across the inner mitochondrial membrane, this electron transport chain drives the production of ATP, which is then used as a primary energy carrier in virtually all cellular processes. 
Minimal perturbations of the respiratory chain activity are linked to diseases; therefore, it is necessary to understand how these complexes are assembled and regulated and how they function. In this Review, we outline the latest assembly models for each individual complex, and we also highlight the recent discoveries indicating that the formation of larger assemblies, known as respiratory supercomplexes, originates from the association of the intermediates of individual complexes. We then discuss how recent cryo-electron microscopy structures have been key to answering open questions on the function of the electron transport chain in mitochondrial respiration and how supercomplexes and other factors, including metabolites, can regulate the activity of the single complexes. When relevant, we discuss how these mechanisms contribute to physiology and outline their deregulation in human diseases.","lang":"eng"}],"doi":"10.1038/s41580-021-00415-0","author":[{"full_name":"Vercellino, Irene","last_name":"Vercellino","id":"3ED6AF16-F248-11E8-B48F-1D18A9856A87","first_name":"Irene","orcid":" 0000-0001-5618-3449"},{"orcid":"0000-0002-0977-7989","last_name":"Sazanov","full_name":"Sazanov, Leonid A","first_name":"Leonid A","id":"338D39FE-F248-11E8-B48F-1D18A9856A87"}],"title":"The assembly, regulation and function of the mitochondrial respiratory chain","_id":"10182","publication_status":"published","status":"public","publication_identifier":{"issn":["1471-0072"],"eissn":["1471-0080"]},"external_id":{"isi":["000705697100001"],"pmid":["34621061"]},"date_published":"2022-02-01T00:00:00Z","scopus_import":"1","date_created":"2021-10-24T22:01:35Z","department":[{"_id":"LeSa"}],"user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","isi":1,"quality_controlled":"1","language":[{"iso":"eng"}],"type":"journal_article","intvolume":"        23","publisher":"Springer Nature","year":"2022","article_type":"original","page":"141–161","publication":"Nature Reviews Molecular Cell 
Biology","date_updated":"2023-08-02T06:55:42Z"},{"publication":"Journal of Ambient Intelligence and Humanized Computing","date_updated":"2023-08-02T13:31:48Z","page":"2621–2635","article_type":"original","oa":1,"year":"2022","keyword":["general computer science"],"publisher":"Springer Nature","intvolume":"        13","isi":1,"type":"journal_article","language":[{"iso":"eng"}],"quality_controlled":"1","department":[{"_id":"HeEd"}],"user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","external_id":{"isi":["000712198000001"]},"date_published":"2022-05-01T00:00:00Z","date_created":"2021-11-02T09:28:55Z","scopus_import":"1","publication_identifier":{"eissn":["1868-5145"],"issn":["1868-5137"]},"project":[{"name":"The Wittgenstein Prize","call_identifier":"FWF","grant_number":"Z00342","_id":"268116B8-B435-11E9-9278-68D0E5697425"}],"status":"public","acknowledgement":"The third author acknowledges the funding received from the Wittgenstein Prize, Austrian Science Fund (FWF), grant no. Z 342-N31.","publication_status":"published","title":"A context-aware dimension reduction framework for trajectory and health signal analyses","author":[{"first_name":"Samira","last_name":"Goudarzi","full_name":"Goudarzi, Samira"},{"full_name":"Sharif, Mohammad","last_name":"Sharif","first_name":"Mohammad"},{"orcid":"0000-0001-6746-4174","id":"2A2BCDC4-CF62-11E9-BE5E-3B1EE6697425","first_name":"Farid","full_name":"Karimipour, Farid","last_name":"Karimipour"}],"_id":"10208","abstract":[{"text":"It is practical to collect a huge amount of movement data and environmental context information along with the health signals of individuals because there is the emergence of new generations of positioning and tracking technologies and rapid advancements of health sensors. The study of the relations between these datasets and their sequence similarity analysis is of interest to many applications such as health monitoring and recommender systems. 
However, entering all movement parameters and health signals can lead to the complexity of the problem and an increase in its computational load. In this situation, dimension reduction techniques can be used to avoid consideration of simultaneous dependent parameters in the process of similarity measurement of the trajectories. The present study provides a framework, named CaDRAW, to use spatial–temporal data and movement parameters along with independent context information in the process of measuring the similarity of trajectories. In this regard, the omission of dependent movement characteristic signals is conducted by using an unsupervised feature selection dimension reduction technique. To evaluate the effectiveness of the proposed framework, it was applied to a real contextualized movement and related health signal datasets of individuals. The results indicated the capability of the proposed framework in measuring the similarity and in decreasing the characteristic signals in such a way that the similarity results -before and after reduction of dependent characteristic signals- have small differences. The mean differences between the obtained results before and after reducing the dimension were 0.029 and 0.023 for the round path, respectively.","lang":"eng"}],"doi":"10.1007/s12652-021-03569-z","file":[{"checksum":"0a8961416a9bb2be5a1cebda65468bcf","content_type":"application/pdf","access_level":"open_access","date_updated":"2022-12-20T23:30:08Z","file_size":1634958,"relation":"main_file","creator":"fkarimip","file_name":"A Context‑aware Dimension Reduction Framework - Journal of Ambient Intelligence 2021 (Preprint version).pdf","embargo":"2022-11-12","date_created":"2021-11-12T19:38:05Z","file_id":"10279"}],"has_accepted_license":"1","article_processing_charge":"No","volume":13,"citation":{"mla":"Goudarzi, Samira, et al. 
“A Context-Aware Dimension Reduction Framework for Trajectory and Health Signal Analyses.” <i>Journal of Ambient Intelligence and Humanized Computing</i>, vol. 13, Springer Nature, 2022, pp. 2621–2635, doi:<a href=\"https://doi.org/10.1007/s12652-021-03569-z\">10.1007/s12652-021-03569-z</a>.","chicago":"Goudarzi, Samira, Mohammad Sharif, and Farid Karimipour. “A Context-Aware Dimension Reduction Framework for Trajectory and Health Signal Analyses.” <i>Journal of Ambient Intelligence and Humanized Computing</i>. Springer Nature, 2022. <a href=\"https://doi.org/10.1007/s12652-021-03569-z\">https://doi.org/10.1007/s12652-021-03569-z</a>.","ieee":"S. Goudarzi, M. Sharif, and F. Karimipour, “A context-aware dimension reduction framework for trajectory and health signal analyses,” <i>Journal of Ambient Intelligence and Humanized Computing</i>, vol. 13. Springer Nature, pp. 2621–2635, 2022.","apa":"Goudarzi, S., Sharif, M., &#38; Karimipour, F. (2022). A context-aware dimension reduction framework for trajectory and health signal analyses. <i>Journal of Ambient Intelligence and Humanized Computing</i>. Springer Nature. <a href=\"https://doi.org/10.1007/s12652-021-03569-z\">https://doi.org/10.1007/s12652-021-03569-z</a>","ama":"Goudarzi S, Sharif M, Karimipour F. A context-aware dimension reduction framework for trajectory and health signal analyses. <i>Journal of Ambient Intelligence and Humanized Computing</i>. 2022;13:2621–2635. doi:<a href=\"https://doi.org/10.1007/s12652-021-03569-z\">10.1007/s12652-021-03569-z</a>","short":"S. Goudarzi, M. Sharif, F. Karimipour, Journal of Ambient Intelligence and Humanized Computing 13 (2022) 2621–2635.","ista":"Goudarzi S, Sharif M, Karimipour F. 2022. A context-aware dimension reduction framework for trajectory and health signal analyses. Journal of Ambient Intelligence and Humanized Computing. 
13, 2621–2635."},"day":"01","file_date_updated":"2022-12-20T23:30:08Z","month":"05","ddc":["000"],"oa_version":"Submitted Version"},{"article_type":"original","page":"89-100","related_material":{"record":[{"id":"13061","relation":"research_data","status":"public"}]},"date_updated":"2023-08-14T11:45:29Z","publication":"Ecology Letters","year":"2022","oa":1,"language":[{"iso":"eng"}],"quality_controlled":"1","type":"journal_article","isi":1,"intvolume":"        25","publisher":"Wiley","scopus_import":"1","acknowledged_ssus":[{"_id":"ScienComp"}],"date_created":"2021-11-14T23:01:25Z","external_id":{"pmid":["34725912"],"isi":["000713396100001"]},"date_published":"2022-01-01T00:00:00Z","user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","department":[{"_id":"SyCr"}],"ec_funded":1,"status":"public","publication_status":"published","acknowledgement":"The authors are grateful to G. Tkačik and V. Mireles for advice on data analyses and to A. Schloegl for help using the IST Austria HPC cluster for data processing. The authors thank J. Eilenberg for providing the fungal strain and A.V. Grasse for support with the molecular analysis. The authors also thank the Social Immunity group at IST Austria, in particular B. Milutinović, for discussions throughout and comments on the manuscript.","project":[{"_id":"2649B4DE-B435-11E9-9278-68D0E5697425","grant_number":"771402","call_identifier":"H2020","name":"Epidemics in ant societies on a chip"}],"publication_identifier":{"eissn":["1461-0248"],"issn":["1461-023X"]},"doi":"10.1111/ele.13907","issue":"1","abstract":[{"lang":"eng","text":"Infections early in life can have enduring effects on an organism's development and immunity. In this study, we show that this equally applies to developing ‘superorganisms’––incipient social insect colonies. When we exposed newly mated Lasius niger ant queens to a low pathogen dose, their colonies grew more slowly than controls before winter, but reached similar sizes afterwards. 
Independent of exposure, queen hibernation survival improved when the ratio of pupae to workers was small. Queens that reared fewer pupae before worker emergence exhibited lower pathogen levels, indicating that high brood rearing efforts interfere with the ability of the queen's immune system to suppress pathogen proliferation. Early-life queen pathogen exposure also improved the immunocompetence of her worker offspring, as demonstrated by challenging the workers to the same pathogen a year later. Transgenerational transfer of the queen's pathogen experience to her workforce can hence durably reduce the disease susceptibility of the whole superorganism."}],"_id":"10284","author":[{"last_name":"Casillas Perez","full_name":"Casillas Perez, Barbara E","first_name":"Barbara E","id":"351ED2AA-F248-11E8-B48F-1D18A9856A87"},{"full_name":"Pull, Christopher","last_name":"Pull","id":"3C7F4840-F248-11E8-B48F-1D18A9856A87","first_name":"Christopher","orcid":"0000-0003-1122-3982"},{"first_name":"Filip","full_name":"Naiser, Filip","last_name":"Naiser"},{"first_name":"Elisabeth","id":"31757262-F248-11E8-B48F-1D18A9856A87","last_name":"Naderlinger","full_name":"Naderlinger, Elisabeth"},{"first_name":"Jiri","last_name":"Matas","full_name":"Matas, Jiri"},{"id":"2F64EC8C-F248-11E8-B48F-1D18A9856A87","first_name":"Sylvia","full_name":"Cremer, Sylvia","last_name":"Cremer","orcid":"0000-0002-2193-3868"}],"title":"Early queen infection shapes developmental dynamics and induces long-term disease protection in incipient ant colonies","volume":25,"article_processing_charge":"Yes (via OA deal)","has_accepted_license":"1","tmp":{"name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","image":"/images/cc_by.png","short":"CC BY 
(4.0)","legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode"},"file":[{"file_id":"10721","date_created":"2022-02-03T13:37:11Z","access_level":"open_access","checksum":"0bd4210400e9876609b7c538ab4f9a3c","content_type":"application/pdf","success":1,"date_updated":"2022-02-03T13:37:11Z","file_name":"2021_EcologyLetters_CasillasPerez.pdf","creator":"cchlebak","file_size":700087,"relation":"main_file"}],"oa_version":"Published Version","ddc":["573"],"file_date_updated":"2022-02-03T13:37:11Z","month":"01","pmid":1,"day":"01","citation":{"apa":"Casillas Perez, B. E., Pull, C., Naiser, F., Naderlinger, E., Matas, J., &#38; Cremer, S. (2022). Early queen infection shapes developmental dynamics and induces long-term disease protection in incipient ant colonies. <i>Ecology Letters</i>. Wiley. <a href=\"https://doi.org/10.1111/ele.13907\">https://doi.org/10.1111/ele.13907</a>","ieee":"B. E. Casillas Perez, C. Pull, F. Naiser, E. Naderlinger, J. Matas, and S. Cremer, “Early queen infection shapes developmental dynamics and induces long-term disease protection in incipient ant colonies,” <i>Ecology Letters</i>, vol. 25, no. 1. Wiley, pp. 89–100, 2022.","ama":"Casillas Perez BE, Pull C, Naiser F, Naderlinger E, Matas J, Cremer S. Early queen infection shapes developmental dynamics and induces long-term disease protection in incipient ant colonies. <i>Ecology Letters</i>. 2022;25(1):89-100. doi:<a href=\"https://doi.org/10.1111/ele.13907\">10.1111/ele.13907</a>","short":"B.E. Casillas Perez, C. Pull, F. Naiser, E. Naderlinger, J. Matas, S. Cremer, Ecology Letters 25 (2022) 89–100.","ista":"Casillas Perez BE, Pull C, Naiser F, Naderlinger E, Matas J, Cremer S. 2022. Early queen infection shapes developmental dynamics and induces long-term disease protection in incipient ant colonies. Ecology Letters. 25(1), 89–100.","mla":"Casillas Perez, Barbara E., et al. 
“Early Queen Infection Shapes Developmental Dynamics and Induces Long-Term Disease Protection in Incipient Ant Colonies.” <i>Ecology Letters</i>, vol. 25, no. 1, Wiley, 2022, pp. 89–100, doi:<a href=\"https://doi.org/10.1111/ele.13907\">10.1111/ele.13907</a>.","chicago":"Casillas Perez, Barbara E, Christopher Pull, Filip Naiser, Elisabeth Naderlinger, Jiri Matas, and Sylvia Cremer. “Early Queen Infection Shapes Developmental Dynamics and Induces Long-Term Disease Protection in Incipient Ant Colonies.” <i>Ecology Letters</i>. Wiley, 2022. <a href=\"https://doi.org/10.1111/ele.13907\">https://doi.org/10.1111/ele.13907</a>."}}]
