{"date_created":"2018-12-11T12:01:32Z","scopus_import":1,"title":"Information theoretic clustering using minimal spanning trees","oa_version":"None","date_published":"2012-08-14T00:00:00Z","page":"205 - 215","author":[{"last_name":"Müller","first_name":"Andreas","full_name":"Müller, Andreas"},{"last_name":"Nowozin","first_name":"Sebastian","full_name":"Nowozin, Sebastian"},{"id":"40C20FD2-F248-11E8-B48F-1D18A9856A87","last_name":"Lampert","first_name":"Christoph","orcid":"0000-0001-8622-7887","full_name":"Lampert, Christoph"}],"month":"08","citation":{"ama":"Müller A, Nowozin S, Lampert C. Information theoretic clustering using minimal spanning trees. In: Vol 7476. Springer; 2012:205-215. doi:10.1007/978-3-642-32717-9_21","chicago":"Müller, Andreas, Sebastian Nowozin, and Christoph Lampert. “Information Theoretic Clustering Using Minimal Spanning Trees,” 7476:205–15. Springer, 2012. https://doi.org/10.1007/978-3-642-32717-9_21.","apa":"Müller, A., Nowozin, S., & Lampert, C. (2012). Information theoretic clustering using minimal spanning trees (Vol. 7476, pp. 205–215). Presented at the DAGM: German Association For Pattern Recognition, Graz, Austria: Springer. https://doi.org/10.1007/978-3-642-32717-9_21","mla":"Müller, Andreas, et al. Information Theoretic Clustering Using Minimal Spanning Trees. Vol. 7476, Springer, 2012, pp. 205–15, doi:10.1007/978-3-642-32717-9_21.","ista":"Müller A, Nowozin S, Lampert C. 2012. Information theoretic clustering using minimal spanning trees. DAGM: German Association For Pattern Recognition, LNCS, vol. 7476, 205–215.","ieee":"A. Müller, S. Nowozin, and C. Lampert, “Information theoretic clustering using minimal spanning trees,” presented at the DAGM: German Association For Pattern Recognition, Graz, Austria, 2012, vol. 7476, pp. 205–215.","short":"A. Müller, S. Nowozin, C. Lampert, in:, Springer, 2012, pp. 205–215."},"user_id":"3E5EF7F0-F248-11E8-B48F-1D18A9856A87","abstract":[{"text":"In this work we propose a new information-theoretic clustering algorithm that infers cluster memberships by direct optimization of a non-parametric mutual information estimate between data distribution and cluster assignment. Although the optimization objective has a solid theoretical foundation it is hard to optimize. We propose an approximate optimization formulation that leads to an efficient algorithm with low runtime complexity. The algorithm has a single free parameter, the number of clusters to find. We demonstrate superior performance on several synthetic and real datasets.\r\n","lang":"eng"}],"publisher":"Springer","volume":7476,"publication_status":"published","quality_controlled":"1","intvolume":" 7476","publist_id":"3573","day":"14","date_updated":"2021-01-12T07:41:14Z","status":"public","language":[{"iso":"eng"}],"_id":"3126","year":"2012","doi":"10.1007/978-3-642-32717-9_21","alternative_title":["LNCS"],"type":"conference","department":[{"_id":"ChLa"}],"conference":{"name":"DAGM: German Association For Pattern Recognition","location":"Graz, Austria","start_date":"2012-08-28","end_date":"2012-08-31"}}