{"related_material":{"record":[{"status":"public","id":"9418","relation":"dissertation_contains"}],"link":[{"url":"https://iclr.cc/virtual_2020/poster_Bylx-TNKvH.html","relation":"supplementary_material"}]},"status":"public","file_date_updated":"2020-07-14T12:47:59Z","quality_controlled":"1","title":"Functional vs. parametric equivalence of ReLU networks","ddc":["000"],"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","date_created":"2020-02-11T09:07:37Z","oa":1,"oa_version":"Published Version","year":"2020","conference":{"end_date":"2020-04-30","location":"Online","start_date":"2020-04-27","name":"ICLR: International Conference on Learning Representations"},"date_published":"2020-04-26T00:00:00Z","publication_status":"published","citation":{"short":"M. Phuong, C. Lampert, in:, 8th International Conference on Learning Representations, 2020.","ama":"Phuong M, Lampert C. Functional vs. parametric equivalence of ReLU networks. In: 8th International Conference on Learning Representations. ; 2020.","ista":"Phuong M, Lampert C. 2020. Functional vs. parametric equivalence of ReLU networks. 8th International Conference on Learning Representations. ICLR: International Conference on Learning Representations.","apa":"Phuong, M., & Lampert, C. (2020). Functional vs. parametric equivalence of ReLU networks. In 8th International Conference on Learning Representations. Online.","chicago":"Phuong, Mary, and Christoph Lampert. “Functional vs. Parametric Equivalence of ReLU Networks.” In 8th International Conference on Learning Representations, 2020.","ieee":"M. Phuong and C. Lampert, “Functional vs. parametric equivalence of ReLU networks,” in 8th International Conference on Learning Representations, Online, 2020.","mla":"Phuong, Mary, and Christoph Lampert. “Functional vs. Parametric Equivalence of ReLU Networks.” 8th International Conference on Learning Representations, 2020."},"abstract":[{"lang":"eng","text":"We address the following question: How redundant is the parameterisation of ReLU networks? Specifically, we consider transformations of the weight space which leave the function implemented by the network intact. Two such transformations are known for feed-forward architectures: permutation of neurons within a layer, and positive scaling of all incoming weights of a neuron coupled with inverse scaling of its outgoing weights. In this work, we show for architectures with non-increasing widths that permutation and scaling are in fact the only function-preserving weight transformations. For any eligible architecture we give an explicit construction of a neural network such that any other network that implements the same function can be obtained from the original one by the application of permutations and rescaling. 
The proof relies on a geometric understanding of boundaries between linear regions of ReLU networks, and we hope the developed mathematical tools are of independent interest."}],"language":[{"iso":"eng"}],"department":[{"_id":"ChLa"}],"_id":"7481","author":[{"last_name":"Bui Thi Mai","first_name":"Phuong","id":"3EC6EE64-F248-11E8-B48F-1D18A9856A87","full_name":"Bui Thi Mai, Phuong"},{"id":"40C20FD2-F248-11E8-B48F-1D18A9856A87","full_name":"Lampert, Christoph","last_name":"Lampert","first_name":"Christoph","orcid":"0000-0001-8622-7887"}],"has_accepted_license":"1","day":"26","publication":"8th International Conference on Learning Representations","file":[{"date_created":"2020-02-11T09:07:27Z","file_size":405469,"file_id":"7482","date_updated":"2020-07-14T12:47:59Z","file_name":"main.pdf","creator":"bphuong","access_level":"open_access","content_type":"application/pdf","relation":"main_file","checksum":"8d372ea5defd8cb8fdc430111ed754a9"}],"type":"conference","date_updated":"2023-09-07T13:29:50Z","month":"04","article_processing_charge":"No"}
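
The two weight-space symmetries named in the abstract are easy to verify numerically: permuting the hidden neurons of a layer (together with the matching rows and columns of the adjacent weight matrices), and scaling a neuron's incoming weights and bias by some c > 0 while dividing its outgoing weights by c, both leave the network function unchanged, the latter because ReLU is positively homogeneous (relu(c*z) = c*relu(z) for c > 0). Below is a minimal NumPy sketch of this check on a one-hidden-layer network; all shapes, seeds, and variable names are illustrative and not taken from the paper.

import numpy as np

# Illustrative check of the two function-preserving transformations described
# in the abstract; network shapes and values here are arbitrary, not from the paper.

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, W1, b1, W2, b2):
    # One-hidden-layer ReLU network: x -> W2 @ relu(W1 @ x + b1) + b2.
    return W2 @ relu(W1 @ x + b1) + b2

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)

# Scaling: multiply neuron i's incoming weights and bias by c > 0 and divide
# its outgoing weights by c; relu(c*z) = c*relu(z) makes the two factors cancel.
i, c = 1, 3.7
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[i] *= c
b1s[i] *= c
W2s[:, i] /= c

# Permutation: reorder the hidden neurons, permuting the rows of W1 and b1
# and the columns of W2 consistently.
perm = rng.permutation(4)
W1p, b1p, W2p = W1s[perm], b1s[perm], W2s[:, perm]

x = rng.standard_normal(3)
assert np.allclose(forward(x, W1, b1, W2, b2), forward(x, W1p, b1p, W2p, b2))

The assertion passes because the scaling cancels through the positively homogeneous ReLU and the permutation merely reorders the summands of the output layer; the paper's contribution is the converse, showing that for architectures with non-increasing widths these are the only function-preserving weight transformations.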