{"author":[{"last_name":"Dhillon","first_name":"Paramveer","full_name":"Dhillon, Paramveer S"},{"last_name":"Nowozin","first_name":"Sebastian","full_name":"Nowozin, Sebastian"},{"id":"40C20FD2-F248-11E8-B48F-1D18A9856A87","full_name":"Lampert, Christoph","first_name":"Christoph","last_name":"Lampert","orcid":"0000-0001-8622-7887"}],"day":"01","page":"22-29","doi":"10.1109/CVPRW.2009.5204237","type":"conference","date_updated":"2021-01-12T07:48:59Z","month":"01","issue":"174","citation":{"ieee":"P. Dhillon, S. Nowozin, and C. Lampert, “Combining appearance and motion for human action classification in videos,” presented at the CVPR: Computer Vision and Pattern Recognition, 2009, no. 174, pp. 22–29.","mla":"Dhillon, Paramveer, et al. Combining Appearance and Motion for Human Action Classification in Videos. no. 174, IEEE, 2009, pp. 22–29, doi:10.1109/CVPRW.2009.5204237.","short":"P. Dhillon, S. Nowozin, C. Lampert, in:, IEEE, 2009, pp. 22–29.","chicago":"Dhillon, Paramveer, Sebastian Nowozin, and Christoph Lampert. “Combining Appearance and Motion for Human Action Classification in Videos,” 22–29. IEEE, 2009. https://doi.org/10.1109/CVPRW.2009.5204237.","apa":"Dhillon, P., Nowozin, S., & Lampert, C. (2009). Combining appearance and motion for human action classification in videos (pp. 22–29). Presented at the CVPR: Computer Vision and Pattern Recognition, IEEE. https://doi.org/10.1109/CVPRW.2009.5204237","ista":"Dhillon P, Nowozin S, Lampert C. 2009. Combining appearance and motion for human action classification in videos. CVPR: Computer Vision and Pattern Recognition, 22–29.","ama":"Dhillon P, Nowozin S, Lampert C. Combining appearance and motion for human action classification in videos. In: IEEE; 2009:22-29. doi:10.1109/CVPRW.2009.5204237"},"date_published":"2009-01-01T00:00:00Z","publication_status":"published","conference":{"name":"CVPR: Computer Vision and Pattern Recognition"},"extern":1,"abstract":[{"text":"An important cue to high-level scene understanding is to analyze the objects in the scene and their behavior and interactions. In this paper, we study the problem of classifying activities in videos, as this is an integral component of any scene understanding system, and present a novel approach for recognizing human action categories in videos by combining information from the appearance and motion of human body parts. Our approach is based on tracking human body parts using mixture particle filters and then clustering the particles using local non-parametric clustering, hence associating a local set of particles with each cluster mode. The trajectory of these cluster modes provides the \"motion\" information, and the \"appearance\" information is provided by statistical information about the relative motion of these local sets of particles over a number of frames. We then use a \"Bag of Words\" model to build one histogram per video sequence from the set of these robust appearance and motion descriptors. These histograms provide characteristic information that helps us discriminate among various human actions, which ultimately leads to a better understanding of the complete scene. We tested our approach on the standard KTH and Weizmann human action datasets, and the results were comparable to state-of-the-art methods. Additionally, our approach is able to distinguish activities that involve motion of the complete body from those in which only certain body parts move. In other words, our method discriminates well between activities with \"global body motion\", like running and jogging, and \"local motion\", like waving and boxing.","lang":"eng"}],"_id":"3690","publist_id":"2675","date_created":"2018-12-11T12:04:38Z","year":"2009","publisher":"IEEE","status":"public","quality_controlled":0,"title":"Combining appearance and motion for human action classification in videos"}