
Depth Pooling Based Large-Scale 3-D Action Recognition with Convolutional Neural Networks

Journal Article


Download full-text (Open Access)

Abstract


  • This paper proposes three simple, compact yet effective representations of depth sequences, referred to respectively as dynamic depth images (DDI), dynamic depth normal images (DDNI), and dynamic depth motion normal images (DDMNI), for both isolated and continuous action recognition. These dynamic images are constructed from a segmented sequence of depth maps using hierarchical bidirectional rank pooling to effectively capture the spatial-temporal information. Specifically, DDI exploits the dynamics of postures over time, while DDNI and DDMNI exploit the 3-D structural information captured by depth maps. Based on the proposed representations, a convolutional neural network (ConvNet)-based method is developed for action recognition. The image-based representations enable fine-tuning of existing ConvNet models trained on image data without training a large number of parameters from scratch. The proposed method achieved state-of-the-art results on three large datasets, namely, the large-scale continuous gesture recognition dataset (mean Jaccard index of 0.4109), the large-scale isolated gesture recognition dataset (59.21%), and the NTU RGB+D dataset (87.08% cross-subject and 84.22% cross-view), even though only the depth modality was used.
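The core construction in the abstract — pooling a sequence of depth maps into a single dynamic image, in both temporal directions — can be sketched with the approximate rank pooling weights of Bilen et al. (2016). This is an illustrative NumPy sketch, not the authors' implementation: the function names are hypothetical, and the paper's hierarchical variant (pooling over nested subsequences) is omitted for brevity.

```python
import numpy as np

def approximate_rank_pooling(frames):
    """Collapse a (T, H, W) stack of depth maps into one dynamic image.

    Uses the closed-form approximate rank pooling weights
    alpha_t = 2t - T - 1, so later frames receive larger weights
    and the result encodes the temporal evolution of the sequence.
    """
    T = frames.shape[0]
    t = np.arange(1, T + 1)
    alpha = 2 * t - T - 1                       # weights: -(T-1), ..., (T-1)
    di = np.tensordot(alpha, frames, axes=(0, 0))
    # Rescale to an 8-bit image so it can feed a pretrained ConvNet.
    di = (di - di.min()) / (np.ptp(di) + 1e-8) * 255.0
    return di.astype(np.uint8)

def bidirectional_dynamic_images(frames):
    """Forward and backward dynamic images, as in bidirectional rank pooling."""
    return (approximate_rank_pooling(frames),
            approximate_rank_pooling(frames[::-1]))
```

Under this sketch, a DDI would be the pooled raw depth maps, while DDNI/DDMNI would apply the same pooling to normal maps derived from the depth data.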

Publication Date


  • 2018

Citation


  • Wang, P., Li, W., Gao, Z., Tang, C. & Ogunbona, P. (2018). Depth Pooling Based Large-Scale 3-D Action Recognition with Convolutional Neural Networks. IEEE Transactions on Multimedia, 20 (5), 1051-1061.

Scopus EID


  • 2-s2.0-85046042144

RO Full-Text URL


  • http://ro.uow.edu.au/cgi/viewcontent.cgi?article=2369&context=eispapers1

RO Metadata URL


  • http://ro.uow.edu.au/eispapers1/1367

Number Of Pages


  • 10

Start Page


  • 1051

End Page


  • 1061

Volume


  • 20

Issue


  • 5

Place Of Publication


  • United States