Large-scale Isolated Gesture Recognition using Convolutional Neural Networks

Conference Paper


Abstract


  • This paper proposes three simple, compact yet effective representations of depth sequences, referred to respectively as Dynamic Depth Images (DDI), Dynamic Depth Normal Images (DDNI) and Dynamic Depth Motion Normal Images (DDMNI). These dynamic images are constructed from a sequence of depth maps using bidirectional rank pooling, which effectively captures the spatio-temporal information. Such image-based representations make it possible to fine-tune existing ConvNet models trained on image data for the classification of depth sequences without introducing a large number of parameters to learn. Building on the proposed representations, a convolutional neural network (ConvNet) based method is developed for gesture recognition and evaluated on the Large-scale Isolated Gesture Recognition task of the ChaLearn Looking at People (LAP) challenge 2016. The method achieved 55.57% classification accuracy and ranked second in the challenge, coming very close to the best performance even though only depth data were used.
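The abstract names bidirectional rank pooling but does not reproduce its formulation. As a rough illustration only, the sketch below builds a forward/backward pair of dynamic depth images using the closed-form approximate rank pooling of Bilen et al. (CVPR 2016), a common stand-in for the learned ranking machine; the function names, frame shapes, and the use of NumPy are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def approximate_rank_pooling(frames):
        # Collapse a (T, H, W) stack of depth frames into a single dynamic
        # image using the closed-form approximate rank-pooling weights of
        # Bilen et al.: alpha_t = 2*(T - t + 1) - (T + 1)*(H_T - H_{t-1}),
        # where H_k is the k-th harmonic number and frames are indexed
        # from t = 1. This approximation is an assumption; the paper's
        # exact rank-pooling solver is not given in the abstract.
        T = frames.shape[0]
        harmonics = np.concatenate(([0.0], np.cumsum(1.0 / np.arange(1, T + 1))))
        t = np.arange(1, T + 1)
        alpha = 2.0 * (T - t + 1) - (T + 1) * (harmonics[T] - harmonics[t - 1])
        return np.tensordot(alpha, frames.astype(np.float64), axes=1)

    def bidirectional_dynamic_depth_images(depth_seq):
        # Forward pass ranks frames in temporal order; the backward pass
        # ranks the time-reversed sequence, giving two complementary images.
        return (approximate_rank_pooling(depth_seq),
                approximate_rank_pooling(depth_seq[::-1]))

    # Toy usage: 32 synthetic 240x320 depth frames -> two dynamic images.
    seq = np.random.rand(32, 240, 320)
    ddi_forward, ddi_backward = bidirectional_dynamic_depth_images(seq)

Because each resulting dynamic image is an ordinary 2D image, it can be normalized and fed to an ImageNet-pretrained ConvNet with only the classification layer replaced, which is what allows fine-tuning without introducing a large number of new parameters, as the abstract describes.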

Authors


  •   Wang, Pichao (external author)
  •   Li, Wanqing
  •   Liu, Song (external author)
  •   Gao, Zhimin (external author)
  •   Tang, Chang (external author)
  •   Ogunbona, Philip O.

Publication Date


  • 2016

Citation


  • Wang, P., Li, W., Liu, S., Gao, Z., Tang, C. & Ogunbona, P. (2016). Large-scale Isolated Gesture Recognition using Convolutional Neural Networks. Proceedings - 23rd International Conference on Pattern Recognition (ICPR) (pp. 7-12). United States: IEEE.

Scopus EID


  • 2-s2.0-85019124987

Start Page


  • 7

End Page


  • 12

Place Of Publication


  • http://www.icpr2016.org/site/
