
Jointly Learning Visual Poses and Pose Lexicon for Semantic Action Recognition

Journal Article


Abstract


  • A novel method for semantic action recognition through learning a pose lexicon is presented in this paper. A pose lexicon comprises a set of semantic poses, a set of visual poses and a probabilistic mapping between visual and semantic poses. This paper assumes that both the visual poses and the mapping are hidden and proposes a method to simultaneously learn a visual pose model, which estimates the likelihood of an observed video frame being generated from hidden visual poses, and a pose lexicon model, which establishes the probabilistic mapping between the hidden visual poses and the semantic poses parsed from textual instructions. Specifically, the proposed method consists of two-level hidden Markov models. One level represents the alignment between the visual poses and semantic poses. The other level represents a visual pose sequence, with each visual pose modelled as a Gaussian mixture. An expectation-maximization algorithm is developed to train a pose lexicon. With the learned lexicon, action classification is formulated as the problem of finding the maximum posterior probability of a given sequence of video frames following a given sequence of semantic poses, constrained by the most likely visual pose and alignment sequences. The proposed method was evaluated on the MSRC-12, WorkoutSU-10, WorkoutUOW-18, Combined-15, Combined-17 and Combined-50 action datasets using cross-subject, cross-dataset, zero-shot and seen/unseen protocols.

Publication Date


  • 2019

Citation


  • Zhou, L., Li, W., Ogunbona, P. & Zhang, Z. (2019). Jointly Learning Visual Poses and Pose Lexicon for Semantic Action Recognition. IEEE Transactions on Circuits and Systems for Video Technology, Online First 1-11.

Scopus Eid


  • 2-s2.0-85059606463

Number Of Pages


  • 10

Start Page


  • 1

End Page


  • 11

Volume


  • Online First

Place Of Publication


  • United States
