
Self-supervising Action Recognition by Statistical Moment and Subspace Descriptors

Conference Paper


Abstract


  • In this paper, we build on a concept of self-supervision by taking RGB frames as input to learn to predict both action concepts and auxiliary descriptors, e.g., object descriptors. So-called hallucination streams are trained to predict auxiliary cues, simultaneously fed into classification layers, and then hallucinated at the testing stage to aid the network. We design and hallucinate two descriptors, one leveraging four popular object detectors applied to training videos, and the other leveraging image- and video-level saliency detectors. The first descriptor encodes the detector- and ImageNet-wise class prediction scores, confidence scores, and spatial locations of bounding boxes and frame indexes to capture the spatio-temporal distribution of features per video. The other descriptor encodes spatio-angular gradient distributions of saliency maps and intensity patterns. Inspired by the characteristic function of the probability distribution, we capture four statistical moments on the above intermediate descriptors. As the numbers of coefficients in the mean, covariance, coskewness and cokurtosis grow linearly, quadratically, cubically and quartically w.r.t. the dimension of feature vectors, we describe the covariance matrix by its leading n' eigenvectors (a so-called subspace) and we capture skewness/kurtosis rather than costly coskewness/cokurtosis. We obtain state-of-the-art results on five popular datasets, including Charades and EPIC-Kitchens.
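The moment/subspace idea in the abstract can be sketched in a few lines: summarize a set of per-frame feature vectors by their mean, the leading n' eigenvectors of the covariance (the "subspace"), and per-dimension skewness/kurtosis in place of the full coskewness/cokurtosis tensors, whose coefficient counts grow cubically and quartically in the feature dimension. This is a minimal illustrative sketch, not the paper's implementation; the function name and defaults are assumptions.

```python
import numpy as np

def moment_subspace_descriptor(features, n_prime=3):
    """Summarize T feature vectors of dimension d (a T x d array) by:
    - the mean (d coefficients, linear in d),
    - the leading n' eigenvectors of the covariance (d x n' subspace,
      instead of the full d x d matrix, quadratic in d),
    - per-dimension skewness/kurtosis (d coefficients each, instead of
      coskewness/cokurtosis tensors, cubic/quartic in d)."""
    X = np.asarray(features, dtype=float)           # T x d
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / max(len(X) - 1, 1)            # d x d covariance
    # Keep only the n' leading eigenvectors as a compact subspace.
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    subspace = eigvecs[:, -n_prime:][:, ::-1]       # d x n', leading first
    # Marginal third/fourth standardized moments, O(d) coefficients.
    std = X.std(axis=0) + 1e-12
    skew = ((Xc / std) ** 3).mean(axis=0)
    kurt = ((Xc / std) ** 4).mean(axis=0) - 3.0     # excess kurtosis
    return mu, subspace, skew, kurt
```

The eigenvectors returned by `eigh` are orthonormal, so the subspace can be compared across videos with standard subspace distances.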

Publication Date


  • 2021

Citation


  • Wang, L., & Koniusz, P. (2021). Self-supervising Action Recognition by Statistical Moment and Subspace Descriptors. In MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia (pp. 4324-4333). doi:10.1145/3474085.3475572

Scopus EID


  • 2-s2.0-85115000243

Start Page


  • 4324

End Page


  • 4333