
ReDro: Efficiently Learning Large-Sized SPD Visual Representation

Conference Paper


Abstract


  • The symmetric positive definite (SPD) matrix has recently been used as an effective visual representation. When learning this representation in deep networks, eigen-decomposition of the covariance matrix is usually needed for a key step called matrix normalisation. This can incur significant computational cost, especially given the increasing number of channels in recent advanced deep networks. This work proposes a novel scheme called Relation Dropout (ReDro). It is inspired by the fact that the eigen-decomposition of a block diagonal matrix can be efficiently obtained by decomposing each of its diagonal square matrices, which are of smaller sizes. Instead of using a full covariance matrix as in the literature, we generate a block diagonal one by randomly grouping the channels and only considering the covariance within the same group. We insert ReDro as an additional layer before the matrix normalisation step and make its random grouping transparent to all subsequent layers. Additionally, the ReDro scheme can be viewed as a dropout-like regularisation, which drops the channel relationships across groups. As experimentally demonstrated, for the SPD methods typically involving the matrix normalisation step, ReDro can effectively help them reduce the computational cost of learning large-sized SPD visual representations and also help to improve image recognition performance.
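The grouping-and-blockwise-decomposition idea described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function name and arguments are chosen here for clarity.

```python
import numpy as np

def redro_eig(features, num_groups, rng=None):
    """Illustrative sketch of the ReDro idea: randomly group channels,
    form a block-diagonal covariance, and eigen-decompose each small
    block instead of the full d x d covariance matrix."""
    rng = np.random.default_rng(rng)
    n, d = features.shape                      # n positions, d channels
    perm = rng.permutation(d)                  # random channel grouping
    groups = np.array_split(perm, num_groups)

    eigvals = np.zeros(d)
    eigvecs = np.zeros((d, d))                 # block-diagonal eigenvector matrix
    for g in groups:
        x = features[:, g] - features[:, g].mean(axis=0)
        cov = x.T @ x / (n - 1)                # covariance within the group only
        w, v = np.linalg.eigh(cov)             # small per-block eigen-decomposition
        eigvals[g] = w
        eigvecs[np.ix_(g, g)] = v              # place block back in channel order
    return eigvals, eigvecs
```

Because cross-group covariances are dropped, each `eigh` call runs on a (d/num_groups)-sized block, which is far cheaper than decomposing the full d x d matrix; the returned eigenvalues and eigenvectors can then feed a matrix normalisation step (e.g. a matrix power or logarithm applied to the eigenvalues).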

Publication Date


  • 2020

Citation


  • Rahman, S., Wang, L., Sun, C., & Zhou, L. (2020). ReDro: Efficiently Learning Large-Sized SPD Visual Representation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 12360 LNCS (pp. 1-17). doi:10.1007/978-3-030-58555-6_1

Scopus Eid


  • 2-s2.0-85097428372

Web Of Science Accession Number


Start Page


  • 1

End Page


  • 17

Volume


  • 12360 LNCS
