Deep Affine Motion Compensation Network for Inter Prediction in VVC

Journal Article


Abstract


  • In video coding, it is challenging to handle scenes with complex motions such as rotation and zooming. Although affine motion compensation (AMC) is employed in Versatile Video Coding (VVC), it still struggles with non-translational motions because of its hand-crafted, block-based motion compensation. In this paper, we propose a deep affine motion compensation network (DAMC-Net) for inter prediction in video coding to effectively improve prediction accuracy. To the best of our knowledge, this work is the first attempt to perform deformable motion compensation with a CNN in VVC. Specifically, a deformable motion-compensated prediction (DMCP) module is proposed to compensate the current coding block in a learnable manner by estimating accurate motion fields, fully exploiting the spatial neighboring information, the temporal reference block, and the initial motion field. An attention-based fusion and reconstruction (AFR) module is then designed to reconstruct the output block by effectively fusing the multi-channel feature maps produced by DMCP. The proposed DAMC-Net is integrated into VVC, and experimental results demonstrate that it considerably enhances coding performance.
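
To make the two modules described in the abstract concrete, the following is a minimal PyTorch-style sketch: a DMCP stage that learns sampling offsets from the spatial neighborhood, the temporal reference block, and the initial motion field and warps reference features with a deformable convolution, followed by an AFR stage that fuses the resulting feature maps with channel attention and reconstructs the prediction block. The channel sizes, layer counts, and the specific choices of torchvision's DeformConv2d and squeeze-and-excitation style attention are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of the DAMC-Net idea from the abstract.
# Assumptions (not from the paper): channel sizes, layer counts, the use of
# torchvision's DeformConv2d, and the SE-style channel attention are
# illustrative placeholders, not the authors' implementation.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DMCP(nn.Module):
    """Deformable motion-compensated prediction (sketch).

    Predicts sampling offsets from the spatial neighborhood of the current
    block, the temporal reference block, and the initial (affine) motion
    field, then warps reference features with a deformable convolution.
    """

    def __init__(self, feat_ch: int = 32):
        super().__init__()
        # 1 ch. neighborhood + 1 ch. reference + 2 ch. motion field = 4 input channels
        self.offset_pred = nn.Sequential(
            nn.Conv2d(4, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 2 * 3 * 3, 3, padding=1),  # 18 = 2*k*k offsets
        )
        self.ref_feat = nn.Conv2d(1, feat_ch, 3, padding=1)
        self.deform = DeformConv2d(feat_ch, feat_ch, kernel_size=3, padding=1)

    def forward(self, neighbor, reference, init_mv):
        offsets = self.offset_pred(torch.cat([neighbor, reference, init_mv], dim=1))
        return self.deform(self.ref_feat(reference), offsets)


class AFR(nn.Module):
    """Attention-based fusion and reconstruction (sketch).

    Re-weights the multi-channel feature maps from DMCP with a channel
    attention gate, then reconstructs the predicted block.
    """

    def __init__(self, feat_ch: int = 32):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(feat_ch, feat_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch // 4, feat_ch, 1), nn.Sigmoid(),
        )
        self.recon = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 1, 3, padding=1),
        )

    def forward(self, feats):
        return self.recon(feats * self.attn(feats))


class DAMCNet(nn.Module):
    """DMCP followed by AFR: refines the block-based affine prediction."""

    def __init__(self, feat_ch: int = 32):
        super().__init__()
        self.dmcp = DMCP(feat_ch)
        self.afr = AFR(feat_ch)

    def forward(self, neighbor, reference, init_mv):
        return self.afr(self.dmcp(neighbor, reference, init_mv))


if __name__ == "__main__":
    # Toy 16x16 luma block: neighborhood patch, reference block, 2-channel motion field.
    net = DAMCNet()
    pred = net(torch.rand(1, 1, 16, 16), torch.rand(1, 1, 16, 16), torch.rand(1, 2, 16, 16))
    print(pred.shape)  # torch.Size([1, 1, 16, 16])
```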

Publication Date


  • 2022

Citation


  • Jin, D., Lei, J., Peng, B., Li, W., Ling, N., & Huang, Q. (2022). Deep Affine Motion Compensation Network for Inter Prediction in VVC. IEEE Transactions on Circuits and Systems for Video Technology, 32(6), 3923-3933. doi:10.1109/TCSVT.2021.3107135

Scopus Eid


  • 2-s2.0-85113898940

Start Page


  • 3923

End Page


  • 3933

Volume


  • 32

Issue


  • 6
