
A Self-boosting Framework for Automated Radiographic Report Generation

Conference Paper


Abstract


  • Automated radiographic report generation is a challenging task because it requires generating paragraphs that describe fine-grained visual differences between cases, especially between diseased and healthy ones. Existing image captioning methods commonly target generic images and lack a mechanism to meet this requirement. To bridge this gap, we propose a self-boosting framework that improves radiographic report generation through the cooperation of a main task, report generation, and an auxiliary task, image-text matching. The two tasks are built as two branches of a single network model and influence each other cooperatively. On the one hand, the image-text matching branch learns highly text-correlated visual features that help the report generation branch output high-quality reports. On the other hand, the improved reports produced by the report generation branch provide additional, harder samples for the image-text matching branch, forcing it to improve itself by learning better visual and text feature representations; this, in turn, improves the report generation branch again. The two branches are trained jointly so that they improve each other iteratively and progressively, and the whole model is self-boosted without requiring external resources. Experimental results on two public datasets demonstrate the effectiveness of our method, showing superior performance over multiple state-of-the-art image captioning and medical report generation methods.
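
The following is a minimal, illustrative PyTorch sketch of the two-branch idea described in the abstract, not the authors' implementation: all module choices, names, and values (e.g. `SelfBoostingModel`, the LSTM decoder, the margin) are hypothetical stand-ins. It shows a shared visual encoder feeding a report generation branch and an image-text matching branch, with the model's own generated reports reused as harder negative samples for the matching loss.

```python
# Hypothetical sketch of a self-boosting two-branch model; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfBoostingModel(nn.Module):
    def __init__(self, vocab_size=1000, feat_dim=512, embed_dim=256):
        super().__init__()
        # Shared visual encoder (a CNN backbone in practice; a stub here).
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim))
        # Branch 1: report generation (an LSTM decoder as a stand-in).
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim + feat_dim, feat_dim, batch_first=True)
        self.out = nn.Linear(feat_dim, vocab_size)
        # Branch 2: image-text matching in a shared embedding space.
        self.img_proj = nn.Linear(feat_dim, embed_dim)
        self.txt_enc = nn.LSTM(embed_dim, embed_dim, batch_first=True)

    def generate_logits(self, images, tokens):
        v = self.encoder(images)                          # (B, feat_dim)
        e = self.embed(tokens)                            # (B, T, embed_dim)
        v_rep = v.unsqueeze(1).expand(-1, e.size(1), -1)  # tile image feature over time
        h, _ = self.decoder(torch.cat([e, v_rep], dim=-1))
        return self.out(h)                                # (B, T, vocab_size)

    def matching_score(self, images, tokens):
        v = self.img_proj(self.encoder(images))           # (B, embed_dim)
        _, (t, _) = self.txt_enc(self.embed(tokens))      # final hidden state as text feature
        return F.cosine_similarity(v, t.squeeze(0))       # (B,)

def self_boosting_step(model, images, gt_tokens, margin=0.2):
    # 1) Main task: report generation with teacher forcing.
    logits = model.generate_logits(images, gt_tokens[:, :-1])
    gen_loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               gt_tokens[:, 1:].reshape(-1))
    # 2) Auxiliary task: the model's own (detached) generations serve as
    #    harder negatives for the matching branch, so each branch boosts the other.
    with torch.no_grad():
        fake_tokens = logits.argmax(-1)                   # greedy decode as a stand-in
    pos = model.matching_score(images, gt_tokens[:, 1:])
    neg = model.matching_score(images, fake_tokens)
    match_loss = torch.clamp(margin - pos + neg, min=0).mean()
    return gen_loss + match_loss
```

As generation improves, the generated "fake" reports move closer to the ground truth, so the margin loss keeps pushing the matching branch to learn finer-grained visual and text features, which the shared encoder then feeds back to the generation branch.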

Publication Date


  • 2021

Citation


  • Wang, Z., Zhou, L., Wang, L., & Li, X. (2021). A Self-boosting Framework for Automated Radiographic Report Generation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 2433-2442). doi:10.1109/CVPR46437.2021.00246

Scopus EID


  • 2-s2.0-85123220605

Start Page


  • 2433

End Page


  • 2442
