
Defensive Few-shot Learning

Journal Article


Abstract


  • This paper investigates a new and challenging problem, defensive few-shot learning, which aims to learn a few-shot model that is robust against adversarial attacks. Simply applying existing adversarial defense methods to few-shot learning cannot solve this problem effectively, because the commonly assumed sample-level distribution consistency between the training and test sets no longer holds in the few-shot setting. To address this, we develop a general defensive few-shot learning (DFSL) framework that answers two key questions: (1) how to transfer adversarial defense knowledge from one sample distribution to another, and (2) how to narrow the distribution gap between clean and adversarial examples under the few-shot setting. For the first question, we propose an episode-based adversarial training mechanism that assumes a task-level distribution consistency to better transfer the adversarial defense knowledge. For the second question, within each few-shot task, we design two kinds of distribution consistency criteria that narrow the distribution gap between clean and adversarial examples from the feature-wise and prediction-wise perspectives, respectively. Extensive experiments demonstrate that the proposed framework effectively makes existing few-shot models robust against adversarial attacks. Code is available at https://github.com/WenbinLee/DefensiveFSL.git.
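
To make the two ideas concrete, below is a minimal, hypothetical PyTorch sketch of one episode-based adversarial training step. It is not the paper's actual implementation (see the linked repository for that): FGSM stands in for the attack, a ProtoNet-style prototype classifier stands in for the few-shot model, and an MSE term (feature-wise) plus a KL term (prediction-wise) stand in for the two consistency criteria; the encoder, loss weights, and episode sizes are all illustrative assumptions.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy encoder standing in for the few-shot backbone (assumption).
encoder = torch.nn.Sequential(
    torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16)
)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def prototype_logits(query_feat, support_feat, support_y, n_way):
    # Class prototypes = mean support embedding per class (ProtoNet-style stand-in).
    protos = torch.stack([support_feat[support_y == c].mean(0) for c in range(n_way)])
    # Negative squared Euclidean distance to each prototype as logits.
    return -torch.cdist(query_feat, protos) ** 2

def fgsm(x, y, support_feat, support_y, n_way, eps=0.03):
    # One-step FGSM on the query set (a simple stand-in for the attack).
    x_adv = x.clone().requires_grad_(True)
    logits = prototype_logits(encoder(x_adv), support_feat, support_y, n_way)
    loss = F.cross_entropy(logits, y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).detach()

n_way, k_shot, q_num = 5, 1, 15
for episode in range(3):
    # Sample a toy episode (random tensors in place of real images).
    support_x = torch.randn(n_way * k_shot, 32)
    support_y = torch.arange(n_way).repeat_interleave(k_shot)
    query_x = torch.randn(n_way * q_num, 32)
    query_y = torch.arange(n_way).repeat_interleave(q_num)

    support_feat = encoder(support_x)
    query_adv = fgsm(query_x, query_y, support_feat.detach(), support_y, n_way)

    feat_clean, feat_adv = encoder(query_x), encoder(query_adv)
    logit_clean = prototype_logits(feat_clean, support_feat, support_y, n_way)
    logit_adv = prototype_logits(feat_adv, support_feat, support_y, n_way)

    # Task loss on clean and adversarial queries ...
    loss = F.cross_entropy(logit_clean, query_y) + F.cross_entropy(logit_adv, query_y)
    # ... plus a feature-wise consistency term (pull adversarial features
    # toward the clean features; illustrative weight of 1.0) ...
    loss = loss + 1.0 * F.mse_loss(feat_adv, feat_clean.detach())
    # ... and a prediction-wise consistency term (KL between the adversarial
    # and clean class posteriors; illustrative weight of 1.0).
    loss = loss + 1.0 * F.kl_div(
        F.log_softmax(logit_adv, dim=1),
        F.softmax(logit_clean, dim=1).detach(),
        reduction="batchmean",
    )

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"episode {episode}: loss={loss.item():.3f}")

The point the sketch illustrates is that adversarial examples are generated and defended against per episode, so the defense knowledge is learned at the task level rather than tied to one fixed sample distribution.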

Publication Date


  • 2022

Citation


  • Li, W., Wang, L., Zhang, X., Qi, L., Huo, J., Gao, Y., & Luo, J. (2022). Defensive Few-shot Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence. doi:10.1109/TPAMI.2022.3213755

Scopus EID


  • 2-s2.0-85139850835
