Citation: SUI Yuteng, DAI Linlin, ZHU Yuhao, JING Hui. Adversarial robust head detection model oriented to railway passenger transport scenes[J]. Railway Computer Application, 2023, 32(6): 14-19. DOI: 10.3969/j.issn.1005-8451.2023.06.03

Adversarial robust head detection model oriented to railway passenger transport scenes

  • Abstract: The crowd-size estimation algorithm based on head detection can provide effective decision-making support for railway passenger stations in coping with sudden passenger flow and preventing crowd aggregation, but the deep learning models used for head detection are easily affected by adversarial samples. To improve the adversarial robustness of the deep learning model, this paper established a head detection model based on the RetinaNet algorithm. Two adversarial attack methods, the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), were used to generate adversarial samples on the Brainwash dataset; the mAP of the initial model dropped significantly on the adversarial sample datasets, verifying that the adversarial attacks effectively degrade model performance. The model was then adversarially trained, and the mAP of the adversarially trained model improved significantly on all of the adversarial sample validation datasets. The experimental results show that the adversarially trained head detection model can effectively defend against adversarial sample attacks and improve both detection performance and adversarial robustness.
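
The abstract only names the attack methods; as a rough, minimal sketch of how FGSM and PGD adversarial samples are typically generated against a detection model, the following PyTorch code is provided. The `model`, `loss_fn`, `epsilon`, `alpha` and `steps` names are illustrative assumptions and are not taken from the paper.

```python
import torch

def fgsm_attack(model, images, targets, loss_fn, epsilon=8/255):
    """Single-step FGSM: move each pixel by epsilon in the direction of
    the sign of the loss gradient. All arguments are placeholders."""
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), targets)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def pgd_attack(model, images, targets, loss_fn, epsilon=8/255,
               alpha=2/255, steps=10):
    """Multi-step PGD: repeat small FGSM-style steps and project the
    perturbation back into the L-infinity epsilon-ball after each step."""
    orig = images.clone().detach()
    adv = orig + torch.empty_like(orig).uniform_(-epsilon, epsilon)
    adv = adv.clamp(0.0, 1.0)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), targets)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        # keep the accumulated perturbation within the epsilon-ball
        adv = orig + (adv - orig).clamp(-epsilon, epsilon)
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```

PGD differs from FGSM in taking several small projected steps instead of a single step, which is why the two are evaluated as separate attack settings.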

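
For the adversarial training stage mentioned in the abstract, a hypothetical training step could mix clean and adversarial losses as sketched below; the `adv_ratio` weighting and the choice of attack (e.g. the `pgd_attack` sketch above) are assumptions for illustration, not the paper's actual training recipe.

```python
def adversarial_training_step(model, optimizer, images, targets,
                              loss_fn, attack, adv_ratio=0.5):
    """One illustrative adversarial training step: craft adversarial
    samples for the current batch, then update the detector on a
    weighted mix of clean and adversarial losses."""
    # generate adversarial versions of the batch, e.g. with pgd_attack
    adv_images = attack(model, images, targets, loss_fn)

    optimizer.zero_grad()
    clean_loss = loss_fn(model(images), targets)
    adv_loss = loss_fn(model(adv_images), targets)
    loss = (1 - adv_ratio) * clean_loss + adv_ratio * adv_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on such a mix is what the abstract refers to as adversarial training, after which the model's mAP on the adversarial validation sets is reported to improve significantly.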
