Vehicle Detection Based on YOLOv7 Improved by Attention Mechanism and Feature Fusion
DOI:
CSTR:
Author:
Affiliation:

1. Zhengzhou Technology and Business University; 2. Beijing Institute of Technology, Zhuhai Campus

Author profile:

Corresponding author:

CLC number:

TP391.4

Fund project:

Industry-University Collaborative Education Project of the Ministry of Education, Henan Province (220600440151815)



Abstract:

In intelligent assisted driving scenarios, vehicle detection must cope with small targets and severe occlusion, which leads to incomplete perception of the traffic environment. To address these issues, a vehicle detection method based on an improved YOLOv7 is proposed, designed to prevent information loss during feature transmission. First, the channel and spatial attention branches of the BAM attention mechanism are improved to reduce the loss of feature information. The meta-ACON activation function is then used to enhance the network's effective representation of features. Finally, weighted feature fusion and cross-layer connections are introduced to fuse features from different layers and further avoid information loss. Experimental results show that the mAP of the improved method on the KITTI vehicle dataset reaches 96.05%. Compared with the original YOLOv7, the detection speed is essentially unchanged while accuracy improves by 3.62%, and detection of small-target and occluded vehicles improves significantly.
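For readers unfamiliar with the two components named in the abstract, the following minimal NumPy sketch illustrates them in isolation: the ACON-C form that meta-ACON builds on (meta-ACON additionally learns the switching factor beta per channel with a small auxiliary network, which is omitted here), and BiFPN-style fast normalized weighted feature fusion. The feature maps, weights, and default parameters below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def acon_c(x, p1=1.0, p2=0.0, beta=1.0):
    # ACON-C: (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x.
    # meta-ACON learns beta with a small auxiliary network; here beta is
    # a fixed scalar for illustration. With p1=1, p2=0, beta=1 this
    # reduces to the Swish/SiLU activation x * sigmoid(x).
    d = (p1 - p2) * x
    return d * sigmoid(beta * d) + p2 * x

def weighted_fusion(features, weights, eps=1e-4):
    # Fast normalized fusion: O = sum_i(w_i * F_i) / (sum_j(w_j) + eps).
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU keeps weights non-negative
    norm = w / (w.sum() + eps)                             # contributions sum to ~1
    return sum(wi * fi for wi, fi in zip(norm, features))

# Toy example: fuse two same-resolution feature maps from different layers.
f_shallow = np.ones((4, 4))      # stands in for a high-resolution, shallow feature
f_deep = np.full((4, 4), 3.0)    # stands in for an upsampled, deep feature
fused = weighted_fusion([f_shallow, f_deep], [1.0, 1.0])
activated = acon_c(fused)        # activation applied to the fused map
```

The learnable weights let the network favor whichever input layer carries more useful information at each fusion node, which is what motivates weighted fusion over a plain element-wise sum.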

History
  • Received: 2023-04-26
  • Revised: 2023-07-06
  • Accepted: 2023-07-10
  • Online publication date:
  • Publication date: