YOLOv5-Fog: A Multiobjective Visual Detection Algorithm for Fog Driving Scenes Based on Improved YOLOv5
With the rapid development of deep learning in recent years, the level of automatic driving perception has also increased substantially. However, automatic driving perception under adverse conditions, such as fog, remains a significant obstacle. The existing fog-oriented detection algorithms are unable to simultaneously address the detection accuracy and detection speed.
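The abstract says the synthetic fog dataset is built from a virtual-scene dataset plus per-pixel depth information. A common way to render fog from depth is the standard atmospheric scattering model, I = J·t + A·(1 − t) with transmission t = exp(−β·d); the sketch below illustrates that model only — it is not necessarily the authors' exact pipeline, and `beta` and `airlight` are illustrative values:

```python
import numpy as np

def add_synthetic_fog(image, depth, beta=1.5, airlight=0.9):
    """Render fog onto a clear image from a per-pixel depth map.

    image: HxWx3 float array in [0, 1]; depth: HxW array (arbitrary units).
    Uses the atmospheric scattering model I = J*t + A*(1 - t),
    where transmission t = exp(-beta * depth).
    """
    t = np.exp(-beta * depth)[..., np.newaxis]  # transmission falls off with depth
    return image * t + airlight * (1.0 - t)     # blend toward the airlight color

# Toy example: a uniform clear image; farther pixels receive more fog.
img = np.full((2, 2, 3), 0.2)
depth = np.array([[0.0, 0.5],
                  [1.0, 2.0]])
foggy = add_synthetic_fog(img, depth)
```

At depth 0 the pixel is unchanged, and as depth grows the pixel converges to the airlight value, which mimics how real haze washes out distant scenery.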
| Published in | IEEE transactions on instrumentation and measurement Vol. 71; pp. 1 - 12 |
|---|---|
| Main Authors | Wang, Hai; Xu, Yansong; He, Youguo; Cai, Yingfeng; Chen, Long; Li, Yicheng; Sotelo, Miguel Angel; Li, Zhixiong |
| Format | Journal Article |
| Language | English |
| Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2022 |
| Subjects | |
| Online Access | Get full text |
| ISSN | 0018-9456 1557-9662 |
| DOI | 10.1109/TIM.2022.3196954 |
| Abstract | With the rapid development of deep learning in recent years, the level of automatic driving perception has also increased substantially. However, automatic driving perception under adverse conditions, such as fog, remains a significant obstacle. The existing fog-oriented detection algorithms are unable to simultaneously address the detection accuracy and detection speed. Based on improved YOLOv5, this work provides a multiobject detection network for fog driving scenes. We construct a synthetic fog dataset by using the dataset of a virtual scene and the depth information of the image. Second, we present a detection network for driving in fog based on improved YOLOv5. The ResNeXt model, which has been modified by structural re-parameterization, serves as the model's backbone. We build a new feature enhancement module (FEM) in response to the lack of features in fog scene images and use the attention mechanism to help the detection network pay more attention to the more useful features in the fog scenes. The test results show that the proposed fog multitarget detection network outperforms the original YOLOv5 in terms of detection accuracy and speed. The accuracy of the Real-world Task-driven Testing Set (RTTS) public dataset is 77.8%, and the detection speed is 31 frames/s, which is 14 frames faster as compared with the original YOLOv5. |
|---|---|
| Author | Li, Yicheng Xu, Yansong He, Youguo Chen, Long Wang, Hai Sotelo, Miguel Angel Cai, Yingfeng Li, Zhixiong |
| Author_xml | – sequence: 1 givenname: Hai orcidid: 0000-0002-9136-8091 surname: Wang fullname: Wang, Hai email: wanghai1019@163.com organization: School of Automotive and Traffic Engineering, Jiangsu University, Zhenjiang, China – sequence: 2 givenname: Yansong orcidid: 0000-0002-9652-2780 surname: Xu fullname: Xu, Yansong email: xu1157658889@163.com organization: School of Automotive and Traffic Engineering, Jiangsu University, Zhenjiang, China – sequence: 3 givenname: Youguo orcidid: 0000-0003-2648-0025 surname: He fullname: He, Youguo email: hyg197715@163.com organization: Automotive Engineering Research Institute, Jiangsu University, Zhenjiang, China – sequence: 4 givenname: Yingfeng orcidid: 0000-0002-0633-9887 surname: Cai fullname: Cai, Yingfeng email: caicaixiao0304@126.com organization: Automotive Engineering Research Institute, Jiangsu University, Zhenjiang, China – sequence: 5 givenname: Long orcidid: 0000-0002-2079-3867 surname: Chen fullname: Chen, Long email: chenlong@ujs.edu.cn organization: Automotive Engineering Research Institute, Jiangsu University, Zhenjiang, China – sequence: 6 givenname: Yicheng orcidid: 0000-0003-2937-7162 surname: Li fullname: Li, Yicheng email: liyucheng070@163.com organization: Automotive Engineering Research Institute, Jiangsu University, Zhenjiang, China – sequence: 7 givenname: Miguel Angel orcidid: 0000-0001-8809-2103 surname: Sotelo fullname: Sotelo, Miguel Angel email: miguel.sotelo@uah.es organization: Department of Computer Engineering, University of Alcalá, Alcalá de Henares, Madrid, Spain – sequence: 8 givenname: Zhixiong orcidid: 0000-0003-4067-0669 surname: Li fullname: Li, Zhixiong email: zhixiong.li@yonsei.ac.kr organization: Yonsei Frontier Laboratory, Yonsei University, Seoul, Republic of Korea |
| CODEN | IEIMAO |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
| Copyright_xml | – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
| DOI | 10.1109/TIM.2022.3196954 |
| DatabaseName | IEEE Xplore (IEEE) IEEE All-Society Periodicals Package (ASPP) 1998–Present IEEE Electronic Library (IEL) CrossRef Electronics & Communications Abstracts Solid State and Superconductivity Abstracts Technology Research Database Advanced Technologies Database with Aerospace |
| DatabaseTitle | CrossRef Solid State and Superconductivity Abstracts Technology Research Database Advanced Technologies Database with Aerospace Electronics & Communications Abstracts |
| DatabaseTitleList | Solid State and Superconductivity Abstracts |
| Database_xml | – sequence: 1 dbid: RIE name: IEEE Xplore url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/ sourceTypes: Publisher |
| DeliveryMethod | fulltext_linktorsrc |
| Discipline | Engineering Physics |
| EISSN | 1557-9662 |
| EndPage | 12 |
| ExternalDocumentID | 10_1109_TIM_2022_3196954 9851677 |
| Genre | orig-research |
| GrantInformation_xml | – fundername: National Natural Science Foundation of China grantid: U20A20333; 52072160; 51875255 funderid: 10.13039/501100001809 – fundername: Key Research and Development Program of Jiangsu Province grantid: BE2019010-2; BE2020083-3 – fundername: Jiangsu Province’s Six Talent Peaks grantid: TD-GDZB-022 funderid: 10.13039/501100010014 – fundername: Narodowego Centrum Nauki Poland grantid: 2020/37/K/ST8/02748 – fundername: Zhenjiang Key Research and Development Program grantid: GY2020006 |
| IEDL.DBID | RIE |
| ISSN | 0018-9456 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| LinkModel | DirectLink |
| ORCID | 0000-0002-9136-8091 0000-0002-0633-9887 0000-0003-4067-0669 0000-0003-2937-7162 0000-0002-9652-2780 0000-0001-8809-2103 0000-0002-2079-3867 0000-0003-2648-0025 |
| PQID | 2703133845 |
| PQPubID | 85462 |
| PageCount | 12 |
| ParticipantIDs | ieee_primary_9851677 crossref_primary_10_1109_TIM_2022_3196954 proquest_journals_2703133845 crossref_citationtrail_10_1109_TIM_2022_3196954 |
| PublicationCentury | 2000 |
| PublicationDate | 2022 |
| PublicationDateYYYYMMDD | 2022-01-01 |
| PublicationDate_xml | – year: 2022 text: 20220000 |
| PublicationDecade | 2020 |
| PublicationPlace | New York |
| PublicationPlace_xml | – name: New York |
| PublicationTitle | IEEE transactions on instrumentation and measurement |
| PublicationTitleAbbrev | TIM |
| PublicationYear | 2022 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| Publisher_xml | – name: IEEE – name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| SSID | ssj0007647 |
| SourceID | proquest crossref ieee |
| SourceType | Aggregation Database Enrichment Source Index Database Publisher |
| StartPage | 1 |
| SubjectTerms | 2-D object detection Accuracy Algorithms Atmospheric modeling autonomous driving Autonomous vehicles complex traffic conditions Datasets Detection algorithms Feature extraction Fog Machine learning Meteorology Object detection Parameterization Perception Training Virtual reality |
| Title | YOLOv5-Fog: A Multiobjective Visual Detection Algorithm for Fog Driving Scenes Based on Improved YOLOv5 |
| URI | https://ieeexplore.ieee.org/document/9851677 https://www.proquest.com/docview/2703133845 |
| Volume | 71 |
| journalDatabaseRights | – providerCode: PRVIEE databaseName: IEEE Xplore customDbUrl: eissn: 1557-9662 dateEnd: 99991231 omitProxy: false ssIdentifier: ssj0007647 issn: 0018-9456 databaseCode: RIE dateStart: 19630101 isFulltext: true titleUrlDefault: https://ieeexplore.ieee.org/ providerName: IEEE |