Deep Dual-Resolution Networks for Real-Time and Accurate Semantic Segmentation of Traffic Scenes
| Published in | IEEE Transactions on Intelligent Transportation Systems, Vol. 24, No. 3, pp. 1-13 |
|---|---|
| Main Authors | Pan, Huihui; Hong, Yuanduo; Sun, Weichao; Jia, Yisong |
| Format | Journal Article |
| Language | English |
| Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.03.2023 |
| ISSN | 1524-9050 (print); 1558-0016 (electronic) |
| DOI | 10.1109/TITS.2022.3228042 |
| Abstract | Using light-weight architectures or reasoning on low-resolution images, recent methods realize very fast scene parsing, even running at more than 100 FPS on a single GPU. However, there is still a significant gap in performance between these real-time methods and models based on dilation backbones. To close this gap, we propose a family of deep dual-resolution networks (DDRNets) for real-time and accurate semantic segmentation, which consist of deep dual-resolution backbones and enhanced low-resolution contextual information extractors. The two deep branches and multiple bilateral fusions of the backbone generate higher-quality details than existing two-pathway methods. The enhanced contextual information extractor, named Deep Aggregation Pyramid Pooling Module (DAPPM), enlarges effective receptive fields and fuses multi-scale context from low-resolution feature maps at little time cost. Our method achieves a new state-of-the-art trade-off between accuracy and speed on both the Cityscapes and CamVid datasets. For full-resolution input, on a single 2080Ti GPU without hardware acceleration, DDRNet-23-slim yields 77.4% mIoU at 102 FPS on the Cityscapes test set and 74.7% mIoU at 230 FPS on the CamVid test set. With widely used test augmentation, our method is superior to most state-of-the-art models and requires much less computation. Code and trained models are available at https://github.com/ydhongHIT/DDRNet. |
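The DAPPM described in the abstract fuses multi-scale context hierarchically: each coarser pooled scale is upsampled and merged with the result of the previous scale before all scales are concatenated. A minimal NumPy sketch of that aggregation pattern follows; the real module's convolutions, batch normalization, learned 1x1 projections, and exact pooling kernels/strides are omitted, and the function names and pooling factors here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def block_avg_pool(x, factor):
    """Downsample a (C, H, W) feature map by averaging factor x factor blocks."""
    c, h, w = x.shape
    hp, wp = h // factor, w // factor
    x = x[:, :hp * factor, :wp * factor]
    return x.reshape(c, hp, factor, wp, factor).mean(axis=(2, 4))

def nearest_upsample(x, factor):
    """Nearest-neighbor upsampling of a (C, H, W) feature map."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def dappm_sketch(x, factors=(2, 4, 8)):
    """Hierarchical multi-scale aggregation in the spirit of DAPPM:
    pool the input at several scales, upsample each back, fuse it with
    the previous scale's result, then concatenate all scales."""
    c, h, w = x.shape
    scales = [x]
    prev = x
    for f in factors:
        pooled = nearest_upsample(block_avg_pool(x, f), f)[:, :h, :w]
        prev = pooled + prev  # hierarchical fusion (learned convs omitted here)
        scales.append(prev)
    # Global-context branch: image-level average broadcast back to (C, H, W).
    g = x.mean(axis=(1, 2), keepdims=True) * np.ones((1, h, w))
    scales.append(g)
    return np.concatenate(scales, axis=0)
```

On a (C, H, W) map this returns a (C * (len(factors) + 2), H, W) stack: the input itself, one hierarchically fused map per pooling factor, and a global-context map. In the paper the concatenation is compressed by a learned projection, which this sketch leaves out.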
| Authors | Pan, Huihui (ORCID 0000-0002-8931-1774); Hong, Yuanduo (ORCID 0000-0002-8684-6765); Sun, Weichao (ORCID 0000-0001-6837-3821); Jia, Yisong. All authors are with the Research Institute of Intelligent Control and Systems, Harbin Institute of Technology, Harbin, China. |
| CODEN | ITISFG | 
    
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 | 
    
| Discipline | Engineering | 
    
| Genre | orig-research | 
    
| Funding | Post-Doctoral Science Foundation (2019T120270; LBH-TZ2111); National Natural Science Foundation of China (U1964201; 62173108; 62022031); Fundamental Research Funds for the Central Universities; Major Scientific and Technological Special Project of Heilongjiang Province (2021ZX05A01) |
    
| IsPeerReviewed | true | 
    
| IsScholarly | true | 
    
| Subjects | Acceleration; autonomous driving; Computer architecture; Data mining; deep convolutional neural networks; Feature extraction; Feature maps; Image resolution; Image segmentation; Real time; Real-time systems; Semantic segmentation; Semantics; Task analysis; Test sets; Weight reduction |
    
| URI | https://ieeexplore.ieee.org/document/9996293; https://www.proquest.com/docview/2780987014 |
    