Low-latency automotive vision with event cameras

Bibliographic Details
Published in: Nature (London), Vol. 629, No. 8014, pp. 1034–1040
Main Authors: Gehrig, Daniel; Scaramuzza, Davide
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 30.05.2024
ISSN: 0028-0836
EISSN: 1476-4687
DOI: 10.1038/s41586-024-07409-w


Abstract The computer vision algorithms currently used in advanced driver assistance systems rely on image-based RGB cameras, leading to a critical bandwidth–latency trade-off for delivering safe driving experiences. To address this, event cameras have emerged as alternative vision sensors. Event cameras measure changes in intensity asynchronously, offering high temporal resolution and sparsity, markedly reducing bandwidth and latency requirements [1]. Despite these advantages, event-camera-based algorithms either are highly efficient but lag behind image-based ones in accuracy, or sacrifice the sparsity and efficiency of events to achieve comparable results. To overcome this, here we propose a hybrid event- and frame-based object detector that preserves the advantages of each modality and thus does not suffer from this trade-off. Our method exploits the high temporal resolution and sparsity of events and the rich but low-temporal-resolution information in standard images to generate efficient, high-rate object detections, reducing perceptual and computational latency. We show that a 20 frames per second (fps) RGB camera plus an event camera can achieve the same latency as a 5,000-fps camera with the bandwidth of a 45-fps camera, without compromising accuracy. Our approach paves the way for efficient and robust perception in edge-case scenarios by uncovering the potential of event cameras [2].
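The abstract's headline numbers can be sanity-checked with a back-of-envelope calculation. The sketch below is mine, not the paper's: it assumes 1280×720 8-bit grayscale frames and treats a frame camera's worst-case perceptual latency as one inter-frame gap.

```python
def frame_latency_ms(fps):
    # Worst-case wait for the next frame, in milliseconds.
    return 1000.0 / fps

def frame_bandwidth_bytes(fps, w=1280, h=720, bytes_per_px=1):
    # Raw frame-stream bandwidth in bytes per second (assumed resolution).
    return fps * w * h * bytes_per_px

target_latency = frame_latency_ms(5000)   # 0.2 ms: the 5,000-fps target
rgb_latency = frame_latency_ms(20)        # 50 ms gaps the event stream must fill
# Bandwidth headroom left for events if the hybrid system is to match
# a 45-fps camera's total bandwidth while using a 20-fps RGB stream:
event_budget = frame_bandwidth_bytes(45) - frame_bandwidth_bytes(20)
print(target_latency, rgb_latency, event_budget)
```

Under these assumptions, the event stream has roughly 23 MB/s of headroom to fill each 50 ms inter-frame gap with sub-millisecond detection updates.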
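The phrase "measure the changes in intensity asynchronously" refers to the standard event-camera pixel model from the literature, not the paper's own code: each pixel emits a signed event whenever its log-intensity drifts by a contrast threshold C from the level recorded at its previous event. A minimal sketch:

```python
import math

def pixel_events(samples, C=0.2):
    """samples: list of (timestamp, intensity) pairs for one pixel.
    Returns (timestamp, polarity) events under the idealized model:
    an event fires each time log-intensity moves >= C from the
    reference level, which then steps toward the new value."""
    ref = math.log(samples[0][1])
    events = []
    for t, intensity in samples[1:]:
        log_i = math.log(intensity)
        while abs(log_i - ref) >= C:
            pol = 1 if log_i > ref else -1
            ref += pol * C              # advance reference one threshold step
            events.append((t, pol))
    return events

# A brightness jump yields a short burst of ON events;
# constant illumination produces no output at all (sparsity).
evs = pixel_events([(0.0, 100), (1.0, 110), (2.0, 150), (3.0, 150)])
```

This is why event data are sparse and high-rate: bandwidth is spent only where and when intensity actually changes.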
Authors:
– Gehrig, Daniel (ORCID: 0000-0001-9952-3335; email: dgehrig@ifi.uzh.ch; Robotics and Perception Group, University of Zurich)
– Scaramuzza, Davide (ORCID: 0000-0002-3831-6778; email: sdavide@ifi.uzh.ch; Robotics and Perception Group, University of Zurich)
ContentType Journal Article
Copyright The Author(s) 2024
2024. The Author(s).
Copyright Nature Publishing Group May 30, 2024
Discipline Sciences (General)
Physics
EISSN 1476-4687
EndPage 1040
ExternalDocumentID 10.1038/s41586-024-07409-w
PMC11136662
38811712
10_1038_s41586_024_07409_w
Genre Journal Article
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 8014
Language English
License 2024. The Author(s).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
cc-by
ORCID 0000-0002-3831-6778
0000-0001-9952-3335
OpenAccessLink https://www.nature.com/articles/s41586-024-07409-w.pdf
PMID 38811712
PQID 3063799038
PQPubID 40569
PageCount 7
PublicationCentury 2000
PublicationDate 2024-05-30
PublicationDateYYYYMMDD 2024-05-30
PublicationDecade 2020
PublicationPlace London
PublicationSubtitle International weekly journal of science
PublicationTitle Nature (London)
PublicationTitleAbbrev Nature
PublicationTitleAlternate Nature
PublicationYear 2024
Publisher Nature Publishing Group UK
Nature Publishing Group
References ZhangZA flexible new technique for camera calibrationIEEE Trans. Pattern Anal. Mach. Intell.2000221330133410.1109/34.888718
ZhangLZhangHChenJWangLHybrid deblur net: deep non-uniform deblurring with event cameraIEEE Access2020814807514808310.1109/ACCESS.2020.3015759
Qi, C. R., Yi, L., Su, H. & Guibas, L. J. in Advances in Neural Information Processing Systems pages 5099–5108 (MIT, 2017).
Sony. Image Sensors for Automotive Use. https://www.sony-semicon.com/en/products/is/automotive/automotive.html (2023).
Perot, E., de Tournemire, P., Nitti, D., Masci, J. & Sironi, A. Learning to detect objects with a 1 megapixel event camera. In Proc. Advances in Neural Information Processing Systems 33 (NeurIPS) 16639–16652 (eds Larochelle, H. et al.) (2020).
Alonso, Iñigo and Murillo, A. C. EV-SegNet: semantic segmentation for event-based cameras. In Proc. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 1624–1633 (IEEE, 2019).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc.2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 770–778 (IEEE, 2016).
Zhou, Z. et al. RGB-event fusion for moving object detection in autonomous driving. In Proc. 2023 IEEE International Conference on Robotics and Automation (ICRA) 7808–7815 (IEEE, 2023).
Gehrig, D., Loquercio, A., Derpanis, K. G. & Scaramuzza, D. End-to-end learning of representations for asynchronous event-based data. In Proc. 2019 IEEE/CVF International Conference on Computer Vision (ICCV) 5632–5642 (IEEE, 2019).
Sekikawa, Y., Hara, K. & Saito, H. EventNet: asynchronous recursive event processing. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 3882–3891 (IEEE, 2019).
Wang, X., Su, T., Da, F. & Yang, X. ProphNet: efficient agent-centric motion forecasting with anchor-informed proposals. In Proc.2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 21995–22003 (IEEE, 2023).
Fey, M., Lenssen, J. E., Weichert, F. & Müller, H. SplineCNN: fast geometric deep learning with continuous b-spline kernels. In Proc.2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition 869–877 (2018).
Chen, Nicholas F. Y. Pseudo-labels for supervised learning on dynamic vision sensor data, applied to object detection under ego-motion. In Proc.2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 757–709 (IEEE, 2018).
PoschCMatolinDWohlgenanntRA QVGA 143 dB dynamic range frame-free PWM image sensor with lossless pixel-level video compression and time-domain CDSIEEE J. Solid State Circuits2011462592752011IJSSC..46..259P10.1109/JSSC.2010.2085952
Messikommer, N. A., Gehrig, D., Loquercio, A. & Scaramuzza, D. Event-based asynchronous sparse convolutional networks. In Proc. 16th European Conference of Computer Vision (ECCV) 415–431 (ACM, 2020).
Orchard, G., Jayawant, A., Cohen, G. K. & Thakor, N. Converting static image datasets to spiking neuromorphic datasets using saccades. Front. Neurosci. 9, 437 (2015).
Bi, Y., Chadha, A., Abbas, A., Bourtsoulatze, E. & Andreopoulos, Y. Graph-based object classification for neuromorphic vision sensing. In Proc. 2019 IEEE/CVF International Conference on Computer Vision (ICCV) 491–501 (IEEE, 2019).
Prophesee. Evaluation Kit 2 HD. https://www.prophesee.ai/event-based-evk (2023).
Sun, Z., Messikommer, N., Gehrig, D. & Scaramuzza, D. ESS: learning event-based semantic segmentation from still images. In Proc. 17th European Conference of Computer Vision (ECCV) 341–357 (ACM, 2022).
Lichtsteiner, P., Posch, C. & Delbruck, T. A 128 × 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor. IEEE J. Solid State Circuits 43, 566–576 (2008).
Mitra, S., Fusi, S. & Indiveri, G. Real-time classification of complex patterns using spike-based learning in neuromorphic VLSI. IEEE Trans. Biomed. Circuits Syst. 3, 32–42 (2009).
Fei-Fei, L., Fergus, R. & Perona, P. Learning generative visual models from few training examples: an incremental bayesian approach tested on 101 object categories. In Proc.2004 Conference on Computer Vision and Pattern Recognition Workshop 178 (IEEE, 2004).
Prophesee. Transfer latency. https://support.prophesee.ai/portal/en/kb/articles/evk-latency (2023).
Cruise. Cruise 101: Learn the Basics of How a Cruise Car Navigates City Streets Safely and Efficiently. https://getcruise.com/technology (2023).
Schaefer, S., Gehrig, D. & Scaramuzza, D. AEGNN: asynchronous event-based graph neural networks. In Proc. Conference of Computer Vision and Pattern Recognition (CVPR) 12371–12381 (CVF, 2022).
Girshick, R. Fast R-CNN. In Proc. 2015 IEEE International Conference on Computer Vision (ICCV) 1440–1448 (IEEE, 2015).
Russakovsky, O. et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211–252 (2015).
Sanket, N. et al. EVDodgeNet: deep dynamic obstacle dodging with event cameras. In Proc. 2020 IEEE International Conference on Robotics and Automation (ICRA) 10651–10657 (IEEE, 2020).
Lin, T.-Y. et al. Microsoft COCO: common objects in context. In Proc. 2014 European Conference of Computer Vision (ECCV), 740–755 (Springer, 2014).
Fey, M. & Lenssen, J. E. Fast graph representation learning with PyTorch geometric. In Proc. ICLR 2019 Workshop on Representation Learning on Graphs and Manifolds (ICLR, 2019).
Groh, F., Wieschollek, P. & Lensch, H. P. A. Flex-convolution (million-scale point-cloud learning beyond grid-worlds). In Proc. Computer Vision – ACCV 2018 Vol. 11361 (eds Jawahar, C. et al.) 105–122 (Springer, 2018).
Zeng, W., Liang, M., Liao, R. & Urtasun, R. Systems and methods for actor motion forecasting within a surrounding environment of an autonomous vehicle, US Patent 0347941 (2023).
Falanga, D., Kleber, K. & Scaramuzza, D. Dynamic obstacle avoidance for quadrotors with event cameras. Sci. Robot. 5, eaaz9712 (2020).
Tulyakov, S. et al. Time lens: event-based video frame interpolation. In Proc. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 16150–16159 (IEEE, 2021).
Ge, Z., Liu, S., Wang, F., Li, Z. & Sun, J. YOLOX: exceeding YOLO series in 2021. Preprint at https://arxiv.org/abs/2107.08430 (2021).
Cui, A., Casas, S., Wong, K., Suo, S. & Urtasun, R. GoRela: go relative for viewpoint-invariant motion forecasting. In Proc. 2023 IEEE International Conference on Robotics and Automation (ICRA) 7801–7807 (IEEE, 2022).
Redmon, J. & Farhadi, A. YOLOv3: an incremental improvement. Preprint at https://arxiv.org/abs/1804.02767 (2018).
Izmailov, P., Podoprikhin, D., Garipov, T., Vetrov, D. & Wilson, A. G. Averaging weights leads to wider optima and better generalization. In Proc. 34th Conference on Uncertainty in Artificial Intelligence (UAI) Vol. 2 (eds Silva, R. et al.) 876–885 (Association for Uncertainty in Artificial Intelligence, 2018).
Li, J. et al. Asynchronous spatio-temporal memory network for continuous event-based object detection. IEEE Trans. Image Process. 31, 2975–2987 (2022).
Gehrig, M., Shrestha, S. B., Mouritzen, D. & Scaramuzza, D. Event-based angular velocity regression with spiking networks. In Proc. 2020 IEEE International Conference on Robotics and Automation (ICRA) 4195–4202 (IEEE, 2020).
Jouppi, N. P. et al. Ten lessons from three generations shaped Google’s TPUv4i: industrial product. In Proc. 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA) 1–14 (IEEE, 2021).
Girshick, R., Donahue, J., Darrell, T. & Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proc. 2014 IEEE Conference on Computer Vision and Pattern Recognition 580–587 (IEEE, 2014).
Rebecq, H., Ranftl, R., Koltun, V. & Scaramuzza, D. Events-to-video: bringing modern computer vision to event cameras. In Proc. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 3852–3861 (IEEE, 2019).
de Tournemire, P., Nitti, D., Perot, E., Migliore, D. & Sironi, A. A large scale event-based detection dataset for automotive. Preprint at https://arxiv.org/abs/2001.08499 (2020).
Sironi, A., Brambilla, M., Bourdis, N., Lagorce, X. & Benosman, R. HATS: histograms of averaged time surfaces for robust event-based object classification. In Proc. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition 1731–1740 (IEEE, 2018).
Deng, Y., Chen, H., Liu, H. & Li, Y. A voxel graph CNN for object classification with event cameras. In Proc. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 1162–1171 (IEEE, 2022).
Loshchilov, I. & Hutter, F. Decoupled weight decay regularization. In Proc. 2019 International Conference on Learning Representations (OpenReview.net, 2019).
Liu, W. et al. SSD: single shot multibox detector. In Proc. 2016 European Conference of Computer Vision (ECCV) Vol. 9905, 21–37 (eds Leibe, B. et al.) (Springer, 2016).
OmniVision. OX08B4C 8.3 MP Product Brief. https://www.ovt.com/wp-content/uploads/2022/01/OX08B4C-PB-v1.0-WEB.pdf (2023).
Mitrokhin, A., Hua, Z., Fermuller, C. & Aloimonos, Y. Learning visual motion segmentation using event surfaces. In Proc. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 14402–14411 (IEEE, 2020).
Zhao, J., Ji, S., Cai, Z., Zeng, Y. & Wang, Y. Moving object detection and tracking by event frame from neuromorphic vision sensors. Biomimetics 7, 31 (2022).
Gallego, G. et al. Event-based vision: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 44, 154–180 (2020).
Graham, B., Engelcke, M. & van der Maaten, L. 3D semantic segmentation with submanifold sparse convolutional networks. In Proc. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition 9224–9232 (IEEE, 2018).
Cannici, M., Ciccone, M., Romanoni, A. & Matteucci, M. A differentiable recurrent surface for asynchronous event-based data. In Proc. European Conference of Computer Vision (ECCV) Vol. 12365 (eds Vedaldi, A. et al.) 136–152 (Springer, 2020).
SSID ssj0005174
Score 2.6969593
Snippet The computer vision algorithms used currently in advanced driver assistance systems rely on image-based RGB cameras, leading to a critical bandwidth–latency...
The computer vision algorithms used currently in advanced driver assistance systems rely on image-based RGB cameras, leading to a critical bandwidth-latency...
SourceID unpaywall
pubmedcentral
proquest
pubmed
crossref
springer
SourceType Open Access Repository
Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 1034
SubjectTerms 639/166
639/705/117
Accuracy
Advanced driver assistance systems
Algorithms
Bandwidths
Cameras
Computer vision
Efficiency
Frames per second
Graphs
Humanities and Social Sciences
Latency
Methods
multidisciplinary
Neural networks
Science
Science (multidisciplinary)
Sensors
Sparsity
Temporal resolution
Tradeoffs
Title Low-latency automotive vision with event cameras
URI https://link.springer.com/article/10.1038/s41586-024-07409-w
https://www.ncbi.nlm.nih.gov/pubmed/38811712
https://www.proquest.com/docview/3063799038
https://www.proquest.com/docview/3062527998
https://pubmed.ncbi.nlm.nih.gov/PMC11136662
https://www.nature.com/articles/s41586-024-07409-w.pdf
UnpaywallVersion publishedVersion
Volume 629
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVLSH
  databaseName: SpringerLink Journals
  customDbUrl:
  mediaType: online
  eissn: 1476-4687
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0005174
  issn: 0028-0836
  databaseCode: AFBBN
  dateStart: 20190103
  isFulltext: true
  providerName: Library Specific Holdings
– providerCode: PRVPQU
  databaseName: Health & Medical Collection
  customDbUrl:
  eissn: 1476-4687
  dateEnd: 20241101
  omitProxy: true
  ssIdentifier: ssj0005174
  issn: 0028-0836
  databaseCode: 7X7
  dateStart: 19880107
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/healthcomplete
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: ProQuest Central
  customDbUrl: http://www.proquest.com/pqcentral?accountid=15518
  eissn: 1476-4687
  dateEnd: 20241101
  omitProxy: true
  ssIdentifier: ssj0005174
  issn: 0028-0836
  databaseCode: BENPR
  dateStart: 19880107
  isFulltext: true
  titleUrlDefault: https://www.proquest.com/central
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: ProQuest Technology Collection
  customDbUrl:
  eissn: 1476-4687
  dateEnd: 20241101
  omitProxy: true
  ssIdentifier: ssj0005174
  issn: 0028-0836
  databaseCode: 8FG
  dateStart: 19900104
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/technologycollection1
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Public Health Database
  customDbUrl:
  eissn: 1476-4687
  dateEnd: 20241101
  omitProxy: true
  ssIdentifier: ssj0005174
  issn: 0028-0836
  databaseCode: 8C1
  dateStart: 19880107
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/publichealth
  providerName: ProQuest
linkProvider Unpaywall
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Low-latency+automotive+vision+with+event+cameras&rft.jtitle=Nature+%28London%29&rft.au=Gehrig%2C+Daniel&rft.au=Scaramuzza%2C+Davide&rft.date=2024-05-30&rft.issn=1476-4687&rft.eissn=1476-4687&rft.volume=629&rft.issue=8014&rft.spage=1034&rft_id=info:doi/10.1038%2Fs41586-024-07409-w&rft.externalDBID=NO_FULL_TEXT