Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond

Bibliographic Details
Published in IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 44, No. 10, pp. 7128-7148
Main Authors Liu, Fanghui; Huang, Xiaolin; Chen, Yudong; Suykens, Johan A. K.
Format Journal Article
Language English
Published New York IEEE 01.10.2022
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN 0162-8828; 1939-3539
EISSN 1939-3539; 2160-9292
DOI 10.1109/TPAMI.2021.3097011

Abstract The class of random features is one of the most popular techniques to speed up kernel methods in large-scale problems. Related works have been recognized by the NeurIPS Test-of-Time Award in 2017 and as an ICML Best Paper Finalist in 2019. The body of work on random features has grown rapidly, and hence it is desirable to have a comprehensive overview of this topic explaining the connections among various algorithms and theoretical results. In this survey, we systematically review the work on random features from the past ten years. First, the motivations, characteristics, and contributions of representative random features based algorithms are summarized according to their sampling schemes, learning procedures, variance reduction properties, and how they exploit training data. Second, we review theoretical results centered around the following key question: how many random features are needed to ensure high approximation quality, or no loss in the empirical/expected risk of the learned estimator? Third, we provide a comprehensive evaluation of popular random features based algorithms on several large-scale benchmark datasets and discuss their approximation quality and prediction performance for classification. Last, we discuss the relationship between random features and modern over-parameterized deep neural networks (DNNs), including the use of high-dimensional random features in the analysis of DNNs as well as the gaps between current theoretical and empirical results. This survey may serve as a gentle introduction to this topic, and as a users' guide for practitioners interested in applying the representative algorithms and understanding the theoretical results under various technical assumptions. We hope that this survey will facilitate discussion of the open problems in this topic and, more importantly, shed light on future research directions. Due to the page limit, we suggest that readers refer to the full version of this survey at https://arxiv.org/abs/2004.11154.
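To make the surveyed idea concrete, below is a minimal sketch (not part of this record) of the classical random Fourier features construction of Rahimi and Recht, which the algorithms reviewed in the survey build on: for a shift-invariant kernel such as the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), Bochner's theorem lets one draw frequencies w ~ N(0, sigma^{-2} I) and phases b ~ Uniform[0, 2*pi], so that the randomized map z(x) = sqrt(2/D) * cos(W^T x + b) satisfies z(x)^T z(y) ≈ k(x, y). All function names and parameter choices here are illustrative assumptions, not taken from the survey itself.

    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        # Exact Gaussian (RBF) kernel matrix: K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2)).
        sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq_dists / (2.0 * sigma**2))

    def random_fourier_features(X, D=500, sigma=1.0, seed=0):
        # Map X (n x d) to z(X) (n x D) so that z(X) @ z(Y).T approximates the kernel matrix.
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        W = rng.normal(scale=1.0 / sigma, size=(d, D))  # frequencies w ~ N(0, sigma^{-2} I)
        b = rng.uniform(0.0, 2.0 * np.pi, size=D)       # phases b ~ Uniform[0, 2*pi]
        return np.sqrt(2.0 / D) * np.cos(X @ W + b)

    rng = np.random.default_rng(42)
    X = rng.standard_normal((200, 5))
    Z = random_fourier_features(X, D=2000)
    err = np.abs(Z @ Z.T - gaussian_kernel(X, X)).max()
    print(f"max entrywise approximation error: {err:.3f}")  # decreases as D grows

The number of features D trades computation against approximation quality; how large D must be to preserve approximation or risk guarantees is exactly the key theoretical question the survey reviews.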
Author_xml – sequence: 1
  givenname: Fanghui
  orcidid: 0000-0003-4133-7921
  surname: Liu
  fullname: Liu, Fanghui
  email: fanghui.liu@esat.kuleuven.be
  organization: Department of Electrical Engineering (ESAT-STADIUS), KU Leuven, Leuven, Belgium
– sequence: 2
  givenname: Xiaolin
  orcidid: 0000-0003-4285-6520
  surname: Huang
  fullname: Huang, Xiaolin
  email: xiaolinhuang@sjtu.edu.cn
  organization: Institute of Image Processing and Pattern Recognition, and Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
– sequence: 3
  givenname: Yudong
  orcidid: 0000-0002-6416-5635
  surname: Chen
  fullname: Chen, Yudong
  email: yudong.chen@cornell.edu
  organization: School of Operations Research and Information Engineering, Cornell University, Ithaca, NY, USA
– sequence: 4
  givenname: Johan A. K.
  orcidid: 0000-0002-8846-6352
  surname: Suykens
  fullname: Suykens, Johan A. K.
  email: johan.suykens@esat.kuleuven.be
  organization: Department of Electrical Engineering (ESAT-STADIUS), KU Leuven, Leuven, Belgium
CODEN ITPIDJ
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DOI 10.1109/TPAMI.2021.3097011
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
DatabaseTitle CrossRef
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
Discipline Engineering
Computer Science
EISSN 2160-9292
1939-3539
EndPage 7148
ExternalDocumentID 10_1109_TPAMI_2021_3097011
9495136
Genre orig-research
GrantInformation_xml – fundername: AI Research Program
– fundername: EU H2020 ICT-48 Network TAILOR
– fundername: National Science Foundation
  grantid: CCF-1657420; CCF-1704828
  funderid: 10.13039/501100008982
– fundername: Flemish Government
  grantid: GOA4917N
– fundername: SJTU Global Strategic Partnership Fund
– fundername: Shanghai Municipal Science and Technology Major Project
  grantid: 2021SHZDZX0102
– fundername: Stability analysis and performance improvement
– fundername: Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization
– fundername: European Research Council
  funderid: 10.13039/501100000781
– fundername: European Union's Horizon 2020 research and innovation program/ERC Advanced Grant E-DUALITY
  grantid: 787960
ISSN 0162-8828
1939-3539
IsPeerReviewed true
IsScholarly true
Issue 10
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0002-6416-5635
0000-0002-8846-6352
0000-0003-4133-7921
0000-0003-4285-6520
PMID 34310285
PQID 2714891140
PQPubID 85458
PageCount 21
PublicationDate 2022-10-01
PublicationPlace New York
PublicationTitle IEEE transactions on pattern analysis and machine intelligence
PublicationTitleAbbrev TPAMI
PublicationYear 2022
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
SecondaryResourceType review_article
StartPage 7128
SubjectTerms Algorithms
Approximation
Approximation algorithms
Artificial neural networks
Empirical analysis
generalization properties
Kernel
kernel approximation
Kernels
Loss measurement
Machine learning
Mathematical analysis
over-parameterized models
Prediction algorithms
Random features
Risk management
Scalability
Taxonomy
Title Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
URI https://ieeexplore.ieee.org/document/9495136
https://www.proquest.com/docview/2714891140
https://www.proquest.com/docview/2555639043
Volume 44