Variational Bayes Ensemble Learning Neural Networks With Compressed Feature Space
| Published in | IEEE Transactions on Neural Networks and Learning Systems Vol. 35; no. 1; pp. 1379 - 1385 |
|---|---|
| Main Authors | Liu, Zihuan; Bhattacharya, Shrijita; Maiti, Tapabrata |
| Format | Journal Article |
| Language | English |
| Published | United States: IEEE, 01.01.2024. The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| Subjects | |
| Online Access | Get full text |
| ISSN | 2162-237X (print); 2162-2388 (electronic) |
| DOI | 10.1109/TNNLS.2022.3172276 |
| Abstract | We consider the problem of nonparametric classification from a high-dimensional input vector (small *n*, large *p* problem). To handle the high-dimensional feature space, we propose a random projection (RP) of the feature space followed by training of a neural network (NN) on the compressed feature space. Unlike regularization techniques (lasso, ridge, etc.), which train on the full data, NNs based on the compressed feature space have significantly lower computational complexity and memory storage requirements. Nonetheless, a random compression-based method is often sensitive to the choice of compression. To address this issue, we adopt a Bayesian model averaging (BMA) approach and leverage the posterior model weights to determine: 1) the uncertainty under each compression and 2) the intrinsic dimensionality of the feature space (the effective dimension of the feature space useful for prediction). The final prediction is improved by averaging models with projected dimensions close to the intrinsic dimensionality. Furthermore, we propose a variational approach to the aforementioned BMA that allows for simultaneous estimation of both model weights and model-specific parameters. Since the proposed variational solution is parallelizable across compressions, it preserves the computational gain of frequentist ensemble techniques while providing the full uncertainty quantification of a Bayesian approach. We establish the asymptotic consistency of the proposed algorithm under a suitable characterization of the RPs and the prior parameters. Finally, we provide extensive numerical examples for empirical validation of the proposed method. |
|---|---|
| Author | Bhattacharya, Shrijita Liu, Zihuan Maiti, Tapabrata |
| Author Affiliations | 1. Zihuan Liu (ORCID 0000-0003-4251-1477), Collaborative Center for Statistics in Science, Yale University School of Public Health, New Haven, CT, USA. 2. Shrijita Bhattacharya (ORCID 0000-0001-6958-8613, bhatta61@msu.edu), Department of Statistics and Probability, Michigan State University, East Lansing, MI, USA. 3. Tapabrata Maiti (ORCID 0000-0002-9362-4984), Department of Statistics and Probability, Michigan State University, East Lansing, MI, USA |
| CODEN | ITNNAL |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
| DOI | 10.1109/TNNLS.2022.3172276 |
| Discipline | Computer Science |
| EISSN | 2162-2388 |
| EndPage | 1385 |
| ExternalDocumentID | 35584070 10_1109_TNNLS_2022_3172276 9777854 |
| Genre | orig-research Journal Article |
| GrantInformation_xml | – fundername: National Science Foundation (NSF) grantid: DMS-1952856; DMS-1924724 funderid: 10.13039/100000001 |
| ISSN | 2162-237X 2162-2388 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 1 |
| Language | English |
| License | https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/USG.html |
| LinkModel | DirectLink |
| ORCID | 0000-0003-4251-1477 0000-0002-9362-4984 0000-0001-6958-8613 |
| PMID | 35584070 |
| PQID | 2911492024 |
| PQPubID | 85436 |
| PageCount | 7 |
| PublicationCentury | 2000 |
| PublicationDate | 2024-01-01 |
| PublicationDateYYYYMMDD | 2024-01-01 |
| PublicationDecade | 2020 |
| PublicationPlace | United States |
| PublicationPlace_xml | – name: United States – name: Piscataway |
| PublicationTitle | IEEE Transactions on Neural Networks and Learning Systems |
| PublicationTitleAbbrev | TNNLS |
| PublicationTitleAlternate | IEEE Trans Neural Netw Learn Syst |
| PublicationYear | 2024 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| SourceID | proquest pubmed crossref ieee |
| SourceType | Aggregation Database Index Database Publisher |
| StartPage | 1379 |
| SubjectTerms | Algorithms Artificial neural networks Bayes methods Bayesian analysis Compression Computational modeling Ensemble learning Intrinsic dimensionality Machine learning Mathematical models model averaging Neural networks Numerical models Parameters Predictive models random compression Regularization Storage requirements Training Uncertainty variational inference (VI) |
| Title | Variational Bayes Ensemble Learning Neural Networks With Compressed Feature Space |
| URI | https://ieeexplore.ieee.org/document/9777854 https://www.ncbi.nlm.nih.gov/pubmed/35584070 https://www.proquest.com/docview/2911492024 https://www.proquest.com/docview/2666548948 |
| Volume | 35 |
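As an illustrative sketch of the idea described in the abstract — project a high-dimensional feature space down with a Gaussian random projection, fit one model per candidate projected dimension, then average predictions with weights reflecting how well each compression fits — the following toy example may help. Everything in it is an assumption for illustration: a plain logistic model stands in for the NN, and a softmax over held-out log-likelihoods stands in for the variational posterior model weights; it is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy small-n-large-p data: n = 200 samples, p = 500 features,
# with only the first 5 features informative.
n, p = 200, 500
X = rng.normal(size=(n, p))
y = (X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=n) > 0).astype(int)

def random_projection(X, m, rng):
    """Gaussian random projection of p features down to m."""
    Phi = rng.normal(size=(X.shape[1], m)) / np.sqrt(m)
    return X @ Phi

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def fit_logistic(X, y, steps=500, lr=0.1):
    """Gradient-ascent logistic regression (stand-in for the NN)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w += lr * X.T @ (y - sigmoid(X @ w)) / len(y)
    return w

# One model per candidate projected dimension; ensemble weights are a
# softmax of held-out log-likelihoods (a crude stand-in for the
# BMA / variational posterior model weights).
train, test = slice(0, 150), slice(150, None)
dims = [2, 5, 20, 50]
preds, scores = [], []
for m in dims:
    Z = random_projection(X, m, rng)
    w = fit_logistic(Z[train], y[train])
    pr = sigmoid(Z[test] @ w)
    ll = np.sum(y[test] * np.log(pr + 1e-12)
                + (1 - y[test]) * np.log(1 - pr + 1e-12))
    preds.append(pr)
    scores.append(ll)

weights = np.exp(scores - np.max(scores))
weights /= weights.sum()
ensemble = np.sum(weights[:, None] * np.array(preds), axis=0)
acc = np.mean((ensemble > 0.5) == y[test])
print(f"model weights: {np.round(weights, 3)}, ensemble accuracy: {acc:.2f}")
```

In the paper's scheme the weights come from a variational approximation to the BMA posterior and concentrate on projected dimensions near the intrinsic dimensionality; here the softmax over held-out fits plays that role only qualitatively, and the per-dimension fits are independent, which mirrors why the approach parallelizes across compressions.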