ϵ-Confidence Approximately Correct (ϵ-CoAC) Learnability and Hyperparameter Selection in Linear Regression Modeling
In a data-based learning process, a training data set is used to produce a hypothesis that generalizes to all data points from a domain set. The hypothesis is chosen from classes of potentially different complexities. Linear regression modeling is an important category of learning...
| Published in | IEEE Access, Vol. 13, pp. 14273–14289 |
|---|---|
| Main Authors | Beheshti, Soosan; Shamsi, Mahdi |
| Format | Journal Article |
| Language | English |
| Published | Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2025 |
| Subjects | Accuracy; Algorithms; Complexity; Complexity theory; Confidence; Cost function; Data models; Data points; Datasets; Divergence; Hands; Hypotheses; hypothesis class complexity; Kullback-Leibler divergence; Learning; Machine learning; Overfitting; Picture archiving and communication systems; Regression analysis; Regression models; sample complexity; Statistical analysis; Statistical learning; Statistical learning theory; Training data; Uncertainty; Vectors |
| ISSN | 2169-3536 |
| EISSN | 2169-3536 |
| DOI | 10.1109/ACCESS.2025.3529622 |
| Abstract | In a data-based learning process, a training data set is used to produce a hypothesis that generalizes to all data points from a domain set. The hypothesis is chosen from classes with potentially different complexities. Linear regression modeling is an important category of learning algorithms. The practical uncertainty of the label samples in the training data set has a major effect on the generalization ability of the learned model. Failing to choose a proper model or hypothesis class can lead to serious issues such as underfitting or overfitting. These issues have been addressed mostly by altering the modeling cost function or by cross-validation methods. Drawbacks of these methods include introducing new hyperparameters with their own challenges and uncertainties, increased computational complexity, or the need for large training data sets. On the other hand, the theory of probably approximately correct (PAC) learning aims at defining learnability in a probabilistic setting. Despite its theoretical value, PAC bounds cannot be used in practical regression learning applications where only the training data set is available. This work is motivated by practical issues in regression learning generalization and is inspired by the foundations of statistical learning theory. The proposed approach, denoted ϵ-Confidence Approximately Correct (ϵ-CoAC), uses the conventional Kullback-Leibler divergence (relative entropy) and defines new related typical sets to develop a unique probabilistic statistical-learning method for practical regression learning and generalization. ϵ-CoAC learnability can validate the learning process as a function of the training sample size as well as of the hypothesis class complexity order. Consequently, it enables the learner to automatically compare hypothesis classes of different complexity orders and to choose among them the optimum class with the minimum ϵ in the ϵ-CoAC framework. ϵ-CoAC learnability overcomes the issues of overfitting and underfitting. In addition, it shows advantages over the well-known cross-validation method in terms of accuracy and the data length required for convergence. Simulation results, for both synthetic and real data, confirm not only the strength of ϵ-CoAC in providing learning measurements as a function of data length and/or hypothesis complexity, but also the superiority of the method over existing approaches to hypothesis complexity and model selection. |
|---|---|
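The abstract's key operational idea — scoring hypothesis classes of increasing complexity order and selecting the class that minimizes ϵ — can be pictured with a small sketch. The paper's actual ϵ-CoAC statistic is built from Kullback-Leibler typical sets and is not reproduced here; instead, the hypothetical score below fits polynomial regression classes of growing order and computes the Gaussian KL divergence between the training-residual and validation-residual distributions, which grows once a class starts to overfit (training residuals shrink while validation residuals do not). Every name here (`design_matrix`, `kl_residual_score`, the cubic ground truth) is an illustrative assumption, not the authors' code.

```python
# Minimal sketch: rank polynomial hypothesis classes with a KL-based score.
# NOT the paper's epsilon-CoAC statistic -- only an illustration of comparing
# model orders via a divergence between residual distributions.
import numpy as np

def design_matrix(x, order):
    """Polynomial design matrix [1, x, x^2, ..., x^order]."""
    return np.vander(x, N=order + 1, increasing=True)

def gaussian_kl(mu1, var1, mu2, var2):
    """KL( N(mu1, var1) || N(mu2, var2) ) for univariate Gaussians."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def kl_residual_score(x_tr, y_tr, x_va, y_va, order):
    """Fit order-m least squares on the training split, then measure the KL
    divergence between Gaussians fitted to training and validation residuals."""
    A = design_matrix(x_tr, order)
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    r_tr = y_tr - A @ coef
    r_va = y_va - design_matrix(x_va, order) @ coef
    eps = 1e-12  # guard against near-zero residual variance at high orders
    return gaussian_kl(r_tr.mean(), r_tr.var() + eps, r_va.mean(), r_va.var() + eps)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = 1.0 - 2.0 * x + 3.0 * x**3 + rng.normal(0.0, 0.3, x.size)  # true order is 3
x_tr, y_tr, x_va, y_va = x[:150], y[:150], x[150:], y[150:]

for m in range(1, 9):
    score = kl_residual_score(x_tr, y_tr, x_va, y_va, m)
    print(f"order {m}: divergence score = {score:.4f}")
```

Note that this toy score reacts only to overfitting; the abstract claims that ϵ-CoAC handles both underfitting and overfitting and converges with less data than cross-validation, properties a simple train/validation divergence does not provide.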
| Author | Shamsi, Mahdi; Beheshti, Soosan |
| Author details | 1. Soosan Beheshti (ORCID 0000-0001-7161-5887), Department of Electrical, Computer, and Biomedical Engineering, Toronto Metropolitan University, Toronto, ON, Canada; email: soosan@torontomu.ca. 2. Mahdi Shamsi (ORCID 0000-0002-0795-6238), Department of Electrical, Computer, and Biomedical Engineering, Toronto Metropolitan University, Toronto, ON, Canada. |
| CODEN | IAECCG |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2025 |
| Discipline | Engineering |
| Genre | Original research |
| Funding | Natural Sciences and Engineering Research Council of Canada (funder ID: 10.13039/501100000038) |
| License | Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0) |
| PageCount | 17 |