Improved learning algorithms for mixture of experts in multiclass classification
| Published in | Neural Networks Vol. 12; no. 9; pp. 1229–1252 |
|---|---|
| Main Authors | Chen, K.; Xu, L.; Chi, H. |
| Format | Journal Article |
| Language | English |
| Published | Oxford: Elsevier Ltd (Elsevier Science), 01.11.1999 |
| ISSN | 0893-6080 (print); 1879-2782 (electronic) |
| DOI | 10.1016/S0893-6080(99)00043-X |
| Abstract | Mixture of experts (ME) is a modular neural network architecture for supervised learning. A double-loop Expectation-Maximization (EM) algorithm has been introduced to the ME architecture for adjusting the parameters, with the iteratively reweighted least squares (IRLS) algorithm used to perform maximization in the inner loop [Jordan, M.I., & Jacobs, R.A. (1994). Hierarchical mixture of experts and the EM algorithm, Neural Computation, 6(2), 181–214]. However, it has been reported in the literature that the IRLS algorithm is unstable, and the ME architecture trained by this EM algorithm, with IRLS in the inner loop, often performs poorly in multiclass classification. In this paper, the reason for this instability is explored. We find that, owing to an implicitly imposed and incorrect assumption of parameter independence in multiclass classification, an incomplete Hessian matrix is used in that IRLS algorithm. Based on this finding, we apply the Newton–Raphson method, which adopts the exact Hessian matrix, to the inner loop of the EM algorithm for multiclass classification. To tackle the expensive computation of the Hessian matrix and its inverse, we propose an approximation to the Newton–Raphson algorithm based on a so-called generalized Bernoulli density. The Newton–Raphson algorithm and its approximation have been applied to synthetic, benchmark, and real-world multiclass classification tasks. For comparison, the IRLS algorithm and a quasi-Newton algorithm, BFGS, have also been applied to the same tasks. Simulation results show that the proposed learning algorithms avoid the instability problem and enable the ME architecture to achieve good performance in multiclass classification; in particular, our approximation algorithm leads to fast learning. The limitation of our approximation algorithm is also investigated empirically. |
| Author | Chen, K.; Xu, L.; Chi, H. |
| Author_xml | 1. Chen, K., Department of Computer Science and Engineering, Chinese University of Hong Kong, Shatin, NT, Hong Kong, People's Republic of China. 2. Xu, L. (lxu@cse.cuhk.edu.hk), National Laboratory of Machine Perception and Center for Information Science, Peking University, Beijing 100871, People's Republic of China. 3. Chi, H., National Laboratory of Machine Perception and Center for Information Science, Peking University, Beijing 100871, People's Republic of China |
| BackLink | https://cir.nii.ac.jp/crid/1572543024569124864 (CiNii); http://pascal-francis.inist.fr/vibad/index.php?action=getRecordDetail&idt=1976362 (Pascal Francis); https://www.ncbi.nlm.nih.gov/pubmed/12662629 (PubMed) |
| ContentType | Journal Article |
| Copyright | 1999; 1999 INIST-CNRS |
| DOI | 10.1016/S0893-6080(99)00043-X |
| Discipline | Computer Science; Applied Sciences |
| EISSN | 1879-2782 |
| EndPage | 1252 |
| Genre | Journal Article |
| ISSN | 0893-6080; 1879-2782 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 9 |
| Keywords | Expectation-Maximization (EM) algorithm; Mixture of experts; Iteratively reweighted least squares (IRLS) algorithm; Generalized Bernoulli density; BFGS algorithm; Multinomial density; Newton–Raphson method; Multiclass classification; Maximization; Bernoulli scheme; Expert system; Classification; Theoretical study; Mixture theory; Multinomial distribution; Expectation; Learning algorithm |
| Language | English |
| License | https://www.elsevier.com/tdm/userlicense/1.0 CC BY 4.0 |
| PMID | 12662629 |
| PageCount | 24 |
| PublicationDate | 1999-11-01 |
| PublicationPlace | Oxford |
| PublicationTitle | Neural Networks |
| PublicationTitleAlternate | Neural Netw |
| PublicationYear | 1999 |
| Publisher | Elsevier Ltd Elsevier Science |
| References | Bengio, Y. (1996). Input/output HMMs for sequence processing. IEEE Transactions on Neural Networks, 7(5), 1231. doi:10.1109/72.536317
Bennani, Y., & Gallinari, P. (1994). Connectionist approaches for automatic speaker recognition. Martigny, Switzerland, pp. 95–102.
Bishop (1991). A fast procedure for re-training the multilayer perceptron. International Journal of Neural Systems, 2(3), 229–236. doi:10.1142/S0129065791000212
Bishop (1992). Exact calculation of the Hessian matrix for the multilayer perceptron. Neural Computation, 4(4), 494–501. doi:10.1162/neco.1992.4.4.494
Böning (1993). Construction of reliable maximum likelihood algorithms with applications to logistic and Cox regression. In Computational Statistics (pp. 409–422).
Breiman, Friedman, Olshen, & Stone (1984). Classification and regression trees.
Bridle (1989). Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Neurocomputing: algorithms, architectures, and applications (pp. 227–236).
Broyden (1970). The convergence of a class of double rank minimization algorithms. Journal of the Institute of Mathematics and Its Applications, 6, 76–90. doi:10.1093/imamat/6.1.76
Campbell (1997). Speaker recognition: a tutorial. Proceedings of the IEEE, 85(9), 1437–1463. doi:10.1109/5.628714
Chen, K. (1996). Text-dependent speaker identification based on input/output HMM: an empirical study. Neural Processing Letters, 3(2), 81. doi:10.1007/BF00571681
Chen, K. (1998). A connectionist method for pattern classification with diverse features. Pattern Recognition Letters, 19(7), 545–558. doi:10.1016/S0167-8655(98)00055-5
Chen, K. (1998). A method of combining multiple probabilistic classifiers through soft competition on different feature sets. Neurocomputing, 20(1–3), 227. doi:10.1016/S0925-2312(98)00019-8
Chen, K., Xie, D., & Chi, H. (1995). Speaker identification based on hierarchical mixture of experts. Proceedings of the World Congress on Neural Networks, Washington, DC, pp. 1493–1496.
Chen, K., Xie, D., & Chi, H. (1996). A modified HME architecture for text-dependent speaker identification. IEEE Transactions on Neural Networks, 7(5), 1309–1313 (for errata see 8(2), 455, 1997).
Chen, K., Xie, D., & Chi, H. (1996). Speaker identification using time-delay HMEs. International Journal of Neural Systems, 7(1), 29–43. doi:10.1142/S012906579600004X
Dempster, Laird, & Rubin (1977). Maximum-likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39(1), 1–38. doi:10.1111/j.2517-6161.1977.tb01600.x
Duda & Hart (1973). Pattern classification and scene analysis.
Fisher (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7, 179–188. doi:10.1111/j.1469-1809.1936.tb02137.x
Fletcher (1970). A general quadratic programming algorithm. Computer Journal, 13, 317–322. doi:10.1093/comjnl/13.3.317
Fletcher (1987). Practical methods of optimization.
Furui (1997). Recent advances in speaker recognition. Pattern Recognition Letters, 18(9), 859–872. doi:10.1016/S0167-8655(97)00073-1
Goldfarb (1970). A family of variable metric methods derived by variational means. Mathematics of Computation, 24, 23–26. doi:10.1090/S0025-5718-1970-0258249-6
Golub & Van Loan (1989). Matrix computations.
Guo & Gelfand (1992). Classification trees with neural network decision trees. IEEE Transactions on Neural Networks, 3(5), 923–933. doi:10.1109/72.165594
Ishikawa (1996). Structural learning with forgetting. Neural Networks, 9(3), 509–521. doi:10.1016/0893-6080(96)83696-3
Jacobs, Jordan, Nowlan, & Hinton (1991). Adaptive mixture of local experts. Neural Computation, 3(1), 79–87. doi:10.1162/neco.1991.3.1.79
Jordan, M.I., & Jacobs, R.A. (1994). Hierarchical mixture of experts and the EM algorithm. Neural Computation, 6(2), 181–214. doi:10.1162/neco.1994.6.2.181
Jordan & Xu (1995). Convergence results for the EM approach to mixtures of experts. Neural Networks, 8(9), 1409–1431. doi:10.1016/0893-6080(95)00014-3
McCullagh & Nelder (1983). Generalized linear models.
Minoux (1986). Mathematical programming: theory and algorithms.
Neter, Wasserman, & Kutner (1985). Applied linear statistical models.
Ramamurti, V., & Ghosh, J. (1996). Advances in using hierarchical mixture of experts for signal classification. Proceedings of ICASSP, Atlanta, pp. 3569–3572. doi:10.1109/ICASSP.1996.550800
Ramamurti, V., & Ghosh, J. (1997). Regularization and error bars for the mixture of experts network. Proceedings of ICNN, Houston, pp. 221–225. doi:10.1109/ICNN.1997.611668
Shanno (1970). On variable metric methods for sparse Hessians. Mathematics of Computation, 24, 647–657. doi:10.1090/S0025-5718-1970-0274029-X
Waterhouse, S.R. (1993). The application of HME with the EM algorithm to speech recognition. Master's thesis, Department of Engineering, Cambridge University.
Waterhouse, S.R. (1997). Classification and regression using mixtures of experts. Ph.D. thesis, Department of Engineering, Cambridge University.
Xu, L. (1996). Bayesian-Kullback YING-YANG learning scheme: reviews and new results. Proceedings of the International Conference on Neural Information Processing, Hong Kong, pp. 59–67.
Xu, L. (1998). RBF nets, mixture of experts, and Bayesian Ying-Yang learning. Neurocomputing, 19(1–3), 223–257.
Xu, L., & Jordan, M.I. (1994). A modified gating network for the mixture of experts architecture. Proceedings of the World Congress on Neural Networks, San Diego, pp. II405–II410.
Xu, L., & Jordan, M.I. (1996). On convergence properties of the EM algorithm for Gaussian mixtures. Neural Computation, 8, 129–151. doi:10.1162/neco.1996.8.1.129
Xu, L., Jordan, M.I., & Hinton, G.E. (1995). An alternative model for mixtures of experts. In Advances in Neural Information Processing Systems (pp. 633–640). |
| StartPage | 1229 |
| SubjectTerms | Applied sciences; Artificial intelligence; BFGS algorithm; Computer science, control theory, systems; Connectionism, neural networks; Exact sciences and technology; Expectation-Maximization (EM) algorithm; Generalized Bernoulli density; Iteratively reweighted least squares (IRLS) algorithm; Learning and adaptive systems; Mixture of experts; Multiclass classification; Multinomial density; Newton-Raphson method |
| Title | Improved learning algorithms for mixture of experts in multiclass classification |
| URI | https://dx.doi.org/10.1016/S0893-6080(99)00043-X https://cir.nii.ac.jp/crid/1572543024569124864 https://www.ncbi.nlm.nih.gov/pubmed/12662629 https://www.proquest.com/docview/1859398357 https://www.proquest.com/docview/27005388 |
| Volume | 12 |