Contrastive Hebbian Feedforward Learning for Neural Networks

Bibliographic Details
Published in IEEE Transactions on Neural Networks and Learning Systems, Vol. 31; no. 6; pp. 2118-2128
Main Author Kermiche, Noureddine
Format Journal Article
Language English
Published United States IEEE 01.06.2020
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN 2162-237X
EISSN 2162-2388
DOI 10.1109/TNNLS.2019.2927957

Abstract This paper addresses the biological plausibility of both backpropagation (BP) and the contrastive Hebbian learning (CHL) used in Boltzmann machines. The main claim of this paper is that CHL is a general learning algorithm that can be used to steer feedforward networks toward desirable outcomes, and to steer them away from undesirable outcomes, without any need for the specialized feedback circuit of BP or the symmetric connections used by Boltzmann machines. After perturbations are added to all the neurons in the network during the learning phase, multiple feedforward outcomes are classified into Hebbian and anti-Hebbian sets based on the network predictions. The algorithm is applied both to networks optimizing a loss objective, where BP excels, and to networks with stochastic binary outputs, where BP cannot be easily applied. The power of the proposed algorithm lies in its simplicity: both learning and gradient estimation through stochastic binary activations are combined into a single local Hebbian rule. We also show that the Hebbian and anti-Hebbian correlations are evaluated from readily available signals, in a way that is fundamentally different from the CHL used in Boltzmann machines. We demonstrate that the new learning paradigm, in which Hebbian/anti-Hebbian correlations are based on correct/incorrect predictions, is a powerful concept that separates this paper from other biologically inspired learning algorithms.
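The abstract's core idea can be illustrated with a toy sketch. This is not the paper's actual algorithm: the network size, single-layer linear/sign architecture, Gaussian perturbation model, learning rate, and trial count are all invented here for illustration. Every output neuron is perturbed during learning; when the perturbed outcome matches the target the weight update is Hebbian (reinforcing), and when it does not the update is anti-Hebbian (suppressing):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one linear layer with sign (binary) outputs.
n_in, n_out, n_trials = 8, 2, 200
W = rng.normal(scale=0.1, size=(n_out, n_in))  # small random weights
x = rng.normal(size=n_in)                      # a single fixed input pattern
target = np.array([1.0, -1.0])                 # desired binary output
lr, noise = 0.05, 0.5

for _ in range(n_trials):
    # Perturb every output neuron during the learning phase.
    y = np.sign(W @ x + noise * rng.normal(size=n_out))
    if np.array_equal(y, target):
        # Correct prediction: Hebbian update reinforces this outcome.
        W += lr * np.outer(y, x)
    else:
        # Incorrect prediction: anti-Hebbian update suppresses it.
        W -= lr * np.outer(y, x)
```

No error gradient is backpropagated: the same local pre/post correlation term `outer(y, x)` is used for every weight, and only its sign depends on whether the network's prediction was correct, which is the correct/incorrect Hebbian/anti-Hebbian split the abstract describes.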
Author Kermiche, Noureddine
Author_xml – sequence: 1
  givenname: Noureddine
  orcidid: 0000-0002-2762-5911
  surname: Kermiche
  fullname: Kermiche, Noureddine
  email: noureddine.kermiche@wdc.com
  organization: Western Digital Corporation, Irvine, CA, USA
BackLink https://www.ncbi.nlm.nih.gov/pubmed/31380771 (View this record in MEDLINE/PubMed)
CODEN ITNNAL
CitedBy_id crossref_primary_10_3390_sym13081344
crossref_primary_10_1016_j_neunet_2024_106174
crossref_primary_10_1109_TITS_2023_3306559
crossref_primary_10_1109_TNNLS_2023_3319661
crossref_primary_10_1109_TNNLS_2023_3272153
crossref_primary_10_1109_TNNLS_2022_3174528
crossref_primary_10_1109_TVLSI_2021_3069221
crossref_primary_10_1002_cta_4044
crossref_primary_10_1007_s40747_022_00945_w
crossref_primary_10_1109_JSAC_2023_3345431
Cites_doi 10.1207/s15516709cog0901_7
10.1002/0471461288
10.1038/323533a0
10.3389/fncom.2017.00024
10.1016/j.envsoft.2006.05.021
10.1111/j.2517-6161.1965.tb01497.x
10.1162/089976603762552988
10.1073/pnas.79.8.2554
10.1214/ss/1177013818
10.1162/neco.1996.8.5.895
10.1214/aoms/1177729586
10.1037/h0085812
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020
DOI 10.1109/TNNLS.2019.2927957
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
Aluminium Industry Abstracts
Biotechnology Research Abstracts
Calcium & Calcified Tissue Abstracts
Ceramic Abstracts
Chemoreception Abstracts
Computer and Information Systems Abstracts
Corrosion Abstracts
Electronics & Communications Abstracts
Engineered Materials Abstracts
Materials Business File
Mechanical & Transportation Engineering Abstracts
Neurosciences Abstracts
Solid State and Superconductivity Abstracts
METADEX
Technology Research Database
ANTE: Abstracts in New Technology & Engineering
Engineering Research Database
Aerospace Database
Materials Research Database
ProQuest Computer Science Collection
Civil Engineering Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Biotechnology and BioEngineering Abstracts
MEDLINE - Academic
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISSN 2162-2388
EndPage 2128
ExternalDocumentID 31380771
10_1109_TNNLS_2019_2927957
8782836
Genre orig-research
Journal Article
ISSN 2162-237X
2162-2388
IngestDate Thu Oct 02 11:10:08 EDT 2025
Mon Jun 30 02:59:18 EDT 2025
Thu Jan 02 22:59:31 EST 2025
Thu Apr 24 23:11:52 EDT 2025
Wed Oct 01 00:44:49 EDT 2025
Wed Aug 27 02:39:01 EDT 2025
IsPeerReviewed false
IsScholarly true
Issue 6
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
ORCID 0000-0002-2762-5911
PMID 31380771
PQID 2409379602
PQPubID 85436
PageCount 11
ParticipantIDs pubmed_primary_31380771
crossref_primary_10_1109_TNNLS_2019_2927957
proquest_miscellaneous_2268572212
proquest_journals_2409379602
crossref_citationtrail_10_1109_TNNLS_2019_2927957
ieee_primary_8782836
PublicationCentury 2000
PublicationDate 2020-06-01
PublicationDateYYYYMMDD 2020-06-01
PublicationDate_xml – month: 06
  year: 2020
  text: 2020-06-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
– name: Piscataway
PublicationTitle IEEE Transactions on Neural Networks and Learning Systems
PublicationTitleAbbrev TNNLS
PublicationTitleAlternate IEEE Trans Neural Netw Learn Syst
PublicationYear 2020
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References mackay (ref22) 2003
ref11
ref1
ref17
ref16
oland (ref14) 2017
ref19
(ref13) 2017
bengio (ref2) 2013
ref23
cires (ref15) 2010
movellan (ref12) 1990
hinton (ref3) 2016
lecun (ref18) 2018
ref21
(ref25) 2018
katharopoulos (ref20) 2018
almeida (ref10) 1987; 2
ref8
ref7
ref9
ref6
ralko (ref26) 2015
hinton (ref4) 1986; 1
ref5
schaul (ref24) 2015
References_xml – year: 2018
  ident: ref25
  publication-title: An overview of gradient descent optimization algorithms
– start-page: 10
  year: 1990
  ident: ref12
  article-title: Contrastive Hebbian learning in the continuous Hopfield model
  publication-title: Connectionist Models Proceedings
– ident: ref5
  doi: 10.1207/s15516709cog0901_7
– year: 2018
  ident: ref20
  article-title: Not all samples are created equal: Deep learning with importance sampling
  publication-title: arXiv 1803 00942
– ident: ref23
  doi: 10.1002/0471461288
– volume: 2
  start-page: 609
  year: 1987
  ident: ref10
  article-title: A learning rule for asynchronous perceptrons with feedback in a combinatorial environment
  publication-title: Proc IEEE Int Conf Neural Netw
– ident: ref1
  doi: 10.1038/323533a0
– year: 2003
  ident: ref22
  publication-title: Information Theory Inference and Learning Algorithms
– volume: 1
  year: 1986
  ident: ref4
  publication-title: Learning and relearning in Boltzmann machines, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition
– year: 2015
  ident: ref24
  article-title: Prioritized experience replay
  publication-title: arXiv 1511 05952
– ident: ref8
  doi: 10.3389/fncom.2017.00024
– year: 2016
  ident: ref3
  publication-title: Can the Brain Do Back-Propagation? (Stanford Seminar)
– ident: ref17
  doi: 10.1016/j.envsoft.2006.05.021
– year: 2017
  ident: ref14
  article-title: Be careful what you back propagate: A case for linear output activations & gradient boosting
  publication-title: arXiv 1707 04199
– ident: ref16
  doi: 10.1111/j.2517-6161.1965.tb01497.x
– ident: ref9
  doi: 10.1162/089976603762552988
– ident: ref6
  doi: 10.1073/pnas.79.8.2554
– ident: ref21
  doi: 10.1214/ss/1177013818
– year: 2013
  ident: ref2
  article-title: Estimating or propagating gradients through stochastic neurons for conditional computation
  publication-title: arXiv 1308 3432
– year: 2015
  ident: ref26
  article-title: Techniques for learning binary stochastic feedforward neural networks
  publication-title: arXiv 1406 2989
– ident: ref11
  doi: 10.1162/neco.1996.8.5.895
– year: 2017
  ident: ref13
  publication-title: Non-Convex Optimization CS6867 Lecture 7-Fall
– year: 2010
  ident: ref15
  article-title: Deep big simple neural nets excel on handwritten digit recognition
  publication-title: arXiv 1003 0358
– ident: ref19
  doi: 10.1214/aoms/1177729586
– ident: ref7
  doi: 10.1037/h0085812
– year: 2018
  ident: ref18
  publication-title: The MNIST Database of Handwritten Digits
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 2118
SubjectTerms Algorithms
Back propagation
Binary stochastic neurons
contrastive divergence
contrastive Hebbian
Correlation
deep learning
Feedback circuits
Feedforward neural networks
Feedforward systems
Learning
Learning algorithms
Learning systems
Machine learning
Neural networks
Neurons
Stochasticity
Training
Title Contrastive Hebbian Feedforward Learning for Neural Networks
URI https://ieeexplore.ieee.org/document/8782836
https://www.ncbi.nlm.nih.gov/pubmed/31380771
https://www.proquest.com/docview/2409379602
https://www.proquest.com/docview/2268572212
Volume 31
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Electronic Library (IEL)
  customDbUrl:
  eissn: 2162-2388
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000605649
  issn: 2162-237X
  databaseCode: RIE
  dateStart: 20120101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE