Tampere University of Technology

TUTCRIS Research Portal

A novel stochastic learning rule for neural networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-reviewed

Details

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer Verlag
Pages: 414-423
Number of pages: 10
Volume: 3971 LNCS
ISBN (Print): 9783540344391
DOIs
Publication status: Published - 2006
Externally published: Yes
Publication type: A4 Article in a conference publication
Event: 3rd International Symposium on Neural Networks, ISNN 2006 - Advances in Neural Networks - Chengdu, China
Duration: 28 May 2006 – 1 Jun 2006

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3971 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 3rd International Symposium on Neural Networks, ISNN 2006 - Advances in Neural Networks
Country: China
City: Chengdu
Period: 28/05/06 – 1/06/06

Abstract

This article introduces a novel stochastic Hebb-like learning rule for neural networks that combines features of unsupervised (Hebbian) and supervised (reinforcement) learning. The rule is stochastic with respect to the selection of the time points at which a synaptic modification is induced by simultaneous activation of the pre- and postsynaptic neurons. Moreover, the rule not only affects the synapse between the pre- and postsynaptic neurons, which is called homosynaptic plasticity, but also affects further, more remote synapses of those neurons. This more complex form of plasticity, called heterosynaptic plasticity, has recently attracted the interest of experimental investigations in neurobiology. Our learning rule is motivated by these experimental findings and gives a qualitative explanation of this kind of synaptic plasticity. Additionally, we present numerical results demonstrating that our learning rule trains neural networks well, even in the presence of noise.
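The two ingredients the abstract names — stochastically timed updates triggered by pre/post coactivity, plus a weaker heterosynaptic change to other synapses of the same neurons — can be sketched roughly as follows. This is a hedged illustration only: the function name, the parameters `p_update`, `eta`, and `hetero_scale`, and the reward-modulated sign convention are all assumptions, not the paper's exact rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_hebb_update(W, pre, post, reward, p_update=0.2,
                           eta=0.05, hetero_scale=0.1):
    """Hedged sketch of a stochastic Hebb-like, reward-modulated update.

    W[i, j] is the weight from presynaptic unit j to postsynaptic unit i;
    `pre` and `post` are binary activity vectors; `reward` in {-1, +1} is
    a reinforcement signal. All names and values here are illustrative
    assumptions, not the rule published in the paper.
    """
    W = W.copy()
    # Stochastic timing: a modification is induced only at randomly
    # selected time points (with probability p_update).
    if rng.random() >= p_update:
        return W
    # Homosynaptic term: synapses whose pre- and postsynaptic neurons
    # are simultaneously active are modified, signed by the reward.
    coactive = np.outer(post, pre)              # 1 where both units fire
    W += eta * reward * coactive
    # Heterosynaptic term: the remaining synapses touching an active
    # pre- or postsynaptic neuron receive a smaller, opposite change.
    hetero = (np.outer(post, np.ones_like(pre)) +
              np.outer(np.ones_like(post), pre)) > 0
    hetero &= ~coactive.astype(bool)
    W -= eta * hetero_scale * reward * hetero
    return W
```

Setting `p_update=1.0` forces an update on every call, which is convenient for inspecting the homo- and heterosynaptic terms in isolation.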