Ebook: Hebbian Learning and Negative Feedback Networks
Author: Colin Fyfe (auth.)
- Tags: Probability and Statistics in Computer Science, Artificial Intelligence (incl. Robotics), Pattern Recognition, Simulation and Modeling, Computer Science general
- Series: Advanced Information and Knowledge Processing
- Year: 2005
- Publisher: Springer-Verlag London
- Edition: 1
- Language: English
- Format: PDF
This book is the outcome of a decade's research into a specific architecture and associated learning mechanism for an artificial neural network: the architecture involves negative feedback and the learning mechanism is simple Hebbian learning. The research began with my own thesis at the University of Strathclyde, Scotland, under Professor Douglas McGregor, which culminated with me being awarded a PhD in 1995 [52], the title of which was "Negative Feedback as an Organising Principle for Artificial Neural Networks".

Naturally enough, having established this theme, when I began to supervise PhD students of my own, we continued to develop this concept, and this book owes much to the research and theses of these students at the Applied Computational Intelligence Research Unit in the University of Paisley. Thus we discuss work from

• Dr. Darryl Charles [24] in Chapter 5.
• Dr. Stephen McGlinchey [127] in Chapter 7.
• Dr. Donald MacDonald [121] in Chapters 6 and 8.
• Dr. Emilio Corchado [29] in Chapter 8.

We briefly discuss one simulation from the thesis of Dr. Mark Girolami [58] in Chapter 6 but do not discuss any of the rest of his thesis, since it has already appeared in book form [59]. We also must credit Cesar Garcia Osorio, a current PhD student, for the comparative study of the two Exploratory Projection Pursuit networks in Chapter 8. All of Chapters 3 to 8 deal with single-stream artificial neural networks.
The central idea of Hebbian Learning and Negative Feedback Networks is that artificial neural networks using negative feedback of activation can use simple Hebbian learning to self-organise so that they uncover interesting structures in data sets. Two variants are considered: the first uses a single stream of data to self-organise. By changing the learning rules for the network, it is shown how to perform Principal Component Analysis, Exploratory Projection Pursuit, Independent Component Analysis, Factor Analysis and a variety of topology preserving mappings for such data sets.
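To make the single-stream idea concrete, here is a minimal Python/NumPy sketch of a negative feedback network of the kind described above, assuming the commonly cited form of the rule: feedforward activation y = Wx, negative feedback residual e = x − Wᵀy, and simple Hebbian update ΔW = η y eᵀ. The function name, learning rate, and data below are illustrative, not taken from the book; with this particular rule the weights converge to the principal subspace of the data.

```python
import numpy as np

def negative_feedback_pca(X, n_components, eta=0.01, epochs=50, seed=0):
    """Sketch of a negative feedback network with simple Hebbian learning.

    Assumed form: y = W x (feedforward), e = x - W^T y (feedback residual),
    dW = eta * outer(y, e). Rows of W converge to a basis of the principal
    subspace of the zero-mean data X (one sample per row).
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x                  # feedforward activation
            e = x - W.T @ y            # negative feedback: residual input
            W += eta * np.outer(y, e)  # Hebbian update on the residual
    return W

# Usage: recover a 2-D principal subspace from rank-2 data in 5 dimensions.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 2))
X = rng.normal(size=(2000, 2)) @ A.T
X -= X.mean(axis=0)
W = negative_feedback_pca(X, n_components=2)
# Rows of W should span the same subspace as the top two PCA eigenvectors.
```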
The second variant uses two input data streams on which to self-organise. In their basic form, these networks are shown to perform Canonical Correlation Analysis, the statistical technique that finds those filters onto which the projections of the two data streams have the greatest correlation.
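For the two-stream case, the following is a hedged sketch of one constrained Hebbian CCA rule from this line of work (in the style of Lai and Fyfe): maximise E[y1·y2] for y1 = w1·x1 and y2 = w2·x2, with adaptive Lagrange terms holding E[y1²] and E[y2²] near 1. The book's exact derivations may differ; all names, rates, and data here are illustrative.

```python
import numpy as np

def neural_cca(X1, X2, eta=0.001, eta_lam=0.001, epochs=100, seed=0):
    """Illustrative constrained Hebbian rule for canonical correlation.

    Each weight update is Hebbian in its own input and the *other*
    stream's output; lam1, lam2 penalise deviation of the output
    variances from 1.
    """
    rng = np.random.default_rng(seed)
    w1 = rng.normal(scale=0.1, size=X1.shape[1])
    w2 = rng.normal(scale=0.1, size=X2.shape[1])
    lam1 = lam2 = 1.0
    for _ in range(epochs):
        for x1, x2 in zip(X1, X2):
            y1, y2 = w1 @ x1, w2 @ x2
            w1 += eta * x1 * (y2 - lam1 * y1)  # cross-stream Hebbian term
            w2 += eta * x2 * (y1 - lam2 * y2)
            lam1 += eta_lam * (y1 * y1 - 1.0)  # push E[y1^2] toward 1
            lam2 += eta_lam * (y2 * y2 - 1.0)
    return w1, w2

# Usage: two views sharing a single latent signal z.
rng = np.random.default_rng(2)
z = rng.normal(size=4000)
X1 = np.column_stack([z + 0.2 * rng.normal(size=4000),
                      rng.normal(size=4000)])
X2 = np.column_stack([rng.normal(size=4000),
                      z + 0.2 * rng.normal(size=4000)])
w1, w2 = neural_cca(X1, X2)
print(np.corrcoef(X1 @ w1, X2 @ w2)[0, 1])  # approaches the true
                                            # canonical correlation
```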
The book draws on a wide range of real experiments and shows how the approaches it formulates can be applied to the analysis of real problems.
Content:
Front Matter....Pages I-XVIII
Introduction....Pages 1-5
Front Matter....Pages 7-10
Background....Pages 11-29
The Negative Feedback Network....Pages 31-56
Peer-Inhibitory Neurons....Pages 57-84
Multiple Cause Data....Pages 85-109
Exploratory Data Analysis....Pages 111-136
Topology Preserving Maps....Pages 137-168
Maximum Likelihood Hebbian Learning....Pages 169-186
Front Matter....Pages 187-190
Two Neural Networks for Canonical Correlation Analysis....Pages 191-208
Alternative Derivations of CCA Networks....Pages 209-216
Kernel and Nonlinear Correlations....Pages 217-246
Exploratory Correlation Analysis....Pages 247-273
Multicollinearity and Partial Least Squares....Pages 275-289
Twinned Principal Curves....Pages 291-307
The Future....Pages 309-313
Back Matter....Pages 315-383