Theoretical Advances in Neural Computation and Learning

For any research field to have a lasting impact, there must be a firm theoretical foundation. Neural networks research is no exception. Some of the foundational concepts, established several decades ago, led to the early promise of developing machines exhibiting intelligence. The motivation for studying such machines comes from the fact that the brain is far more efficient at visual processing and speech recognition than existing computers. Undoubtedly, neurobiological systems employ very different computational principles. The study of artificial neural networks aims at understanding these computational principles and applying them to the solution of engineering problems. Owing to recent advances in both device technology and computational science, we are currently witnessing an explosive growth in the study of neural networks and their applications.

It may take many years before we have a complete understanding of the mechanisms of neural systems. Before this ultimate goal can be achieved, answers are needed to important fundamental questions such as (a) what can neural networks do that traditional computing techniques cannot, (b) how does the complexity of the network for an application relate to the complexity of that problem, and (c) how much training data are required for the resulting network to learn properly? Everyone working in the field has attempted to answer these questions, but general solutions remain elusive. However, encouraging progress in studying specific neural models has been made by researchers from various disciplines.




Theoretical Advances in Neural Computation and Learning brings together in one volume some of the recent advances in the development of a theoretical framework for studying neural networks. A variety of novel techniques from disciplines such as computer science, electrical engineering, statistics, and mathematics have been integrated and applied to develop ground-breaking analytical tools for such studies. This volume emphasizes the computational issues in artificial neural networks and compiles a set of pioneering research works, which together establish a general framework for studying the complexity of neural networks and their learning capabilities. This book represents one of the first efforts to highlight these fundamental results and provides a unified platform for a theoretical exploration of neural computation. Each chapter is authored by a leading researcher or scholar who has made significant contributions in this area.
Part 1 provides a complexity-theoretic study of different models of neural computation. Complexity measures for neural models are introduced, and techniques for the efficient design of networks that perform basic computations, as well as analytical tools for understanding the capabilities and limitations of neural computation, are discussed. The results describe how the computational cost of a neural network increases with problem size. Equally important, these results go beyond the study of single neural elements and establish the computational power of multilayer networks.
Part 2 discusses concepts and results concerning learning with models of neural computation. Basic concepts such as VC-dimension and PAC-learning are introduced, and recent results relating neural networks to learning theory are derived. In addition, a number of the chapters address fundamental issues concerning learning algorithms, such as accuracy and rate of convergence, selection of training data, and efficient algorithms for learning useful classes of mappings.




Content:
Front Matter....Pages i-xxiv
Front Matter....Pages 1-1
Neural Models and Spectral Methods....Pages 3-36
Depth-Efficient Threshold Circuits for Arithmetic Functions....Pages 37-84
Communication Complexity and Lower Bounds for Threshold Circuits....Pages 85-125
A Comparison of the Computational Power of Sigmoid and Boolean Threshold Circuits....Pages 127-151
Computing on Analog Neural Nets with Arbitrary Real Weights....Pages 153-172
Connectivity Versus Capacity in the Hebb Rule....Pages 173-240
Front Matter....Pages 241-241
Computational Learning Theory and Neural Networks: A Survey of Selected Topics....Pages 243-293
Perspectives of Current Research about the Complexity of Learning on Neural Nets....Pages 295-336
Learning an Intersection of K Halfspaces Over a Uniform Distribution....Pages 337-356
On the Intractability of Loading Neural Networks....Pages 357-389
Learning Boolean Functions via the Fourier Transform....Pages 391-424
LMS and Backpropagation are Minimax Filters....Pages 425-447
Supervised Learning: Can it Escape its Local Minimum?....Pages 449-461
Back Matter....Pages 463-468

