Book Details
Neural Network Design (English Reprint Edition)
Authors: Martin T. Hagan, Howard B. Demuth, Mark Beale (United States)
Publisher: CITIC Publishing House
Publication Date: 2002-05-01
ISBN: 9787800734656
List Price: ¥69.00
Description
This book provides a clear and detailed treatment of basic neural network architectures and training methods. The authors emphasize three themes: the mathematical analysis of neural networks, their training methods, and their engineering applications, chiefly in pattern recognition, signal processing, and control systems. Key features of the book:
· Extensive coverage of performance learning, including the Widrow-Hoff rule, the backpropagation algorithm, and several enhancements of backpropagation (such as the conjugate gradient, Levenberg-Marquardt, and momentum variations).
· Discussion of recurrent associative memory networks (e.g., the Hopfield network).
· Numerous detailed, worked examples and solved problems.
· Associative and competitive networks (including feature maps, learning vector quantization, and adaptive resonance theory) explained in terms of simple building blocks.
· Neural Network Design demonstration programs implemented in MATLAB 4.0 (for both the Student Edition and the Professional Edition).
This is an outstanding book; it is rare to find one written this well. Its illustrations and examples are first-rate: they not only enrich the content but also build intuition.
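As a small illustration of the kind of material highlighted above, here is a minimal sketch (not taken from the book) of the Widrow-Hoff (LMS) rule applied to a single linear ADALINE unit. The function name, toy data, and learning rate are assumptions chosen for demonstration; the book's own MATLAB demonstration programs are the authoritative reference.

```python
import numpy as np

def lms_train(X, t, lr=0.01, epochs=50):
    """Illustrative Widrow-Hoff (LMS) training of a single linear unit:
    adjust weights w and bias b so that w @ x + b approximates target t."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, t):
            a = w @ x + b          # linear (ADALINE) output
            e = target - a         # error for this sample
            w += 2 * lr * e * x    # Widrow-Hoff (LMS) weight update
            b += 2 * lr * e        # bias update
    return w, b

# Toy usage (assumed data): recover t = 2*x1 - x2 + 1 from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
t = 2 * X[:, 0] - X[:, 1] + 1 + 0.01 * rng.standard_normal(200)
w, b = lms_train(X, t)
print(w, b)  # weights should be close to [2, -1], bias close to 1
```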
About the Authors
Author information for Neural Network Design (English Reprint Edition) is not yet available.
Table of Contents
Preface
1. Introduction
Objectives
History
Applications
Biological Inspiration
Further Reading
2. Neuron Model and Network Architectures
Objectives
Theory and Examples
Notation
Neuron Model
Single-Input Neuron
Transfer Functions
Multiple-Input Neuron
Network Architectures
A Layer of Neurons
Multiple Layers of Neurons
Recurrent Networks
Summary of Results
Solved Problems
Epilogue
Exercises
3. An Illustrative Example
Objectives
Theory and Examples
Problem Statement
Perceptron
Two-Input Case
Pattern Recognition Example
Hamming Network
Feedforward Layer
Recurrent Layer
Hopfield Network
Epilogue
Exercise
4. Perceptron Learning Rule
Objectives
Theory and Examples
Learning Rules
Perceptron Architecture
Single-Neuron Perceptron
Multiple-Neuron Perceptron
Perceptron Learning Rule
Test Problem
Constructing Learning Rules
Unified Learning Rule
Training Multiple-Neuron Perceptrons
Proof of Convergence
Notation
Proof
Limitations
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
5. Signal and Weight Vector Spaces
Objectives
Theory and Examples
Linear Vector Spaces
Linear Independence
Spanning a Space
Inner Product
Norm
Orthogonality
Gram-Schmidt Orthogonalization
Vector Expansions
Reciprocal Basis Vectors
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
6. Linear Transformations for Neural Networks
Objectives
Theory and Examples
Linear Transformations
Matrix Representations
Change of Basis
Eigenvalues and Eigenvectors
Diagonalization
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
7. Supervised Hebbian Learning
Objectives
Theory and Examples
Linear Associator
The Hebb Rule
Performance Analysis
Pseudoinverse Rule
Application
Variations of Hebbian Learning
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
8. Performance Surfaces and Optimum Points
Objectives
Theory and Examples
Taylor Series
Vector Case
Directional Derivatives
Minima
Necessary Conditions for Optimality
First-Order Conditions
Second-Order Conditions
Quadratic Functions
Eigensystem of the Hessian
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
9. Performance Optimization
Objectives
Theory and Examples
Steepest Descent
Stable Learning Rates
Minimizing Along a Line
Newton's Method
Conjugate Gradient
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
10. Widrow-Hoff Learning
Objectives
Theory and Examples
ADALINE Network
Single ADALINE
Mean Square Error
LMS Algorithm
Analysis of Convergence
Adaptive Filtering
Adaptive Noise Cancellation
Echo Cancellation
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
11. Backpropagation
Objectives
Theory and Examples
Multilayer Perceptrons
Pattern Classification
Function Approximation
The Backpropagation Algorithm
Performance Index
Chain Rule
Backpropagating the Sensitivities
Summary
Example
Using Backpropagation
Choice of Network Architecture
Convergence
Generalization
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
12. Variations on Backpropagation
Objectives
Theory and Examples
Drawbacks of Backpropagation
Performance Surface Example
Convergence Example
Heuristic Modifications of Backpropagation
Momentum
Variable Learning Rate
Numerical Optimization Techniques
Conjugate Gradient
Levenberg-Marquardt Algorithm
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
13. Associative Learning
Objectives
Theory and Examples
Simple Associative Network
Unsupervised Hebb Rule
Hebb Rule with Decay
Simple Recognition Network
Instar Rule
Kohonen Rule
Simple Recall Network
Outstar Rule
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
14. Competitive Networks
Objectives
Theory and Examples
Hamming Network
Layer 1
Layer 2
Competitive Layer
Competitive Learning
Problems with Competitive Layers
Competitive Layers in Biology
Self-Organizing Feature Maps
Improving Feature Maps
Learning Vector Quantization
LVQ Learning
Improving LVQ Networks (LVQ2)
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
15. Grossberg Network
Objectives
Theory and Examples
Biological Motivation: Vision
Illusions
Vision Normalization
Basic Nonlinear Model
Two-Layer Competitive Network
Layer 1
Layer 2
Choice of Transfer Function
Learning Law
Relation to Kohonen Law
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
16. Adaptive Resonance Theory
Objectives
Theory and Examples
Overview of Adaptive Resonance
Layer 1
Steady State Analysis
Layer 2
Orienting Subsystem
Learning Law: L1-L2
Subset/Superset Dilemma
Learning Law
Learning Law: L2-L1
ART1 Algorithm Summary
Initialization
Algorithm
Other ART Architectures
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
17. Stability
Objectives
Theory and Examples
Recurrent Networks
Stability Concepts
Definitions
Lyapunov Stability Theorem
Pendulum Example
LaSalle's Invariance Theorem
Definitions
Theorem
Example
Comments
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
18. Hopfield Network
Objectives
Theory and Examples
Hopfield Model
Lyapunov Function
Invariant Sets
Example
Hopfield Attractors
Effect of Gain
Hopfield Design
Content-Addressable Memory
Hebb Rule
Lyapunov Surface
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises
19. Epilogue
Objectives
Theory and Examples
Feedforward and Related Networks
Competitive Networks
Dynamic Associative Memory Networks
Classical Foundations of Neural Networks
Books and Journals
Epilogue
Further Reading