Classic Original Book Series: Neural Networks and Learning Machines

Neural Networks and Learning Machines (English Edition, 3rd Edition) is highly readable: the author handles weighty material with a light touch, examining the basic models of neural networks and the principal learning theories in depth, and supporting the reader with numerous computer experiments, worked examples, and problems. Neural networks are an important branch of computational intelligence and machine learning and have achieved great success in many fields. Among the many books on neural networks, the most influential is Simon Haykin's Neural Networks: A Comprehensive Foundation (published in Chinese as 《神經網路原理》 and, as this third edition, retitled Neural Networks and Learning Machines). Drawing on recent advances in neural networks and machine learning, and proceeding from both theory and practical application, the book gives a comprehensive, systematic account of the basic models, methods, and techniques of neural networks and integrates neural networks and machine learning into a unified treatment. It attends not only to mathematical analysis and theory but also to the application of neural networks to practical engineering problems such as pattern recognition, signal processing, and control systems. This edition has been extensively revised from its predecessor and offers an up-to-date analysis of these two increasingly important disciplines.

Basic Information

  • Title: Classic Original Book Series: Neural Networks and Learning Machines
  • Category: Computers and the Internet
  • Publication date: March 1, 2009
  • Language: English
  • ISBN: 9787111265283
  • Author: Simon Haykin
  • Publisher: China Machine Press (機械工業出版社)
  • Pages: 906
  • Format: 16開 (16-mo)
  • Brand: China Machine Press


Synopsis

Features of Neural Networks and Learning Machines (English Edition, 3rd Edition):
On-line learning algorithms based on stochastic gradient descent; small-scale and large-scale learning problems (a minimal sketch follows this list).
Kernel methods, including support vector machines and the representer theorem.
Information-theoretic learning models, including copulas, independent component analysis (ICA), coherent ICA, and the information bottleneck.
Stochastic dynamic programming, including approximate and neuro-dynamic programming.
Sequential state-estimation algorithms, including Kalman and particle filters.
Training of recurrent neural networks using sequential state-estimation algorithms.
Insightful computer-oriented experiments.
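
To make the first feature concrete, here is a minimal Python sketch of online learning by stochastic gradient descent, in the form of the least-mean-square (LMS) filter that Chapter 3 of the book covers. This is an illustration only: the function name lms_online, the learning rate, and the synthetic data are assumptions of this sketch, not material from the book.

import numpy as np

def lms_online(samples, desired, lr=0.05):
    """LMS filter: a linear model trained online, one (input,
    desired-response) pair at a time, by stochastic gradient
    descent on the instantaneous squared error."""
    w = np.zeros(samples.shape[1])   # weight vector, w(0) = 0
    for x, d in zip(samples, desired):
        e = d - w @ x                # error signal e(n) = d(n) - w(n)^T x(n)
        w = w + lr * e * x           # LMS update: w(n+1) = w(n) + lr * x(n) * e(n)
    return w

# Usage: recover a noisy linear relationship from streaming samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w_true = np.array([0.5, -1.0, 2.0])
d = X @ w_true + 0.05 * rng.normal(size=500)
print(lms_online(X, d))              # approximately [0.5, -1.0, 2.0]

With a small fixed learning rate the estimate hovers around the Wiener solution rather than converging exactly; that deviation is what Sections 3.6 through 3.9 in the table of contents below analyze.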

About the Author

Author: Simon Haykin (McMaster University, Canada)

Simon Haykin received his Ph.D. from the University of Birmingham, England, in 1953, and is currently Professor in the Department of Electrical and Computer Engineering at McMaster University, Canada, and Director of its Communications Research Laboratory. A renowned scholar in electrical and electronic engineering, he has received the IEEE McNaughton Gold Medal. He is a Fellow of the Royal Society of Canada and a Fellow of the IEEE, has made prolific contributions to neural networks, communications, and adaptive filters, and is the author of several standard textbooks.

Table of Contents

Preface
Acknowledgements
Abbreviations and Symbols
GLOSSARY
Introduction
1 What is a Neural Network?
2 The Human Brain
3 Models of a Neuron
4 Neural Networks Viewed As Directed Graphs
5 Feedback
6 Network Architectures
7 Knowledge Representation
8 Learning Processes
9 Learning Tasks
10 Concluding Remarks
Notes and References

Chapter 1 Rosenblatt's Perceptron
1.1 Introduction
1.2 Perceptron
1.3 The Perceptron Convergence Theorem
1.4 Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment
1.5 Computer Experiment: Pattern Classification
1.6 The Batch Perceptron Algorithm
1.7 Summary and Discussion
Notes and References
Problems

Chapter 2 Model Building through Regression
2.1 Introduction
2.2 Linear Regression Model: Preliminary Considerations
2.3 Maximum a Posteriori Estimation of the Parameter Vector
2.4 Relationship Between Regularized Least-Squares Estimation and MAP Estimation
2.5 Computer Experiment: Pattern Classification
2.6 The Minimum-Description-Length Principle
2.7 Finite Sample-Size Considerations
2.8 The Instrumental-Variables Method
2.9 Summary and Discussion
Notes and References
Problems

Chapter 3 The Least-Mean-Square Algorithm
3.1 Introduction
3.2 Filtering Structure of the LMS Algorithm
3.3 Unconstrained Optimization: A Review
3.4 The Wiener Filter
3.5 The Least-Mean-Square Algorithm
3.6 Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter
3.7 The Langevin Equation: Characterization of Brownian Motion
3.8 Kushner's Direct-Averaging Method
3.9 Statistical LMS Learning Theory for Small Learning-Rate Parameter
3.10 Computer Experiment I: Linear Prediction
3.11 Computer Experiment II: Pattern Classification
3.12 Virtues and Limitations of the LMS Algorithm
3.13 Learning-Rate Annealing Schedules
3.14 Summary and Discussion
Notes and References
Problems

Chapter 4 Multilayer Perceptrons
4.1 Introduction
4.2 Some Preliminaries
4.3 Batch Learning and On-Line Learning
4.4 The Back-Propagation Algorithm
4.5 XOR Problem
4.6 Heuristics for Making the Back-Propagation Algorithm Perform Better
4.7 Computer Experiment: Pattern Classification
4.8 Back Propagation and Differentiation
4.9 The Hessian and Its Role in On-Line Learning
4.10 Optimal Annealing and Adaptive Control of the Learning Rate
4.11 Generalization
4.12 Approximations of Functions
4.13 Cross-Validation
4.14 Complexity Regularization and Network Pruning
4.15 Virtues and Limitations of Back-Propagation Learning
4.16 Supervised Learning Viewed as an Optimization Problem
4.17 Convolutional Networks
4.18 Nonlinear Filtering
4.19 Small-Scale Versus Large-Scale Learning Problems
4.20 Summary and Discussion
Notes and References
Problems

Chapter 5 Kernel Methods and Radial-Basis Function Networks
5.1 Introduction
5.2 Cover's Theorem on the Separability of Patterns
5.3 The Interpolation Problem
5.4 Radial-Basis-Function Networks
5.5 K-Means Clustering
5.6 Recursive Least-Squares Estimation of the Weight Vector
5.7 Hybrid Learning Procedure for RBF Networks
5.8 Computer Experiment: Pattern Classification
5.9 Interpretations of the Gaussian Hidden Units
5.10 Kernel Regression and Its Relation to RBF Networks
5.11 Summary and Discussion
Notes and References
Problems

Chapter 6 Support Vector Machines
Chapter 7 Regularization Theory
Chapter 8 Principal-Components Analysis
Chapter 9 Self-Organizing Maps
Chapter 10 Information-Theoretic Learning Models
Chapter 11 Stochastic Methods Rooted in Statistical Mechanics
Chapter 12 Dynamic Programming
Chapter 13 Neurodynamics
Chapter 14 Bayesian Filtering for State Estimation of Dynamic Systems
Chapter 15 Dynamically Driven Recurrent Networks
Bibliography
Index

Preface

In writing this third edition of a classic book, I have been guided by the same underlying philosophy of the first edition of the book:

Write an up-to-date treatment of neural networks in a comprehensive, thorough, and readable manner.

The new edition has been retitled Neural Networks and Learning Machines, in order to reflect two realities:

1. The perceptron, the multilayer perceptron, self-organizing maps, and neurodynamics, to name a few topics, have always been considered integral parts of neural networks, rooted in ideas inspired by the human brain.
2. Kernel methods, exemplified by support vector machines and kernel principal components analysis, are rooted in statistical learning theory.

Although, indeed, they share many fundamental concepts and applications, there are some subtle differences between the operations of neural networks and learning machines. The underlying subject matter is therefore much richer when they are studied together, under one umbrella, particularly so when ideas drawn from neural networks and machine learning are hybridized to perform improved learning tasks beyond the capability of either one operating on its own, and ideas inspired by the human brain lead to new perspectives wherever they are of particular importance.
  
