Pattern Recognition (English Edition, 4th Edition) is a world-renowned classic. Its coverage is comprehensive yet relatively self-contained: it introduces the fundamentals of the field, surveys the current state of research, and looks ahead to future developments. It is one of the most comprehensive references in the field and has been adopted as a textbook by many universities around the world. The book is suitable as a textbook for graduate students and senior undergraduates in computer science, electronics, communications, automation, and related disciplines, and as a reference for engineers working in computer information processing, automatic control, and related areas.
Key features of this edition:
- Provides up-to-date coverage of clustering algorithms for large data sets and high-dimensional data, with applications in web mining and bioinformatics.
- Covers a wide range of applications, including image analysis, optical character recognition, channel equalization, speech recognition, and audio classification.
- Presents the latest results from key methods for classification and robust regression.
- Introduces classifier combination techniques, including the Boosting approach.
- Offers more worked examples and illustrations to deepen the reader's understanding of the various methods.
- Adds new sections on current topics, including nonlinear dimensionality reduction, nonnegative matrix factorization, relevance feedback, robust regression, semi-supervised learning, spectral clustering, and clustering combination techniques.
Basic Information
- Title: Pattern Recognition (Classic Original Books series)
- Category: Computers & Internet
- Publication date: August 1, 2009
- Language: English
- ISBN: 9787111268963
- Author: Theodoridis
- Publisher: China Machine Press
- Pages: 961
- Format: 32mo
- Brand: China Machine Press
Synopsis
Pattern Recognition (English Edition, 4th Edition) is published by China Machine Press.
About the Author
Author: Theodoridis (Greece)
Theodoridis is a professor in the Department of Informatics at the University of Athens, Greece. His main research interests are adaptive signal processing, communications, and pattern recognition. He served as chairman of PARLE-95 (Parallel Architectures and Languages Europe) and as general chairman of EUSIPCO-98 (the European Signal Processing Conference), and he is a member of the editorial board of the journal Signal Processing.
Table of Contents
Preface
CHAPTER 1 Introduction
1.1 Is Pattern Recognition Important?
1.2 Features, Feature Vectors, and Classifiers
1.3 Supervised, Unsupervised, and Semi-Supervised Learning
1.4 MATLAB Programs
1.5 Outline of the Book
CHAPTER 2 Classifiers Based on Bayes Decision Theory
2.1 Introduction
2.2 Bayes Decision Theory
2.3 Discriminant Functions and Decision Surfaces
2.4 Bayesian Classification for Normal Distributions
2.5 Estimation of Unknown Probability Density Functions
2.6 The Nearest Neighbor Rule
2.7 Bayesian Networks
2.8 Problems
References
CHAPTER 3 Linear Classifiers
3.1 Introduction
3.2 Linear Discriminant Functions and Decision Hyperplanes
3.3 The Perceptron Algorithm
3.4 Least Squares Methods
3.5 Mean Square Estimation Revisited
3.6 Logistic Discrimination
3.7 Support Vector Machines
3.8 Problems
References
CHAPTER 4 Nonlinear Classifiers
4.1 Introduction
4.2 The XOR Problem
4.3 The Two-Layer Perceptron
4.4 Three-Layer Perceptrons
4.5 Algorithms Based on Exact Classification of the Training Set
4.6 The Backpropagation Algorithm
4.7 Variations on the Backpropagation Theme
4.8 The Cost Function Choice
4.9 Choice of the Network Size
4.10 A Simulation Example
4.11 Networks with Weight Sharing
4.12 Generalized Linear Classifiers
4.13 Capacity of the l-Dimensional Space in Linear Dichotomies
4.14 Polynomial Classifiers
4.15 Radial Basis Function Networks
4.16 Universal Approximators
4.17 Probabilistic Neural Networks
4.18 Support Vector Machines: The Nonlinear Case
4.19 Beyond the SVM Paradigm
4.20 Decision Trees
4.21 Combining Classifiers
4.22 The Boosting Approach to Combine Classifiers
4.23 The Class Imbalance Problem
4.24 Discussion
4.25 Problems
References
CHAPTER 5 Feature Selection
5.1 Introduction
5.2 Preprocessing
5.3 The Peaking Phenomenon
5.4 Feature Selection Based on Statistical Hypothesis Testing
5.5 The Receiver Operating Characteristics (ROC) Curve
5.6 Class Separability Measures
5.7 Feature Subset Selection
5.8 Optimal Feature Generation
5.9 Neural Networks and Feature Generation/Selection
5.10 A Hint on Generalization Theory
5.11 The Bayesian Information Criterion
5.12 Problems
References
CHAPTER 6 Feature Generation I: Linear Transforms
CHAPTER 7 Feature Generation II
CHAPTER 8 Template Matching
CHAPTER 9 Context-Dependent Classification
CHAPTER 10 System Evaluation
CHAPTER 11 Clustering: Basic Concepts
CHAPTER 12 Clustering Algorithms I: Sequential Algorithms
CHAPTER 13 Clustering Algorithms II: Hierarchical Algorithms
CHAPTER 14 Clustering Algorithms III: Schemes Based on Function Optimization
CHAPTER 15 Clustering Algorithms IV
CHAPTER 16 Cluster Validity
Appendix A Hints from Probability and Statistics
Appendix B Linear Algebra Basics
Preface
This book is the outgrowth of our teaching advanced undergraduate and graduate courses over the past 20 years. These courses have been taught to different audiences, including students in electrical and electronics engineering, computer engineering, computer science, and informatics, as well as to an interdisciplinary audience of a graduate course on automation. This experience led us to make the book as self-contained as possible and to address students with different backgrounds. As prerequisite knowledge, the reader requires only basic calculus, elementary linear algebra, and some probability theory basics. A number of mathematical tools, such as probability and statistics as well as constrained optimization, needed by various chapters, are treated in four Appendices. The book is designed to serve as a text for advanced undergraduate and graduate students, and it can be used for either a one- or a two-semester course. Furthermore, it is intended to be used as a self-study and reference book for research and for the practicing scientist/engineer. This latter audience was also our second incentive for writing this book, due to the involvement of our group in a number of projects related to pattern recognition.