Machine Learning: A Bayesian and Optimization Perspective (English Edition, 2nd Edition)

Machine Learning: A Bayesian and Optimization Perspective (English Edition, 2nd Edition) is a book published in 2020 by 機械工業出版社 (China Machine Press). Its author is Sergios Theodoridis (Greece).

Basic Information

  • Title: Machine Learning: A Bayesian and Optimization Perspective (English Edition, 2nd Edition)
  • Author: Sergios Theodoridis (Greece)
  • Publisher: 機械工業出版社 (China Machine Press)
  • ISBN: 9787111668374

Synopsis

This book presents machine learning from a unified viewpoint built around the two pillars of supervised learning: regression and classification. It begins with the fundamentals, including mean-square, least-squares, and maximum-likelihood methods, ridge regression, Bayesian decision theory classification, logistic regression, and decision trees. It then introduces more recent techniques, including sparse modeling methods, learning in reproducing kernel Hilbert spaces and in support vector machines, Bayesian inference with a focus on the EM algorithm and its approximate variational inference variants, Monte Carlo methods, probabilistic graphical models with a focus on Bayesian networks, hidden Markov models, and particle filtering. Dimensionality reduction and latent variable modeling are also treated in depth. The book concludes with an extended chapter on neural networks and deep learning architectures. In addition, it covers the fundamentals of statistical parameter estimation, Wiener and Kalman filtering, and convexity and convex optimization; one chapter is devoted to stochastic approximation and the gradient descent family of algorithms, and related concepts and algorithms for distributed optimization as well as online learning techniques are presented.
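As a concrete illustration of one of the foundational topics named above, the following is a minimal ridge regression sketch in Python. It is not code from the book; the function name ridge_regression and the parameter lam are our own labels for the standard closed-form solution w = (XᵀX + λI)⁻¹Xᵀy.

    import numpy as np

    # Minimal ridge regression sketch (illustrative only, not the book's code).
    # Ridge regression minimizes ||y - Xw||^2 + lam * ||w||^2; its closed-form
    # solution is w = (X^T X + lam * I)^{-1} X^T y.
    def ridge_regression(X, y, lam=1.0):
        n_features = X.shape[1]
        A = X.T @ X + lam * np.eye(n_features)
        return np.linalg.solve(A, X.T @ y)

    # Usage example on synthetic data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    true_w = np.array([1.5, -2.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=100)
    print(ridge_regression(X, y, lam=0.1))  # approximately recovers true_w

Solving the regularized normal equations with np.linalg.solve, rather than forming an explicit matrix inverse, is the numerically preferred way to evaluate this formula.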

Table of Contents

Preface iv
Acknowledgments vi
About the Author viii
Notation ix
CHAPTER 1 Introduction 1
1.1 The Historical Context 1
1.2 Artificial Intelligence and Machine Learning 2
1.3 Algorithms Can Learn What Is Hidden in the Data 4
1.4 Typical Applications of Machine Learning 6
Speech Recognition 6
Computer Vision 6
Multimodal Data 6
Natural Language Processing 7
Robotics 7
Autonomous Cars 7
Challenges for the Future 8
1.5 Machine Learning: Major Directions 8
1.5.1 Supervised Learning 8
1.6 Unsupervised and Semisupervised Learning 11
1.7 Structure and a Road Map of the Book 12
References 16
CHAPTER 2 Probability and Stochastic Processes 19
2.1 Introduction 20
2.2 Probability and Random Variables 20
2.2.1 Probability 20
2.2.2 Discrete Random Variables 22
2.2.3 Continuous Random Variables 24
2.2.4 Mean and Variance 25
2.2.5 Transformation of Random Variables 28
2.3 Examples of Distributions 29
2.3.1 Discrete Variables 29
2.3.2 Continuous Variables 32
2.4 Stochastic Processes 41
2.4.1 First- and Second-Order Statistics 42
2.4.2 Stationarity and Ergodicity 43
2.4.3 Power Spectral Density 46
2.4.4 Autoregressive Models 51
2.5 Information Theory 54
2.5.1 Discrete Random Variables 56
2.5.2 Continuous Random Variables 59
2.6 Stochastic Convergence 61
Convergence Everywhere 62
Convergence Almost Everywhere 62
Convergence in the Mean-Square Sense 62
Convergence in Probability 63
Convergence in Distribution 63
Problems 63
References 65
CHAPTER 3 Learning in Parametric Modeling: Basic Concepts and Directions 67
3.1 Introduction 67
3.2 Parameter Estimation: The Deterministic Point of View 68
3.3 Linear Regression 71
3.4 Classification 75
Generative Versus Discriminative Learning 78
3.5 Biased Versus Unbiased Estimation 80
3.5.1 Biased or Unbiased Estimation? 81
3.6 The Cramér-Rao Lower Bound 83
3.7 Sufficient Statistic 87
3.8 Regularization 89
Inverse Problems
