Python Machine Learning, 2nd Edition (Reprint)


Python Machine Learning, 2nd Edition (Reprint) is a book published by Southeast University Press in 2018. Its authors are Sebastian Raschka and Vahid Mirjalili.

Basic Information

  • Chinese title: Python機器學習第2版(影印版)
  • Authors: Sebastian Raschka, Vahid Mirjalili
  • Publisher: Southeast University Press
  • ISBN: 9787564178666

Synopsis

Python Machine Learning, 2nd Edition (Reprint) takes you into the world of predictive analytics and demonstrates why Python is one of the world's leading data science languages. Covering powerful Python libraries including scikit-learn, Theano, and Keras, along with practical guides and techniques ranging from sentiment analysis to neural networks, the book will soon have you answering the important questions that you and your organization face.

Table of Contents

Preface
Chapter 1: Giving Computers the Ability to Learn from Data
Building intelligent machines to transform data into knowledge
The three different types of machine learning
Making predictions about the future with supervised learning
Classification for predicting class labels
Regression for predicting continuous outcomes
Solving interactive problems with reinforcement learning
Discovering hidden structures with unsupervised learning
Finding subgroups with clustering
Dimensionality reduction for data compression
Introduction to the basic terminology and notations
A roadmap for building machine learning systems
Preprocessing - getting data into shape
Training and selecting a predictive model
Evaluating models and predicting unseen data instances
Using Python for machine learning
Installing Python and packages from the Python Package Index
Using the Anaconda Python distribution and package manager
Packages for scientific computing, data science, and machine learning
Summary
Chapter 2: Training Simple Machine Learning Algorithms for Classification
Artificial neurons - a brief glimpse into the early history of machine learning
The formal definition of an artificial neuron
The perceptron learning rule
Implementing a perceptron learning algorithm in Python
An object-oriented perceptron API
Training a perceptron model on the Iris dataset
Adaptive linear neurons and the convergence of learning
Minimizing cost functions with gradient descent
Implementing Adaline in Python
Improving gradient descent through feature scaling
Large-scale machine learning and stochastic gradient descent
Summary
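Chapter 2 above implements a perceptron from scratch and trains it on the Iris dataset. As a rough illustration of the perceptron learning rule the chapter covers (a minimal NumPy sketch with made-up toy data, not the book's own Perceptron class):

```python
import numpy as np

# Minimal perceptron sketch. Learning rule, applied per sample:
#   w := w + eta * (target - prediction) * x
class Perceptron:
    def __init__(self, eta=0.1, n_iter=10, seed=1):
        self.eta = eta          # learning rate
        self.n_iter = n_iter    # passes over the training set
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Small random initial weights; w_[0] is the bias unit.
        self.w_ = self.rng.normal(scale=0.01, size=1 + X.shape[1])
        for _ in range(self.n_iter):
            for xi, target in zip(X, y):
                update = self.eta * (target - self.predict(xi))
                self.w_[1:] += update * xi
                self.w_[0] += update
        return self

    def predict(self, X):
        # Threshold the net input at zero; labels are in {-1, 1}.
        return np.where(np.dot(X, self.w_[1:]) + self.w_[0] >= 0.0, 1, -1)

# Toy linearly separable data with labels in {-1, 1}.
X = np.array([[2, 2], [3, 3], [-2, -2], [-3, -3]], dtype=float)
y = np.array([1, 1, -1, -1])
model = Perceptron(eta=0.1, n_iter=10).fit(X, y)
print(model.predict(X))
```

Since the data is linearly separable, the perceptron convergence theorem guarantees the rule stops making updates after finitely many mistakes.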
Chapter 3: A Tour of Machine Learning Classifiers Using scikit-learn
Choosing a classification algorithm
First steps with scikit-learn - training a perceptron
Modeling class probabilities via logistic regression
Logistic regression intuition and conditional probabilities
Learning the weights of the logistic cost function
Converting an Adaline implementation into an algorithm for logistic regression
Training a logistic regression model with scikit-learn
Tackling overfitting via regularization
Maximum margin classification with support vector machines
Maximum margin intuition
Dealing with a nonlinearly separable case using slack variables
Alternative implementations in scikit-learn
Solving nonlinear problems using a kernel SVM
Kernel methods for linearly inseparable data
Using the kernel trick to find separating hyperplanes in high-dimensional space
Decision tree learning
Maximizing information gain - getting the most bang for your buck
Building a decision tree
Combining multiple decision trees via random forests
K-nearest neighbors - a lazy learning algorithm
Summary
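Chapter 3's classifier tour ends with k-nearest neighbors, a "lazy" learner that stores the training set and defers all computation to prediction time. A minimal NumPy sketch of that idea (illustrative only, with invented toy data; the chapter itself works with scikit-learn's implementations):

```python
import numpy as np

def knn_predict(X_train, y_train, X_new, k=3):
    """Classify each row of X_new by majority vote among its k
    nearest training points, using Euclidean distance."""
    preds = []
    for x in np.atleast_2d(X_new):
        dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training sample
        nearest = y_train[np.argsort(dists)[:k]]      # labels of the k closest points
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])         # majority vote
    return np.array(preds)

# Two well-separated clusters, one per class.
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                    [5.0, 5.0], [5.2, 4.8], [4.9, 5.1]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([[1.1, 1.0], [5.1, 5.0]])))
```

There is no fit step at all, which is exactly what makes the algorithm "lazy": the cost is paid at query time, scaling with the size of the training set.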
Chapter 4: Building Good Training Sets - Data Preprocessing
Dealing with missing data
Identifying missing values in tabular data
Eliminating samples or features with missing values
Imputing missing values
Understanding the scikit-learn estimator API
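Chapter 4's treatment of missing data includes imputing missing values, for example by substituting each gap with its feature's mean. A NumPy-only sketch of mean imputation (an illustration with made-up data; the chapter itself does this through the scikit-learn estimator API):

```python
import numpy as np

def impute_mean(X):
    """Replace each NaN entry with the mean of its column,
    mirroring mean imputation of missing feature values."""
    X = np.array(X, dtype=float, copy=True)
    col_means = np.nanmean(X, axis=0)          # per-column mean, ignoring NaNs
    nan_rows, nan_cols = np.where(np.isnan(X))
    X[nan_rows, nan_cols] = col_means[nan_cols]
    return X

data = [[1.0, 2.0, 3.0],
        [4.0, np.nan, 6.0],
        [7.0, 8.0, np.nan]]
print(impute_mean(data))
```

Mean imputation keeps every sample, unlike simply dropping rows or columns with missing values, at the cost of shrinking the variance of the affected features.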
