
• Christopher M. Bishop

PATTERN RECOGNITION AND MACHINE LEARNING

CHAPTER 1: INTRODUCTION

• Lecturer: Xiaopeng Hong

• These slides follow closely the course textbook “Pattern Recognition and Machine Learning” by Christopher Bishop and the slides “Machine Learning and Music” by Prof. Douglas Eck

Disclaimer

[Overview diagram labels: CVPR, Feature Extraction, Pattern Classification, ML (Statistical / Symbolic)]

Contents

[Diagram labels: IP, SP, Probability Theory, Information Theory, Mathematical logic]

PR & ML

• Pattern recognition has its origins in engineering, whereas machine learning grew out of computer science. However, these activities can be viewed as two facets of the same field, and together they have undergone substantial development over the past ten years.

Learning

• Learning denotes changes in the system that are adaptive in the sense that they enable the system to do the same task or tasks drawn from the same population more effectively the next time.

• by H. Simon

• "If a system can improve its performance by executing some process, this is learning." by Prof. 陆汝钤 (Lu Ruqian)

Related Publications

• Conference: ICML, KDD, NIPS, IJCNN, AIML, IJCAI, COLT, CVPR, ICCV, ECCV, …

• Journal: Machine Learning (ML), Journal of Machine Learning Research, Annals of Statistics, Data Mining and Knowledge Discovery, IEEE-KDE, IEEE-PAMI, Artificial Intelligence, Journal of Artificial Intelligence Research, Computational Intelligence, Neural Computation, IEEE-NN, Research, Information and Computation, …

History

• Classical statistical methods

• Division in feature space

• PAC

• Generalization

• Ensemble learning

M. Minsky: "Perceptrons"

F. Rosenblatt: Perceptron

BP Network

Leslie Gabriel Valiant
• He introduced the "probably approximately correct" (PAC) model of machine learning that has helped the field of computational learning theory grow.

by Wikipedia

• Computational complexity is treated as a factor that must be taken into account: the complexity of the algorithm must be polynomial, and to achieve this one is willing to sacrifice model accuracy.
• "For any ε > 0 and 0 ≤ δ < 1, |F(x) − f(x)| ≤ ε holds with probability greater than 1 − δ."
• Traditional statisticians found this idea hard to accept.

by Prof. 王珏 (Wang Jue)

Vladimir N. Vapnik

• "The harder problem of estimating a probability density should not be used as an intermediate step in solving machine-learning classification or regression problems; instead, he turned the problem directly into one of linear discrimination. In essence this gives up the interpretability of the learned model with respect to the underlying natural model."

• "Generalization" and "finite-sample statistics":
– generalization as the core problem of machine learning
– algorithms designed in a linear feature space
– generalization via the maximum margin

• "Better to settle for second best than to end up with nothing." This is something the statistical tradition could not accept.

by Prof. 王珏 (Wang Jue)

Robert Schapire

• "For any ε > 0 and 0 ≤ δ < 1, |F(x) − f(x)| ≤ ε holds with probability greater than 1/2 + δ."

• He gave a constructive proof that PAC weak learnability is a necessary and sufficient condition for PAC strong learnability.

• Ensemble learning has two important characteristics:
– multiple weak models are used in place of a single strong model
– decisions are made by having the weak models vote, with the answer decided by the majority.

by Prof. 王珏 (Wang Jue)

Example

Handwritten Digit Recognition

• Each digit is a 28 × 28 pixel image, i.e. an input vector of dimension d = 784.

• Pre-processing / feature extraction: 1. reduce variability; 2. speed up computation.

Polynomial Curve Fitting

Sum-of-Squares Error Function
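The equations themselves are not shown here; for reference, the polynomial model and the sum-of-squares error as defined in PRML Section 1.1 are:

y(x, \mathbf{w}) = \sum_{j=0}^{M} w_j x^j

E(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \{ y(x_n, \mathbf{w}) - t_n \}^2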

0th Order Polynomial

1st Order Polynomial

3rd Order Polynomial

9th Order Polynomial

Over-fitting

Root-Mean-Square (RMS) Error:
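For reference, the RMS error used to compare fits across different data-set sizes (PRML eq. 1.3) is:

E_{\mathrm{RMS}} = \sqrt{2 E(\mathbf{w}^\star) / N}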

Polynomial Coefficients

Data Set Size:

9th Order Polynomial

Data Set Size:

9th Order Polynomial

Regularization

• Penalize large coefficient values
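The regularized error being minimized is (PRML eq. 1.4):

\widetilde{E}(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \{ y(x_n, \mathbf{w}) - t_n \}^2 + \frac{\lambda}{2} \|\mathbf{w}\|^2

A minimal Python sketch of this regularized polynomial fit; the synthetic sin(2πx) data and variable names are illustrative, not from the slides:

import numpy as np

# Regularized (ridge-style) polynomial least squares, sketching the idea above.
# Synthetic noisy sin(2*pi*x) samples in the spirit of PRML, not from the slides.
rng = np.random.default_rng(0)
N, M, lam = 10, 9, np.exp(-18)                 # points, polynomial order, lambda
x = np.linspace(0, 1, N)
t = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=N)

Phi = np.vander(x, M + 1, increasing=True)     # design matrix, Phi[n, j] = x_n ** j
# Minimizing 0.5*||Phi w - t||^2 + 0.5*lam*||w||^2 gives the normal equations
# (Phi^T Phi + lam I) w = Phi^T t
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(M + 1), Phi.T @ t)

x_grid = np.linspace(0, 1, 100)
y_grid = np.vander(x_grid, M + 1, increasing=True) @ w   # fitted curve on a grid

Setting lam = 0 recovers the unregularized 9th-order fit that over-fits the 10 data points.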

Regularization:

Regularization:

Regularization: vs.

Polynomial Coefficients

Probability Theory

Statistical machine learning / pattern classification problems can be expressed within the Bayesian framework.

ML / MAP / Bayesian

We now seek a more principled approach to solving problems in pattern recognition by turning to a discussion of probability theory, which provides the foundation for nearly all of the subsequent developments in this book.

Probability Theory

Apples and Oranges

Probability Theory

• Marginal Probability

• Conditional Probability

Joint Probability

Probability Theory

• Sum Rule

Product Rule

The Rules of Probability

• Sum Rule

• Product Rule
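In symbols (PRML Section 1.2):

sum rule:     p(X) = \sum_{Y} p(X, Y)

product rule: p(X, Y) = p(Y \mid X) \, p(X)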

Bayes’ Theorem

posterior ∝ likelihood × prior
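That is (PRML Section 1.2):

p(Y \mid X) = \frac{p(X \mid Y) \, p(Y)}{p(X)}, \qquad p(X) = \sum_{Y} p(X \mid Y) \, p(Y)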

Probability Densities

Transformed Densities
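For a continuous variable, probabilities are obtained by integrating the density, and under a change of variables x = g(y) the density picks up a Jacobian factor (PRML Section 1.2.1):

p(x \in (a, b)) = \int_{a}^{b} p(x) \, \mathrm{d}x, \qquad p(x) \ge 0, \qquad \int_{-\infty}^{\infty} p(x) \, \mathrm{d}x = 1

p_y(y) = p_x(x) \left| \frac{\mathrm{d}x}{\mathrm{d}y} \right| = p_x(g(y)) \, |g'(y)|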

Markus Svensén
This figure was taken from Solution 1.4 in the web-edition of the solutions manual for PRML, available at http://research.microsoft.com/~cmbishop/PRML. A more thorough explanation of what the figure shows is provided in the text of the solution.

Expectations

Conditional Expectation (discrete)

Approximate Expectation (discrete and continuous)

Variances and Covariances
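For reference (PRML Section 1.2.2), expectations, their sampling approximation, variance and covariance are defined as:

\mathbb{E}[f] = \sum_{x} p(x) f(x) \ \ \text{(discrete)}, \qquad \mathbb{E}[f] = \int p(x) f(x) \, \mathrm{d}x \ \ \text{(continuous)}

\mathbb{E}[f] \simeq \frac{1}{N} \sum_{n=1}^{N} f(x_n)

\operatorname{var}[f] = \mathbb{E}\big[ (f(x) - \mathbb{E}[f(x)])^2 \big] = \mathbb{E}[f(x)^2] - \mathbb{E}[f(x)]^2

\operatorname{cov}[x, y] = \mathbb{E}_{x, y}\big[ \{x - \mathbb{E}[x]\} \{y - \mathbb{E}[y]\} \big] = \mathbb{E}_{x, y}[x y] - \mathbb{E}[x] \, \mathbb{E}[y]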

The Gaussian Distribution

Gaussian Mean and Variance

The Multivariate Gaussian
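For reference, the univariate and multivariate Gaussian densities (PRML Section 1.2.4), with E[x] = μ and var[x] = σ² in the univariate case:

\mathcal{N}(x \mid \mu, \sigma^2) = \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\left\{ -\frac{(x-\mu)^2}{2\sigma^2} \right\}

\mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{D/2} |\boldsymbol{\Sigma}|^{1/2}} \exp\left\{ -\frac{1}{2} (\mathbf{x}-\boldsymbol{\mu})^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x}-\boldsymbol{\mu}) \right\}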

Gaussian Parameter Estimation

Likelihood function

Maximum (Log) Likelihood
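For N i.i.d. observations x = (x_1, …, x_N) from a Gaussian, the log-likelihood and its maximizers are (PRML Section 1.2.4):

\ln p(\mathbf{x} \mid \mu, \sigma^2) = -\frac{1}{2\sigma^2} \sum_{n=1}^{N} (x_n - \mu)^2 - \frac{N}{2} \ln \sigma^2 - \frac{N}{2} \ln(2\pi)

\mu_{\mathrm{ML}} = \frac{1}{N} \sum_{n=1}^{N} x_n, \qquad \sigma^2_{\mathrm{ML}} = \frac{1}{N} \sum_{n=1}^{N} (x_n - \mu_{\mathrm{ML}})^2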

Properties of μ_ML and σ²_ML
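These estimators satisfy the following (PRML Section 1.2.4): the ML mean is unbiased, while the ML variance systematically underestimates the true variance:

\mathbb{E}[\mu_{\mathrm{ML}}] = \mu, \qquad \mathbb{E}[\sigma^2_{\mathrm{ML}}] = \left( \frac{N-1}{N} \right) \sigma^2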

Curve Fitting Re-visited

precision parameter β

Maximum Likelihood

Determine w_ML by minimizing the sum-of-squares error, E(w).

1. mean: w_ML

2. precision: β
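In symbols (PRML Section 1.2.5): the targets are modelled as Gaussian around the polynomial curve, maximizing the likelihood with respect to w reproduces the sum-of-squares solution w_ML, and the precision follows from:

p(t \mid x, \mathbf{w}, \beta) = \mathcal{N}\!\left(t \mid y(x, \mathbf{w}), \beta^{-1}\right)

\frac{1}{\beta_{\mathrm{ML}}} = \frac{1}{N} \sum_{n=1}^{N} \{ y(x_n, \mathbf{w}_{\mathrm{ML}}) - t_n \}^2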

Predictive Distribution

MAP: A Step towards Bayes

Determine w_MAP by minimizing the regularized sum-of-squares error, Ẽ(w).
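With a Gaussian prior p(w | α) = N(w | 0, α⁻¹ I), maximizing the posterior is equivalent to minimizing (PRML Section 1.2.5):

\frac{\beta}{2} \sum_{n=1}^{N} \{ y(x_n, \mathbf{w}) - t_n \}^2 + \frac{\alpha}{2} \mathbf{w}^{\mathrm{T}} \mathbf{w}

i.e. the regularized sum-of-squares error with λ = α / β.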

Bayesian Curve Fitting

fully Bayesian approach

Section 3.3

Bayesian Predictive Distribution
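The predictive distribution marginalizes over w and, for this model, is again Gaussian (PRML Section 1.2.6):

p(t \mid x, \mathbf{x}, \mathbf{t}) = \int p(t \mid x, \mathbf{w}) \, p(\mathbf{w} \mid \mathbf{x}, \mathbf{t}) \, \mathrm{d}\mathbf{w} = \mathcal{N}\!\left(t \mid m(x), s^2(x)\right)

m(x) = \beta \, \phi(x)^{\mathrm{T}} \mathbf{S} \sum_{n=1}^{N} \phi(x_n) \, t_n, \qquad s^2(x) = \beta^{-1} + \phi(x)^{\mathrm{T}} \mathbf{S} \, \phi(x)

\mathbf{S}^{-1} = \alpha \mathbf{I} + \beta \sum_{n=1}^{N} \phi(x_n) \, \phi(x_n)^{\mathrm{T}}, \qquad \phi(x) = (x^0, \ldots, x^M)^{\mathrm{T}}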

1.3 Model Selection

Many parameters…we need to determine the values of such parameters, and the principal objective in doing so is usually to achieve the best predictive performance on new data.

Section 3.3

we may wish to consider a range of different types of model in order to find the best one for our particular application.

Model Selection

• Cross-Validation
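As an illustration of cross-validation for model selection, here is a minimal Python sketch; the helper names fit_poly, rms_error and cross_validate are illustrative, not from the slides:

import numpy as np

def fit_poly(x, t, M, lam=0.0):
    # Least-squares polynomial fit of order M with optional ridge penalty lam.
    Phi = np.vander(x, M + 1, increasing=True)
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(M + 1), Phi.T @ t)

def rms_error(w, x, t):
    # Root-mean-square error of the polynomial w evaluated on (x, t).
    y = np.vander(x, len(w), increasing=True) @ w
    return np.sqrt(np.mean((y - t) ** 2))

def cross_validate(x, t, M, S=5, seed=0):
    # Average held-out RMS error of an order-M polynomial under S-fold CV.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), S)
    errors = []
    for s in range(S):
        val = folds[s]
        train = np.concatenate([folds[i] for i in range(S) if i != s])
        w = fit_poly(x[train], t[train], M)
        errors.append(rms_error(w, x[val], t[val]))
    return np.mean(errors)

# The order M with the smallest cross-validated error would be selected.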

Information criteria that add a penalty for model complexity tend to favour overly simple models.

Section 3.4 & 4.4.1

1.4 Curse of Dimensionality

we will have to deal with spaces of high dimensionality comprising many input variables.

this poses some serious challenges and is an important factor influencing the design of pattern recognition techniques.

Curse of Dimensionality

Curse of Dimensionality

Polynomial curve fitting, M = 3

1.5 Decision Theory

Determination of p(x, t) from a set of training data is an example of inference and is typically a very difficult problem whose solution forms the subject of much of this book.

In a practical application, we often make a specific prediction for the value of t, or more generally take a specific action based on our understanding of the values t is likely to take, and this aspect is the subject of decision theory.

Decision Theory

• Inference step: determine either p(x, C_k) or p(C_k | x).

• Decision step: for given x, determine optimal t.

Minimum Misclassification Rate

If our aim is to minimize the chance of assigning x to the wrong class, then intuitively we would choose the class having the higher posterior probability.

decision regions
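For two classes, the probability of a mistake is (PRML Section 1.5.1):

p(\text{mistake}) = p(x \in \mathcal{R}_1, \mathcal{C}_2) + p(x \in \mathcal{R}_2, \mathcal{C}_1) = \int_{\mathcal{R}_1} p(x, \mathcal{C}_2) \, \mathrm{d}x + \int_{\mathcal{R}_2} p(x, \mathcal{C}_1) \, \mathrm{d}x

which is minimized by assigning each x to the class with the larger p(x, C_k), equivalently the larger posterior p(C_k | x).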

Minimum Expected Loss

• Example: classify medical images as 'cancer' or 'normal'.

[Loss matrix: rows indexed by truth, columns by decision]

loss function / cost function

Minimum Expected Loss

Regions R_j are chosen to minimize the expected loss.
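In symbols (PRML Section 1.5.2):

\mathbb{E}[L] = \sum_{k} \sum_{j} \int_{\mathcal{R}_j} L_{kj} \, p(x, \mathcal{C}_k) \, \mathrm{d}x

which is minimized by assigning each x to the class j that minimizes \sum_{k} L_{kj} \, p(\mathcal{C}_k \mid x).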

Reject Option

Why Separate Inference and Decision?

• Minimizing risk (loss matrix may change over time)
• Reject option
• Unbalanced class priors
• Combining models

Decision Theory for Regression

• Inference step: determine p(x, t).

• Decision step: for given x, make the optimal prediction, y(x), for t.

• Loss function: L(t, y(x)).
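The expected loss, and its minimizer under the squared loss, are (PRML Section 1.5.5):

\mathbb{E}[L] = \iint L(t, y(x)) \, p(x, t) \, \mathrm{d}x \, \mathrm{d}t

with L(t, y(x)) = \{ y(x) - t \}^2: \quad y(x) = \int t \, p(t \mid x) \, \mathrm{d}t = \mathbb{E}_t[t \mid x]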

Generative vs Discriminative

• Generative approach: model p(x | C_k) and p(C_k) (or the joint p(x, C_k)), then use Bayes' theorem to obtain p(C_k | x).

• Discriminative approach: model p(C_k | x) directly.

Discriminant function

1.6 Information Theory

Information theory will also prove useful in our development of pattern recognition and machine learning techniques

Consider a discrete random variable x: how much information is received when we observe a specific value of this variable? The amount of information can be viewed as the 'degree of surprise' on learning the value of x.

Entropy

Important quantity in:
• coding theory
• statistical physics
• machine learning

Entropy

• Coding theory: x discrete with 8 possible states; how many bits to transmit the state of x?

• All states equally likely
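The entropy, and its value for 8 equally likely states, are (PRML Section 1.6):

H[x] = - \sum_{x} p(x) \log_2 p(x)

H = -\, 8 \times \frac{1}{8} \log_2 \frac{1}{8} = 3 \ \text{bits}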

Entropy

Conditional Entropy

The Kullback-Leibler Divergence

Mutual Information
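For reference (PRML Section 1.6.1), conditional entropy, KL divergence and mutual information are:

H[\mathbf{y} \mid \mathbf{x}] = - \iint p(\mathbf{y}, \mathbf{x}) \ln p(\mathbf{y} \mid \mathbf{x}) \, \mathrm{d}\mathbf{y} \, \mathrm{d}\mathbf{x}

\mathrm{KL}(p \,\|\, q) = - \int p(\mathbf{x}) \ln \left\{ \frac{q(\mathbf{x})}{p(\mathbf{x})} \right\} \mathrm{d}\mathbf{x} \;\ge\; 0, \quad \text{with equality iff } p = q

I[\mathbf{x}, \mathbf{y}] = \mathrm{KL}\big( p(\mathbf{x}, \mathbf{y}) \,\|\, p(\mathbf{x}) \, p(\mathbf{y}) \big) = H[\mathbf{x}] - H[\mathbf{x} \mid \mathbf{y}] = H[\mathbf{y}] - H[\mathbf{y} \mid \mathbf{x}]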

Conclusion

• Machine Learning
• Generalization
• Classification / Regression
• fitting / over-fitting
• Regularization
• Bayes' Theorem
• Bayes' Decision
• Entropy / KLD / MI

Q & A
