Lesson 4: Data Classification and Prediction


Page 1: Lesson 4 Data Classification and Prediction

Lesson 4: Data Classification and Prediction

Xu Congfu, Associate Professor
Institute of Artificial Intelligence, Zhejiang University

Slides for the Zhejiang University undergraduate course "Introduction to Data Mining"

Page 2: Lesson 4 Data Classification and Prediction

Outline

What is classification? What is prediction?

Issues regarding classification and prediction

Classification by decision tree induction

Bayesian Classification

Prediction

Summary

Reference

Page 3: Lesson 4 Data Classification and Prediction

I. Classification vs. Prediction

Classification
predicts categorical class labels (discrete or nominal)
classifies data (constructs a model) based on the training set and the values (class labels) of a classifying attribute, and uses the model to classify new data

Prediction
models continuous-valued functions, i.e., predicts unknown or missing values

Typical applications: credit approval, target marketing, medical diagnosis, fraud detection

Page 4: Lesson 4 Data Classification and Prediction

Classification: A Two-Step Process

Model construction: describing a set of predetermined classes
Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
The set of tuples used for model construction is the training set
The model is represented as classification rules, decision trees, or mathematical formulae

Model usage: classifying future or unknown objects
Estimate the accuracy of the model: the known label of each test sample is compared with the model's classification result
The accuracy rate is the percentage of test-set samples that are correctly classified by the model (a minimal sketch follows below)
The test set is independent of the training set, otherwise overfitting will occur
If the accuracy is acceptable, use the model to classify data tuples whose class labels are not known
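A minimal Python sketch of the accuracy estimate described above; the names model and test_set are hypothetical (a trained model exposed as a callable, and a list of (sample, known_label) pairs), not from the slides:

def accuracy(model, test_set):
    # accuracy rate: fraction of test-set samples whose predicted label
    # matches the known label
    correct = sum(1 for sample, label in test_set if model(sample) == label)
    return correct / len(test_set)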

Page 5: Lesson 4 Data Classification and Prediction

Classification Process (1): Model Construction

Training Data:

NAME  RANK            YEARS  TENURED
Mike  Assistant Prof  3      no
Mary  Assistant Prof  7      yes
Bill  Professor       2      yes
Jim   Associate Prof  7      yes
Dave  Assistant Prof  6      no
Anne  Associate Prof  3      no

Classification Algorithms

Classifier (Model), e.g.:
IF rank = 'professor' OR years > 6 THEN tenured = 'yes'
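The learned model above is just a rule; as a small illustration, a Python function that applies it (the function name and the case-insensitive comparison are my choices, not from the slides):

def classify_tenured(rank, years):
    # model from the slide: IF rank = 'professor' OR years > 6 THEN tenured = 'yes'
    return "yes" if rank.lower() == "professor" or years > 6 else "no"

print(classify_tenured("Professor", 4))   # the unseen tuple (Jeff, Professor, 4) on Page 6 -> "yes"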

Page 6: Lesson 4 Data Classification and Prediction

Classification Process (2): Use the Model in Prediction

Classifier

Testing Data:

NAME     RANK            YEARS  TENURED
Tom      Assistant Prof  2      no
Merlisa  Associate Prof  7      no
George   Professor       5      yes
Joseph   Assistant Prof  7      yes

Unseen Data: (Jeff, Professor, 4)

Tenured?

Page 7: Lesson 4 Data Classification and Prediction

Supervised vs. Unsupervised Learning

Supervised learning (classification)
Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations
New data are classified based on the training set

Unsupervised learning (clustering)
The class labels of the training data are unknown
Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data

Page 8: Lesson 4 Data Classification and Prediction

II. Issues Regarding Classification and Prediction (1): Data Preparation

Data cleaning: preprocess data in order to reduce noise and handle missing values

Relevance analysis (feature selection): remove irrelevant or redundant attributes

Data transformation: generalize and/or normalize data

Page 9: Lesson 4 Data Classification and Prediction

Issues Regarding Classification and Prediction (2): Evaluating Classification Methods

Accuracy: classifier accuracy and predictor accuracy

Speed and scalability: time to construct the model (training time) and time to use the model (classification/prediction time)

Robustness: handling noise and missing values

Scalability: efficiency on disk-resident databases

Interpretability: understanding and insight provided by the model

Other measures, e.g., goodness of rules, such as decision tree size or compactness of classification rules

Page 10: Lesson 4 Data Classification and Prediction

III. Decision Tree Induction: Training Dataset

age    income  student  credit_rating  buys_computer
<=30   high    no       fair           no
<=30   high    no       excellent      no
31…40  high    no       fair           yes
>40    medium  no       fair           yes
>40    low     yes      fair           yes
>40    low     yes      excellent      no
31…40  low     yes      excellent      yes
<=30   medium  no       fair           no
<=30   low     yes      fair           yes
>40    medium  yes      fair           yes
<=30   medium  yes      excellent      yes
31…40  medium  no       excellent      yes
31…40  high    yes      fair           yes
>40    medium  no       excellent      no

This dataset follows the style of Quinlan's ID3 example (Playing Tennis).

Page 11: Lesson 4 Data Classification and Prediction

Output: A Decision Tree for "buys_computer"

[Decision tree figure]
Root: age?
  <=30: test student? (no -> no, yes -> yes)
  31…40: yes
  >40: test credit rating? (excellent -> no, fair -> yes)

Page 12: Lesson 4 Data Classification and Prediction

Algorithm for Decision Tree Induction

Basic algorithm (a greedy algorithm)
The tree is constructed in a top-down, recursive, divide-and-conquer manner
At the start, all the training examples are at the root
Attributes are categorical (if continuous-valued, they are discretized in advance)
Examples are partitioned recursively based on selected attributes
Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)

Conditions for stopping the partitioning
All samples for a given node belong to the same class
There are no remaining attributes for further partitioning; majority voting is employed for classifying the leaf
There are no samples left

Page 13: Lesson 4 Data Classification and Prediction

Attribute Selection Measure: Information Gain (ID3/C4.5)

Select the attribute with the highest information gain

Let S contain si tuples of class Ci for i = 1, …, m, and let s = s1 + … + sm

Expected information (entropy) required to classify an arbitrary tuple:

I(s1, s2, …, sm) = - sum_{i=1..m} (si/s) log2(si/s)

Entropy of attribute A with values {a1, a2, …, av}:

E(A) = sum_{j=1..v} ((s1j + … + smj)/s) I(s1j, …, smj)

Information gained by branching on attribute A:

Gain(A) = I(s1, s2, …, sm) - E(A)
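A short, self-contained Python sketch of these formulas, checked against the numbers used on the next slide (the counts are from the buys_computer example):

import math

def info(counts):
    # I(s1, ..., sm): expected information needed to classify a tuple
    s = sum(counts)
    return -sum((si / s) * math.log2(si / s) for si in counts if si > 0)

print(round(info([9, 5]), 3))          # I(9, 5) = 0.940
e_age = (5/14) * info([2, 3]) + (4/14) * info([4, 0]) + (5/14) * info([3, 2])
print(round(e_age, 3))                 # E(age) = 0.694
print(round(info([9, 5]) - e_age, 3))  # Gain(age) = 0.246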

Page 14: Lesson 4 Data Classification and Prediction

Attribute Selection by Information Gain Computation

Class P: buys_computer = "yes"; Class N: buys_computer = "no"
I(p, n) = I(9, 5) = 0.940

Compute the entropy for age:

age    pi  ni  I(pi, ni)
<=30   2   3   0.971
31…40  4   0   0
>40    3   2   0.971

(5/14) I(2, 3) means that "age <= 30" has 5 out of 14 samples, with 2 yes'es and 3 no's. Hence

E(age) = (5/14) I(2, 3) + (4/14) I(4, 0) + (5/14) I(3, 2) = 0.694

Gain(age) = I(p, n) - E(age) = 0.246

Similarly,
Gain(income) = 0.029
Gain(student) = 0.151
Gain(credit_rating) = 0.048

(The training data are the buys_computer table shown on Page 10.)

Page 15: Lesson 4 Data Classification and Prediction

Computing Information Gain for Continuous-Valued Attributes

Let attribute A be a continuous-valued attribute

We must determine the best split point for A
Sort the values of A in increasing order
Typically, the midpoint between each pair of adjacent values is considered as a possible split point: (ai + ai+1)/2 is the midpoint between the values of ai and ai+1
The point with the minimum expected information requirement for A is selected as the split point for A

Split: D1 is the set of tuples in D satisfying A <= split-point, and D2 is the set of tuples in D satisfying A > split-point
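A minimal sketch of choosing the split point this way; the function names and the layout of the data (a list of attribute values and a parallel list of class labels) are my choices for illustration:

import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def best_split_point(values, labels):
    # try the midpoint between each pair of adjacent sorted values and keep the
    # one with the minimum expected information requirement over the split D1/D2
    pairs = sorted(zip(values, labels))
    best, best_info = None, float("inf")
    for i in range(len(pairs) - 1):
        split = (pairs[i][0] + pairs[i + 1][0]) / 2
        d1 = [lab for v, lab in pairs if v <= split]
        d2 = [lab for v, lab in pairs if v > split]
        info = (len(d1) * entropy(d1) + len(d2) * entropy(d2)) / len(pairs)
        if info < best_info:
            best, best_info = split, info
    return best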

Page 16: Lesson 4 Data Classification and Prediction

Extracting Classification Rules from Trees

Represent the knowledge in the form of IF-THEN rules
One rule is created for each path from the root to a leaf
Each attribute-value pair along a path forms a conjunction
The leaf node holds the class prediction
Rules are easier for humans to understand

Example (one rule per path of the tree on Page 11):

IF age = "<=30" AND student = "no" THEN buys_computer = "no"
IF age = "<=30" AND student = "yes" THEN buys_computer = "yes"
IF age = "31…40" THEN buys_computer = "yes"
IF age = ">40" AND credit_rating = "excellent" THEN buys_computer = "no"
IF age = ">40" AND credit_rating = "fair" THEN buys_computer = "yes"
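A small sketch of the path-to-rule idea, assuming the tree is stored as nested dictionaries (a representation chosen here purely for illustration: each internal node maps attribute values to subtrees, and each leaf holds the class label):

buys_computer_tree = {
    "age": {
        "<=30": {"student": {"no": "no", "yes": "yes"}},
        "31…40": "yes",
        ">40": {"credit_rating": {"excellent": "no", "fair": "yes"}},
    }
}

def extract_rules(tree, conditions=()):
    # one IF-THEN rule per root-to-leaf path; the attribute-value pairs along
    # the path form the conjunction, the leaf supplies the class prediction
    if not isinstance(tree, dict):
        antecedent = " AND ".join('%s = "%s"' % (a, v) for a, v in conditions)
        return ['IF %s THEN buys_computer = "%s"' % (antecedent, tree)]
    rules = []
    for attr, branches in tree.items():
        for value, subtree in branches.items():
            rules.extend(extract_rules(subtree, conditions + ((attr, value),)))
    return rules

for rule in extract_rules(buys_computer_tree):
    print(rule)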

Page 17: Lesson 4 Data Classification and Prediction

Avoiding Overfitting in Classification

Overfitting: an induced tree may overfit the training data
Too many branches, some of which may reflect anomalies due to noise or outliers
Poor accuracy for unseen samples

Two approaches to avoid overfitting
Prepruning: halt tree construction early; do not split a node if this would result in the goodness measure falling below a threshold (it is difficult to choose an appropriate threshold)
Postpruning: remove branches from a "fully grown" tree to get a sequence of progressively pruned trees; use a set of data different from the training data to decide which is the "best pruned tree"

Page 18: Lesson 4 Data Classification and Prediction

Approaches to Determine the Final Tree Size

Separate training (2/3) and testing (1/3) sets

Use cross-validation

Use all the data for training, but apply a statistical test (e.g., chi-square) to estimate whether expanding or pruning a node may improve the entire distribution

Page 19: Lesson 4 Data Classification and Prediction

Enhancements to Basic Decision Tree Induction

Allow for continuous-valued attributes
Dynamically define new discrete-valued attributes that partition the continuous attribute values into a discrete set of intervals

Handle missing attribute values
Assign the most common value of the attribute
Assign a probability to each of the possible values

Attribute construction
Create new attributes based on existing ones that are sparsely represented
This reduces fragmentation, repetition, and replication

Page 20: Lesson 4 Data Classification and Prediction

Classification in Large Databases

Classification: a classical problem extensively studied by statisticians and machine learning researchers

Scalability: classifying data sets with millions of examples and hundreds of attributes with reasonable speed

Why decision tree induction in data mining?
relatively fast learning speed (compared with other classification methods)
convertible to simple and easy-to-understand classification rules
can use SQL queries for accessing databases
classification accuracy comparable with other methods

Page 21: Lesson 4 Data Classification and Prediction

Scalable Decision Tree Induction Methods

SLIQ (EDBT'96, Mehta et al.): builds an index for each attribute; only the class list and the current attribute list reside in memory

SPRINT (VLDB'96, J. Shafer et al.): constructs an attribute-list data structure

PUBLIC (VLDB'98, Rastogi & Shim): integrates tree splitting and tree pruning, stopping tree growth earlier

RainForest (VLDB'98, Gehrke, Ramakrishnan & Ganti): separates the scalability aspects from the criteria that determine the quality of the tree; builds an AVC-list (attribute, value, class label)

Page 22: Lesson 4 Data Classification and Prediction

Presentation of Classification Results

Page 23: Lesson 4 Data Classification and Prediction

Visualization of a Decision Tree in SGI/MineSet 3.0

Page 24: Lesson 4 Data Classification and Prediction

Interactive Visual Mining by Perception-Based Classification (PBC)

Page 25: Lesson 4 Data Classification and Prediction

IV. Bayesian Classification: Why?

Probabilistic learning: calculates explicit probabilities for hypotheses; among the most practical approaches to certain types of learning problems

Incremental: each training example can incrementally increase or decrease the probability that a hypothesis is correct; prior knowledge can be combined with observed data

Probabilistic prediction: predicts multiple hypotheses, weighted by their probabilities

Standard: even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured

Page 26: Lesson 4 Data Classification and Prediction

Bayesian Theorem: Basics

Let X be a data sample whose class label is unknown
Let H be the hypothesis that X belongs to class C
For classification problems, determine P(H|X): the probability that the hypothesis holds given the observed data sample X
P(H): prior probability of hypothesis H (i.e., the initial probability before we observe any data; it reflects the background knowledge)
P(X): probability that the sample data is observed
P(X|H): probability of observing the sample X, given that the hypothesis holds

Page 27: Lesson 4 Data Classification and Prediction

Bayesian Theorem

Given training data X, the posterior probability of a hypothesis H, P(H|X), follows the Bayes theorem:

P(H|X) = P(X|H) P(H) / P(X)

Informally, this can be written as: posterior = likelihood x prior / evidence

MAP (maximum a posteriori) hypothesis:

h_MAP = argmax_{h in H} P(h|D) = argmax_{h in H} P(D|h) P(h)

Practical difficulty: requires initial knowledge of many probabilities, and carries significant computational cost

Page 28: Lesson 4 Data Classification and Prediction

Naïve Bayes Classifier

A simplifying assumption: attributes are conditionally independent given the class:

P(X|Ci) = prod_{k=1..n} P(xk|Ci)

That is, the probability of observing, say, two attribute values x1 and x2 together, given the current class C, is the product of the probabilities of each value taken separately given that class: P([x1, x2] | C) = P(x1 | C) * P(x2 | C)

No dependence relation between attributes is assumed; this greatly reduces the computational cost, since only the class distributions need to be counted

Once P(X|Ci) is known, assign X to the class with the maximum P(X|Ci) * P(Ci)
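A minimal count-based sketch of such a classifier in Python (no smoothing; the function names and data layout, rows of categorical attribute values with a parallel list of class labels as in the buys_computer dataset, are my choices for illustration):

from collections import Counter, defaultdict

def train_naive_bayes(rows, labels):
    # estimate P(Ci) and P(attribute_k = value | Ci) by simple counting
    class_counts = Counter(labels)
    cond_counts = defaultdict(Counter)   # (class, attribute index) -> value counts
    for row, c in zip(rows, labels):
        for k, v in enumerate(row):
            cond_counts[(c, k)][v] += 1
    return class_counts, cond_counts

def classify(x, class_counts, cond_counts):
    # assign x to the class Ci maximizing P(x|Ci) * P(Ci)
    n = sum(class_counts.values())
    best_class, best_score = None, -1.0
    for c, cc in class_counts.items():
        score = cc / n                             # P(Ci)
        for k, v in enumerate(x):
            score *= cond_counts[(c, k)][v] / cc   # P(x_k | Ci)
        if score > best_score:
            best_class, best_score = c, score
    return best_class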

Page 29: Lesson 4 Data Classification and Prediction

Training dataset

age    income  student  credit_rating  buys_computer
<=30   high    no       fair           no
<=30   high    no       excellent      no
31…40  high    no       fair           yes
>40    medium  no       fair           yes
>40    low     yes      fair           yes
>40    low     yes      excellent      no
31…40  low     yes      excellent      yes
<=30   medium  no       fair           no
<=30   low     yes      fair           yes
>40    medium  yes      fair           yes
<=30   medium  yes      excellent      yes
31…40  medium  no       excellent      yes
31…40  high    yes      fair           yes
>40    medium  no       excellent      no

Classes: C1: buys_computer = "yes"; C2: buys_computer = "no"

Data sample X = (age <= 30, income = medium, student = yes, credit_rating = fair)

Page 30: Lesson 4 Data Classification and Prediction

Naïve Bayesian Classifier: An Example

Compute P(X|Ci) for each class:
P(age = "<=30" | buys_computer = "yes") = 2/9 = 0.222
P(age = "<=30" | buys_computer = "no") = 3/5 = 0.6
P(income = "medium" | buys_computer = "yes") = 4/9 = 0.444
P(income = "medium" | buys_computer = "no") = 2/5 = 0.4
P(student = "yes" | buys_computer = "yes") = 6/9 = 0.667
P(student = "yes" | buys_computer = "no") = 1/5 = 0.2
P(credit_rating = "fair" | buys_computer = "yes") = 6/9 = 0.667
P(credit_rating = "fair" | buys_computer = "no") = 2/5 = 0.4

X = (age <= 30, income = medium, student = yes, credit_rating = fair)

P(X|Ci):
P(X | buys_computer = "yes") = 0.222 x 0.444 x 0.667 x 0.667 = 0.044
P(X | buys_computer = "no") = 0.6 x 0.4 x 0.2 x 0.4 = 0.019

P(X|Ci) * P(Ci):
P(X | buys_computer = "yes") * P(buys_computer = "yes") = 0.044 x 9/14 = 0.028
P(X | buys_computer = "no") * P(buys_computer = "no") = 0.019 x 5/14 = 0.007

Therefore, X belongs to class "buys_computer = yes"
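Reproducing this arithmetic in Python (values rounded as on the slide):

p_yes, p_no = 9 / 14, 5 / 14            # class priors from the training data
px_yes = (2/9) * (4/9) * (6/9) * (6/9)  # P(X | buys_computer = "yes"), about 0.044
px_no = (3/5) * (2/5) * (1/5) * (2/5)   # P(X | buys_computer = "no"), about 0.019
print(px_yes * p_yes, px_no * p_no)     # about 0.028 vs 0.007, so predict "yes"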

Page 31: Lesson 4 Data Classification and Prediction

Naïve Bayesian Classifier: Comments

Advantages
Easy to implement
Good results obtained in most cases

Disadvantages
Assumption of class-conditional independence, therefore loss of accuracy
In practice, dependencies exist among variables
E.g., hospital patients: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.)
Dependencies among these cannot be modeled by the Naïve Bayesian Classifier

How to deal with these dependencies? Bayesian Belief Networks

Page 32: Lesson 4 Data Classification and Prediction

Bayesian Belief Networks

A Bayesian belief network allows a subset of the variables to be conditionally independent

It is a graphical model of causal relationships
Represents dependencies among the variables
Gives a specification of the joint probability distribution

[Figure: a directed acyclic graph over nodes X, Y, Z, P]
Nodes: random variables
Links: dependencies
X and Y are the parents of Z, and Y is the parent of P
There is no dependency between Z and P
The graph has no loops or cycles

Page 33: Lesson 4 Data Classification and Prediction

Bayesian Belief Network: An Example

FamilyHistory

LungCancer

PositiveXRay

Smoker

Emphysema

Dyspnea

LC

~LC

(FH, S) (FH, ~S) (~FH, S) (~FH, ~S)

0.8

0.2

0.5

0.5

0.7

0.3

0.1

0.9

Bayesian Belief Networks

The conditional probability table for the variable LungCancer:Shows the conditional probability for each possible combination of its parents

n

iZParents iziPznzP

1))(|(),...,1(
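A minimal sketch of using this factorization with the LungCancer CPT from the slide; the priors for FamilyHistory and Smoker are not given on the slide, so the 0.5 values below are assumptions made purely for illustration:

# P(LungCancer = lc | FH = fh, S = s), from the CPT above
p_lc_given = {(True, True): 0.8, (True, False): 0.5,
              (False, True): 0.7, (False, False): 0.1}
p_fh = {True: 0.5, False: 0.5}   # assumed prior, not from the slide
p_s = {True: 0.5, False: 0.5}    # assumed prior, not from the slide

def joint(fh, s, lc):
    # P(FH, S, LC) = P(FH) * P(S) * P(LC | FH, S) for this fragment of the network
    p_lc = p_lc_given[(fh, s)] if lc else 1.0 - p_lc_given[(fh, s)]
    return p_fh[fh] * p_s[s] * p_lc

print(joint(True, True, True))   # 0.5 * 0.5 * 0.8 = 0.2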

Page 34: Lesson 4 Data Classification and Prediction

Learning Bayesian Networks

Several cases:
Given both the network structure and all variables observable: learn only the CPTs
Network structure known, some variables hidden: use a method such as gradient descent, analogous to neural network learning
Network structure unknown, all variables observable: search through the model space to reconstruct the graph topology
Unknown structure, all variables hidden: no good algorithms are known for this purpose

See D. Heckerman, "Bayesian Networks for Data Mining"

Page 35: Lesson 4 Data Classification and Prediction

V. What Is Prediction?

(Numerical) prediction is similar to classification
construct a model
use the model to predict a continuous or ordered value for a given input

Prediction is different from classification
classification predicts categorical class labels
prediction models continuous-valued functions

The major method for prediction is regression: model the relationship between one or more independent (predictor) variables and a dependent (response) variable

Regression analysis
linear and multiple regression
nonlinear regression
other regression methods: generalized linear model, Poisson regression, log-linear models, regression trees

Page 36: Lesson 4 Data Classification and Prediction

Linear Regression

Linear regression involves a response variable y and a single predictor variable x:
y = w0 + w1 x, where w0 (y-intercept) and w1 (slope) are the regression coefficients

Method of least squares: estimates the best-fitting straight line

w1 = sum_{i=1..|D|} (xi - x̄)(yi - ȳ) / sum_{i=1..|D|} (xi - x̄)^2
w0 = ȳ - w1 x̄

Multiple linear regression involves more than one predictor variable
Training data are of the form (X1, y1), (X2, y2), …, (X|D|, y|D|)
For example, for 2-D data we may have y = w0 + w1 x1 + w2 x2
Solvable by an extension of the least squares method or by using software such as SAS or S-Plus
Many nonlinear functions can be transformed into the above form
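The least-squares formulas above in a small, self-contained Python sketch (the sample data are hypothetical, chosen only for illustration):

xs = [1, 2, 3, 4, 5]             # hypothetical predictor values
ys = [3.1, 4.9, 7.2, 8.8, 11.1]  # hypothetical responses
x_bar, y_bar = sum(xs) / len(xs), sum(ys) / len(ys)
w1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
     sum((x - x_bar) ** 2 for x in xs)
w0 = y_bar - w1 * x_bar
print("y = %.3f + %.3f x" % (w0, w1))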

Page 37: Lesson 4 Data Classification and Prediction

Nonlinear Regression

Some nonlinear models can be modeled by a polynomial function

A polynomial regression model can be transformed into a linear regression model. For example,
y = w0 + w1 x + w2 x^2 + w3 x^3
is convertible to linear form with the new variables x2 = x^2 and x3 = x^3:
y = w0 + w1 x + w2 x2 + w3 x3

Other functions, such as the power function, can also be transformed into a linear model

Some models are intractably nonlinear (e.g., a sum of exponential terms); it may still be possible to obtain least-squares estimates through extensive calculation on more complex formulae
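A small sketch of this transformation, fitting the cubic as ordinary multiple linear regression over the new variables (the sample data and the use of numpy.linalg.lstsq are my choices for illustration):

import numpy as np

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])    # hypothetical predictor
y = np.array([1.0, 1.4, 2.1, 3.9, 7.2, 12.6, 20.5])  # hypothetical response
# introduce x2 = x^2 and x3 = x^3, then solve the *linear* least-squares problem
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)   # [w0, w1, w2, w3]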

Page 38: Lesson 4 Data Classification and Prediction

Other Regression-Based Models

Generalized linear model: the foundation on which linear regression can be applied to modeling categorical response variables; the variance of y is a function of the mean value of y rather than a constant
Logistic regression: models the probability of some event occurring through a linear function of a set of predictor variables (the log-odds is linear in the predictors; see the sketch below)
Poisson regression: models data that exhibit a Poisson distribution

Log-linear models (for categorical data)
Approximate discrete multidimensional probability distributions
Also useful for data compression and smoothing

Regression trees and model trees
Trees that predict continuous values rather than class labels
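A minimal sketch of the logistic regression relationship just described, with hypothetical coefficient values chosen only to make the example concrete:

import math

def logistic_prob(x, w0, w1):
    # the log-odds w0 + w1*x is linear in x; the sigmoid maps it to a probability
    return 1.0 / (1.0 + math.exp(-(w0 + w1 * x)))

print(logistic_prob(2.0, -3.0, 1.5))   # log-odds = 0 here, so the probability is 0.5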

Page 39: Lesson 4 Data Classification and Prediction

Regression Trees and Model Trees

Regression tree: proposed in the CART system (Breiman et al., 1984)
CART: Classification And Regression Trees
Each leaf stores a continuous-valued prediction: the average value of the predicted attribute for the training tuples that reach the leaf

Model tree: proposed by Quinlan (1992)
Each leaf holds a regression model, a multivariate linear equation for the predicted attribute
A more general case than the regression tree

Regression and model trees tend to be more accurate than linear regression when the data are not represented well by a simple linear model

Page 40: Lesson 4 Data Classification and Prediction

Predictive Modeling in Multidimensional Databases

Predictive modeling: predict data values or construct generalized linear models based on the database data
One can only predict value ranges or category distributions

Method outline:
Minimal generalization
Attribute relevance analysis
Generalized linear model construction
Prediction

Determine the major factors which influence the prediction
Data relevance analysis: uncertainty measurement, entropy analysis, expert judgement, etc.

Multi-level prediction: drill-down and roll-up analysis

Page 41: Lesson 4 Data Classification and Prediction

Prediction: Numerical Data

Page 42: Lesson 4 Data Classification and Prediction

Prediction: Categorical Data

Page 43: Lesson 4 Data Classification and Prediction

VI. Summary

Classification and prediction are two forms of data analysis that can be used to extract models describing important data classes or to predict future data trends.

Effective and scalable methods have been developed for decision tree induction, Naïve Bayesian classification, Bayesian belief networks, rule-based classifiers, backpropagation, Support Vector Machines (SVM), associative classification, nearest-neighbor classifiers, case-based reasoning, and other classification methods such as genetic algorithms, rough set, and fuzzy set approaches.

Linear, nonlinear, and generalized linear models of regression can be used for prediction. Many nonlinear problems can be converted to linear problems by performing transformations on the predictor variables. Regression trees and model trees are also used for prediction.

Page 44: Lesson 4 Data Classification and Prediction

VII. References

J. R. Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986.

J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.

R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification (2nd ed.). John Wiley & Sons, 2001.

T. M. Mitchell. Machine Learning. McGraw Hill, 1997.