Introduction to Data Mining


Evolution of Database Technology

• 1960s: Data collection, database creation, IMS and network DBMS

• 1970s: Relational data model, relational DBMS implementation

• 1980s: RDBMS, advanced data models (extended-relational, OO, deductive, etc.), and application-oriented DBMS (spatial, scientific, engineering, etc.)

• 1990s–2000s: Data mining and data warehousing, multimedia databases, and Web databases

A definition

“Data Mining is the process of extracting previously unknown, valid and actionable information from large databases and then using the information to make crucial business decisions.”

Data mining is supported by three sufficiently mature technologies:

• Massive data collection: commercial databases (using high-performance engines) are growing at exceptional rates

• Powerful multiprocessor computers: cost-effective parallel multiprocessor computer technology

• Data mining algorithms: under development for decades in research areas such as statistics, artificial intelligence, and machine learning, but now implemented as mature, reliable, understandable tools that consistently outperform older statistical methods

Why Mine Data? Scientific Viewpoint...

• Data collected and stored at enormous speeds (Gbyte/hour)
– remote sensors on a satellite
– telescopes scanning the skies
– microarrays generating gene expression data
– scientific simulations generating terabytes of data

• Traditional techniques are infeasible for raw data
• Data mining for data reduction
– cataloging, classifying, segmenting data
– helps scientists in hypothesis formation

Motivation: The Sizes

Databases today are huge:
– More than 1,000,000 entities/records/rows
– From 10 to 10,000 fields/attributes/variables
– Gigabytes and terabytes

Databases are growing at an unprecedented rate. The corporate world is cut-throat:
– Decisions must be made rapidly
– Decisions must be made with maximum knowledge

Motivation for doing Data Mining

• Investment in data collection / data warehouse
– Adds value to the data holding
– Competitive advantage
– More effective decision making

• OLTP => Data Warehouse => Decision Support
– Works to add value to the data holding
– Supports high-level and long-term decision making
– A fundamental shift in the use of databases

Data Mining vs. Database

• A DB user knows what they are looking for.
• A DM user might or might not know what they are looking for.
• A DB’s answer to a query is 100% accurate, if the data are correct.
• DM’s effort is to get answers that are as accurate as possible.
• DB data are retrieved as stored.
• DM data need to be cleaned (somewhat) before producing results.
• DB results are a subset of the data.
• DM results are an analysis of the data.
• The meaningfulness of the results is not the concern of the database, whereas it is the main issue in data mining.

Data Mining vs. KDD

• Knowledge Discovery in Databases (KDD) is the process of finding useful information and patterns in the data.

• Data Mining is the use of algorithms to find the useful information in the KDD process.

• The KDD process is:
» Data cleaning & integration (data pre-processing)
» Creating a common data repository for all sources, such as a data warehouse
» Data mining
» Visualization of the generated results

Need for Data mining

• Corporations have huge databases containing a wealth of information

• Business databases potentially constitute a goldmine of valuable business information

• Very little functionality in database systems to support data mining applications

• Data mining: The efficient discovery of previously unknown patterns in large databases

Data mining is not:
• Brute-force crunching of bulk data
• “Blind” application of algorithms
• Going to find relationships where none exist
• Presenting data in different ways
• A database-intensive task
• A difficult-to-understand technology requiring an advanced degree in computer science

Data Mining: On What Kind of Data?

• Relational databases

• Data warehouses
• Transactional databases
• Advanced DB and information repositories
– Object-oriented and object-relational databases
– Spatial databases
– Time-series and temporal data
– Text and multimedia databases
– Heterogeneous and legacy databases
– WWW

Data Mining Tasks...

• Classification [Predictive]

• Clustering [Descriptive]

• Association Rule Discovery [Descriptive]

• Sequential Pattern Discovery [Descriptive]

• Regression [Predictive]

• Deviation Detection [Predictive]

Association Rules

• Given:
– A database of customer transactions
– Each transaction is a set of items

• Find all rules X => Y that correlate the presence of one set of items X with another set of items Y
– Example: 98% of people who purchase diapers and baby food also buy beer.
– Any number of items may appear in the consequent/antecedent of a rule
– It is possible to specify constraints on rules (e.g., find only rules involving expensive imported products)

Confidence and Support

• A rule must have some minimum user-specified confidence:
1 & 2 => 3 has 90% confidence if, when a customer bought 1 and 2, in 90% of cases the customer also bought 3.

• A rule must have some minimum user-specified support:
1 & 2 => 3 should hold in some minimum percentage of transactions to have business value.

Example

• For minimum support = 50% and minimum confidence = 50%, we have the following rules:

1 => 3 with 50% support and 66% confidence
3 => 1 with 50% support and 100% confidence

Transaction Id | Purchased Items
1 | {1, 2, 3}
2 | {1, 4}
3 | {1, 3}
4 | {2, 5, 6}

Problem Decomposition - Example

TID | Items
1 | {1, 2, 3}
2 | {1, 3}
3 | {1, 4}
4 | {2, 5, 6}

For minimum support = 50% (2 transactions) and minimum confidence = 50%:

Frequent Itemset | Support
{1} | 75%
{2} | 50%
{3} | 50%
{1, 3} | 50%

For the rule 1 => 3:
• Support = Support({1, 3}) = 50%
• Confidence = Support({1, 3}) / Support({1}) = 66%
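These numbers can be checked with a short Python sketch (the function names are illustrative, not from any particular library):

```python
# Support and confidence for association rules, computed over the
# four-transaction example database above.

def support(transactions, itemset):
    """Fraction of transactions that contain every item in itemset."""
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Confidence of X => Y: support(X union Y) / support(X)."""
    return (support(transactions, set(antecedent) | set(consequent))
            / support(transactions, antecedent))

D = [{1, 2, 3}, {1, 3}, {1, 4}, {2, 5, 6}]
print(support(D, {1}))            # 0.75
print(support(D, {1, 3}))         # 0.5
print(confidence(D, {1}, {3}))    # 0.666...
print(confidence(D, {3}, {1}))    # 1.0
```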

The Apriori Algorithm

• Fk: set of frequent itemsets of size k
• Ck: set of candidate itemsets of size k

F1 = {large items}
for (k = 1; Fk != ∅; k++) do {
  Ck+1 = new candidates generated from Fk
  foreach transaction t in the database do
    increment the count of all candidates in Ck+1 that are contained in t
  Fk+1 = candidates in Ck+1 with minimum support
}
Answer = ∪k Fk

Key Observation

• Every subset of a frequent itemset is also frequent

=> a candidate itemset in Ck+1 can be pruned if even one of its subsets is not contained in Fk
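The pseudocode and the pruning observation above can be sketched in plain Python. This is an illustrative implementation, not a tuned one (real systems count candidates with hash trees or similar structures):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """F_k: frequent itemsets of size k; C_{k+1}: candidates from F_k."""
    n = len(transactions)
    freq = {}      # frozenset -> support, for all frequent itemsets found
    current = []   # F1: frequent 1-itemsets
    for item in sorted({i for t in transactions for i in t}):
        s = sum(1 for t in transactions if item in t) / n
        if s >= min_support:
            freq[frozenset([item])] = s
            current.append(frozenset([item]))
    k = 1
    while current:
        # C_{k+1}: join pairs of frequent k-itemsets ...
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k + 1}
        # ... and prune any candidate with an infrequent k-subset
        candidates = {c for c in candidates
                      if all(frozenset(s) in freq for s in combinations(c, k))}
        current = []
        for c in candidates:  # one database scan per level
            s = sum(1 for t in transactions if c <= t) / n
            if s >= min_support:
                freq[c] = s
                current.append(c)
        k += 1
    return freq

D = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
print(apriori(D, min_support=0.75))
```

Run on this four-transaction database with minimum support 75% (3 of 4 transactions), it returns {2}, {3}, {5}, and {2, 5} as frequent, matching the worked example on the next slide.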

Apriori - Example

Database D:

TID | Items
1 | {1, 3, 4}
2 | {2, 3, 5}
3 | {1, 2, 3, 5}
4 | {2, 5}

C1 (after first scan of D):

Itemset | Sup.
{1} | 2
{2} | 3
{3} | 3
{4} | 1
{5} | 3

F1:

Itemset | Sup.
{2} | 3
{3} | 3
{5} | 3

C2 (generated from F1): {2, 3}, {2, 5}, {3, 5}

C2 (after second scan of D):

Itemset | Sup.
{2, 3} | 2
{2, 5} | 3
{3, 5} | 2

F2:

Itemset | Sup.
{2, 5} | 3

Partitioning

• Divide database into partitions D1,D2,…,Dp

• Apply Apriori to each partition

• Any large itemset must be large in at least one partition.

Partitioning Algorithm

1. Divide D into partitions D1, D2, …, Dp
2. For i = 1 to p do
3.   Li = Apriori(Di)
4. C = L1 ∪ … ∪ Lp
5. Count C on D to generate L
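The five steps can be sketched in Python. This is illustrative only; a brute-force miner stands in for the Apriori call on each partition:

```python
from itertools import combinations

def local_frequent(transactions, min_support):
    """Brute-force local miner (stands in for Apriori on one partition)."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    out = set()
    for k in range(1, len(items) + 1):
        for combo in combinations(items, k):
            c = frozenset(combo)
            if sum(1 for t in transactions if c <= t) / n >= min_support:
                out.add(c)
    return out

def partition_mining(transactions, p, min_support):
    n = len(transactions)
    size = (n + p - 1) // p
    parts = [transactions[i:i + size] for i in range(0, n, size)]
    # Pass 1: locally large itemsets from every partition form candidate set C
    candidates = set()
    for part in parts:
        candidates |= local_frequent(part, min_support)
    # Pass 2: count each candidate on the whole database D to get L
    result = {}
    for c in candidates:
        s = sum(1 for t in transactions if c <= t) / n
        if s >= min_support:
            result[c] = s
    return result

D = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
print(partition_mining(D, p=2, min_support=0.5))
```

Because any globally large itemset must be locally large in at least one partition, the union of the local results is a safe candidate set for the second pass.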

Partitioning Example

For two partitions D1 and D2, with S = 10%:

L1 = {{Bread}, {Jelly}, {PeanutButter}, {Bread, Jelly}, {Bread, PeanutButter}, {Jelly, PeanutButter}, {Bread, Jelly, PeanutButter}}

L2 = {{Bread}, {Milk}, {PeanutButter}, {Bread, Milk}, {Bread, PeanutButter}, {Milk, PeanutButter}, {Bread, Milk, PeanutButter}, {Beer}, {Beer, Bread}, {Beer, Milk}}

Partitioning Adv/Disadv

• Advantages:
– Adapts to available main memory
– Easily parallelized
– Maximum number of database scans is two

• Disadvantages:
– May have many candidates during the second scan

Classification

• Given:
– A database of tuples, each assigned a class label

• Develop a model/profile for each class
– Example profile (good credit): (25 <= age <= 40 and income > 40k) or (married = YES)

• Sample applications:
– Credit card approval (good, bad)
– Bank locations (good, fair, poor)
– Treatment effectiveness (good, fair, poor)

Decision Tree

• Flow-chart-like tree structure
• Each internal node denotes a test on an attribute value
• Each branch denotes an outcome of the test
• Tree leaves represent classes or class distributions
• A decision tree can be easily converted into a set of classification rules
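As a small illustration of that last point, here is a sketch that walks a tree (encoded as nested tuples and dicts; the attribute names are borrowed from the tax-cheat example in these slides) and emits one rule per leaf:

```python
# Sketch: a decision tree as nested structures, converted into
# human-readable classification rules (one rule per leaf).

tree = ("Refund", {
    "Yes": "NO",
    "No": ("MarSt", {
        "Married": "NO",
        "Single/Divorced": ("TaxInc", {
            "<80K": "NO",
            ">80K": "YES",
        }),
    }),
})

def tree_to_rules(node, conditions=()):
    if isinstance(node, str):  # leaf: emit one rule from the path taken
        body = " AND ".join(f"{a} = {v}" for a, v in conditions) or "TRUE"
        return [f"IF {body} THEN class = {node}"]
    attr, branches = node
    rules = []
    for value, child in branches.items():
        rules.extend(tree_to_rules(child, conditions + ((attr, value),)))
    return rules

for rule in tree_to_rules(tree):
    print(rule)
```

Each root-to-leaf path becomes the antecedent of one rule, which is why the conversion is straightforward.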

Classification Example

(figure: a small example tree with a single salary split at 50K, one branch leading to Class C)

Sample Decision Tree

Training data set:

Tid | Job | Salary | Age | Class
0 | Industry | 60K | 35 | B
1 | Univ. | 70K | 30 | A
2 | Self | 60K | 45 | B
3 | Self | 70K | 50 | C
4 | Univ. | 40K | 35 | C
5 | Industry | 30K | 30 | C

(figure: decision tree splitting on Sal, then on Age (<= 40 / > 40), then on Job (Univ., Industry / Self), with leaves Class A, Class B, and Class C)

Two further tuples:

Tid | Job | Salary | Age | Class
6 | Self | 60K | 35 | A
7 | Self | 70K | 30 | A

Example Decision Tree

Training data (Refund and Marital Status are categorical, Taxable Income is continuous, Cheat is the class label):

Tid | Refund | Marital Status | Taxable Income | Cheat
1 | Yes | Single | 125K | No
2 | No | Married | 100K | No
3 | No | Single | 70K | No
4 | Yes | Married | 120K | No
5 | No | Divorced | 95K | Yes
6 | No | Married | 60K | No
7 | Yes | Divorced | 220K | No
8 | No | Single | 85K | Yes
9 | No | Married | 75K | No
10 | No | Single | 90K | Yes

(figure: the resulting tree — Refund = Yes gives NO; Refund = No leads to MarSt; MarSt = Married gives NO; MarSt = Single or Divorced leads to TaxInc; TaxInc < 80K gives NO; TaxInc > 80K gives YES)

The splitting attribute at a node is determined based on the Gini index.
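As an illustration, assuming the standard definitions (node Gini = 1 - sum of squared class probabilities; a split is scored by the size-weighted Gini of its children), the Refund split on the ten training records above can be scored as:

```python
def gini(labels):
    """Gini impurity of one node: 1 - sum over classes of p_i^2."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_gini(children):
    """Size-weighted Gini of the child nodes produced by a split."""
    n = sum(len(c) for c in children)
    return sum(len(c) / n * gini(c) for c in children)

# Cheat labels after splitting the training data on Refund:
cheat_refund_yes = ["No", "No", "No"]                            # Tids 1, 4, 7
cheat_refund_no = ["No", "No", "Yes", "No", "Yes", "No", "Yes"]  # the rest
print(gini(cheat_refund_yes))                                    # 0.0 (pure node)
print(round(split_gini([cheat_refund_yes, cheat_refund_no]), 3)) # 0.343
```

The attribute whose split yields the lowest weighted Gini is chosen at each node.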

Decision Trees

• Pros
– Fast execution time
– Generated rules are easy for humans to interpret
– Scale well to large data sets
– Can handle high-dimensional data

• Cons
– Cannot capture correlations among attributes
– Consider only axis-parallel cuts

Regression

Mapping a data item to a real value

E.g., linear regression

Risk score=0.01*(Balance)-0.3*(Age)+4*(HouseOwned)
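A quick sketch of this formula in Python (the customer values here are invented, and HouseOwned is assumed to be encoded as 1/0):

```python
def risk_score(balance, age, house_owned):
    # Risk score = 0.01*(Balance) - 0.3*(Age) + 4*(HouseOwned)
    return 0.01 * balance - 0.3 * age + 4 * (1 if house_owned else 0)

# A made-up customer: balance 5000, age 30, owns a house
print(round(risk_score(balance=5000, age=30, house_owned=True), 2))  # 45.0
```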

Clustering

– Identifies natural groups or clusters of instances. Example: customer segmentation.

– Unsupervised learning: unlike classification, clusters are not predefined but are formed based on the data.

– Objects in each cluster are very similar to each other and different from those in other clusters.
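As one concrete illustration (k-means is only one of many clustering methods, and the spending figures here are invented), a minimal one-dimensional k-means sketch:

```python
import random

# Minimal k-means sketch: assign each point to its nearest centroid,
# move each centroid to the mean of its cluster, and repeat.

def kmeans_1d(points, k, iters=20, seed=0):
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two natural spending groups: around 10 and around 100
data = [8, 9, 10, 11, 12, 95, 100, 105]
print(kmeans_1d(data, k=2))  # [10.0, 100.0]
```

The two recovered centroids sit at the centers of the two natural groups, without any predefined class labels.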

Specific Data Mining Applications:

What data mining has done for...

The US Internal Revenue Service needed to improve customer service and...

scheduled its workforce to provide faster, more accurate answers to questions.

What data mining has done for...

The US Drug Enforcement Agency needed to be more effective in their drug “busts” and...

analyzed suspects’ cell phone usage to focus investigations.

What data mining has done for...

HSBC needed to cross-sell more effectively by identifying profiles that would be interested in higher-yielding investments and...

reduced direct mail costs by 30% while garnering 95% of the campaign’s revenue.

Privacy Issues

• DM applications derive demographics about customers via:
– credit card use
– store cards
– subscriptions
– book, video, etc. rentals
– and more sources…

• As DM results are deemed to be a good estimate or prediction, one has to be careful that the results do not violate privacy.

Final Comments

• Data Mining can be used in any organization that needs to find patterns or relationships in their data.

• DM analysts can have a reasonable level of assurance that their Data Mining efforts will render useful, repeatable, and valid results.

Questions?