[Figure: The Data Gap. Total new disk (TB) since 1995 climbs toward 3,500,000 TB over 1995–1999, while the number of analysts stays nearly flat.]
From: R. Grossman, C. Kamath, V. Kumar, “Data Mining for Scientific and Engineering Applications”
What is (not) Data Mining?

[Figure: contrasting examples of what is not data mining versus what is data mining, with a diagram relating data mining to database systems.]
Data Mining Tasks
Learning from data relies on three basic mechanisms:
• memory,
• averaging, and
• generalization.
Example problem
(Adapted from Leslie Kaelbling's example in the MIT courseware)
• Imagine that I'm trying to predict whether my neighbor is going to
drive into work, so I can ask for a ride.

[Figure: a classifier is learned from a training set and then applied to a test set.]
Example of a Decision Tree

Training data: records with attributes Tid, Refund (categorical), Marital
Status (categorical), Taxable Income (continuous), and the class attribute
Cheat.

Model: a decision tree over the splitting attributes.
• Refund = Yes → NO
• Refund = No → test MarSt:
  – Married → NO
  – Single or Divorced → test TaxInc:
    • < 80K → NO
    • > 80K → YES
Apply Model to Test Data

Test record: Refund = No, Marital Status = Married, Taxable Income = 80K,
Cheat = ?

Start at the root and follow the branches that match the record:
1. Refund = No → take the No branch to MarSt.
2. MarSt = Married → take the Married branch, which leads to the leaf NO.

Assign Cheat to “No”.
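The traversal of the decision tree above can be written as a short function. A minimal sketch, with attribute names and the 80K threshold taken from the figure:

```python
def classify(record):
    """Walk the decision tree from the slide: Refund -> MarSt -> TaxInc."""
    if record["Refund"] == "Yes":
        return "No"                       # left branch ends in leaf NO
    if record["MarSt"] == "Married":
        return "No"                       # Married branch ends in leaf NO
    # Single or Divorced: split on taxable income at 80K
    return "No" if record["TaxInc"] < 80_000 else "Yes"

# The test record from the slide: Refund = No, Married, income 80K.
test_record = {"Refund": "No", "MarSt": "Married", "TaxInc": 80_000}
print(classify(test_record))  # → No
```

The record reaches the Married leaf, so Cheat is assigned “No”, matching the slide.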
Classification: Direct Marketing
– Goal: reduce the cost of mailing by targeting the set of consumers most
likely to buy a new cell-phone product.
– Approach:
  • Use the data for a similar product introduced before.
  • We know which customers decided to buy and which decided otherwise.
    This {buy, don’t buy} decision forms the class attribute.
  • Collect demographic, lifestyle, and other related information about all
    such customers, e.g.:
    – type of business,
    – where they live,
    – how much they earn, etc.
  • Use this information as input attributes to learn a classifier model.
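A toy sketch of the last step, using a 1-nearest-neighbour rule in place of a full classifier. The customer records, attribute names, and values here are invented for illustration:

```python
# Past customers with known {buy, don't buy} outcomes (made-up data).
past = [
    ({"income": 30, "age": 25}, "don't buy"),
    ({"income": 90, "age": 45}, "buy"),
    ({"income": 85, "age": 40}, "buy"),
    ({"income": 25, "age": 60}, "don't buy"),
]

def predict(customer):
    """Label a new customer with the class of the closest past customer."""
    def dist(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a)
    _, label = min(past, key=lambda rec: dist(customer, rec[0]))
    return label

print(predict({"income": 80, "age": 42}))  # → buy
```

A real campaign would use a trained model over many attributes; the point is only that labeled past customers drive the prediction for new ones.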
Classification: Fraud Detection
• Goal: predict fraudulent cases in credit card transactions.
• Approach:
  – Use credit card transactions and the information associated with them
    as attributes, e.g.:
    • when the customer buys,
    • what they buy,
    • where they buy, etc.
  – Label some past transactions as fraudulent or fair. This label forms
    the class attribute.
  – Learn a model for the class of the transactions.
  – Use this model to detect fraud by monitoring the credit card
    transactions on an account.
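One very simple way to turn labeled transactions into a fraud signal is to estimate a fraud rate per attribute value. A stdlib-only sketch with made-up data, using only the "where does the customer buy" attribute:

```python
from collections import defaultdict

# Labeled past transactions (invented): (merchant_category, label)
history = [
    ("electronics", "fraud"), ("grocery", "fair"), ("grocery", "fair"),
    ("electronics", "fraud"), ("electronics", "fair"), ("travel", "fair"),
]

# category -> [fraud count, total count]
counts = defaultdict(lambda: [0, 0])
for category, label in history:
    counts[category][1] += 1
    if label == "fraud":
        counts[category][0] += 1

def fraud_rate(category):
    """Fraction of past transactions in this category labeled as fraud."""
    fraud, total = counts[category]
    return fraud / total

print(fraud_rate("electronics"))  # 2 of the 3 electronics purchases were fraud
```

A deployed model would combine many such attributes (time, amount, location) rather than a single rate, but each contributes evidence in the same way.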
Classification: Attrition/Churn
• Situation: the attrition (churn) rate for mobile phone customers is
around 25–30% a year!
• Goal: predict whether a customer is likely to be lost to a competitor.
• Approach:
  – Use the detailed record of transactions with each past and present
    customer to find attributes, e.g.:
    • how often the customer calls,
    • where they call,
    • what time of day they call most,
    • their financial status,
    • marital status, etc.
  – Label the customers as loyal or disloyal.
  – Find a model for loyalty.
• Success story (reported in 2003): Verizon Wireless performed this kind
of data mining, reducing its attrition rate from over 2% per month to
under 1.5% per month. With more than 30 M subscribers, the impact is huge
(0.5% is 150,000 customers).
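The success-story arithmetic can be checked directly (figures from the slide; integer basis points avoid float rounding):

```python
subscribers = 30_000_000
# Attrition fell from over 2% to under 1.5% per month: a 0.5-point drop,
# i.e. 50 basis points.
drop_basis_points = 50
saved = subscribers * drop_basis_points // 10_000
print(saved)  # 150,000 customers retained per month
```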
Assessing Credit Risk
• Situation: a person applies for a loan.
• Task: should the bank approve the loan?
  – People with the best credit don’t need loans.
  – People with the worst credit are unlikely to repay.
  – The bank’s best customers are in the middle.
Fundamental problem
• What sets of items are often bought together?
Application
• If a large number of baskets contain both hot dogs and mustard,
we can use this information in several ways. How?
Hot Dogs and Mustard
1. Apparently, many people walk from where the hot dogs are to
where the mustard is.
• We can put them close together, and put between them other foods
that might also be bought with hot dogs and mustard, e.g., ketchup
or potato chips.
• Doing so can generate additional "impulse" sales.
2. The store can run a sale on hot dogs and at the same time raise
the price of mustard.
• People will come to the store for the cheap hot dogs, and many will
need mustard too.
• It is not worth the trouble to go to another store for cheaper
mustard, so they buy that too.
• The store makes back on mustard what it loses on hot dogs, and
also gets more customers into the store.
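The underlying question ("which sets of items are often bought together?") reduces to counting how often pairs of items co-occur across baskets. A minimal sketch with made-up baskets:

```python
from itertools import combinations
from collections import Counter

# Invented baskets; in practice these come from point-of-sale logs.
baskets = [
    {"hot dogs", "mustard", "chips"},
    {"hot dogs", "mustard"},
    {"milk", "bread"},
    {"hot dogs", "mustard", "ketchup"},
]

# Count each pair's co-occurrences (its support count).
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

print(pair_counts[("hot dogs", "mustard")])  # → 3 (of 4 baskets)
```

Real algorithms (e.g. Apriori) prune this count so it scales to millions of baskets, but the notion of support is exactly this.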
Beer and Diapers
• What’s the explanation here?
On-Line Purchases
• Amazon.com offers several million different items for sale, and
has several tens of millions of customers.
[Figure: two groups of points illustrating the clustering objective:
intracluster distances are minimized, while intercluster distances are
maximized.]
Clustering: Application 1
• Market Segmentation:
– Goal: subdivide a market into distinct subsets of customers
where any subset may conceivably be selected as a market
target to be reached with a distinct marketing mix.
– Approach:
• Collect different attributes of customers based on their
geographical and lifestyle related information.
• Find clusters of similar customers.
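A minimal k-means sketch of the "find clusters of similar customers" step. The customer attributes (income in $K, age) and their values are invented for illustration:

```python
import random

# Toy customer attributes: (annual income in $K, age).
customers = [(20, 25), (22, 30), (25, 27), (80, 50), (85, 55), (78, 48)]

def kmeans(points, k=2, iters=10, seed=0):
    """Plain k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c
                     else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

segments = kmeans(customers)
print(sorted(len(s) for s in segments))  # the two income groups: [3, 3]
```

Each resulting segment can then be targeted with its own marketing mix, as the slide describes.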
Clustering: Application 2
• Document Clustering:
– Goal: To find groups of documents that are similar to each other based
on the important words appearing in them.
– Approach:
• Identify frequently occurring words in each document.
• Form a similarity measure based on the frequencies of different
terms. Use it to cluster.
– Gain: Information Retrieval can utilize the clusters to relate a new
document to clustered documents.
Each article is represented as a set of word-frequency pairs (w, c). There
are two natural clusters in the data set: the first consists of the first
four articles, which correspond to news about the economy; the second
contains the last four articles, which correspond to news about health
care.
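Given the (w, c) representation, one common similarity measure over term frequencies is the cosine between the two frequency vectors. A small sketch with invented documents:

```python
import math

# Documents as word-frequency dicts, i.e. sets of (w, c) pairs.
doc_a = {"economy": 3, "market": 2, "stocks": 1}
doc_b = {"economy": 2, "market": 1, "growth": 1}
doc_c = {"health": 4, "care": 3, "hospital": 1}

def cosine(d1, d2):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(d1[w] * d2[w] for w in d1.keys() & d2.keys())
    norm = (math.sqrt(sum(c * c for c in d1.values()))
            * math.sqrt(sum(c * c for c in d2.values())))
    return dot / norm

# The two economy articles are far more similar to each other than
# either is to the health-care article.
print(cosine(doc_a, doc_b) > cosine(doc_a, doc_c))  # → True
```

Clustering by such a measure groups the economy articles together and the health-care articles together, as in the example above.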