Data Mining: Concepts and Techniques, Chapter 1
September 5, 2020 3
Chapter 1. Introduction
Why Data Mining?
What Is Data Mining?
A Multi-Dimensional View of Data Mining
What Kind of Data Can Be Mined?
What Kinds of Patterns Can Be Mined?
What Technologies Are Used?
What Kind of Applications Are Targeted?
Major Issues in Data Mining
A Brief History of Data Mining and Data Mining Society
Summary
What Is Data Mining?
Knowledge Discovery from Data (KDD) Process

This is a view from the typical database systems and data warehousing communities: data mining plays an essential role in the knowledge discovery process.

Databases → Data Cleaning and Data Integration → Data Warehouse → Data Selection (task-relevant data) → Data Mining → Pattern Evaluation → Knowledge
A Data Mining Framework
Data Mining in Business Intelligence

Layers, from bottom to top, with increasing potential to support business decisions:
Data Exploration: statistical summary, querying, and reporting
Data Mining
Decision Making (end user)
Data to be mined
Relational databases
Data warehouses:
Definition: A data warehouse is a repository of
information collected from multiple sources,
stored under a unified schema, and usually
residing at a single site.
Data mart: Subset of data warehouse
Data Cube
Transactional data

Transaction Id   List of Items
T100             Monitor, CPU, mouse
T201             Antivirus software
T102             I1, I3
T103             I1, I3, I4, I5, I6
T104             I1, I3, I6, I7

A Boolean (market-basket) representation of transactional data:

       I1  I2  I3  I4  I5  I6  I7
T100    1   0   0   1   1   0   1
T101    0   1   0   1   0   1   0
T102    1   0   0   1   0   0   0
T103    0   0   1   1   1   1   0
T104    0   0   1   0   0   1   1
Data Streams
Temporal Databases, Sequence Databases, and Time-Series Databases
Spatial Databases and Spatio-temporal Databases
Text databases
Unstructured, Semi-Structured, Well-Structured
Multimedia databases
Heterogeneous and Legacy databases
Heterogeneous databases consist of a set of interconnected, autonomous component databases.
Knowledge to be mined (or: data mining functions)
1. Characterization
2. Discrimination
3. Association
- Frequent patterns: frequent itemsets, frequent subsequences, frequent substructures
- Support count: a pattern is frequent if its support is >= a minimum support threshold
Association Rule:
Buys(X, “Computer”) => Buys(X, “Software”) [Support = 10%, Confidence = 50%]
A support of 10% means that 10% of all the transactions under analysis show that computer and software are purchased together; a confidence of 50% means that if a customer buys a computer, there is a 50% chance that they will buy software as well.
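As a minimal sketch of how the rule's two measures are computed, support and confidence can be counted directly over a transaction list (the item names and the 10-transaction toy data below are made up to reproduce the 10%/50% figures):

```python
def support(transactions, itemset):
    """Fraction of all transactions that contain every item in itemset."""
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= set(t)) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Estimated P(consequent | antecedent): support of both over support of antecedent."""
    both = set(antecedent) | set(consequent)
    return support(transactions, both) / support(transactions, antecedent)

# Toy data: out of 10 transactions, 2 contain a computer and 1 of those
# also contains software -> support = 10%, confidence = 50%.
transactions = [["computer", "software"], ["computer"]] + [["other"]] * 8
print(support(transactions, ["computer", "software"]))       # 0.1
print(confidence(transactions, ["computer"], ["software"]))  # 0.5
```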
Data Mining Function: (4) Classification
Typical methods
Decision trees, naïve Bayesian classification, support vector machines, neural networks, rule-based classification, pattern-based classification, logistic regression, …
Typical applications:
Credit card fraud detection, direct marketing,
classifying stars, diseases, web-pages, …
Data Mining Function: (5) Cluster Analysis
Data Mining Function: (6) Outlier Analysis
Outlier analysis
Outlier: a data object that does not comply with the general behavior of the data
Noise or exception? ― One person’s garbage could be another person’s treasure
Methods: by-product of clustering or regression analysis, …
Useful in fraud detection and rare-event analysis
Data Mining: Confluence of Multiple Disciplines
Why Confluence of Multiple Disciplines?
Tremendous amounts of data
Algorithms must be highly scalable to handle terabytes of data
High dimensionality of data
Microarray data may have tens of thousands of dimensions
High complexity of data
Data streams and sensor data
Time-series data, temporal data, sequence data
Structured data, graphs, social networks, and multi-linked data
Heterogeneous databases and legacy databases
Spatial, spatiotemporal, multimedia, text, and Web data
Software programs, scientific simulations
New and sophisticated applications
Applications of Data Mining
Financial data analysis: loan payment prediction
Market basket analysis for targeted marketing
Analysis of sales, customers, products, time, and region
Biological and medical data analysis: classification, cluster analysis (microarray data analysis), biological sequence analysis, biological network analysis
Web mining
Fraud analysis: identification of unusual patterns
A typical DM System Architecture
Classification of data mining systems
Database
Relational, data warehouse, transactional, stream, object-oriented/relational, active,
spatial, time-series, text, multi-media, heterogeneous, legacy, WWW
Knowledge
Characterization, discrimination, association, classification, clustering,
trend/deviation, outlier analysis, etc.
Multiple/integrated functions and mining at multiple levels
Techniques utilized
Database-oriented, data warehouse (OLAP), machine learning, statistics,
visualization, etc.
Applications adapted
Retail, telecommunication, banking, fraud analysis, bio-data mining, stock
market analysis, text mining, Web mining, etc.
Data Mining Task Primitives
(1) The set of task-relevant data – which portion of the database to be used
Database or data warehouse name
(5) Visualization methods – in what forms to display the results, e.g., rules, tables, charts, graphs
• Semi-tight
– Efficient implementations of a few essential data mining primitives in
a DB/DW system are provided, e.g., sorting, indexing, aggregation,
histogram analysis, multiway join, precomputation of some stat
functions
– Enhanced DM performance
• Tight
– DM is smoothly integrated into a DB/DW system, mining query is
optimized based on mining query analysis, data structures, indexing,
query processing methods of a DB/DW system
– A uniform information processing environment, highly desirable
Mining Methodology
Mining various and new kinds of knowledge
Mining knowledge in multi-dimensional space
Data mining: An interdisciplinary effort
Boosting the power of discovery in a networked environment
Handling noise, uncertainty, and incompleteness of data
Pattern evaluation and pattern- or constraint-guided mining
User Interaction
Interactive mining
Incorporation of background knowledge
Presentation and visualization of data mining results
Major Issues in Data Mining (2)
Data Preprocessing
Data Quality: Why Preprocess the Data?
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or remove
outliers, and resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data reduction
Dimensionality reduction
Numerosity reduction
Data compression
Data transformation and data discretization
Normalization
Concept hierarchy generation
Data in the real world is dirty, owing to:
technology limitations
incomplete data
inconsistent data
How to Handle Noisy Data?
Binning
first sort data and partition into (equal-frequency) bins, then smooth by bin means, bin medians, or bin boundaries
Clustering
detect and remove outliers
Example: 43, 65, 42, 86, 23, 12, 46, 17, 45
Sorted: 12, 17, 23, 42, 43, 45, 46, 65, 86; bin size = 3
Bin 1: 12, 17, 23   Bin 2: 42, 43, 45   Bin 3: 46, 65, 86
Smoothing by bin means:
Bin 1: mean = (12 + 17 + 23)/3 = 17.3, so Bin 1: 17.3, 17.3, 17.3
Bin 2: 43.3, 43.3, 43.3   Bin 3: 65.7, 65.7, 65.7
Smoothing by bin medians:
Median of Bin 1 = 17, so Bin 1: 17, 17, 17
Bin 2: 43, 43, 43   Bin 3: 65, 65, 65
Smoothing by bin boundaries (each value is replaced by the nearer bin boundary):
Bin 1: 12, 12, 23   Bin 2: 42, 42, 45   Bin 3: 46, 46, 86
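The three smoothing schemes can be sketched in a few lines; the equal-frequency split into consecutive runs and the tie-breaking toward the lower boundary are assumptions of this sketch, chosen to match the worked numbers:

```python
def smooth(values, bin_size, method="mean"):
    """Sort values, partition into consecutive bins, and smooth each bin."""
    s = sorted(values)
    out = []
    for i in range(0, len(s), bin_size):
        b = s[i:i + bin_size]
        if method == "mean":
            rep = [round(sum(b) / len(b), 1)] * len(b)
        elif method == "median":
            rep = [b[len(b) // 2]] * len(b)  # middle element of the sorted bin
        else:  # "boundaries": replace each value by the nearer bin edge
            lo, hi = b[0], b[-1]
            rep = [lo if v - lo <= hi - v else hi for v in b]
        out.extend(rep)
    return out

data = [43, 65, 42, 86, 23, 12, 46, 17, 45]
print(smooth(data, 3, "mean"))        # [17.3, 17.3, 17.3, 43.3, 43.3, 43.3, 65.7, 65.7, 65.7]
print(smooth(data, 3, "median"))      # [17, 17, 17, 43, 43, 43, 65, 65, 65]
print(smooth(data, 3, "boundaries"))  # [12, 12, 23, 42, 42, 45, 46, 46, 86]
```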
Data Cleaning as a Process
Data discrepancy detection
Use metadata (e.g., domain, range, dependency, distribution)
Chi-Square Calculation: An Example

For nominal data, correlation between two attributes A and B can be tested with the chi-square statistic, summed over all cells of the A-B contingency table:

    chi² = Σ (observed − expected)² / expected,  where expected_ij = count(A = a_i) × count(B = b_j) / n

The larger the chi² value, the more likely A and B are related.
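As a sketch of the calculation over a 2×2 contingency table (the counts below are illustrative assumptions, not data from this text):

```python
def chi_square(observed):
    """Pearson chi-square statistic for a contingency table given as a list of rows."""
    n = sum(sum(row) for row in observed)
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # e_ij = n_i. * n_.j / n
            chi2 += (o - expected) ** 2 / expected
    return chi2

# Illustrative 2x2 table: rows = two groups, columns = two preferences.
table = [[250, 200],
         [50, 1000]]
print(round(chi_square(table), 2))  # 507.94 -> strongly suggests the attributes are related
```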
Correlation (viewed as a linear relationship)

Correlation measures the linear relationship between objects. To compute the correlation coefficient, we standardize the data objects A and B and then take their dot product:

    r(A, B) = Σ_i (a_i − mean(A)) (b_i − mean(B)) / ((n − 1) σ_A σ_B)
Covariance (Numeric Data)
Covariance is similar to correlation; the correlation coefficient is the covariance scaled by the standard deviations:

    r(A, B) = Cov(A, B) / (σ_A σ_B)
Suppose two stocks A and B have the following values in one week:
(2, 5), (3, 8), (5, 10), (4, 11), (6, 14).
Question: If the stocks are affected by the same industry trends, will
their prices rise or fall together?
E(A) = (2 + 3 + 5 + 4 + 6)/ 5 = 20/5 = 4
E(B) = (5 + 8 + 10 + 11 + 14) /5 = 48/5 = 9.6
Cov(A, B) = E(A·B) − E(A)·E(B) = (2×5 + 3×8 + 5×10 + 4×11 + 6×14)/5 − 4 × 9.6 = 42.4 − 38.4 = 4
Thus, A and B rise together, since Cov(A, B) > 0.
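The arithmetic can be checked directly; this sketch also derives the correlation coefficient from the covariance, using population standard deviations:

```python
def mean(xs):
    return sum(xs) / len(xs)

def covariance(a, b):
    """Cov(A, B) = E(A*B) - E(A) * E(B)."""
    return mean([x * y for x, y in zip(a, b)]) - mean(a) * mean(b)

def correlation(a, b):
    """r(A, B) = Cov(A, B) / (sigma_A * sigma_B), with population std. deviations."""
    def std(xs):
        m = mean(xs)
        return mean([(x - m) ** 2 for x in xs]) ** 0.5
    return covariance(a, b) / (std(a) * std(b))

A = [2, 3, 5, 4, 6]
B = [5, 8, 10, 11, 14]
print(round(covariance(A, B), 6))   # 4.0
print(round(correlation(A, B), 2))  # 0.94 -> a strong positive linear relationship
```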
Data Reduction 1: Dimensionality Reduction
Curse of dimensionality
When dimensionality increases, data becomes increasingly sparse
Density and distance between points, which are critical to clustering and outlier analysis, become less meaningful
The possible combinations of subspaces will grow exponentially
Dimensionality reduction
Avoid the curse of dimensionality
Help eliminate irrelevant features and reduce noise
Reduce time and space required in data mining
Allow easier visualization
Dimensionality reduction techniques
Wavelet transforms
Principal Component Analysis
Supervised and nonlinear techniques (e.g., feature selection)
Mapping Data to a New Space
Fourier transform
Wavelet transform
What Is Wavelet Transform?
Decomposes a signal into different frequency subbands; applicable to n-dimensional signals
Data are transformed to preserve relative distance between objects at different levels of resolution
Allows natural clusters to become more distinguishable
Used for image compression
Why Wavelet Transform?
Effective removal of outliers
Insensitive to noise, insensitive to input order
Multi-resolution
Detect arbitrary shaped clusters at different scales
Principal Component Analysis (PCA)
Find a projection that captures the largest amount of variation in data
The original data are projected onto a much smaller space, resulting
in dimensionality reduction. We find the eigenvectors of the
covariance matrix, and these eigenvectors define the new space
Principal Component Analysis (Steps)
Given N data vectors in n dimensions, find k ≤ n orthogonal vectors (principal components) that can best be used to represent the data
Normalize input data: Each attribute falls within the same range
Compute k orthonormal (unit) vectors, i.e., principal components
Each input data (vector) is a linear combination of the k principal
component vectors
The principal components are sorted in order of decreasing
“significance” or strength
Since the components are sorted, the size of the data can be
reduced by eliminating the weak components, i.e., those with low
variance (i.e., using the strongest principal components, it is
possible to reconstruct a good approximation of the original data)
Works for numeric data only
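The steps above can be sketched for the two-dimensional case, where the covariance matrix is 2×2 and its largest eigenvalue and eigenvector have a closed form (the sample points below are made up):

```python
import math

def pca_2d(points):
    """First principal component of 2-D data via the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n          # 1. center: attribute means
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / (n - 1)            # var(x)
    c = sum((p[1] - my) ** 2 for p in points) / (n - 1)            # var(y)
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)   # cov(x, y)
    # 2. largest eigenvalue of the symmetric matrix [[a, b], [b, c]]
    lam = (a + c + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
    # 3. corresponding unit eigenvector (direction of greatest variance)
    if b == 0:
        v = (1.0, 0.0) if a >= c else (0.0, 1.0)
    else:
        norm = math.hypot(lam - c, b)
        v = ((lam - c) / norm, b / norm)
    return lam, v

# Points lying near the line y = x: the first component is close to (0.707, 0.707).
pts = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.9), (5, 5.1)]
lam, v = pca_2d(pts)
print(lam, v)
```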
Attribute Subset Selection
Attribute Creation (Feature Generation)
Create new attributes (features) that can capture the
important information in a data set more effectively than
the original ones
Three general methodologies:
Attribute extraction: domain-specific
Mapping data to a new space: e.g., Fourier and wavelet transforms
Attribute construction: combining features (see frequent patterns in Chapter 7), data discretization
Data Reduction 2: Numerosity Reduction
Reduce data volume by choosing alternative, smaller
forms of data representation
Parametric methods (e.g., regression)
Assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers)
Non-parametric methods (e.g., histograms, clustering, sampling)
Parametric Data Reduction: Regression and Log-Linear Models
Linear regression
Data modeled to fit a straight line
Multiple regression
Allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
Log-linear model
Approximates discrete multidimensional probability distributions
Regression Analysis

Regression analysis: a collective name for techniques for the modeling and analysis of numerical data consisting of values of a dependent variable (also called response variable or measurement) and of one or more independent variables (aka explanatory variables or predictors), e.g., fitting a line such as y = x + 1.

The parameters are estimated so as to give a "best fit" of the data. Most commonly the best fit is evaluated by using the least squares method, but other criteria have also been used.

Used for prediction (including forecasting of time-series data), inference, hypothesis testing, and modeling of causal relationships.
Regression Analysis and Log-Linear Models
Linear regression: Y = w X + b
Two regression coefficients, w and b, specify the line and are to be
estimated by using the data at hand
using the least squares criterion on the known values of Y1, Y2, …, X1, X2, …
Multiple regression: Y = b0 + b1 X1 + b2 X2
Many nonlinear functions can be transformed into the above
Log-linear models:
Approximate discrete multidimensional probability distributions
Estimate the probability of each point (tuple) in a multi-dimensional
space for a set of discretized attributes, based on a smaller subset of
dimensional combinations
Useful for dimensionality reduction and data smoothing
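The least-squares estimates for the single-variable case Y = wX + b have a closed form, sketched below with points chosen to lie exactly on the line y = x + 1:

```python
def linear_fit(xs, ys):
    """Closed-form least-squares estimates of w and b in Y = wX + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope: covariance of x and y over variance of x
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - w * mx  # intercept: the fitted line passes through the mean point
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 2, 3, 4, 5]       # exactly y = x + 1
print(linear_fit(xs, ys))  # (1.0, 1.0)
```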
Histogram Analysis

Divide data into buckets and store the average (or sum) for each bucket.

Partitioning rules:
Equal-width: equal bucket range
Equal-frequency (or equal-depth): each bucket holds roughly the same number of values

[Figure: bar chart of counts over price buckets from 10,000 to 90,000]
Clustering
Partition data set into clusters based on similarity, and
store cluster representation (e.g., centroid and diameter)
only
Can be very effective if data is clustered but not if data
is “smeared”
Can have hierarchical clustering and be stored in multi-dimensional index tree structures
There are many choices of clustering definitions and
clustering algorithms
Cluster analysis will be studied in depth in Chapter 10
Sampling
Obtaining a small sample s to represent the whole data set N
Sampling without replacement
Once an object is selected, it is removed from the
population
Sampling with replacement
A selected object is not removed from the population
Stratified sampling:
Partition the data set into strata, and draw samples from each stratum proportionally
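A sketch of the three schemes using the standard library; the population, the strata, and the 10% sampling rate are illustrative assumptions:

```python
import random

random.seed(42)
population = list(range(100))

# Simple random sample WITHOUT replacement (SRS): no object can repeat.
srs = random.sample(population, 10)

# Simple random sample WITH replacement (SRSWR): a selected object stays
# in the population and may be drawn again.
srswr = [random.choice(population) for _ in range(10)]

# Stratified sampling: partition into strata, then sample ~10% of each stratum.
strata = {
    "young": list(range(40)),        # 40 objects
    "middle": list(range(40, 90)),   # 50 objects
    "senior": list(range(90, 100)),  # 10 objects (kept represented despite skew)
}
sample = []
for members in strata.values():
    k = max(1, round(0.1 * len(members)))
    sample.extend(random.sample(members, k))
print(len(sample))  # 4 + 5 + 1 = 10
```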
Sampling: With or without Replacement

[Figure: raw data sampled by SRS (simple random sample without replacement) and by SRSWR (simple random sampling with replacement)]
Sampling: Cluster or Stratified Sampling
Data Cube Aggregation
Data Compression

[Figure: original data reduced to a compressed form; lossless compression reconstructs the data exactly, while lossy compression yields an approximation]
Normalization

Min-max normalization: to [new_min_A, new_max_A]:

    v' = (v − min_A) / (max_A − min_A) × (new_max_A − new_min_A) + new_min_A

Ex. Let income range $12,000 to $98,000 be normalized to [0.0, 1.0]. Then $73,600 is mapped to (73,600 − 12,000) / (98,000 − 12,000) × (1.0 − 0) + 0 = 0.716.

Z-score normalization (μ_A: mean, σ_A: standard deviation of attribute A):

    v' = (v − μ_A) / σ_A

Ex. Let μ = 54,000, σ = 16,000. Then 73,600 is mapped to (73,600 − 54,000) / 16,000 = 1.225.

Normalization by decimal scaling:

    v' = v / 10^j, where j is the smallest integer such that Max(|v'|) < 1
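The three normalization formulas can be sketched and checked against the worked numbers:

```python
def min_max(v, mn, mx, new_mn=0.0, new_mx=1.0):
    """Min-max normalization of v from [mn, mx] to [new_mn, new_mx]."""
    return (v - mn) / (mx - mn) * (new_mx - new_mn) + new_mn

def z_score(v, mu, sigma):
    """Z-score normalization: how many standard deviations v is from the mean."""
    return (v - mu) / sigma

def decimal_scaling(values):
    """Divide by 10^j, with j the smallest integer making every |v'| < 1."""
    j = 0
    while max(abs(v) for v in values) / 10 ** j >= 1:
        j += 1
    return [v / 10 ** j for v in values]

print(round(min_max(73600, 12000, 98000), 3))  # 0.716
print(round(z_score(73600, 54000, 16000), 3))  # 1.225
print(decimal_scaling([-986, 917]))            # [-0.986, 0.917]
```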
Discretization
Three types of attributes
Nominal—values from an unordered set, e.g., color, profession
Ordinal—values from an ordered set, e.g., military or academic
rank
Numeric—integer or real numbers
Discretization: Divide the range of a continuous attribute into intervals
Interval labels can then be used to replace actual data values
Reduce data size by discretization
Supervised vs. unsupervised
Split (top-down) vs. merge (bottom-up)
Discretization can be performed recursively on an attribute
Prepare for further analysis, e.g., classification
Data Discretization Methods
Typical methods: All the methods can be applied recursively
Binning
Top-down split, unsupervised
Histogram analysis
Top-down split, unsupervised
Clustering analysis (unsupervised, top-down split or
bottom-up merge)
Decision-tree analysis (supervised, top-down split)
Correlation (e.g., χ²) analysis (unsupervised, bottom-up merge)
Simple Discretization: Binning
Discretization by Classification & Correlation Analysis
Classification (e.g., decision tree analysis)
Supervised: Given class labels, e.g., cancerous vs. benign
Using entropy to determine split point (discretization point)
Top-down, recursive split
Correlation analysis (e.g., Chi-merge: χ2-based discretization)
Supervised: use class information
Bottom-up merge: find the best neighboring intervals (those having
similar distributions of classes, i.e., low χ2 values) to merge
Merge performed recursively, until a predefined stopping condition
Concept Hierarchy Generation
Concept hierarchy organizes concepts (i.e., attribute values)
hierarchically and is usually associated with each dimension in a data
warehouse
Concept hierarchies facilitate drilling and rolling in data warehouses to
view data in multiple granularity
Concept hierarchy formation: Recursively reduce the data by collecting
and replacing low level concepts (such as numeric values for age) by
higher level concepts (such as youth, adult, or senior)
Concept hierarchies can be explicitly specified by domain experts
and/or data warehouse designers
Concept hierarchy can be automatically formed for both numeric and
nominal data. For numeric data, use discretization methods shown.
Concept Hierarchy Generation for Nominal Data
Specification of a partial/total ordering of attributes
explicitly at the schema level by users or experts
street < city < state < country
Specification of a hierarchy for a set of values by explicit
data grouping
{Urbana, Champaign, Chicago} < Illinois
Specification of only a partial set of attributes
E.g., only street < city, not others
Automatic generation of hierarchies (or attribute levels) by
the analysis of the number of distinct values
E.g., for a set of attributes: {street, city, state, country}
Automatic Concept Hierarchy Generation
Some hierarchies can be automatically generated based on
the analysis of the number of distinct values per attribute in
the data set
The attribute with the most distinct values is placed at the lowest level of the hierarchy.
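A sketch of the distinct-value heuristic; the records and attribute values below are made up, and ties keep the input order:

```python
def auto_hierarchy(records, attributes):
    """Order attributes by distinct-value count: fewest values -> top level."""
    counts = {a: len({r[a] for r in records}) for a in attributes}
    return sorted(attributes, key=lambda a: counts[a])

records = [
    {"street": "Main St", "city": "Urbana", "state": "IL", "country": "USA"},
    {"street": "Oak Ave", "city": "Champaign", "state": "IL", "country": "USA"},
    {"street": "Pine Rd", "city": "Buffalo", "state": "NY", "country": "USA"},
    {"street": "Lake Dr", "city": "Vancouver", "state": "BC", "country": "Canada"},
]
# Distinct counts: country=2, state=3, street=4, city=4.
print(auto_hierarchy(records, ["street", "city", "state", "country"]))
# ['country', 'state', 'street', 'city']
```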