
Data Pre-processing

• Data cleansing
• Data integration
• Data reduction
• Data transformation

1
What is Data Pre-processing?
• "Every data analysis task starts by gathering,
characterizing, and cleaning a new, unfamiliar data
set..."
– More than 80% of researchers working on data mining
projects spend 40-60% of their time on cleaning and
preparing data (Kalashnikov & Mehrotra, 2005).
• Data pre-processing in data mining activity refers to
the processing of the various data elements to
prepare for the mining operation.
• Any activity performed prior to mining the data to
get knowledge out of it is called data pre-processing.
2
Data Collection for Mining
• Data mining requires collecting great amount of data
(available in data warehouses or databases) to achieve
the intended objective.
– Data mining starts by understanding the business or problem
domain in order to gain the business knowledge
• Business knowledge guides the process towards useful
results, and enables the recognition of those results that
are useful.
– Based on the business knowledge, data related to the business
problem are identified from the database/data warehouse for
mining.
• Before feeding data to DM, we have to ensure the
quality of the data.
3
Data Quality Measures
• Well-accepted multidimensional measures of data
quality include the following:
– Accuracy (No errors, no outliers)
– Completeness (no missing values)
– Consistency (no inconsistent values and attributes)
– Timeliness (appropriateness)
– Believability (acceptability)
– Interpretability (easy to understand)
• Most real-world data is of poor quality; that is:
– Incomplete, Inconsistent, Noisy, Invalid, Redundant, …
4
Data is often of low quality
• Collecting the required data is challenging
– In addition to being heterogeneous and distributed
in nature, real-world data is low in quality.

• Why?
– You didn’t collect it yourself!
– It probably was created for some other use, and then
you came along wanting to integrate it
– People make mistakes (typos)
– People are too busy to systematically organize data
carefully using structured formats
5
Types of problems with data
• Some data have problems on their own that need to be
cleaned:
– Outliers: misleading data that do not fit most of the data/facts
– Missing data: attribute values may be absent and need to
be replaced with estimates
– Irrelevant data: attributes in the database that are not of
interest to the DM task at hand
– Noisy data: attribute values that may be invalid or incorrect, e.g.
typographical errors
– Inconsistent data, duplicate data, etc.
• Other data are problematic only when we want to integrate it
– Everyone had their own way of structuring and formatting data,
based on what was convenient for them
– How to integrate data organized in different format following
different conventions.
6
Case study: Government Agency Data
• What we want:

ID  Name                        City         State
1   Ministry of Transportation  Addis Ababa  Addis Ababa
2   Ministry of Finance         Addis Ababa  Addis Ababa
3   Office of Foreign Affairs   Addis Ababa  Addis Ababa

• How do we prepare enough complete data for data mining?
✓ Coming up with good-quality data requires passing
through several data preprocessing tasks.
7
Major Tasks in Data Preprocessing
• Data cleansing: to get rid of bad data
– Fill in missing values, smooth noisy data, identify or remove
outliers, and resolve inconsistencies
• Data integration
– Integration of data from multiple sources, such as databases,
data warehouses, or files
• Data reduction
– obtains a reduced representation of the data set that is much
smaller in volume, yet produces almost the same results.
o Dimensionality reduction
o Numerosity/size reduction
o Data compression
• Data transformation
– Normalization
– Discretization and/or Concept hierarchy generation
8
Data Cleaning: Redundancy
• Duplicate or redundant data is a data problem that
requires data cleaning.
• What’s wrong here?
ID Name City State
1 Ministry of Transportation Addis Ababa Addis Ababa
2 Ministry of Finance Addis Ababa Addis Ababa
3 Ministry of Finance Addis Ababa Addis Ababa

• How to clean it: manually or automatically?


9
Data Cleaning: Incomplete Data
• The dataset may lack certain attributes of interest
– Is a patient demographic profile plus the region address
enough to predict the vulnerability or exposure of a
given region to a Malaria outbreak?

• The dataset may contain only aggregate data, e.g., a traffic
police car-accident report:
– this many accidents occurred on this day in this sub-city

No. of accidents  Date          Address
3                 Oct 23, 2012  Yeka, Addis Ababa
2                 Oct 12, 2011  Oromia region

10
Data Cleaning: Missing Data
• Data is not always available; attribute values may be
lacking. E.g., Occupation=“ ”
✓ many tuples have no recorded value for several
attributes, such as customer income in sales data

ID  Name                        City         State
1   Ministry of Transportation  Addis Ababa  Addis Ababa
2   Ministry of Finance         ?            Addis Ababa
3   Office of Foreign Affairs   Addis Ababa  Addis Ababa

• What’s wrong here? A missing required field

11
Data Cleaning: Missing Data
• Missing data may be due to
– values inconsistent with other recorded data having been deleted
– data not entered due to misunderstanding, or not considered
important at the time of entry
– history or changes of the data not being registered
• How to handle missing data? Missing values may need to be
inferred
– Ignore the missing value: not effective when the percentage
of missing values per attribute varies considerably
– Fill in the missing value manually: tedious + infeasible?
– Fill in automatically
• calculate, say, the most probable value using the
Expectation Maximization (EM) algorithm
12
Predict missing value using EM
• Solves estimation with incomplete data:
– Obtain initial estimates for the parameters (e.g., using the mean value).
– Use the estimates to calculate values for the missing data, &
– Iterate until convergence ((μi − μi+1) ≤ θ).
• E.g.: out of six data items with known values = {1, 5, 10, 4}, estimate
the two missing data items.
– Let EM converge when two successive estimates differ by at most
0.05, and let our initial guess of the two missing values be 3.
• The algorithm stops once the last two estimates are only about 0.05
apart; thus, our estimate for the two missing items is 4.97.
13
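The iteration above can be sketched in a few lines of Python. This is a simplified illustration of the EM idea for a single mean parameter, not a full EM implementation; the function name and defaults are chosen just for this example.

```python
def em_impute(known, n_missing, init=3.0, tol=0.05):
    """Iteratively re-estimate missing values as the overall mean
    until two successive estimates differ by at most `tol`."""
    est = init
    while True:
        # E-step: fill the missing slots with the current estimate;
        # M-step: recompute the mean over all items.
        mean = (sum(known) + n_missing * est) / (len(known) + n_missing)
        if abs(mean - est) <= tol:
            return mean
        est = mean

print(em_impute([1, 5, 10, 4], 2))  # converges near 4.97, as on the slide
```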
Data Cleaning: Noisy Data
• Noisy: containing noise, errors, or outliers
– e.g., Salary=“−10” (an error)
• Typographical errors corrupt data
– e.g., ‘green’ written as ‘rgeen’
• Incorrect attribute values may be due to
– faulty data collection instruments
– data entry problems
– data transmission problems
– technology limitation
– inconsistency in naming conventions
14
Data Cleaning: How to catch Noisy Data
• Manually check all data : tedious + infeasible?
• Sort data by frequency
– ‘green’ is more frequent than ‘rgeen’
– Works well for categorical data
• Use, say, numerical constraints to catch corrupt data
– Weight can’t be negative
– People can’t have more than 2 parents
– Salary can’t be less than Birr 3000
• Use statistical techniques to catch corrupt data
– Check for outliers (the case of the 8-meter man)
– Check for correlated outliers using n-grams (“pregnant male”)
• People can be male
• People can be pregnant
• People can’t be male AND pregnant
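The frequency and constraint checks above can be sketched in Python; the record fields and the rarity threshold here are invented purely for illustration.

```python
from collections import Counter

records = [
    {"color": "green", "weight": 70},
    {"color": "green", "weight": 65},
    {"color": "green", "weight": 72},
    {"color": "rgeen", "weight": -5},  # a typo and an impossible weight
]

# Sort values by frequency: rare spellings of a categorical value are suspect.
freq = Counter(r["color"] for r in records)
suspect_values = [v for v, n in freq.items() if n == 1]

# Numerical constraint: weight can't be negative.
bad_weights = [r for r in records if r["weight"] < 0]

print(suspect_values, len(bad_weights))
```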
15
Data Integration
• Data integration combines data from multiple sources
(database, data warehouse, files & sometimes from
non-electronic sources) into a coherent store
• Because of the use of different sources, data that is
fine on its own may become problematic when we
want to integrate it.
• Some of the issues are:
– Different formats and structures
– Conflicting and redundant data
– Data at different levels

16
Data Integration: Formats
• Not everyone uses the same format. Do you agree?
– Schema integration: e.g., A.cust-id ≡ B.cust-#
• Integrate metadata from different sources
• Dates are especially problematic:
– 12/19/97
– 19/12/97
– 19/12/1997
– 19-12-97
– Dec 19, 1997
– 19 December 1997
– 19th Dec. 1997
• Are you frequently writing money as:
– Birr 200, Br. 200, 200 Birr, …
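A sketch of normalizing such date strings with Python's standard library. The format list is an assumption; ambiguous cases like 12/19/97 vs. 19/12/97 can only be resolved by first agreeing on a convention (day-first is assumed here).

```python
from datetime import date, datetime

# Day-first formats, tried in order; adjust to the convention your data uses.
FORMATS = ["%d/%m/%y", "%d/%m/%Y", "%d-%m-%y", "%d %B %Y", "%b %d, %Y"]

def parse_date(text):
    """Return the first successful parse of `text` as a date."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"unrecognised date: {text!r}")

print(parse_date("19/12/1997"), parse_date("Dec 19, 1997"))
```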
17
Data Integration: Inconsistent
• Inconsistent data: containing discrepancies in codes or
names; also a problem of lacking standardization /
naming conventions. E.g.,
– Age=“26” vs. Birthday=“03/07/1986”
– Some use “1,2,3” for rating; others “A, B, C”
• Discrepancy between duplicate records:

ID  Name                        City         State
1   Ministry of Transportation  Addis Ababa  Addis Ababa region
2   Ministry of Finance         Addis Ababa  Addis Ababa administration
3   Office of Foreign Affairs   Addis Ababa  Addis Ababa regional administration

18
Data Integration: different structure
What’s wrong here? No data type constraints

ID     Name                        City         State
1234   Ministry of Transportation  Addis Ababa  AA

ID     Name                 City         State
GCR34  Ministry of Finance  Addis Ababa  AA

Name                       ID     City         State
Office of Foreign Affairs  GCR34  Addis Ababa  AA

19
Data Integration: Conflicting Data
• Detecting and resolving data value conflicts
–For the same real world entity, attribute values from different
sources are different
–Possible reasons: different representations, different scales, e.g.,
American vs. British units
• weight measurement: KG or pound
• Height measurement: meter or inch
• Information source #1 says that Alex lives in Bahirdar
• Information source #2 says that Alex lives in Mekele
• What to do?
– Use both (He lives in both places)
– Use the most recently updated piece of information
– Use the “most trusted” information
– Flag row to be investigated further by hand
– Use neither (We’d rather be incomplete than wrong)
20
Handling Redundancy in Data Integration
• Redundant data often occur when integrating multiple
databases
– Object identification: the same attribute or object may have
different names in different databases
– Derivable data: one attribute may be a “derived” attribute in
another table; e.g., if we have an attribute for birth date, then
age is derivable.
• Redundant attributes may be detected by correlation
analysis and covariance analysis
• Careful integration of data from multiple sources may
help reduce/avoid redundancies and inconsistencies
and improve mining speed and quality
21
Covariance
• Covariance is similar to correlation:

Cov(p, q) = Σi (pi − p̄)(qi − q̄) / n

where n is the number of tuples, p̄ and q̄ are the respective
means of p and q, and σp and σq are the respective standard
deviations of p and q.
• It can be simplified in computation as

Cov(p, q) = E(p·q) − p̄·q̄ = (Σi pi·qi)/n − p̄·q̄

• Positive covariance: if Cov(p, q) > 0, then p and q tend to be
directly related.
• Negative covariance: if Cov(p, q) < 0, then p and q are inversely
related.
• Independence: if p and q are independent, then Cov(p, q) = 0
(the converse does not hold in general).
22
Example: Co-Variance
• Suppose two stocks A and B have the following values in
one week: (2, 5), (3, 8), (5, 10), (4, 11), (6, 14).
• Question: if the stocks are affected by the same industry
trends, will their prices rise or fall together?
– E(A) = (2 + 3 + 5 + 4 + 6)/5 = 20/5 = 4
– E(B) = (5 + 8 + 10 + 11 + 14)/5 = 48/5 = 9.6
– Cov(A, B) = (2×5 + 3×8 + 5×10 + 4×11 + 6×14)/5 − 4 × 9.6
= 42.4 − 38.4 = 4
• Thus, A and B rise together since Cov(A, B) > 0.
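The computation can be checked with a direct transcription of the simplified formula into Python:

```python
def covariance(p, q):
    """Population covariance: Cov(p, q) = E(p*q) - mean(p) * mean(q)."""
    n = len(p)
    mean_p, mean_q = sum(p) / n, sum(q) / n
    return sum(x * y for x, y in zip(p, q)) / n - mean_p * mean_q

A = [2, 3, 5, 4, 6]
B = [5, 8, 10, 11, 14]
print(covariance(A, B))  # positive, so A and B tend to rise together
```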


23
Data Reduction Strategies
• Data reduction: Obtain a reduced representation of the data set that
is much smaller in volume but yet produces the same (or almost the
same) analytical results
• Why data reduction? A database/data warehouse may store
terabytes of data. Complex data analysis may take a very long time
to run on the complete data set.

• Data reduction strategies
– Dimensionality reduction
• Select the best attributes or remove unimportant attributes
– Numerosity reduction
• Reduce data volume by choosing alternative, smaller forms of
data representation
– Data compression
• A technology that reduces the size of large files so that the
smaller files take less memory space and are faster to transfer
over a network or the Internet
24
Data Reduction: Dimensionality Reduction
• Curse of dimensionality
– When dimensionality increases, data becomes increasingly sparse
• Dimensionality reduction
– Helps to eliminate Irrelevant attributes and reduce noise: that
contain no information useful for the data mining task at hand
• E.g., is students' ID relevant to predict students' GPA?
–Helps to avoid redundant attributes : that contain duplicate
information in one or more other attributes
• E.g., purchase price of a product & the amount of sales tax paid
– Reduce time and space required in data mining
– Allow easier visualization
• Method: attribute subset selection
– One way to reduce the dimensionality of the data is to select
the best attributes
– Given M attributes, there are 2^M possible attribute combinations
25
Heuristic Search in Attribute Selection
• Commonly used heuristic attribute selection methods:
– Best step-wise attribute selection:
• Start with empty set of attributes
• The best single-attribute is picked first
• Then combine best attribute with the remaining to select the
best combined two attributes, then three attributes,…
• The process continues until the performance of the combined
attributes starts to decline
– Step-wise attribute elimination:
• Start with the full set of attributes
• Eliminate the worst-performing attribute
• Repeat the process as long as the performance of the
remaining attributes improves
– Best combined attribute selection and elimination
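Best step-wise (forward) selection can be sketched as follows. `score` stands in for whatever evaluation measure is used (e.g., cross-validated accuracy of a model built on the subset); the toy score at the bottom is invented purely to exercise the loop.

```python
def forward_select(attributes, score):
    """Greedily add the attribute that most improves score(subset);
    stop when no addition improves on the current best score."""
    selected, best = [], float("-inf")
    while True:
        remaining = [a for a in attributes if a not in selected]
        if not remaining:
            return selected
        cand_score, cand = max((score(selected + [a]), a) for a in remaining)
        if cand_score <= best:
            return selected  # performance started to decline
        best, selected = cand_score, selected + [cand]

# Toy score: 'x' and 'y' are informative, anything else is a small penalty.
toy = lambda s: len(set(s) & {"x", "y"}) - 0.1 * len(set(s) - {"x", "y"})
print(sorted(forward_select(["x", "y", "z"], toy)))  # ['x', 'y']
```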
26
Data Reduction: Numerosity Reduction
• Different methods can be used, including Clustering and
sampling
• Clustering
– Partition data set into clusters based on similarity, and store
cluster representation (e.g., centroid) only
– There are many choices of clustering definitions and clustering
algorithms
• Sampling
– obtaining a small sample s to represent the whole data set N
– Allow a mining algorithm to run in complexity that is potentially
sub-linear to the size of the data
– Key principle: Choose a representative subset of the data using
suitable sampling technique
27
Types of Sampling
• Stratified sampling:
– An adaptive sampling method that partitions the data set
and draws samples from each partition (proportionally,
i.e., approximately the same percentage of the data)
– Used in conjunction with skewed data
• Simple random sampling
– There is an equal probability of selecting any particular item
– Simple random sampling may have very poor performance in
the presence of skew
• Sampling without replacement
– Once an object is selected, it is removed from the population
• Sampling with replacement
– A selected object is not removed from the population
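A minimal stratified-sampling sketch (without replacement); the grouping key and fraction are parameters you would choose for your own data.

```python
import random

def stratified_sample(data, key, fraction, seed=0):
    """Draw ~`fraction` of each stratum so class proportions are preserved."""
    rng = random.Random(seed)
    strata = {}
    for item in data:
        strata.setdefault(key(item), []).append(item)
    sample = []
    for items in strata.values():
        k = max(1, round(len(items) * fraction))
        sample.extend(rng.sample(items, k))  # without replacement
    return sample

data = [("common", i) for i in range(90)] + [("rare", i) for i in range(10)]
sample = stratified_sample(data, key=lambda r: r[0], fraction=0.1)
print(len(sample))  # 10 items: 9 'common' and 1 'rare'
```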
28
Sampling: Cluster or Stratified Sampling

[Figure: raw data (left) vs. cluster/stratified sample (right)]

29
Data Transformation
• A function that maps the entire set of values of a given attribute to
a new set of replacement values such that each old value can be
identified with one of the new values
• Methods for data transformation
– Normalization: Scaled to fall within a smaller, specified range of
values
• min-max normalization
• z-score normalization
– Discretization: Reduce data size by dividing the range of a
continuous attribute into intervals. Interval labels can then be
used to replace actual data values
• Discretization can be performed recursively on an attribute
using methods such as
– Binning: divide values into intervals
– Concept hierarchy climbing: organizes concepts (i.e.,
attribute values) hierarchically
30
Normalization
• Min-max normalization:

v' = (v − minA) / (maxA − minA) × (newMax − newMin) + newMin

– Ex. Let income range $12,000 to $98,000 be normalized to
[0.0, 1.0]. Then $73,600 is mapped to

(73,600 − 12,000) / (98,000 − 12,000) × (1.0 − 0) + 0 = 0.716

• Z-score normalization (μ: mean, σ: standard deviation):

v' = (v − μA) / σA

– Ex. Let μ = 54,000, σ = 16,000. Then
(73,600 − 54,000) / 16,000 = 1.225
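Both normalizations translate directly into Python; the numbers below reproduce the two examples above.

```python
def min_max(v, min_a, max_a, new_min=0.0, new_max=1.0):
    """Scale v from [min_a, max_a] into [new_min, new_max]."""
    return (v - min_a) / (max_a - min_a) * (new_max - new_min) + new_min

def z_score(v, mu, sigma):
    """Express v as the number of standard deviations from the mean."""
    return (v - mu) / sigma

print(round(min_max(73_600, 12_000, 98_000), 3))  # 0.716
print(z_score(73_600, 54_000, 16_000))            # 1.225
```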
31
Simple Discretization: Binning
• Equal-width (distance) partitioning
–Divides the range into N intervals of equal size (uniform grid)
–if A and B are the lowest and highest values of the attribute,
the width of intervals for N bins will be:
W = (B –A)/N.
–This is the most straightforward, but outliers may dominate
presentation
• Skewed data is not handled well
• Equal-depth (frequency) partitioning
– Divides the range into N bins, each containing approximately
the same number of samples
–Good data scaling
–Managing categorical attributes can be tricky
32
Binning into Ranges
• Given the following AGE attribute values for 9 instances:
– 0, 4, 12, 16, 16, 18, 24, 26, 28
• Rearrange the data in increasing order if not sorted

• Equi-width binning for a bin width of, e.g., 10
(“−” denotes negative infinity, “+” positive infinity):
– Bin 1: 0, 4            the [−, 10) bin
– Bin 2: 12, 16, 16, 18  the [10, 20) bin
– Bin 3: 24, 26, 28      the [20, +) bin

• Equi-frequency binning for a bin density of, e.g., 3:
– Bin 1: 0, 4, 12   the [−, 14) bin
– Bin 2: 16, 16, 18 the [14, 21) bin
– Bin 3: 24, 26, 28 the [21, +) bin
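Both binning schemes can be sketched in Python. The equal-width version below aligns bin edges at multiples of the width, which matches the [0, 10), [10, 20), … intervals above.

```python
def equal_width_bins(values, width):
    """Group values into intervals [k*width, (k+1)*width)."""
    bins = {}
    for v in sorted(values):
        low = (v // width) * width
        bins.setdefault((low, low + width), []).append(v)
    return bins

def equal_frequency_bins(values, depth):
    """Group sorted values into runs of `depth` items each."""
    vs = sorted(values)
    return [vs[i:i + depth] for i in range(0, len(vs), depth)]

ages = [0, 4, 12, 16, 16, 18, 24, 26, 28]
print(equal_width_bins(ages, 10))
print(equal_frequency_bins(ages, 3))
```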
33
Binning Methods for Data Smoothing
❑ Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28,
29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
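The two smoothing rules above can be sketched as follows; ties between the two boundaries go to the lower one, and the bin mean is rounded to match the slide's integer values.

```python
def smooth_by_means(bins):
    """Replace every value in a bin by the (rounded) bin mean."""
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    """Replace each value by the nearer of the bin's two boundary values."""
    out = []
    for b in bins:
        lo, hi = b[0], b[-1]
        out.append([lo if v - lo <= hi - v else hi for v in b])
    return out

price_bins = [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(smooth_by_means(price_bins))       # [[9]*4, [23]*4, [29]*4]
print(smooth_by_boundaries(price_bins))  # matches the slide's boundary bins
```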
34
Concept Hierarchy Generation
• A concept hierarchy organizes concepts (i.e., attribute
values) hierarchically and is usually associated with each
dimension in a data warehouse
– Concept hierarchy formation: recursively reduce the data
by collecting and replacing low-level concepts (such as
numeric values for age) with higher-level concepts (such
as child, youth, adult, or senior)
• Concept hierarchies can be explicitly specified by domain
experts and/or data warehouse designers

[Figure: example location hierarchy — Region or state → city → Sub city → Kebele]

• A concept hierarchy can be formed automatically by analyzing the
number of distinct values, e.g., for the set of attributes {Kebele,
city, state, country}
✓ For numeric data, use discretization methods.
Dataset preparation for learning
• A standard machine learning technique is to divide the
dataset into a training set and a test set.
– Training dataset is used for learning the parameters of
the model in order to produce hypotheses.
• A training set is a set of problem instances (described as a set
of properties and their values), together with a classification
of the instance.
– Test dataset, which is never seen during the
hypothesis forming stage, is used to get a final,
unbiased estimate of how well the model works.
• Test set evaluates the accuracy of the model/hypothesis in
predicting the categorization of unseen examples.
• A set of instances and their classifications used to test the
accuracy of a learned hypothesis.
Divide the dataset into training & test
• There are various ways to separate the data into
training and test sets
– There are established ways to use the two sets to assess
the effectiveness and the predictive/descriptive accuracy of
a machine learning technique over unseen examples:
– The holdout method
• Repeated holdout method
– Cross-validation
– The bootstrap
The holdout method
• The holdout method reserves a certain amount of the
data for testing and uses the remainder for training
– Usually: one third for testing, the rest for training
• For small or “unbalanced” datasets, samples might not
be representative
– Few or no instances of some classes
• Stratified sample: advanced version of balancing the
data
– Make sure that each class is represented with approximately
equal proportions in both subsets
38
Cross-validation
• Cross-validation works as follows:
– First step: the data is split randomly into k equal-sized
subsets. A partition of a set is a collection of subsets for
which the intersection of any pair of sets is empty; that is, no
element of one subset is an element of another subset in the
partition.
– Second step: each subset in turn is used for testing and the
remainder for training
• This is called k-fold cross-validation
– Often the subsets are stratified before the cross-validation is
performed
• The error estimates are averaged to yield an overall error
estimate
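The k-fold split can be sketched with the standard library; the round-robin assignment below is one simple way to get k near-equal, disjoint folds (no stratification).

```python
import random

def k_fold_splits(data, k, seed=0):
    """Yield (train, test) pairs; each item lands in exactly one test fold."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # round-robin partition of indices
    for i in range(k):
        test = [data[j] for j in folds[i]]
        train = [data[j] for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# With 10 items and k=5, every fold holds 2 test items and 8 training items.
for train, test in k_fold_splits(list(range(10)), k=5):
    assert len(train) == 8 and len(test) == 2
```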
39
Cross-validation example:
— Break up the data into groups of the same size
— Hold aside one group for testing (the “Test” fold) and use the rest to build the model
— Repeat, holding aside each group in turn
40
Thank You!

41
