
DATA MINING

LECTURE 1
Introduction
2

What is data mining?


• After years of data mining there is still no unique
answer to this question.

• A tentative definition:

Data mining is the use of efficient techniques for


the analysis of very large collections of data and the
extraction of useful and possibly unexpected
patterns in data.
3

Why do we need data mining?


•Really, really huge amounts of raw data!!
•In the digital age, TB of data is generated by the second
•Mobile devices, digital photographs, web documents.
•Facebook updates, Tweets, Blogs, User-generated content
•Transactions, sensor data, surveillance data
•Queries, clicks, browsing
•Cheap storage has made it possible to retain all this data
•We need to analyze the raw data to extract
knowledge
4

Why do we need data mining?


• “The data is the computer”
• Large amounts of data can be more powerful than complex
algorithms and models
• Google has solved many Natural Language Processing problems,
simply by looking at the data
• Example: misspellings, synonyms
• Data is power!
• Today, the collected data is one of the biggest assets of an online
company
• Query logs of Google
• The friendship and updates of Facebook
• Tweets and follows of Twitter
• Amazon transactions
• We need a way to harness the collective intelligence
5

The data is also very complex


• Multiple types of data: tables, time series,
images, graphs, etc

• Spatial and temporal aspects

• Interconnected data of different types:


• From a mobile phone we can collect the location of the
user, friendship information, check-ins to venues,
opinions through Twitter, images through cameras, and
queries to search engines
6

Example: transaction data


• Billions of real-life customers:
• WALMART: 20M transactions per day
• AT&T 300 M calls per day
• Credit card companies: billions of transactions per day.

• Loyalty (point) cards allow companies to collect
information about specific users
7

Example: document data


• Web as a document repository: an estimated 50
billion web pages

• Wikipedia: 4 million articles (and counting)

• Online news portals: steady stream of 100’s of


new articles every day

• Twitter: ~300 million tweets every day


8

Example: network data


• Web: 50 billion pages linked via hyperlinks

• Facebook: 500 million users

• Twitter: 300 million users

• Instant messenger: ~1billion users

• Blogs: 250 million blogs worldwide, presidential


candidates run blogs
9

Example: genomic sequences


• http://www.1000genomes.org/page.php

• Full sequence of 1000 individuals

• 3×10^9 nucleotides per person → 3×10^12 nucleotides in total

• Lots more data in fact: medical history of the


persons, gene expression data
10

Example: environmental data


• Climate data (just an example)
http://www.ncdc.gov/oa/climate/ghcn-monthly/index.php

• “a database of temperature, precipitation and


pressure records managed by the National Climatic
Data Center, Arizona State University and the
Carbon Dioxide Information Analysis Center”

• “6000 temperature stations, 7500 precipitation


stations, 2000 pressure stations”
• Spatiotemporal data
11

Behavioral data
• Mobile phones today record a large amount of information about the user
behavior
• GPS records position
• Camera produces images
• Communication via phone and SMS
• Text via facebook updates
• Association with entities via check-ins

• Amazon collects all the items that you browsed, placed into your basket,
read reviews about, purchased.

• Google and Bing record all your browsing activity via toolbar plugins. They
also record the queries you asked, the pages you saw and the clicks you
did.

• Data collected for millions of users on a daily basis


So, what is Data?

• Collection of data objects and their attributes

• An attribute is a property or characteristic of an object
• Examples: eye color of a person, temperature, etc.
• Attribute is also known as variable, field, characteristic, or feature

• A collection of attributes describe an object
• Object is also known as record, point, case, sample, entity, or instance

Objects (rows) and Attributes (columns):

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Size: Number of objects
Dimensionality: Number of attributes
Sparsity: Number of populated
object-attribute pairs
13

Attribute values
• Each attribute takes its values from a set of possible
values (its domain)
• The same attribute can be mapped to different
attribute values
• Example: height can be measured in feet or meters
• Different attributes can be mapped to the same
set of values
• Example: Attribute values for ID and age are integers
14

Types of Attributes
• There are different types of attributes
• Categorical
• Examples: eye color, zip codes, words, rankings (e.g., good,
fair, bad), height in {tall, medium, short}
• Nominal (no order or comparison) vs Ordinal (ordered, but
differences between values are not meaningful)
• Numeric
• Examples: dates, temperature, time, length, value, count.
• Discrete (counts) vs Continuous (temperature)
• Special case: Binary attributes (yes/no, exists/not exists)
15

Numeric Record Data


• If data objects have the same fixed set of numeric
attributes, then the data objects can be thought of as
points in a multi-dimensional space, where each
dimension represents a distinct attribute

• Such a data set can be represented by an n-by-d data


matrix, where there are n rows, one for each object, and d
columns, one for each attribute
Projection of x Load  Projection of y Load  Distance  Load  Thickness
10.23                 5.27                  15.22     2.7   1.2
12.65                 6.25                  16.22     2.2   1.1
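As a minimal sketch (assuming NumPy is available), the two objects above can be stored as a 2-by-5 data matrix; the attribute order follows the table.

import numpy as np

# Each row is one object; each column is one attribute
# (Projection of x Load, Projection of y Load, Distance, Load, Thickness).
X = np.array([
    [10.23, 5.27, 15.22, 2.7, 1.2],
    [12.65, 6.25, 16.22, 2.2, 1.1],
])

n, d = X.shape          # n objects, d attributes
print(n, d)             # 2 5
print(X.mean(axis=0))   # per-attribute mean across the n objects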
16

Categorical Data
• Data that consists of a collection of records, each
of which consists of a fixed set of categorical
attributes
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          High            No
2    No      Married         Medium          No
3    No      Single          Low             No
4    Yes     Married         High            No
5    No      Divorced        Medium          Yes
6    No      Married         Low             No
7    Yes     Divorced        High            No
8    No      Single          Medium          Yes
9    No      Married         Medium          No
10   No      Single          Medium          Yes
17

Document Data
• Each document becomes a `term' vector,
• each term is a component (attribute) of the vector,
• the value of each component is the number of times the
corresponding term occurs in the document.
• Bag-of-words representation – no ordering

Term vectors (term counts per document):

             team  coach  play  ball  score  game  win  lost  timeout  season
Document 1     3     0     5     0     2      6     0    2      0        2
Document 2     0     7     0     2     1      0     0    3      0        0
Document 3     0     1     0     0     1      2     2    0      3        0
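A minimal sketch of building such bag-of-words term vectors with plain Python; the three short documents are illustrative, not the ones behind the table above.

from collections import Counter

docs = ["the team won the game",
        "the coach called a timeout",
        "the season is lost"]

# Fixed vocabulary = set of all terms; its order defines the vector components.
vocab = sorted({w for doc in docs for w in doc.split()})

# Term vector: count of each vocabulary term in the document (word order is ignored).
vectors = [[Counter(doc.split())[w] for w in vocab] for doc in docs]

for doc, vec in zip(docs, vectors):
    print(doc, "->", vec)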
18

Transaction Data
• Each record (transaction) is a set of items.
TID Items
1 Bread, Coke, Milk
2 Beer, Bread
3 Beer, Coke, Diaper, Milk
4 Beer, Bread, Diaper, Milk
5 Coke, Diaper, Milk

• A set of items can also be represented as a binary


vector, where each attribute is an item.
• A document can also be represented as a set of words
(no counts)
Sparsity: average number of products bought by a customer
19

Ordered Data
• Genomic sequence data

GGTTCCGCCTTCAGCCCCGCGCC
CGCAGGGCCCGCCCCGCGCCGTC
GAGAAGGGCCCGCCTGGCGGGCG
GGGGGAGGCGGGGCCGCCCGAGC
CCAACCGAGTCCGACCAGGTGCC
CCCTCTGCTCGGCCTAGACCTGA
GCTCATTAGGCGGCAGCGGACAG
GCCAAGTAGAACACGCGAAGCGC
TGGGCTGCCTGCTGCGACCAGGG

• Data is a long ordered string


20

Ordered Data
• Time series
• Sequence of ordered (over “time”) numeric values.
21

Graph Data
• Examples: Web graph and HTML Links
<a href="papers/papers.html#bbbb">Data Mining</a>
<li>
<a href="papers/papers.html#aaaa">Graph Partitioning</a>
<li>
<a href="papers/papers.html#aaaa">Parallel Solution of Sparse Linear System of Equations</a>
<li>
<a href="papers/papers.html#ffff">N-Body Computation and Dense Linear System Solvers</a>

[Figure: the corresponding web graph, with pages as nodes and hyperlinks as weighted edges]
22

Types of data
• Numeric data: Each object is a point in a
multidimensional space
• Categorical data: Each object is a vector of
categorical values
• Set data: Each object is a set of values (with or
without counts)
• Sets can also be represented as binary vectors, or
vectors of counts
• Ordered sequences: Each object is an ordered
sequence of values.
• Graph data
23

Data Mining: On What Kind of Data?


• Relational databases
• Data warehouses
• Transactional databases
• Advanced DB and information repositories
• Object-oriented and object-relational databases
• Spatial databases
• Time-series data and temporal data
• Text databases and multimedia databases
• Heterogeneous and legacy databases
• WWW
24

What can you do with the data?


• Suppose that you are the owner of a supermarket
and you have collected billions of market basket
data. What information would you extract from it
and how would you use it?
TID  Items
1    Bread, Coke, Milk
2    Beer, Bread
3    Beer, Coke, Diaper, Milk
4    Beer, Bread, Diaper, Milk
5    Coke, Diaper, Milk

Possible uses: Product placement, Catalog creation, Recommendations

• What if this was an online store?


25

What can you do with the data?


• Suppose you are a search engine and you have
a toolbar log consisting of
• pages browsed,
• queries,
• pages clicked,
• ads clicked

each with a user id and a timestamp. What
information would you like to get out of the data?

Possible uses: Ad click prediction, Query reformulations
26

What can you do with the data?


• Suppose you are biologist who has microarray
expression data: thousands of genes, and their
expression values over thousands of different settings
(e.g. tissues). What information would you like to get out
of your data?

Groups of genes and tissues


27

What can you do with the data?


• Suppose you are a stock broker and you observe
the fluctuations of multiple stocks over time. What
information would you like to get out of your
data?
Clustering of stocks

Correlation of stocks

Stock Value prediction


28

What can you do with the data?


• You are the owner of a social network, and you
have full access to the social graph, what kind of
information do you want to get out of your graph?

• Who is the most important node in the graph?


• What is the shortest path between two nodes?
• How many friends do two nodes have in common?
• How does information spread on the network?
29
Data mining (Knowledge Discovery in Databases)
consists of iterative sequences of the following seven
steps
1. Learning the application domain:
• Learn relevant prior knowledge to be used in the
DM process
• Learn the goals of DM application
2. Create target data set (data selection)
• Data cleaning and preprocessing: (may take 60%
of effort!)
• Data reduction and transformation:
• Find useful features, dimensionality/variable reduction,
invariant representation.
3. Choosing functions of data mining
• summarization, classification, regression,
association, clustering.
30

4. Choosing the mining algorithm(s)


• Data mining: search for patterns of interest
5. Identify the relevant knowledge by measuring the
mining result interestingness
• Pattern evaluation
6. Present the knowledge to the user
• knowledge presentation
• visualization, transformation, removing redundant patterns, etc.
7. Use of discovered knowledge
31

Why data mining?


• Commercial point of view
• Data has become the key competitive advantage of companies
• Examples: Facebook, Google, Amazon
• Being able to extract useful information out of the data is key to exploiting
it commercially.
• Scientific point of view
• Scientists are at an unprecedented position where they can collect TB of
information
• Examples: Sensor data, astronomy data, social network data, gene data
• We need the tools to analyze such data to get a better understanding of the
world and advance science
• Scale (in data size and feature dimension)
• Why not use traditional analytic methods?
• Enormity of data, curse of dimensionality
• The amount and the complexity of data does not allow for manual processing
of the data. We need automated techniques.
32

Why Data Mining?


Potential Applications
• Data mining has many and varied fields of
application some of which are listed below.
• Retail/Marketing
• Banking
• Insurance and Health Care
• Transportation
• Medicine
33

What is Data Mining again?


• “Data mining is the analysis of (often large)
observational data sets to find unsuspected relationships
and to summarize the data in novel ways that are both
understandable and useful to the data analyst” (Hand,
Mannila, Smyth)

• “Data mining is the discovery of models for data”


(Rajaraman, Ullman)
• We can have the following types of models
• Models that explain the data (e.g., a single function)
• Models that predict the future data instances.
• Models that summarize the data
• Models that extract the most prominent features of the data.
34

What is Data Mining & Data warehousing?


• Alternative names
• Knowledge discovery (mining) from databases (KDD),
• knowledge extraction,
• data/pattern analysis,
• data archeology,
• data dredging,
• information harvesting,
• business intelligence, etc.

• Note that:
• query processing systems, Expert systems (knowledge base
systems) or Information retrieval systems are not data mining
tasks
35

What is data mining again?


• The industry point of view: The analysis of huge
amounts of data for extracting useful and
actionable information, which is then integrated
into production systems in the form of new
features of products
• Data Scientists should be good at data analysis, math,
statistics, but also be able to code with huge amounts of
data and use the extracted information to build
products.
36

Data Mining Functionalities


• The supervised predictive data mining functionalities
include
• Classification
• Regression
• Time series
• Estimation

• The unsupervised descriptive data mining functionalities
include
• Concept /class description: Characterization (summarization) and
discrimination
• Association Analysis
• Clustering analysis
• Outlier analysis
• Evolution analysis
37

What can we do with data mining?


• Some examples:
• Frequent itemsets and Association Rules extraction
• Coverage
• Clustering
• Classification
• Ranking
• Exploratory analysis
38

Frequent Itemsets and Association Rules


• Given a set of records each of which contain some
number of items from a given collection;
• Identify sets of items (itemsets) occurring frequently
together
• Produce dependency rules which will predict occurrence
of an item based on occurrences of other items.
TID  Items
1    Bread, Coke, Milk
2    Beer, Bread
3    Beer, Coke, Diaper, Milk
4    Beer, Bread, Diaper, Milk
5    Coke, Diaper, Milk

Itemsets Discovered: {Milk, Coke}, {Diaper, Milk}
Rules Discovered: {Milk} --> {Coke}, {Diaper, Milk} --> {Beer}
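A minimal sketch of the basic operation behind this task, counting itemset support on the transactions above; the brute-force enumeration of pairs is only illustrative (algorithms such as Apriori prune the search), and the 60% support threshold is an assumption.

from itertools import combinations

transactions = [
    {"Bread", "Coke", "Milk"},
    {"Beer", "Bread"},
    {"Beer", "Coke", "Diaper", "Milk"},
    {"Beer", "Bread", "Diaper", "Milk"},
    {"Coke", "Diaper", "Milk"},
]

def support(itemset):
    # Fraction of transactions that contain every item of the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

items = set().union(*transactions)
# Brute-force check of all 2-itemsets against a 60% support threshold.
for pair in combinations(sorted(items), 2):
    s = support(set(pair))
    if s >= 0.6:
        print(pair, s)   # ('Coke', 'Milk') 0.6 and ('Diaper', 'Milk') 0.6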
39

Frequent Itemsets: Applications


• Text mining: finding associated phrases in text
• There are lots of documents that contain the phrases
“association rules”, “data mining” and “efficient
algorithm”

• Recommendations:
• Users who buy this item often buy this item as well
• Users who watched James Bond movies, also watched
Jason Bourne movies.

• Recommendations make use of item and user similarity


40

Association Rule Discovery: Application


• Supermarket shelf management.
• Goal: To identify items that are bought together by
sufficiently many customers.
• Approach: Process the point-of-sale data collected
with barcode scanners to find dependencies among
items.
• A classic rule --
• If a customer buys diaper and milk, then he is very likely to
buy beer.
• So, don’t be surprised if you find six-packs stacked next to
diapers!
41

Clustering Definition
• Given a set of data points, each having a set of
attributes, and a similarity measure among them,
find clusters such that
• Data points in one cluster are more similar to one
another.
• Data points in separate clusters are less similar to
one another.
• Similarity Measures?
• Euclidean Distance if attributes are continuous.
• Other Problem-specific Measures.
42
43
44

Illustrating Clustering
Euclidean Distance Based Clustering in 3-D space.

Intracluster distances are minimized; intercluster distances are maximized
45

Clustering: Application 1
• Bioinformatics applications:
• Goal: Group genes and tissues together such that genes are
coexpressed on the same tissues
46

Clustering: Application 2
• Document Clustering:
• Goal: To find groups of documents that are similar to
each other based on the important terms appearing in
them.
• Approach: To identify frequently occurring terms in
each document. Form a similarity measure based on
the frequencies of different terms. Use it to cluster.
• Gain: Information Retrieval can utilize the clusters to
relate a new document or search term to clustered
documents.
47

Coverage
• Given a set of customers and items and the
transaction relationship between the two, select a
small set of items that “covers” all users.
• For each user there is at least one item in the set that
the user has bought.

• Application:
• Create a catalog to send out that has at least one item
of interest for every customer.
48

Classification: Definition
• Given a collection of records (training set )
• Each record contains a set of attributes, one of the
attributes is the class.
• Find a model for class attribute as a function of
the values of other attributes.

• Goal: previously unseen records should be


assigned a class as accurately as possible.
• A test set is used to determine the accuracy of the
model. Usually, the given data set is divided into
training and test sets, with training set used to build the
model and test set used to validate it.
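A minimal sketch of this train/test methodology, assuming scikit-learn is installed; the iris data set and the decision tree are illustrative choices, not part of the example that follows.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Split the labeled records into a training set (to build the model)
# and a test set (to estimate accuracy on previously unseen records).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))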
49

Classification Example

(Attribute types: Refund and Marital Status are categorical, Taxable Income is continuous, Cheat is the class)

Training Set:
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Test Set:
Refund  Marital Status  Taxable Income  Cheat
No      Single          75K             ?
Yes     Married         50K             ?
No      Married         150K            ?
Yes     Divorced        90K             ?
No      Single          40K             ?
No      Married         80K             ?

Training Set → Learn Classifier → Model (then applied to the Test Set)
50

Classification: Application 1
• Ad Click Prediction
• Goal: Predict if a user that visits a web page will click
on a displayed ad. Use it to target users with high
click probability.
• Approach:
• Collect data for users over a period of time and record who
clicks and who does not. The {click, no click} information
forms the class attribute.
• Use the history of the user (web pages browsed, queries
issued) as the features.
• Learn a classifier model and test on new users.
51

Classification: Application 2
• Fraud Detection
• Goal: Predict fraudulent cases in credit card
transactions.
• Approach:
• Use credit card transactions and the information on its
account-holder as attributes.
• When does a customer buy, what does he buy, how often he pays on
time, etc
• Label past transactions as fraud or fair transactions. This
forms the class attribute.
• Learn a model for the class of the transactions.
• Use this model to detect fraud by observing credit card
transactions on an account.
52

Network data analysis


• Link Analysis Ranking: Given a collection of web
pages that are linked to each other, rank the
pages according to importance
(authoritativeness) in the graph
• Intuition: A page gains authority if it is linked to by
another page.

• Application: When retrieving pages, the


authoritativeness is factored in the ranking.
53

Network data analysis


• Given a social network can you predict which
individuals will connect in the future?
• Triadic closure principle: Links are created in a way that
usually closes a triangle
• If both Bob and Charlie know Alice, then they are likely to meet
at some point.

• Application: Friend/Connection recommendations


in social networks
54

Exploratory Analysis
• Trying to understand the data as a physical
phenomenon, and describe them with simple metrics
• What does the web graph look like?
• How often do people repeat the same query?
• Are friends in facebook also friends in twitter?

• The important thing is to find the right metrics and


ask the right questions

• It helps our understanding of the world, and can lead


to models of the phenomena we observe.
55

Exploratory Analysis: The Web


• What is the structure and the properties of the
web?
56

Exploratory Analysis: The Web


• What is the distribution of the incoming links?
57

Connections of Data Mining with other areas


• Draws ideas from machine learning/AI, pattern
recognition, statistics, and database systems
• Traditional Techniques may be unsuitable due to
• Enormity of data
• High dimensionality of data
• Heterogeneous, distributed nature of data
• Emphasis on the use of data

[Diagram: Data Mining at the intersection of Statistics, Machine Learning/AI/Pattern Recognition, and Database systems]
58

Cultures
• Databases: concentrate on large-scale (non-
main-memory) data.
• AI (machine-learning): concentrate on complex
methods, small data.
• In today’s world data is more important than algorithms
• Statistics: concentrate on models.
59

Models vs. Analytic Processing

• To a database person, data-mining is an


extreme form of analytic processing – queries
that examine large amounts of data.
• Result is the query answer.
• To a statistician, data-mining is the inference
of models.
• Result is the parameters of the model.
60

(Way too Simple) Example


• Given a billion numbers, a DB person would
compute their average and standard deviation.
• A statistician might fit the billion points to the best
Gaussian distribution and report the mean and
standard deviation of that distribution.
61

Data Mining: Confluence of Multiple Disciplines

[Diagram: Data Mining at the center, surrounded by Database Technology, Statistics, Machine Learning, Visualization, Pattern Recognition, Algorithm, and Other Disciplines]
62


Data Mining: Confluence of Multiple Disciplines

[Diagram: the same confluence, with "Other Disciplines" replaced by Distributed Computing]
64

Single-node architecture

[Diagram: a single machine with CPU, Memory, and Disk. Machine Learning and Statistics assume the data fits in memory; "Classical" Data Mining works against the disk.]
65

Commodity Clusters
• Web data sets can be very large
• Tens to hundreds of terabytes
• Cannot mine on a single server
• Standard architecture emerging:
• Cluster of commodity Linux nodes, Gigabit ethernet interconnect
• Google GFS; Hadoop HDFS; Kosmix KFS
• Typical usage pattern
• Huge files (100s of GB to TB)
• Data is rarely updated in place
• Reads and appends are common
• How to organize computations on this architecture?
• Map-Reduce paradigm
66

Cluster Architecture

• Each rack contains 16-64 nodes (each with its own CPU, memory, and disk), connected through a rack switch
• 1 Gbps between any pair of nodes in a rack
• 2-10 Gbps backbone between racks


67

Map-Reduce paradigm
• Map the data into key-value pairs
• E.g., map a document to word-count pairs
• Group by key
• Group all pairs of the same word, with lists of counts
• Reduce by aggregating
• E.g. sum all the counts to produce the total count.
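A minimal single-machine sketch of this word-count example: map emits (word, 1) pairs, pairs are grouped by key, and reduce sums the counts; real frameworks such as Hadoop distribute exactly these three steps, while the toy documents here are illustrative.

from itertools import groupby
from operator import itemgetter

documents = ["the cat sat", "the cat ran"]

# Map: each document -> list of (word, 1) key-value pairs.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Group by key: collect all pairs with the same word.
mapped.sort(key=itemgetter(0))
grouped = {key: [v for _, v in group] for key, group in groupby(mapped, key=itemgetter(0))}

# Reduce: aggregate the list of counts for each word.
counts = {word: sum(values) for word, values in grouped.items()}
print(counts)   # {'cat': 2, 'ran': 1, 'sat': 1, 'the': 2}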
68

Data Mining Models


69

Data Mining: A KDD Process


• Data mining: the core of knowledge discovery in databases

[Diagram, bottom to top: Databases → Data Cleaning → Data Integration → Data Warehouse → Selection & Transformation → Task-relevant Data → Data Mining → Pattern Evaluation]
70

Data Mining and Business Intelligence


[Pyramid, bottom to top, with increasing potential to support business decisions:]
• Data Sources (Paper, Files, Information Providers, Database Systems, OLTP) - DBA
• Data Warehouses / Data Marts (OLAP) - DBA
• Data Exploration (Statistical Analysis, Querying and Reporting) - Data Analyst
• Data Mining (Information Discovery) - Data Analyst
• Data Presentation (Visualization Techniques) - Business Analyst
• Decision Making - End User
71

Architecture of a Typical Data Mining System

[Layered diagram, bottom to top: Databases and Data Warehouse → data cleaning, data integration, and filtering → Database or data warehouse server → Data mining engine (consulting a Knowledge base) → Pattern evaluation → Graphical user interface]
72

Data Mining: Classification Schemes

• General functionality
• Descriptive data mining

• Predictive data mining

• Different views, different classifications


• Kinds of databases to be mined

• Kinds of knowledge to be discovered

• Kinds of techniques utilized

• Kinds of applications adapted


73

A Multi-Dimensional View of Data Mining


Classification
• Databases to be mined
• Relational, transactional, object-oriented, object-relational, active,
spatial, time-series, text, multi-media, heterogeneous, legacy,
WWW, etc.
• Knowledge to be mined
• Characterization, discrimination, association, classification,
clustering, trend, deviation and outlier analysis, etc.
• Multiple/integrated functions and mining at multiple levels
• Techniques utilized
• Database-oriented, data warehouse (OLAP) oriented, machine
learning, statistics, visualization, neural network, etc.
• Applications adapted
• Retail, telecommunication, banking, fraud analysis, DNA mining, stock
market analysis, Web mining, Weblog analysis, etc.
74

Major Issues in Data Mining


• The major issues in data mining include the following
aspects
• Mining methodology and user interaction
• Performance and scalability
• Issues related to the diversity of data types
• Issues related to applications and social impacts
75

Major Issues: Mining methodology and user interaction


• Reflects the kind of knowledge mined, the ability to mine knowledge at
different granularities, the use of domain knowledge, and knowledge
visualization
• Mining different kinds of knowledge in databases for different users
• Require different data mining functionalities and algorithms

• Interactive mining of knowledge at multiple levels of abstraction


• Enables users to verify whether the discovered knowledge is relevant for the intended purpose

• Incorporation of background knowledge


• Background knowledge is vital for refining data mining performance and evaluating its
relevance
• Data mining query languages and ad-hoc data mining
• Such query languages enable retrieving knowledge from a data warehouse, just as
SQL does from the database in a DBMS
• Presentation and visualization of data mining results
• Result should be presented visually or using high level natural language

• Handling noise, incomplete data and exceptions


• Pattern evaluation: the interestingness problem
76

Major Issues: Performance and scalability


• This includes
• Efficiency and scalability of data mining algorithms
• Parallel, distributed and incremental mining methods for huge
amount of data
77

Major Issues: Issues related to the diversity of


data types
• This involves
• Handling relational and complex types of data.
• Relational data needs efficient and effective data mining systems, as it is
widely used
• Complex data refers to data such as hyper text, multimedia, spatial,
temporal and transactional which further require attention to do data
mining on such data
• Mining information from heterogeneous databases and
global information systems (WWW)
• Using such heterogeneous sources as data for mining is itself a major issue
78

Major Issues: Issues related to applications and social impacts


• Data mining is also concerned with issues related to its
application and the impact it may have on society
• These include
• Identifying the application of discovered knowledge which can be
• Domain-specific data mining tools
• Intelligent query answering
• Process control and decision making
• How to integrate the discovered knowledge with existing knowledge:
A knowledge fusion problem
• Protection of data security, integrity, and privacy that may have
social impact
DATA MINING
LECTURE 2
Data Preprocessing
Data Warehousing
OLAP Technology
80

The data analysis pipeline


• Mining is not the only step in the analysis process

Data → Data Preprocessing → Data Mining → Result Post-processing

• Preprocessing: real data is noisy, incomplete and inconsistent. Data cleaning is
required to make sense of the data
• Techniques: Sampling, Dimensionality Reduction, Feature selection.
• Dirty work, but it is often the most important step of the analysis.

• Post-Processing: Make the data actionable and useful to the user


• Statistical analysis of importance
• Visualization.

• Pre- and Post-processing are often data mining tasks as well


81

Data Quality
• Examples of data quality problems:
• Noise and outliers
• Missing values
• Duplicate data

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        10000K          Yes    ← A mistake or a millionaire?
6    No      NULL            60K             No     ← Missing values
7    Yes     Divorced        220K            NULL   ← Missing values
8    No      Single          85K             Yes
9    No      Married         90K             No     ← Inconsistent duplicate entries
9    No      Single          90K             No     ← Inconsistent duplicate entries
82

Why Data Preprocessing?


• Quality decisions must be based on quality data
• Data in the real world is dirty
• incomplete:
• lacking attribute values that are vital for decision making, so they
have to be added,
• lacking certain attributes of interest in certain dimensions, which should
again be added with the required values,
• containing only aggregate data, so that the primary source of the
aggregation should be included
• noisy: containing errors or outliers that deviate from the
expected
• inconsistent: containing discrepancies in codes or names
of the organization or domain
• etc
83

Why Data Preprocessing?


• Incomplete, noisy and inconsistent data are
commonplace properties of large real world
databases and data sources
• Data cleaning routines work to fix such problems
so that results can be accepted
• Before starting data preprocessing, it is
advisable to get an overall picture of the data we have,
giving a high-level summary of things such as
• General property of the data
• Which data values should be considered as noise or
outliers
• This can be done with the help of descriptive data
summarization
84

Exploratory analysis of data


• Summary statistics: numbers that summarize
properties of the data

• Summarized properties include frequency, location and


spread
• Examples: location - mean
spread - standard deviation

• Most summary statistics can be calculated in a single


pass through the data
85

Descriptive data summarization


• Descriptive summary about data can be generated with the
help of
• measure of central tendency of the data,
• dispersion of the data and
• their graphic display

• Measure of central tendency includes


• Mean
• Median
• Mode and/or
• Range
86

Descriptive data summarization


• Measure of dispersion includes
• Range
• The five number summary
• Inter quartile range
• Variance and standard deviation

• Graphical display includes


• The Box Plot
• Chart (Pie chart, bar chart, etc)
• Graph (line, etc)
• Histogram
• Quantile plot
• Q-Q plot
• Scatter plot
• Loess curve plot
87

Descriptive data summarization


• Report Assignment: Write a clear report that discusses each
of the above descriptive data summarization concepts one
by one

• Use the data below for your analysis


• What if the data is not continuous and numeric (i.e. if it is
nominal or categorical)?

1, 3, 4, 4, 12, 19,
23, 34, 43, 45, 45, 56,
56, 56, 56, 65, 67, 76,
78, 78, 86, 250, 456
88

Major Tasks in Data Preprocessing


• Data pre-processing in data mining activity refers to the
processing of the various data elements to prepare for the
mining operation.
• Any activity performed prior to mining the data to get
knowledge out of it is called data pre-processing
• This involves:
• Data cleaning
• Data integration
• Data transformation
• Data reduction
• Data Discretization and concept hierarchy generation
89

Data Cleaning
• Refers to the process of

• filling in missing values,


• smoothing noisy data,
• identifying or removing outliers, and
• resolving inconsistencies
90

Data Cleaning: Missing Data


• Data is not always available
• E.g., many tuples have no recorded value for several
attributes, such as customer income in sales data
• Missing data may be due to
• equipment malfunction
• inconsistent with other recorded data and thus ignored
• data not entered due to misunderstanding
• certain data may not be considered important at the time
of entry
• Missing data may need to be inferred.
91

Data Cleaning: How to Handle Missing Data?


• Ignore the tuple: usually done when the class label is missing
(assuming the task is classification); not effective when
the percentage of missing values per attribute varies
considerably.
• Fill in the missing value manually: tedious and infeasible

• Use a global constant to fill in the missing value: e.g.,
“unknown”, a new class?! Simple but not recommended,
as this constant may form a seemingly interesting pattern for the
data mining task and mislead the decision process
92

Data Cleaning: How to Handle Missing Data?


• Use the attribute mean for all samples belonging to the

same class to fill in the missing value

• Use the most probable value to fill in the missing value:

inference-based such as Bayesian formula or decision tree

• Except for the first two approaches, the filled-in values

may not be correct

• The last two approaches are the most commonly used

techniques to fill in missing data


93

Data Cleaning: Noisy Data


• Noise: random error or variance in a measured variable
• Incorrect attribute values may be due to
• faulty data collection instruments
• data entry problems
• data transmission problems
• technology limitation
• inconsistency in naming convention
• duplicate records
• incomplete data per field
94

Data Cleaning: How to Handle Noisy Data?


• Noisy data can be handled by the techniques such
as
• Simple Discretization Methods (Binning method)
• Clustering
• Regression
• Combined computer and human inspection
• detect suspicious values and check by human
95

Data Cleaning: Handling Noisy Data by


Simple Discretization Methods (Binning)
• Sort data and partition into bins
• The bins can be equal-depth or equal-width bin
• Choose smoothing algorithm and apply the algorithm on each
bin
• The algorithm can be
• smooth by bin means,
• smooth by bin median,
• smooth by bin boundaries, etc.
96

Data Cleaning: Handling Noisy Data by


Simple Discretization Methods (Binning)
• Equal-width (distance) partitioning:
• It divides the range into N intervals of equal size: uniform grid
• if A and B are the lowest and highest values of the attribute, the
width of intervals will be: W = (B-A)/N.
• This approach can lead to bins with a non-uniform distribution of data
elements per bin
• The most straightforward approach
• But outliers may dominate presentation
• Skewed data is not handled well.
97

Data Cleaning: Handling Noisy Data by


Simple Discretization Methods (Binning)
• Equal-width (distance) partitioning:

• Given the data set (say 24, 21, 28, 8, 4, 26, 34, 21, 29, 15,
9, 25)
• Determine the number of bins : N (say 3)
• Determine the range R = Max – Min
• Divide the range into N intervals of equal width, where the i-th bin is [X_{i-1}, X_i)
with X_0 = Min, X_N = Max, and X_i = X_{i-1} + R/N
• For the above data R = 30, R/3 = 10, X_1 = 14, X_2 = 24, and X_3 = 34
• First sort the data as 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
• Therefore in our case Bin 1 = 4,8,9 Bin 2 = 15, 21, 21 Bin3 = 24,
25, 26, 28, 29, 34
98

Data Cleaning: Handling Noisy Data by


Simple Discretization Methods (Binning)
• Equal-depth (frequency) partitioning:
• It divides the range into N intervals, each containing
approximately same number of samples
• Given the data set (say 24, 21, 28, 8, 4, 26, 34, 21, 29, 15, 9,
25)
• Determine the number of bins : N (say 3)
• Determine the frequency F of the data set (12)
• Determine the number of sample per bin F/N (12/3 = 4)
• Sort the data as 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34 and place
F/N element in order into a bin
• Therefore in our case Bin1 = 4,8,9 ,15 Bin2 = 21, 21,24, 25
Bin3 = 26, 28, 29, 34
• Good data scaling
• Managing categorical attributes can be tricky.
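A minimal sketch of both partitioning schemes on the example data set above, in plain Python; the bin edges and slice sizes reproduce the bins shown on the slides.

data = sorted([24, 21, 28, 8, 4, 26, 34, 21, 29, 15, 9, 25])
N = 3

# Equal-width bins: split the range [min, max] into N intervals of width R/N.
width = (max(data) - min(data)) / N
edges = [min(data) + i * width for i in range(1, N)]          # [14.0, 24.0]
equal_width = [[x for x in data
                if (i == 0 or x >= edges[i - 1]) and (i == N - 1 or x < edges[i])]
               for i in range(N)]
print(equal_width)   # [[4, 8, 9], [15, 21, 21], [24, 25, 26, 28, 29, 34]]

# Equal-depth bins: split the sorted data into N bins of (approximately) equal size.
size = len(data) // N
equal_depth = [data[i * size:(i + 1) * size] for i in range(N)]
print(equal_depth)   # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]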
99

Data Cleaning: Handling Noisy Data by


Simple Discretization Methods (Binning)
Smoothing algorithm
• Given the data set in bins as say:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34

• Smoothing by bin means:


• Find the mean in each bin and replace all the element by the bin mean
• Bin 1: 9, 9, 9, 9
• Bin 2: 23, 23, 23, 23
• Bin 3: 29, 29, 29, 29
100

Data Cleaning: Handling Noisy Data by


Simple Discretization Methods (Binning)
Smoothing algorithm
• Smoothing by bin median:
• Find the median in each bin and replace all the element by the bin median
• Bin 1: 8.5, 8.5, 8.5, 8.5
• Bin 2: 22.5, 22.5, 22.5, 22.5
• Bin 3: 28.5, 28.5, 28.5, 28.5

• Smoothing by bin boundaries:


• Replace each element by the bin min or bin max which ever is the nearest
• Distance is measured just as the absolute of the difference
• Bin 1: 4, 4, 4, 15
• Bin 2: 21, 21, 25, 25
• Bin 3: 26, 26, 26, 34
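A minimal sketch of the three smoothing rules applied to the bins above, in plain Python; the results match the slides, with bin means printed unrounded (the slides round them for display).

bins = [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]

def by_means(b):
    m = sum(b) / len(b)
    return [m] * len(b)

def by_median(b):
    s, n = sorted(b), len(b)
    med = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return [med] * len(b)

def by_boundaries(b):
    lo, hi = min(b), max(b)
    # Replace each value by the nearer bin boundary (absolute distance).
    return [lo if abs(x - lo) <= abs(x - hi) else hi for x in b]

print([by_means(b) for b in bins])        # [[9.0, ...], [22.75, ...], [29.25, ...]]
print([by_median(b) for b in bins])       # [[8.5, ...], [22.5, ...], [28.5, ...]]
print([by_boundaries(b) for b in bins])   # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]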
101

Data Cleaning: Handling Noisy Data by


Cluster Analysis

• Detect and remove outliers


102

Data Cleaning: Handling Noisy Data by Regression


• smooth by fitting the data into regression functions
• Finding a fitting function for two dimensional case so that the
value of the second variable can be estimated using the value of
the first variable
[Figure: a fitted line y = x + 1; the observed value Y1 at X1 is smoothed to the fitted value Y1’ on the line]

• It can be extended to the N-dimensional case, in which regression
finds a multidimensional surface so that the value of one of the
dimensions can be predicted from the remaining N-1 values
103

Data Cleaning:
identifying and removing outliers
• Can be done by
• cluster analysis
• The box plot techniques
104

Data Cleaning:
resolve inconsistencies
• Inconsistencies involve writing the same information in many ways, such as
Bahrdar vs Bahirdar, AAU vs A.A.U., $X vs Y Birr

• Very difficult and most important task which require manual


investigation
105

Data Integration
• Data integration:
• Combines data from multiple sources (databases, data
cubes, or files) into a coherent store

• There are a number of issues to consider during data


integration

• Some of these are


• Schema integration issue
• Entity identification issue
• Data value conflict issue
• Avoiding redundancy issue
106

Data Integration
• Schema integration
• Schema refers to the design of an entity and its relation in
the data source
• Integrate metadata from different sources

• Entity identification problem:


• identify real world entities from multiple data sources
which are identical so that they can be integrated properly
• As data source for data mining differ, the same entity will
have different representation in the different sources
• How can equivalent real world entities from multiple
sources be matched up?
• e.g., A.cust-id ≡ B.cust-#
107

Data Integration

• Data value conflict issue


• Involves detecting and resolving data value conflicts

• For the same real world entity, attribute values from


different sources may be different

• Possible reasons: different representations, different


scales, measurement unit used
108

Data Integration
• Avoiding redundancy data issue
Redundant data occur often during integration of multiple
databases
The same attribute may have different names in different databases
One attribute may be a “derived” attribute in another table, e.g.,
annual revenue from monthly revenue
Redundant data can often be detected by correlation
analysis for numeric data
109

Data Integration
• Avoiding redundancy data issue

The correlation between two attributes A and B (r_{A,B}) is
always in the range from -1 to +1.

-1 is to mean negatively correlated, 0 to mean uncorrelated


and +1 is perfectly correlated

Careful integration of the data from multiple sources may


help to reduce/avoid redundancies and inconsistencies and
improve mining speed and quality
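A minimal sketch of using correlation analysis to flag a possibly redundant numeric attribute, assuming NumPy is available; the attribute names and values are illustrative.

import numpy as np

monthly_revenue = np.array([10.0, 12.0, 9.0, 15.0, 11.0])
annual_revenue = 12 * monthly_revenue      # a "derived" attribute

# Pearson correlation coefficient r_{A,B}; values near +1 or -1 suggest redundancy.
r = np.corrcoef(monthly_revenue, annual_revenue)[0, 1]
print(r)   # 1.0 -> annual_revenue is redundant given monthly_revenue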
110

Data Transformation
• Data transformation is the process of transforming or
consolidating data into a form appropriate for mining which
is more appropriate for measurement of similarity and
distance

• This involves
• Smoothing
• Aggregation
• Generalization
• Normalization
• Attribute/feature construction
111

Data Transformation
• Smoothing: concerned mainly to remove noise from data using
techniques such as binning, clustering, and regression

• Aggregation: summarization, data cube construction

• Generalization: concept hierarchy climbing (from low level into higher


level)

• Normalization: scaled to fall within a small, specified range


– Used mainly
• for classification algorithms such as neural network,
• distance measurements such as clustering, nearest neighbor approach

– Exists in various forms
1. min-max normalization
2. z-score normalization
3. normalization by decimal scaling

• Attribute/feature construction: deriving new attributes from existing ones
112

Data Transformation: Normalization


• min-max normalization
• Performs a linear transformation of the original data, mapping values from the
original range [min_A, max_A] into a specified range [new_min_A, new_max_A]

v' = ((v - min_A) / (max_A - min_A)) * (new_max_A - new_min_A) + new_min_A
113

Data Transformation: Normalization


• z-score (zero mean) normalization
• A value will be normalized based on the mean and
standard deviation of the original data
• The transformed data will have zero mean value

v' = (v - mean_A) / stand_dev_A
114

Data Transformation: Normalization


• Normalization by decimal scaling
• Normalizes by moving the decimal point of values of attribute A.
• The resulting value ranges from -1 to +1 exclusive

v' = v / 10^j, where j is the smallest integer such that Max(|v'|) < 1
• For example, given the data set V= 132, -89, 756, -1560, 234, -345 and
1234
• The value of v’ becomes in the range from -1 to +1 if j=4
• In that case V’ = 0.0132, -0.0089, 0.0756, -0.156, 0.0234, -0.0345, and
0.1234
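A minimal sketch of the three normalization forms, applied to the decimal-scaling example data above, in plain Python; the min-max target range [0, 1] is an assumption made only for illustration.

V = [132, -89, 756, -1560, 234, -345, 1234]

# min-max normalization into [new_min, new_max] = [0, 1]
lo, hi = min(V), max(V)
min_max = [(v - lo) / (hi - lo) * (1 - 0) + 0 for v in V]

# z-score normalization: zero mean, unit standard deviation
mean = sum(V) / len(V)
std = (sum((v - mean) ** 2 for v in V) / len(V)) ** 0.5
z_score = [(v - mean) / std for v in V]

# decimal scaling: divide by 10^j, with j the smallest integer so that max |v'| < 1
j = 0
while max(abs(v) for v in V) / 10 ** j >= 1:
    j += 1
decimal = [v / 10 ** j for v in V]
print(j, decimal)   # 4 [0.0132, -0.0089, 0.0756, -0.156, 0.0234, -0.0345, 0.1234]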
115

Data Transformation: Attribute construction


• Attribute construction is the process of deriving new attributes
from the existing attributes.
• Attribute construction is important to improve performance of
data mining as the derived attribute will have more
discriminative power than the base attributes
• Enable to discover missing information or information hidden
within the data set

• For example
– Rate of telephone charge can be derived from billable amount and
call duration
– area can be derived from width and height
116

Data Reduction
• Data sources may store terabytes of data

• Complex data analysis/mining may take a very long time to run


on the complete data set

• Data reduction tries to obtain a reduced representation of the


data set that is much smaller in volume but yet produces the
same (or almost the same or better) analytical results
117

Data Reduction
• Data reduction strategies includes
• Data cube aggregation
• Apply data cube aggregation to have summarized data to work with DM
functionalities

• Attribute subset selection


• Select only relevant attribute from the set of attributes that the data set has

• Numerosity reduction
• Regression and log-linear models (once analyzed, store only the parameters that
determine the regression and the independent variable values)
• Histograms: store the frequency of each bin in the histogram rather than each
element in detail
• Clustering: store the cluster labels rather that the individual elements
• Sampling: use representative sample data rather than the entire population of data
118

Data Reduction
• Data reduction strategies includes
• Dimensionality reduction
• Huffman coding (text coding or compression algorithm)
• Wavelet transforms (transforming usually image data into
important set of coefficients called wavelets)
• Principal component analysis (representing N sequences of M-
dimensional data by using a small set of parameters having either N or
M elements, whichever is possible)
• It can be used for a series image data reduction where N can be the
number of pixels and M the important information on the pixel at the same
location
• If each parameter has N elements, we use it for generating every image
from the parameters and the reduced image coefficients
• If each parameter has M elements, we use it for generating every pixel of
all the image from the parameters and the reduced pixel coefficient values
119
Data Reduction (Aggregation)
120

Data Reduction (Sampling)


• Sampling is the main technique employed for data selection.
• It is often used for both the preliminary investigation of the data and the
final data analysis.

• Statisticians sample because obtaining the entire set of data


of interest is too expensive or time consuming.
• Example: What is the average height of a person in Wachemo?
• We cannot measure the height of everybody

• Sampling is used in data mining because processing the


entire set of data of interest is too expensive or time
consuming.
• Example: We have 1M documents. What fraction has at least 100
words in common?
• Computing the number of common words for all pairs requires ~10^12 comparisons
• Example: What fraction of tweets in a year contain the word “Greece”?
• 300M tweets per day, if 100 characters on average, 86.5TB to store all tweets
121

Data Reduction (Sampling)


• The key principle for effective sampling is the
following:
• using a sample will work almost as well as using the entire
data sets, if the sample is representative

• A sample is representative if it has approximately the same


property (of interest) as the original set of data

• Otherwise we say that the sample introduces some bias

• What happens if we take a sample from the university


campus to compute the average height of a person at
Wachemo?
122

Data Reduction (Sampling)


• Sample Size

[Figure: the same data set sampled at 8000, 2000, and 500 points]


123

Data Discretization and concept hierarchy generation


• Data discretization refers to transforming the data set, which is
usually continuous, into discrete interval values

• Concept hierarchy refers to generating the concept levels so that


data mining function can be applied at specific concept level

• Data discretization techniques can be used to reduce the number


of values for a given continuous attribute by dividing the range of
attribute into intervals

• Interval labels can be used to replace actual data values

• This leads to concise, easy to use, knowledge level representation


of mining result
124

Data Discretization and concept hierarchy generation


• Discretization:
• Divide the range of a continuous attribute into intervals
• Reduce data size by discretization
• Prepare for further analysis
• Some classification algorithms only accept categorical attributes.

• Concept hierarchies
• Reduce the data by collecting and replacing low level concepts (such as
numeric values for the attribute age) by higher level concepts (such as
young, middle-aged, or senior).
• Concept hierarchy generation refers to finding hierarchical relation in the
data set and build the concept hierarchy automatically
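A minimal sketch of replacing low-level numeric values with higher-level concepts, using the age example above; the cut-off ages are illustrative assumptions.

def age_concept(age):
    # Map a numeric age onto a higher-level concept in the hierarchy.
    if age < 35:
        return "young"
    elif age < 60:
        return "middle-aged"
    else:
        return "senior"

ages = [22, 41, 67, 35, 58, 73]
print([age_concept(a) for a in ages])
# ['young', 'middle-aged', 'senior', 'middle-aged', 'middle-aged', 'senior']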
125

Discretization and concept hierarchy generation for


numeric data
• Three types of attributes:
– Nominal — values from an unordered set like location, address
– Ordinal — values from an ordered set like age, salary
– Continuous — real numbered values such as height, frequency

• It is difficult and laborious to specify concept hierarchies for numerical

attributes because of the wide diversity of possible data ranges and

frequent update of data values

• Concept hierarchies for numerical data can be constructed

automatically based on data discretization algorithms


126

Discretization and concept hierarchy generation for


numeric data
• The following are methods for discretization and concept hierarchy

generation
1. Binning :
• This will discretize the attributes by putting the attribute instances and into the different

bins and replace their values by the bin statistics

• Doing it recursively in a top down approach will generate concept hierarchy

2. Histogram analysis
• This will discretize the attribute by generating equal width or depth interval and replace the

attribute instance value by the depth or width of the histogram

• Doing it recursively in a top down approach will generate concept hierarchy


127

Discretization and concept hierarchy generation for


numeric data
3. Clustering analysis
• This will discretize the attribute and doing it recursively in a top down or

bottom up approach will generate concept hierarchy

• The bottom up approach will start by generating larger number of class and

incrementally merge some of them forming the concept hierarchy from the

bottom up till the top

4. Entropy-based discretization
• This is appropriate if the data set has class label and the task is classification

• For a given continuous or nominal values it finds the best split point for the

possible values using concept of information gain and entropy


128

Discretization and concept hierarchy generation for


numeric data
5. Segmentation by natural partitioning

6. Interval merging by χ² (chi-square) analysis

7. Clustering

• Report Assignment: Write a report on the above discretization


and concept hierarchy generation methods for numeric data
129

Discretization and concept hierarchy generation for


categorical data
• Categorical data are discrete data.

• Categorical attributes have a finite (but possibly large)


number of distinct values with no ordering among the
values

• Examples of categorical attributes: geographic location,


job category, item type, fertilizer recommended, citizen,
and color
130

Discretization and concept hierarchy generation for


categorical data

• Methods for the generation of concept hierarchies for
categorical data include
1. Specification of a partial or total ordering of attributes
explicitly at the schema level by users or experts
• If a telephone number starts with +251, it is from Ethiopia
• Indication of telephone number type at schema level
mobile/wired
• Others
131

Discretization and concept hierarchy generation for


categorical data
2. Specification of a portion of a hierarchy by explicit data
grouping
• This is essentially the manual definition of a portion of a
concept hierarchy.
• It helps to enrich the concept hierarchy after the initial
hierarchy is explicitly stated
• The user will enrich the concept hierarchy by saying
{scanner, telephone} ⊂ {peripherals},
{laptop, desktop, workstation} ⊂ {computer}, after giving the explicit
concept hierarchy {peripherals, computer} ⊂ {electronics}
and {machine} ⊂ {electronics}
132

Discretization and concept hierarchy generation for


categorical data
3. Specification of a set of attributes, but not of their partial ordering
• Concept hierarchy can be automatically generated based on the
number of distinct values per attribute in the given attribute set.
• The attribute with the most distinct values is placed at the lowest level
of the hierarchy.

country: 15 distinct values
province_or_state: 65 distinct values
city: 3567 distinct values
street: 674,339 distinct values

133

Discretization and concept hierarchy generation for


categorical data
4. Specification of only a partial set of attributes
• User will provide some of the concept hierarchies and system will fill
the remaining hierarchical levels by analyzing the semantics of the
attributes of the data set
134

Mining Task (data preprocessing example)


• Collect all reviews for the top-10 most reviewed
restaurants in NY in Yelp
• (thanks to Hady Law)

• Find a few terms that best describe the restaurants.


• Algorithm?
135

Example data
• I heard so many good things about this place so I was pretty juiced to try it. I'm
from Cali and I heard Shake Shack is comparable to IN-N-OUT and I gotta say,  Shake
Shake wins hands down.   Surprisingly, the line was short and we waited about 10
MIN. to order.  I ordered a regular cheeseburger, fries and a black/white shake.  So
yummerz. I love the location too!  It's in the middle of the city and the view is
breathtaking. Definitely one of my favorite places to eat in NYC.

• I'm from California and I must say, Shake Shack is better than IN-N-OUT, all day,
err'day.

• Would I pay $15+ for a burger here? No. But for the price point they are asking for,
this is a definite bang for your buck (though for some, the opportunity cost of
waiting in line might outweigh the cost savings) Thankfully, I came in before the
lunch swarm descended and I ordered a shake shack (the special burger with the patty
+ fried cheese &amp; portabella topping) and a coffee milk shake. The beef patty was
very juicy and snugly packed within a soft potato roll. On the downside, I could do
without the fried portabella-thingy, as the crispy taste conflicted with the juicy,
tender burger. How does shake shack compare with in-and-out or 5-guys? I say a very
close tie, and I think it comes down to personal affliations. On the shake side, true
to its name, the shake was well churned and very thick and luscious. The coffee
flavor added a tangy taste and complemented the vanilla shake well. Situated in an
open space in NYC, the open air sitting allows you to munch on your burger while
watching people zoom by around the city. It's an oddly calming experience, or perhaps
it was the food coma I was slowly falling into. Great place with food at a great
price.
136

First cut
• Do simple processing to “normalize” the data (remove punctuation, make
into lower case, clear white spaces, other?)
• Break into words, keep the most popular words
the 27514 the 16710 the 16010 the 14241
and 14508 and 9139 and 9504 and 8237
i 13088 a 8583 i 7966 a 8182
a 12152 i 8415 to 6524 i 7001
to 10672 to 7003 a 6370 to 6727
of 8702 in 5363 it 5169 of 4874
ramen 8518 it 4606 of 5159 you 4515
was 8274 of 4365 is 4519 it 4308
is 6835 is 4340 sauce 4020 is 4016
it 6802 burger 432 in 3951 was 3791
in 6402 was 4070 this 3519 pastrami 3748
for 6145 for 3441 was 3453 in 3508
but 5254 but 3284 for 3327 for 3424
that 4540 shack 3278 you 3220 sandwich 2928
you 4366 shake 3172 that 2769 that 2728
with 4181 that 3005 but 2590 but 2715
pork 4115 you 2985 food 2497 on 2247
my 3841 my 2514 on 2350 this 2099
this 3487 line 2389 my 2311 my 2064
wait 3184 this 2242 cart 2236 with 2040
not 3016 fries 2240 chicken 2220 not 1655
we 2984 on 2204 with 2195 your 1622
at 2980 are 2142 rice 2049 so 1610
on 2922 with 2095 so 1825 have 1585
137

First cut
• Do simple processing to “normalize” the data (remove punctuation, make
into lower case, clear white spaces, other?)
• Break into words, keep the most popular words
(Same word counts as on the previous slide.)

Most frequent words are stop words
138

Second cut
• Remove stop words
• Stop-word lists can be found online.

a,about,above,after,again,against,all,am,an,and,any,are,aren't,as,at,be,be
cause,been,before,being,below,between,both,but,by,can't,cannot,could,could
n't,did,didn't,do,does,doesn't,doing,don't,down,during,each,few,for,from,f
urther,had,hadn't,has,hasn't,have,haven't,having,he,he'd,he'll,he's,her,he
re,here's,hers,herself,him,himself,his,how,how's,i,i'd,i'll,i'm,i've,if,in
,into,is,isn't,it,it's,its,itself,let's,me,more,most,mustn't,my,myself,no,
nor,not,of,off,on,once,only,or,other,ought,our,ours,ourselves,out,over,own
,same,shan't,she,she'd,she'll,she's,should,shouldn't,so,some,such,than,tha
t,that's,the,their,theirs,them,themselves,then,there,there's,these,they,th
ey'd,they'll,they're,they've,this,those,through,to,too,under,until,up,very
,was,wasn't,we,we'd,we'll,we're,we've,were,weren't,what,what's,when,when's
,where,where's,which,while,who,who's,whom,why,why's,with,won't,would,would
n't,you,you'd,you'll,you're,you've,your,yours,yourself,yourselves,
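A minimal sketch of the "normalize, tokenize, drop stop words, count" pipeline described in the first and second cuts, in plain Python; the two reviews and the small stop-word set are illustrative subsets, not the real Yelp data or the full list above.

import re
from collections import Counter

stopwords = {"the", "and", "i", "a", "to", "of", "it", "is", "was", "in", "for"}

reviews = ["The burger was great, and the fries were good!",
           "I waited in line for the shake."]

words = []
for review in reviews:
    # Normalize: lower case, strip punctuation, split on whitespace.
    tokens = re.sub(r"[^a-z\s]", " ", review.lower()).split()
    words.extend(t for t in tokens if t not in stopwords)

print(Counter(words).most_common(5))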
139

Second cut
• Remove stop words
• Stop-word lists can be found online.
ramen 8572 burger 4340 sauce 4023 pastrami 3782
pork 4152 shack 3291 food 2507 sandwich 2934
wait 3195 shake 3221 cart 2239 place 1480
good 2867 line 2397 chicken 2238 good 1341
place 2361 fries 2260 rice 2052 get 1251
noodles 2279 good 1920 hot 1835 katz's 1223
ippudo 2261 burgers 1643 white 1782 just 1214
buns 2251 wait 1508 line 1755 like 1207
broth 2041 just 1412 good 1629 meat 1168
like 1902 cheese 1307 lamb 1422 one 1071
just 1896 like 1204 halal 1343 deli 984
get 1641 food 1175 just 1338 best 965
time 1613 get 1162 get 1332 go 961
one 1460 place 1159 one 1222 ticket 955
really 1437 one 1118 like 1096 food 896
go 1366 long 1013 place 1052 sandwiches 813
food 1296 go 995 go 965 can 812
bowl 1272 time 951 can 878 beef 768
can 1256 park 887 night 832 order 720
great 1172 can 860 time 794 pickles 699
best 1167 best 849 long 792 time 662
people 790
140

Second cut
• Remove stop words
• Stop-word lists can be found online.
ramen 8572 burger 4340 sauce 4023 pastrami 3782
pork 4152 shack 3291 food 2507 sandwich 2934
wait 3195 shake 3221 cart 2239 place 1480
good 2867 line 2397 chicken 2238 good 1341
place 2361 fries 2260 rice 2052 get 1251
noodles 2279 good 1920 hot 1835 katz's 1223
ippudo 2261 burgers 1643 white 1782 just 1214
buns 2251 wait 1508 line 1755 like 1207
broth 2041 just 1412 good 1629 meat 1168
like 1902 cheese 1307 lamb 1422 one 1071
just 1896 like 1204 halal 1343 deli 984
get 1641 food 1175 just 1338 best 965
time 1613 get 1162 get 1332 go 961
one 1460 place 1159 one 1222 ticket 955
really 1437 one 1118 like 1096 food 896
go 1366 long 1013 place 1052 sandwiches 813
food 1296 go 995 go 965 can 812
bowl 1272 time 951 can 878 beef 768
can 1256 park 887 night 832 order 720
great 1172 can 860 time 794 pickles 699
best 1167 best 849 long 792 time 662
people 790
• Commonly used words in reviews, not so interesting
141

IDF
• Important words are the ones that are unique to the document
(differentiating) compared to the rest of the collection
• All reviews use the word “like”. This is not interesting
• We want the words that characterize the specific restaurant

• Document Frequency DF(w): fraction of documents that contain word w
  DF(w) = D(w) / D
  D(w): number of documents that contain word w
  D: total number of documents
• Inverse Document Frequency: IDF(w) = log( D / D(w) ) = log( 1 / DF(w) )
• Maximum when the word is unique to one document: IDF(w) = log(D)
• Minimum when the word is common to all documents: IDF(w) = log(1) = 0
142

TF-IDF
• The words that are best for describing a document are the
ones that are important for the document, but also unique
to the document.

• TF(w,d): term frequency of word w in document d
• Number of times that the word appears in the document
• Natural measure of importance of the word for the document
• IDF(w): inverse document frequency
• Natural measure of the uniqueness of the word w
• TF-IDF(w,d) = TF(w,d) × IDF(w)
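As a minimal sketch of the TF-IDF computation defined above (not from the original slides), the following Python snippet assumes each document is already given as a list of tokens and uses the natural logarithm for IDF; the toy documents are invented. A word that occurs in every document gets IDF = log(1) = 0, so its TF-IDF vanishes, which is the point made on the following slides about stop words.

import math
from collections import Counter

def tf_idf(docs):
    # docs: list of documents, each a list of tokens
    N = len(docs)
    df = Counter()                      # document frequency D(w)
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)               # term frequency TF(w, d)
        scores.append({w: tf[w] * math.log(N / df[w]) for w in tf})
    return scores

docs = [["ramen", "broth", "good", "good"],
        ["pastrami", "sandwich", "good"],
        ["burger", "fries", "good"]]
for doc_scores in tf_idf(docs):
    # the two most characteristic words of each document
    print(sorted(doc_scores.items(), key=lambda kv: -kv[1])[:2])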
143

Third cut
• Ordered by TF-IDF
Top words for each of the four review sets (ordered by decreasing TF-IDF):

ramen, akamaru, noodles, broth, miso, hirata, hakata, shiromaru, noodle, tonkotsu, ippudo, buns, ippudo's, modern, egg, shoyu, chashu, karaka, kakuni, ramens, bun, wasabi, dama, brulee

fries, custard, shakes, shroom, burger, crinkle, burgers, madison, shackburger, 'shroom, portobello, custards, concrete, bun, milkshakes, concretes, portabello, shack's, patty, ss, patties, cam, milkshake, lamps

lamb, halal, 53rd, gyro, pita, cart, platter, chicken/lamb, carts, hilton, lamb/chicken, yogurt, 52nd, 6th, 4am, yellow, tzatziki, lettuce, sammy's, sw, platters, falafel, sober, moma

pastrami, katz's, rye, corned, pickles, reuben, matzo, sally, harry, mustard, cutter, carnegie, katz, knish, sandwiches, brisket, fries, salami, knishes, delicatessen, deli's, carver, brown's, matzoh
144

Third cut
• TF-IDF takes care of stop words as well
• We do not need to remove the stop words, since they appear in (almost) every review and therefore get IDF(w) ≈ 0, making their TF-IDF negligible
145

Decisions, decisions…
• When mining real data you often need to make some decisions
• What data should we collect? How much? For how long?
• Should we throw out some data that does not seem to be useful?

An actual review: AAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAA AAA

• Too frequent data (stop words), too infrequent (errors?), erroneous data, missing data, outliers
• How should we weight the different pieces of data?

• Most decisions are application dependent. Some information may be lost, but we can usually live with it (most of the time)

• We should make our decisions clear since they affect our findings.

• Dealing with real data is hard…


146

Data Warehousing and OLAP Technology for Data Mining
147

What is Data Warehouse?


• Defined in many different ways, but none of the definitions is fully rigorous.
• A decision support database that is maintained
separately from the organization’s operational database
• Support information processing by providing a solid
platform of consolidated, historical data for analysis.
• A short but comprehensive definition is given by Inmon:
• “A data warehouse is a subject-oriented, integrated, time-variant,
and nonvolatile collection of data in support of management’s
decision-making process.”
• We will look at each of the elements that define a data warehouse one by one
148

Data Warehouse:
Subject-Oriented
• Organized around major subjects, such as customer,
product, sales.
• Focusing on the modeling and analysis of data for
decision makers, not on daily operations or transaction
processing.
• Provide a simple and concise view around particular
subject issues by excluding data that are not useful in the
decision support process.
149

Data Warehouse:
Integrated
• Constructed by integrating multiple, heterogeneous data
sources
• relational databases, flat files, on-line transaction
records

• Data cleaning and data integration techniques are applied.
• Ensure consistency in naming conventions, encoding
structures, attribute measures, etc. among different
data sources
• E.g., Hotel price: currency, tax, breakfast covered, etc.
• When data is moved to the warehouse, it is converted.
150

Data Warehouse:
Time Variant

• The time horizon for the data warehouse is significantly longer than that of operational systems.
• Operational database: current value data.
• Data warehouse data: provide information from a historical
perspective (e.g., past 5-10 years)

• Every key structure in the data warehouse
• Contains an element of time, explicitly or implicitly
• But the key of operational data may or may not contain “time
element”.
151

Data Warehouse:
Non-Volatile

• A physically separate store of data transformed from the operational environment.
• Operational update of data does not occur in the data
warehouse environment.
• Does not require transaction processing, recovery, and
concurrency control mechanisms
• Requires only two operations in data accessing:
• initial loading of data and access of data.
152

So What is Data Warehousing?


• Data warehousing is the process of constructing and
using data warehouses
• Construction of a data warehouse requires data integration, data cleaning, and data consolidation.
• Using a data warehouse usually necessitates a collection of decision support systems
• This allows "knowledge workers" (such as managers, analysts, and executives) to use the warehouse to quickly and conveniently obtain an overview of the data and to make sound decisions based on the information in the warehouse.
153

Integrating data sources


• Many organizations typically collect diverse kinds of data and maintain large databases from multiple, heterogeneous, autonomous, and distributed information sources.
• Integrating such data is challenging
• The traditional approach to integrating heterogeneous DBs is to build wrappers/mediators on top of the multiple heterogeneous databases
• This is a query-driven approach to integrating heterogeneous DBs.
• In this approach, when a query is posed to a client site, a meta-dictionary is used to translate the query into queries appropriate for the individual heterogeneous sites involved, and the results are integrated into a global answer set
• It requires a complex information filtering and integration process
154

Integrating data sources


• An alternative approach to integrating heterogeneous DBs is update-driven
• In this case, information from multiple, heterogeneous sources is integrated in advance and stored in a warehouse for direct querying and analysis.
• It provides high performance
155

Data Warehouse vs. Operational DBMS

• OLTP (On-Line Transaction Processing)


• Major task of traditional relational DBMS
• Day-to-day operations: purchasing, inventory, banking,
manufacturing, payroll, registration, accounting, etc.
• OLAP (On-Line Analytical Processing)
• Major task of data warehouse system
• Data analysis and decision making
156

Data Warehouse vs. Operational DBMS


• Distinct features (OLTP vs. OLAP):
• User and system orientation:
• OLTP is a customer-oriented system used for transaction and query processing by clerks, clients and information technology professionals
• OLAP is a market-oriented system used for data analysis by knowledge workers including managers, executives, and analysts
• Data contents: OLTP contains current, detailed data, whereas OLAP systems contain large volumes of historical, consolidated data and provide facilities for summarization and aggregation
• Database design: OLTP adopts an ER model and an application-oriented DB design, whereas OLAP uses a star-type model and a subject-oriented DB design
• View: OLTP focuses on current and local data views, whereas OLAP spans multiple versions of the DB schema, due to the evolutionary process of the enterprise
• Access patterns: OLTP access patterns consist mostly of short update transactions, whereas OLAP access is mostly read-only, with complex queries
157

OLTP vs. OLAP

                    OLTP                                  OLAP
users               clerk, IT professional                knowledge worker
function            day-to-day operations                 decision support
DB design           application-oriented                  subject-oriented
data                current, up-to-date, detailed,        historical, summarized, multidimensional,
                    flat relational, isolated             integrated, consolidated
usage               repetitive                            ad-hoc
access              read/write, index/hash on prim. key   lots of scans
unit of work        short, simple transaction             complex query
# records accessed  tens                                  millions
# users             thousands                             hundreds
DB size             100MB-GB                              100GB-TB
metric              transaction throughput                query throughput, response time
158

Why Separate Data Warehouse?


• High performance for both systems
• DBMS— tuned for OLTP: access methods, indexing,
concurrency control, recovery mechanism which is not
desirable for OLAP
• Warehouse—tuned for OLAP: complex OLAP queries,
multidimensional view, consolidation.
• Different functions and different data:
• missing data: Decision Support requires historical data, which operational DBs do not typically maintain
• data consolidation: DS requires consolidation (aggregation, summarization) of data from heterogeneous sources, which operational DBs don't require
• data quality: different sources typically use inconsistent data representations, codes and formats, which have to be reconciled for the DS system
159

From Tables and Spreadsheets to Data Cubes


• A data warehouse is based on a multidimensional data model which
views data in the form of a data cube
• A data cube allows data to be modeled and viewed in multiple
dimensions
• A data cube is modeled around a central theme, such as sales, which is maintained by a table called the fact table.
• Dimensions are the perspectives or entities with respect to which an organization wants to keep records
• Records of store sales can be maintained with respect to the dimensions time (day, week, month, quarter, year), item (item_name, brand, type), branch, and location
• The fact table contains measures (such as dollars_sold, units_sold, amount_budgeted) and keys to each of the related dimension tables
160

From Tables and Spreadsheets to Data Cubes


• Consider the amount of money collected in Birr at Fantu Supermarket at different branches
• Branch = 4 Kilo

Item \ Time       Mon  Tue  Wed  Thu  Fri  Sat  Sun
Chocolate          20   19   21   34   30   35   28
Alcoholic Drink    80   74   45   87   90   99   91
Canned foods       67   68   63   55   64   52   55
Soft drink         44   60   63   54   64   45   54
Baby diaper        45   54   55   65   65   54   67
161

From Tables and Spreadsheets to Data Cubes


• Branch = Bole

Item \ Time       Mon  Tue  Wed  Thu  Fri  Sat  Sun
Chocolate          43   45   34   78   54   34   19
Alcoholic Drink    45   43   26   33   54   71   31
Canned foods       22   76   34   34   91   42   21
Soft drink         41   53   94   54   29   61   42
Baby diaper        76   34   89   67   18   27   53
162

From Tables and Spreadsheets to Data Cubes


• Branch = SarBiet

Item \ Time       Mon  Tue  Wed  Thu  Fri  Sat  Sun
Chocolate          34   54   43   87   45   43   91
Alcoholic Drink    54   34   62   33   45   27   13
Canned foods       22   67   43   43   19   24   12
Soft drink         14   35   49   45   92   16   24
Baby diaper        67   43   98   76   81   72   35
163

From Tables and Spreadsheets to Data Cubes


• This data can be seen at various granularities, such as the amount of money per day, per week, or for Coca-Cola, Sprite, biscuits, etc.
• The above three tables can be seen as sub-cuboids of the 3-D cube shown below

(Figure: a 3-D data cube with dimensions Item (Chocolate, Alcoholic drink, Canned foods, Soft drink, Baby diaper), Branch (4 Kilo, Bole, SarBiet) and Time (Mon through Sun), combining the three branch tables.)
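As a hedged sketch of how such a cube can be materialized in code (assuming the pandas library; column names and the selection of cells are only illustrative), a few of the cells from the three branch tables are loaded as (item, day, branch, amount) records and then aggregated at different granularities:

import pandas as pd

# A handful of cells from the three branch tables above
rows = [
    ("Chocolate",  "Mon", "4 Kilo",  20), ("Chocolate",  "Mon", "Bole",    43),
    ("Chocolate",  "Mon", "SarBiet", 34),
    ("Soft drink", "Tue", "4 Kilo",  60), ("Soft drink", "Tue", "Bole",    53),
    ("Soft drink", "Tue", "SarBiet", 35),
]
sales = pd.DataFrame(rows, columns=["item", "day", "branch", "birr"])

base = sales.groupby(["item", "day", "branch"])["birr"].sum()        # base cuboid
per_branch = sales.groupby(["branch", "item", "day"])["birr"].sum()  # the three branch tables
item_day = sales.groupby(["item", "day"])["birr"].sum()              # rolled up over branch
apex = sales["birr"].sum()                                           # apex: grand total
print(item_day)
print(apex)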
164

From Tables and Spreadsheets to Data Cubes


• As a data warehouse can be seen from various views, there are terms for such views
• In the data warehousing literature, an n-dimensional (n-D) cube is called a base cuboid.
• The base cuboid holds the lowest level of summarization: data broken down by every dimension
• The topmost 0-D cuboid, which holds the highest level of summarization, is called the apex cuboid.
• This shows the most summarized information, which is not broken down by any attribute
• The lattice of cuboids forms a data cube.
• The fact and dimension table model will be discussed soon
165

Cube: A Lattice of Cuboids

all                                                              0-D (apex) cuboid

time; item; location; supplier                                   1-D cuboids

time,item; time,location; item,location; location,supplier;
time,supplier; item,supplier                                     2-D cuboids

time,item,location; time,location,supplier;
time,item,supplier; item,location,supplier                       3-D cuboids

time, item, location, supplier                                   4-D (base) cuboid
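A small Python sketch (not from the slides) that enumerates this lattice: every subset of the dimension set is one cuboid, with the empty subset as the apex cuboid and the full set as the base cuboid.

from itertools import combinations

dimensions = ["time", "item", "location", "supplier"]

# A cube over n dimensions has 2**n cuboids (16 here)
for k in range(len(dimensions) + 1):
    for cuboid in combinations(dimensions, k):
        label = ",".join(cuboid) if cuboid else "all (apex)"
        print(f"{k}-D cuboid: {label}")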
166

Cuboids Corresponding to the Cube

all                                             0-D (apex) cuboid
product; date; country                          1-D cuboids
product,date; product,country; date,country     2-D cuboids
product, date, country                          3-D (base) cuboid
167

Conceptual Modeling of Data Warehouses

• Modeling data warehouses: dimensions & measures


• Star schema: A fact table in the middle connected to a set
of dimension tables
• Snowflake schema: A refinement of star schema where
some dimensional hierarchy is normalized into a set of
smaller dimension tables, forming a shape similar to
snowflake
• Fact constellations: Multiple fact tables share dimension
tables, viewed as a collection of stars, therefore called
galaxy schema or fact constellation
168

Example of Star Schema


Sales Fact Table: time_key, item_key, branch_key, location_key, units_sold, dollars_sold, avg_sales (measures)
time dimension: time_key, day, day_of_the_week, month, quarter, year
item dimension: item_key, item_name, brand, type, supplier_type
branch dimension: branch_key, branch_name, branch_type
location dimension: location_key, street, city, province_or_state, country
169

Example of Snowflake Schema


Sales Fact Table: time_key, item_key, branch_key, location_key, units_sold, dollars_sold, avg_sales (measures)
time dimension: time_key, day, day_of_the_week, month, quarter, year
item dimension: item_key, item_name, brand, type, supplier_key
supplier dimension: supplier_key, supplier_type
branch dimension: branch_key, branch_name, branch_type
location dimension: location_key, street, city_key
city dimension: city_key, city, province_or_state, country
170

Example of Fact Constellation


Sales Fact Table: time_key, item_key, branch_key, location_key, units_sold, dollars_sold, avg_sales (measures)
Shipping Fact Table: time_key, item_key, shipper_key, from_location, to_location, dollars_cost, units_shipped (measures)
shared dimensions: time (time_key, day, day_of_the_week, month, quarter, year), item (item_key, item_name, brand, type, supplier_type), location (location_key, street, city, province_or_state, country)
branch dimension: branch_key, branch_name, branch_type
shipper dimension: shipper_key, shipper_name, location_key, shipper_type
171

A Data Mining Query Language, DMQL: Language Primitives


• Cube Definition (Fact Table)
define cube <cube_name> [<dimension_list>]: <measure_list>

Example
define cube sales_star [time, item, branch, location]:
dollars_sold = sum(sales_in_dollars), avg_sales =
avg(sales_in_dollars), units_sold = count(*)
172

Defining a Star Schema in DMQL


• Dimension Definition ( Dimension Table)
define dimension <dimension_name> as
(<attribute_or_subdimension_list>)

Example: The following defines all the dimensions in the sales_star cube

define dimension time as (time_key, day, day_of_week, month, quarter,


year)
define dimension item as (item_key, item_name, brand, type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, province_or_state,
country)
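The sales_star definitions above can also be mirrored in ordinary code. The sketch below (assuming the pandas library; all table contents are made up for illustration) joins a tiny fact table to two of its dimension tables and aggregates a measure, which is essentially what a query over a star schema does:

import pandas as pd

# Tiny, made-up instance of the sales_star schema
sales_fact = pd.DataFrame({
    "time_key":     [1, 1, 2],
    "item_key":     [10, 11, 10],
    "branch_key":   [100, 100, 101],
    "location_key": [7, 7, 8],
    "units_sold":   [3, 1, 5],
    "dollars_sold": [30.0, 25.0, 50.0],
})
time_dim = pd.DataFrame({"time_key": [1, 2], "month": ["Jan", "Feb"],
                         "quarter": ["Q1", "Q1"], "year": [2024, 2024]})
item_dim = pd.DataFrame({"item_key": [10, 11], "item_name": ["TV", "PC"],
                         "brand": ["A", "B"], "type": ["electronics", "electronics"]})

# Join the central fact table outward to the dimension tables it references,
# then aggregate the measure of interest
joined = sales_fact.merge(time_dim, on="time_key").merge(item_dim, on="item_key")
print(joined.groupby(["item_name", "quarter"])["dollars_sold"].sum())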
173

Defining a Star Schema in DMQL


• Special Case (Shared Dimension Tables)
• This defines a dimension that has already been defined in some other cube and is to be shared with the current cube
• The dimension is defined the first time within that other cube's definition
• define dimension <dimension_name> as <dimension_name_first_time> in cube <cube_name_first_time>

Example: The following defines a dimension for the sales_star cube which is already defined, say, in the Orders cube

define dimension customerTypeInSales as customerTypeInOrder in cube Orders
174

Defining a Snowflake Schema in DMQL


define cube sales_snowflake [time, item, branch, location]:
  dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type, supplier(supplier_key, supplier_type))
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city(city_key, city, province_or_state, country))

(The item and location dimensions are normalized into separate supplier and city tables, as in the snowflake schema above.)
175

Defining a Fact Constellation in DMQL


Cube Definition (Fact Tables)
define cube sales [time, item, branch, location]:
dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars),
units_sold = count(*)
define cube shipping [time, item, shipper, from_location, to_location]:
dollar_cost = sum(cost_in_dollars), unit_shipped = count(*)

Sales Fact Table: time_key, item_key, branch_key, location_key, units_sold, dollars_sold, avg_sales (measures)
176

Defining a Fact Constellation in DMQL


Dimension definition for sales cube
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, province_or_state,
country)
(Star schema of the sales cube: the fact table keys time_key, item_key, branch_key and location_key reference the time, item, branch and location dimension tables defined above.)
177

Defining a Fact Constellation in DMQL


Dimension definition for shipping cube
define dimension time as time in cube sales
define dimension item as item in cube sales
define dimension shipper as (shipper_key, shipper_name, location as location in
cube sales, shipper_type)
define dimension from_location as location in cube sales
define dimension to_location as location in cube sales

(Schema of the shipping cube: its fact table references the shared time, item and location dimensions of the sales cube, plus its own shipper dimension (shipper_key, shipper_name, location_key, shipper_type).)
178

Measures: Three Categories


• The data cube space is a set of points, where each point is a dimension-value pair.
• The dimensions uniquely define the point, whereas the value refers to the numerical value(s) at that point.
• An aggregate function is applied on a set of point values to analyze the data cube and make various decisions.
• The result of the aggregate function is said to be a measure
• A measure is a numeric function defined on the data cube (which is a set of dimension-value pairs)
• A data cube can be partitioned in different ways.
• For example, a data cube defined with dimensions time, location and item can be partitioned into four sub-cuboids, where each refers to a cuboid with dimensions location and item, but at time Q1, Q2, Q3, or Q4.
179

Measures: Three Categories


• Various measure functions can be applied onto the data
cube and these measures can be categorized into three as:
• Distributive
• Algebraic
• Holistic
180

Measures: Three Categories


• Distributive: A function is said to be distributive if the result derived by
applying the same/another function on to the n aggregate values is the
same as that derived by applying the function on all the data without
partitioning.
• An aggregate value is a value produced from sub set of the entire data
• E.g., count(), sum(), min(), max().
count(v1, v2, v3,v4,v5,v6)
= sum(count(v1,v2,v3), count(v4,v5,v6))
=sum(3,3)=6
sum(v1, v2, v3,v4,v5,v6)
= sum(sum(v1,v2,v3), sum(v4,v5,v6))
=sum(v7,v8)
Max(v1, v2, v3,v4,v5,v6)
= max(max(v1,v2,v3), max(v4,v5,v6))
=max(v7,v8)
181

A Sample Data Cube with the Distributive Aggregate Function

(Figure: a 3-D sales data cube with dimensions Product (TV, PC, VCR), Date (1Qtr-4Qtr) and Country (U.S.A, Canada, Mexico), with sum cells along each dimension, e.g., the highlighted cell "Total annual sales of TVs in U.S.A.")
182

Measures: Three Categories


• Algebraic: A function is said to be algebraic if it can be computed as an algebraic function of M arguments (where M is a bounded integer), each of which is obtained by applying a distributive aggregate function.
• E.g., avg(), min_N(), max_N(), standard_deviation().
avg(.) = f(sum(.), count(.)) = sum(.) / count(.), where sum and count are distributive aggregate functions and f is the algebraic function that divides its two arguments
Similarly, standard_deviation() can be computed from the distributive aggregates sum(), count() and the sum of squares.
183

Measures: Three Categories


• Holistic: if there is no constant bound on the storage size needed to
describe a sub-aggregate.
• That means, there doesn’t exist an algebraic function with M argument that
characterizes the computation
• E.g., median(), mode(), rank().
• There are efficient computation techniques for distributive and algebraic
function but not for holistic type aggregate functions
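A small Python sketch (not from the slides, with arbitrary numbers) illustrating the three categories on partitioned data:

import statistics

data = [3, 9, 1, 7, 5, 8]
parts = [data[:3], data[3:]]        # two partitions of the same data

# Distributive: per-partition aggregates combine into the overall aggregate
assert sum(sum(p) for p in parts) == sum(data)
assert max(max(p) for p in parts) == max(data)

# Algebraic: avg() is computable from a bounded number of distributive aggregates
partials = [(sum(p), len(p)) for p in parts]          # (sum, count) per partition
avg = sum(s for s, _ in partials) / sum(c for _, c in partials)
assert avg == sum(data) / len(data)

# Holistic: the overall median cannot, in general, be derived from the
# per-partition medians alone
print([statistics.median(p) for p in parts], statistics.median(data))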
184

Concept Hierarchy
• A concept hierarchy defines a sequence of mappings from a set of low
level concepts to higher-level and more general concepts.
• Concept hierarchies are defined for dimensions
• As shown in a concept hierarchy, each level refers to values of some type.
• The type of hierarchy defines an ordering, which can be a partial ordering or a total ordering.
• The location dimension can be seen as a total ordering: continent → country → Region → Zone → city → kifle ketema → kebele
• The time dimension shows a partial ordering: second → minute → hour → day → {month → quarter, week} → year (a small roll-up along this hierarchy is sketched below)
• More orderings:
  Industry → Category → Product
  Region → Country → City → Office
  Year → Quarter → {Month, Week} → Day
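A minimal sketch of a schema-style time hierarchy in code (assuming the pandas library; dates and amounts are made up): each record carries a day, the higher-level concepts month, quarter and year are derived from it, and rolling up simply means grouping by a higher level.

import pandas as pd

# Records at the lowest level of the time hierarchy (day)
sales = pd.DataFrame({
    "day":  pd.to_datetime(["2024-01-15", "2024-02-03", "2024-07-20"]),
    "birr": [120, 80, 200],
})

# Derive the higher levels of the hierarchy: day -> month -> quarter -> year
sales["month"]   = sales["day"].dt.month
sales["quarter"] = sales["day"].dt.quarter
sales["year"]    = sales["day"].dt.year

# Climbing the hierarchy (rolling up) is grouping by a higher-level concept
print(sales.groupby("quarter")["birr"].sum())
print(sales.groupby("year")["birr"].sum())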
185

Concept Hierarchy
• Many concept hierarchies are implicit within the database schema, as location and time are described by the fields shown above.
define dimension time as (time_key, day, day_of_week, month, quarter, year)
• These are called schema hierarchies.
• The concept hierarchy for location is a schema hierarchy, whereas the concept hierarchy for annual income may be set up as a grouping hierarchy
• Concept hierarchies may also be defined by discretizing or grouping the values of a given dimension or attribute, resulting in a set-grouping hierarchy
186

Typical OLAP Operations


• In the multidimensional model, data are organized into multiple dimensions, and each dimension contains multiple levels of abstraction defined by concept hierarchies.
• This organization provides users with the flexibility to view data from different perspectives.
• Different OLAP data cube operations exist to materialize these views:
• Roll up (drill-up)
• Drill down (roll down)
• Slice and dice
• Pivot (rotate)
187

Typical OLAP Operations


• Roll up (drill-up): summarize data
• by climbing up hierarchy (say from day
into week or year) or by dimension
reduction
188

Typical OLAP Operations

• Drill down (roll down): reverse of roll-up


• from higher level summary to lower
level summary (say from region to town)
or detailed data, or introducing new
dimensions
189

Typical OLAP Operations


• Slice:
• Slice operation performs selection
on one dimension of a given cube
resulting in a sub-cube (say time =
Q1)
190

Typical OLAP Operations

• Dice:
• The dice operation defines a sub-cube by performing a selection on two or more dimensions
191

Typical OLAP Operations


• Pivot (rotate):
• reorient the cube, visualization, 3D to
series of 2D planes.
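As a hedged sketch of these four operations (assuming the pandas library; the toy cube, its column names and its numbers are invented, and pandas group-by/pivot is only one way to emulate what an OLAP server does):

import pandas as pd

# Toy cube: sales by item, branch and quarter
cube = pd.DataFrame({
    "item":    ["TV", "TV", "PC", "PC", "TV", "PC"],
    "branch":  ["Bole", "4 Kilo", "Bole", "4 Kilo", "Bole", "Bole"],
    "quarter": ["Q1", "Q1", "Q1", "Q2", "Q2", "Q2"],
    "birr":    [100, 80, 60, 70, 90, 50],
})

# Roll up: drop the quarter dimension (or climb a hierarchy such as quarter -> year)
rollup = cube.groupby(["item", "branch"])["birr"].sum()

# Drill down: go back to the more detailed grouping
drilldown = cube.groupby(["item", "branch", "quarter"])["birr"].sum()

# Slice: select on one dimension
slice_q1 = cube[cube["quarter"] == "Q1"]

# Dice: select on two or more dimensions
dice = cube[(cube["quarter"] == "Q1") & (cube["branch"] == "Bole")]

# Pivot: reorient the cube into a 2-D view
pivot = cube.pivot_table(index="item", columns="quarter", values="birr", aggfunc="sum")
print(rollup, slice_q1, dice, pivot, sep="\n\n")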
192

Typical OLAP Operations


193

A Star-Net Query Model


• The star-net model consists of radial lines emanating from a single point called the origin; each line refers to a dimension in a data cube, and the points on a line refer to the attributes of that dimension, in partial and/or total ordering
• Each circle (abstraction level) on a line is called a footprint
• The star-net model is the basis for querying multidimensional databases

(Figure: star-net with radial lines for Location (continent, country, province, city, street), Customer (group, category, name), Item (name, brand, category, type) and Time (year, quarter, month, day).)
194

Design of a Data Warehouse: A Business Analysis


Framework
• The basic steps involved in the design process of a data warehouse mainly involve a business analysis framework
• The business analysis framework
• It is one of the issues to be clear about while designing a data warehouse
• It involves answering the following question:
• What can business analysts gain from having a data warehouse?
• May provide a competitive advantage by presenting relevant information
• May enhance business productivity, as it enables quickly and efficiently gathering information that accurately describes the organization
• May facilitate customer relationship management by providing a consistent view of customers and items across all lines of business, all departments and all markets
195

Design of a Data Warehouse: A Business Analysis


Framework
• Four views that should be considered regarding the design of
a data warehouse
• Top-down view
• allows selection of the relevant information necessary for the data
warehouse
• Data source view
• exposes the information being captured, stored, and managed by
operational systems
• Data warehouse view
• consists of fact tables and dimension tables
• Business query view
• sees the perspectives of data in the warehouse from the view of end-
user
196

Data Warehouse Design Process


• Can be built using a top-down approach, a bottom-up approach, or a combination of both
• Top-down: starts with the overall design and planning
• Requires huge investment and commitment
• Appropriate when the technology is mature and well known
• Bottom-up: starts with experiments and prototypes
• Appropriate in the early stages of business modeling and technology development
• Enables the business to move forward at considerably less expense and to evaluate the benefits of the technology before making significant commitments
• From a software engineering point of view
• Waterfall: structured and systematic analysis at each step before proceeding to the next
• Spiral: rapid generation of increasingly functional systems, short turn-around time, quick turn-around
197

Data Warehouse Design Process


• In general, the data warehouse design process consists of
1. Choosing a business process to model, e.g., orders, invoices, sales, shipments, inventory, account administration, general ledger, etc.
2. Choosing the grain (atomic level of data) of the business process that will be represented in the fact table
3. Choosing the dimensions that will apply to each fact table record
4. Choosing the measures that will populate each fact table record
198

Data Warehouse Architecture


(Figure: data warehouse architecture; an Executive Information System sits at the top, working on highly summarized data.)
199

Three-Tier Data warehouse Systems


• Data warehouses often adopt a three-tier architecture
• Warehouse database server (The bottom tier)
• Almost always a relational DBMS, rarely flat files
• Back end tools and utilities are used to feed data into the middle tier
• The tools and utilities perform data extraction, cleaning and transformation
as well as load and refresh functions to update the warehouse
• OLAP servers (Middle tier)
• Implemented either as Relational OLAP (ROLAP) or Multidimensional
OLAP (MOLAP)
• ROLAP: extended relational DBMS that maps operations on
multidimensional data to standard relational operators
• Multidimensional OLAP (MOLAP): special-purpose server that directly
implements multidimensional data and operations
• Clients(the top tier)
• Query and reporting tools
• Analysis tools
• Data mining tools
200

The Complete Data Warehouse


System
(Figure: Information Sources (semistructured sources, operational DBs) are extracted, transformed, loaded and refreshed into the Data Warehouse Server (Tier 1: data warehouse and data marts), which serves OLAP Servers (Tier 2: e.g., MOLAP and ROLAP), which in turn serve the Clients (Tier 3: analysis, query/reporting and data mining tools).)
201

Three Data Warehouse Models


• From the architecture point of view, there are three data
warehouse models described as Enterprise warehouse,
Data Mart, or Virtual warehouse

• Enterprise warehouse
• collects all information about subjects that span the entire organization
(customers, products, sales, assets, personnel)
• Requires extensive business modeling (may take years to design and
build)
202

Three Data Warehouse Models


• Data Mart
• a subset of corporate-wide data that is of value to a specific group of users.
• Its scope is confined to specific, selected groups
• For example, a marketing data mart may confine its subjects to customer, product and sales
• Depending on the data source, data marts can be dependent or independent
• Dependent data marts are sourced directly from the enterprise data warehouse
• Independent data marts can be sourced from operational data sources, external information providers, or data generated locally within a particular department or geographic area
203

Three Data Warehouse Models


• Virtual warehouse
• A set of views over operational databases
• Only some of the possible summary views may be
materialized
• Easy to build but requires excess capacity on operational
database servers
