Title: Topics Related to Data Mining
1. Topics Related to Data Mining
2. Information Retrieval
- Relevance Ranking Using Terms
- Relevance Using Hyperlinks
- Synonyms, Homonyms, and Ontologies
- Indexing of Documents
- Measuring Retrieval Effectiveness
- Information Retrieval and Structured Data
3. Information Retrieval Systems
- Information retrieval (IR) systems use a simpler data model than database systems
  - Information is organized as a collection of documents
  - Documents are unstructured, with no schema
- Information retrieval locates relevant documents, on the basis of user input such as keywords or example documents
  - E.g., find documents containing the words "database systems"
- Can be used even on textual descriptions provided with non-textual data such as images
4. Keyword Search
- In full-text retrieval, all the words in each document are considered to be keywords
  - We use the word term to refer to the words in a document
- Information-retrieval systems typically allow query expressions formed using keywords and the logical connectives and, or, and not
  - Ands are implicit, even if not explicitly specified
- Ranking of documents on the basis of estimated relevance to a query is critical
- Relevance ranking is based on factors such as
  - Term frequency: frequency of occurrence of a query keyword in a document
  - Inverse document frequency: how many documents the query keyword occurs in
    - Fewer documents => more importance given to the keyword
  - Hyperlinks to documents
    - More links to a document => the document is more important
5. Relevance Ranking Using Terms
- TF-IDF (term frequency / inverse document frequency) ranking
- Let n(d) = number of terms in document d
- n(d, t) = number of occurrences of term t in document d
- Relevance of a document d to a term t:
  TF(d, t) = log(1 + n(d, t) / n(d))
  - The log factor is to avoid giving excessive weight to frequent terms
- Relevance of document d to query Q:
  r(d, Q) = sum over t in Q of TF(d, t) / n(t)
  - IDF = 1/n(t), where n(t) is the number of documents that contain the term t
- A minimal sketch of this ranking follows
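A minimal Python sketch of the TF-IDF ranking defined above; the toy documents and query are invented for illustration.

```python
import math

def tf(doc_terms, t):
    # TF(d, t) = log(1 + n(d, t) / n(d))
    return math.log(1 + doc_terms.count(t) / len(doc_terms))

def rank(docs, query):
    # n(t): number of documents containing term t
    n_t = {t: sum(1 for d in docs if t in d) for t in query}
    # r(d, Q) = sum over t in Q of TF(d, t) * IDF(t), with IDF(t) = 1/n(t)
    return [sum(tf(d, t) / n_t[t] for t in query if n_t[t] > 0) for d in docs]

docs = [["database", "systems", "database"], ["motorcycle", "repair"]]
print(rank(docs, ["database", "systems"]))  # the first document scores higher
```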
6. Relevance Ranking Using Terms (Cont.)
- Most systems add to the above model
  - Words that occur in the title, author list, section headings, etc. are given greater importance
  - Words whose first occurrence is late in the document are given lower importance
  - Very common words such as "a", "an", "the", "it", etc. are eliminated
    - Called stop words
  - Proximity: if the query keywords occur close together in the document, the document has higher importance than if they occur far apart
- Documents are returned in decreasing order of relevance score
  - Usually only the top few documents are returned, not all
7. Synonyms and Homonyms
- Synonyms
  - E.g., document: "motorcycle repair"; query: "motorcycle maintenance"
    - Need to realize that "maintenance" and "repair" are synonyms
  - System can extend the query as "motorcycle and (repair or maintenance)"
- Homonyms
  - E.g., "object" has different meanings as noun/verb
  - Can disambiguate meanings (to some extent) from the context
- Extending queries automatically using synonyms can be problematic
  - Need to understand the intended meaning in order to infer synonyms
    - Or verify synonyms with the user
  - Synonyms may have other meanings as well
8. Indexing of Documents
- An inverted index maps each keyword Ki to a set Si of documents that contain the keyword
  - Documents are identified by identifiers
- The inverted index may record
  - Keyword locations within documents, to allow proximity-based ranking
  - Counts of the number of occurrences of each keyword, to compute TF
- and operation: finds documents that contain all of K1, K2, ..., Kn
  - Intersection: S1 ∩ S2 ∩ ... ∩ Sn
- or operation: finds documents that contain at least one of K1, K2, ..., Kn
  - Union: S1 ∪ S2 ∪ ... ∪ Sn
- Each Si is kept sorted to allow efficient intersection/union by merging
  - "and not" can also be implemented efficiently by merging of sorted lists
- A sketch of such an index appears below
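A minimal Python sketch, assuming integer document identifiers, of an inverted index with a merge-based and operation over sorted posting lists:

```python
from collections import defaultdict

def build_inverted_index(docs):
    # Map each keyword to a sorted list of document identifiers.
    index = defaultdict(set)
    for doc_id, terms in enumerate(docs):
        for t in terms:
            index[t].add(doc_id)
    return {t: sorted(ids) for t, ids in index.items()}

def and_query(index, k1, k2):
    # Intersect two sorted posting lists by merging.
    a, b = index.get(k1, []), index.get(k2, [])
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

docs = [["database", "systems"], ["database", "design"], ["systems", "biology"]]
idx = build_inverted_index(docs)
print(and_query(idx, "database", "systems"))  # [0]
```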
9. Word-Level Inverted File
(Figure: a lexicon of keywords, each pointing to a posting list of document occurrences)
10. Measuring Retrieval Effectiveness
- Information-retrieval systems save space by using index structures that support only approximate retrieval. This may result in
  - False negatives (false drops): some relevant documents may not be retrieved
  - False positives: some irrelevant documents may be retrieved
- For many applications a good index should not permit any false drops, but may permit a few false positives
- Relevant performance metrics (computed in the sketch below):
  - Precision: what percentage of the retrieved documents are relevant to the query
  - Recall: what percentage of the documents relevant to the query were retrieved
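A small Python sketch of the two metrics; the document identifiers are made up:

```python
def precision_recall(retrieved, relevant):
    # precision = |retrieved ∩ relevant| / |retrieved|
    # recall    = |retrieved ∩ relevant| / |relevant|
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

# Hypothetical result: 4 documents returned, 3 of them among the 6 relevant ones
p, r = precision_recall([1, 2, 3, 4], [2, 3, 4, 7, 8, 9])
print(p, r)  # 0.75 0.5
```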
11. Measuring Retrieval Effectiveness (Cont.)
- Recall vs. precision tradeoff
  - Can increase recall by retrieving many documents (down to a low level of relevance ranking), but many irrelevant documents would be fetched, reducing precision
- Measures of retrieval effectiveness
  - Recall as a function of the number of documents fetched, or
  - Precision as a function of recall
    - Equivalently, as a function of the number of documents fetched
  - E.g., precision of 75% at a recall of 50%, and 60% at a recall of 75%
- Problem: deciding which documents are actually relevant, and which are not
12. Information Retrieval and Structured Data
- Information retrieval systems originally treated documents as a collection of words
- Information extraction systems infer structure from documents, e.g.
  - Extraction of house attributes (size, address, number of bedrooms, etc.) from a text advertisement
  - Extraction of the topic and the people named in a news article
- Relations or XML structures are used to store the extracted data
- The system seeks connections among data to answer queries
  - Question answering systems
13. Probabilities and Statistics
14. Probabilities
1. An event E is defined as any subset of the sample space
2. f(x) is called a probability density function (pdf)
15. Conditional Probabilities
- The conditional probability of E, given that G occurred, is
  P(E | G) = P(E and G) / P(G)
- E and G are independent if and only if
  P(E and G) = P(E) P(G)
Expected Value
- The expected value of X is E(X) = sum over x of x * P(X = x)
- For a continuous pdf f(x), E(X) is the integral of x f(x) dx
- E(X + Y) = E(X) + E(Y); E(aX + b) = aE(X) + b
16. Variance
- Var(X) = E((X - E(X))^2)
- It indicates how the values of a random variable are distributed around its expected value
- The standard deviation of X is defined as sqrt(Var(X))
- Var(X + Y) = Var(X) + Var(Y) (for independent X and Y)
- Var(aX + b) = a^2 Var(X)
- P(|S - E(S)| >= r) <= Var(S)/r^2 (Chebyshev's inequality)
- Example: a fair die with values 1, 2, 3, 4, 5, 6 and p(i) = 1/6
  - E(X) = 1*1/6 + 2*1/6 + 3*1/6 + 4*1/6 + 5*1/6 + 6*1/6 = 21/6 = 3.5
  - Var(X) = E((X - E(X))^2) = E(X^2) - E^2(X)
  - E(X^2) = 1/6 * (1 + 4 + 9 + 16 + 25 + 36) = 91/6
  - E^2(X) = 3.5 * 3.5 = 12.25
  - Var(X) = 91/6 - 12.25 ≈ 2.92
  - Standard deviation = sqrt(2.92) ≈ 1.71
- The sketch below checks these numbers numerically
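A short Python check of the die example above, first exactly and then by simulation:

```python
import math
import random

# Exact check of the fair-die example (E(X) = 3.5, Var(X) ≈ 2.92)
values = [1, 2, 3, 4, 5, 6]
e_x = sum(v / 6 for v in values)         # E(X)
e_x2 = sum(v * v / 6 for v in values)    # E(X^2)
var = e_x2 - e_x ** 2                    # Var(X) = E(X^2) - E(X)^2
print(e_x, var, math.sqrt(var))          # 3.5 2.9167 1.7078

# Monte Carlo estimate for comparison
rolls = [random.choice(values) for _ in range(100_000)]
m = sum(rolls) / len(rolls)
print(m, sum((r - m) ** 2 for r in rolls) / len(rolls))
```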
17. Random Distributions
- Normal: E(X) = µ, Var(X) = σ^2
- Binomial (n Bernoulli trials with success probability p): E(X) = np, Var(X) = np(1 - p)
18. Normal Distributions
(Figure: normal density curves centered at E(X))
19. Random Distributions (Cont.)
- Geometric: E(X) = 1/p, Var(X) = (1 - p)/p^2
- Poisson: E(X) = Var(X) = µ
- Uniform on (a, b): density f(x) = 1/(b - a)
  - E(X) = (a + b)/2, Var(X) = (b - a)^2/12
20. Correlation between age and mortality (figure)
21. Systolic Blood Pressure Distribution (figure)
22. Distribution of heart rate by systolic blood pressure (figure)
23. Data and their characteristics
24. Types of Attributes
- There are different types of attributes
  - Nominal
    - Examples: ID numbers, eye color, zip codes
  - Ordinal
    - Examples: rankings (e.g., taste of potato chips on a scale from 1-10), grades, height in {tall, medium, short}
  - Interval
    - Examples: calendar dates, temperatures in Celsius or Fahrenheit
  - Ratio
    - Examples: temperature in Kelvin, length, time, counts
25. Properties of Attribute Values
- The type of an attribute depends on which of the following properties it possesses
  - Distinctness: = and ≠
  - Order: < and >
  - Addition: + and -
  - Multiplication: * and /
- Nominal attribute: distinctness
- Ordinal attribute: distinctness, order
- Interval attribute: distinctness, order, addition
- Ratio attribute: all 4 properties
27. Discrete and Continuous Attributes
- Discrete Attribute
  - Has only a finite or countably infinite set of values
  - Examples: zip codes, counts, or the set of words in a collection of documents
  - Often represented as integer variables
  - Note: binary attributes are a special case of discrete attributes
- Continuous Attribute
  - Has real numbers as attribute values
  - Examples: temperature, height, or weight
  - Practically, real values can only be measured and represented using a finite number of digits
  - Continuous attributes are typically represented as floating-point variables
28. Data Matrix
- If data objects have the same fixed set of numeric attributes, then the data objects can be thought of as points in a multi-dimensional space, where each dimension represents a distinct attribute
- Such a data set can be represented by an m-by-n matrix, where there are m rows, one for each object, and n columns, one for each attribute
29. Data Quality
- What kinds of data quality problems occur?
- How can we detect problems with the data?
- What can we do about these problems?
- Examples of data quality problems
  - Noise and outliers
  - Missing values
  - Duplicate data
30. Noise
- Noise refers to modification of original values
- Examples: distortion of a person's voice when talking on a poor phone connection, and "snow" on a television screen
(Figures: two sine waves; two sine waves + noise)
31. Outliers
- Outliers are data objects with characteristics that are considerably different from those of most of the other data objects in the data set
32. Data Preprocessing
- Aggregation
- Sampling
- Dimensionality Reduction
- Feature subset selection
- Feature creation
- Discretization and Binarization
- Attribute Transformation
33. Aggregation
- Combining two or more attributes (or objects) into a single attribute (or object)
- Purpose
  - Data reduction
    - Reduce the number of attributes or objects
  - Change of scale
    - Cities aggregated into regions, states, countries, etc.
  - More stable data
    - Aggregated data tends to have less variability
34. Sampling
- Sampling is the main technique employed for data selection
- It is often used for both the preliminary investigation of the data and the final data analysis
- Statisticians sample because obtaining the entire set of data of interest is too expensive or time consuming
- Sampling is used in data mining because processing the entire set of data of interest is too expensive or time consuming
35. Sampling (Cont.)
- The key principle for effective sampling is the following:
  - Using a sample will work almost as well as using the entire data set, if the sample is representative
  - A sample is representative if it has approximately the same property (of interest) as the original set of data
36. Types of Sampling
- Simple random sampling
  - There is an equal probability of selecting any particular item
- Sampling without replacement
  - As each item is selected, it is removed from the population
- Sampling with replacement
  - Objects are not removed from the population as they are selected for the sample
  - In sampling with replacement, the same object can be picked more than once
- Stratified sampling
  - Split the data into several partitions, then draw random samples from each partition (see the sketch below)
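A minimal Python sketch of stratified sampling; the `key` function and the toy records are illustrative assumptions:

```python
import random
from collections import defaultdict

def stratified_sample(items, key, n_per_stratum):
    # Split the data into partitions (strata) by `key`,
    # then draw a simple random sample from each partition.
    strata = defaultdict(list)
    for item in items:
        strata[key(item)].append(item)
    sample = []
    for group in strata.values():
        sample.extend(random.sample(group, min(n_per_stratum, len(group))))
    return sample

# Hypothetical records: (id, class label), with a rare class
data = [(i, "rare" if i % 10 == 0 else "common") for i in range(100)]
print(stratified_sample(data, key=lambda r: r[1], n_per_stratum=5))
```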
37. Curse of Dimensionality
- When dimensionality increases, data becomes increasingly sparse in the space that it occupies
- Definitions of density and distance between points, which are critical for clustering and outlier detection, become less meaningful
- Illustrative experiment: randomly generate 500 points and compute the difference between the maximum and minimum distance between any pair of points as the dimensionality grows
38. Discretization
- Three types of attributes
  - Nominal: values from an unordered set
  - Ordinal: values from an ordered set
  - Continuous: real numbers
- Discretization:
  - divide the range of a continuous attribute into intervals
- Some classification algorithms only accept categorical attributes
- Reduce data size by discretization
- Prepare for further analysis
39. Discretization and Concept Hierarchy
- Discretization
  - reduces the number of values for a given continuous attribute by dividing the range of the attribute into intervals. Interval labels can then be used to replace actual data values
- Concept hierarchies
  - reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) by higher-level concepts (such as young, middle-aged, or senior)
41. Discretization and Concept Hierarchy Generation for Numeric Data
- Binning
- Histogram analysis
- Entropy-based discretization
- Segmentation by natural partitioning
42. Discretization
(Flowchart: sort attribute -> select cut point -> evaluate measure; if the measure is satisfied, done; otherwise split/merge and repeat, or stop.)
43. Simple Discretization Methods: Binning
- Equal-width (distance) partitioning
  - Divides the range into N intervals of equal size: uniform grid
  - If A and B are the lowest and highest values of the attribute, the width of the intervals will be W = (B - A)/N
  - The most straightforward
  - But outliers may dominate the presentation
  - Skewed data is not handled well
- Equal-depth (frequency) partitioning
  - Divides the range into N intervals, each containing approximately the same number of samples
  - Good data scaling
  - Managing categorical attributes can be tricky
- Both methods are sketched below
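A small Python sketch of both partitioning schemes; returning a bin index per value, with edge handling as one of several reasonable choices:

```python
def equal_width_bins(values, n):
    # W = (B - A) / N, with A and B the lowest and highest values
    a, b = min(values), max(values)
    w = (b - a) / n
    # Assign each value the index of its interval [a + i*w, a + (i+1)*w)
    return [min(int((v - a) / w), n - 1) for v in values]

def equal_depth_bins(values, n):
    # Each bin receives approximately the same number of samples
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, i in enumerate(order):
        bins[i] = rank * n // len(values)
    return bins

data = [4, 8, 15, 16, 23, 42]
print(equal_width_bins(data, 3))  # [0, 0, 0, 0, 1, 2]: the outlier 42 stretches the grid
print(equal_depth_bins(data, 3))  # [0, 0, 1, 1, 2, 2]: two values per bin
```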
45. Entropy-Based Discretization
- Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is
  E(S, T) = |S1|/|S| * Ent(S1) + |S2|/|S| * Ent(S2)
- The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization
- The process is recursively applied to the partitions obtained until some stopping criterion is met
- Experiments show that it may reduce data size and improve classification accuracy
- A sketch of a single split follows
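A minimal Python sketch of one entropy-based split, assuming class labels accompany the attribute values:

```python
import math
from collections import Counter

def entropy(labels):
    # Ent(S) = -sum over classes c of p_c * log2(p_c)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(values, labels):
    # Try each boundary T between consecutive sorted values; pick the T minimizing
    # E(S, T) = |S1|/|S| * Ent(S1) + |S2|/|S| * Ent(S2)
    pairs = sorted(zip(values, labels))
    best = None
    for i in range(1, len(pairs)):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs[:i]]
        right = [l for v, l in pairs[i:]]
        e = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if best is None or e < best[0]:
            best = (e, t)
    return best  # (entropy after split, boundary T)

print(best_split([1, 2, 3, 10, 11, 12], ["a", "a", "a", "b", "b", "b"]))  # (0.0, 6.5)
```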
46. Specification of a set of attributes
- A concept hierarchy can be automatically generated based on the number of distinct values per attribute in the given attribute set. The attribute with the most distinct values is placed at the lowest level of the hierarchy (see the sketch below).
  - country: 15 distinct values
  - province_or_state: 65 distinct values
  - city: 3,567 distinct values
  - street: 674,339 distinct values
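A tiny Python sketch of this heuristic, using the distinct-value counts from the slide:

```python
def concept_hierarchy(distinct_counts):
    # Order attributes by number of distinct values: the attribute with the
    # most distinct values goes to the lowest level of the hierarchy.
    return sorted(distinct_counts, key=distinct_counts.get)

counts = {"country": 15, "province_or_state": 65, "city": 3567, "street": 674339}
print(" -> ".join(concept_hierarchy(counts)))
# country -> province_or_state -> city -> street (top to bottom)
```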
47. Similarity and Dissimilarity
- Similarity
  - Numerical measure of how alike two data objects are
  - Is higher when objects are more alike
  - Often falls in the range [0, 1]
- Dissimilarity
  - Numerical measure of how different two data objects are
  - Lower when objects are more alike
  - Minimum dissimilarity is often 0
  - Upper limit varies
- Proximity refers to a similarity or dissimilarity
48. Similarity/Dissimilarity for Simple Attributes
(Table of similarity and dissimilarity definitions, where p and q are the attribute values for two data objects.)
49. Euclidean Distance
- Euclidean distance:
  dist(p, q) = sqrt(sum over k = 1..n of (pk - qk)^2)
- where n is the number of dimensions (attributes) and pk and qk are, respectively, the kth attributes (components) of data objects p and q
50. Euclidean Distance
(Figure: sample points and their distance matrix)
51. Minkowski Distance
- Minkowski distance is a generalization of Euclidean distance:
  dist(p, q) = (sum over k = 1..n of |pk - qk|^r)^(1/r)
- where r is a parameter, n is the number of dimensions (attributes) and pk and qk are, respectively, the kth attributes (components) of data objects p and q
- Both distances are sketched below
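A small Python sketch covering both slides; r = 2 reproduces Euclidean distance:

```python
def minkowski(p, q, r):
    # dist(p, q) = (sum_k |p_k - q_k|^r)^(1/r)
    return sum(abs(a - b) ** r for a, b in zip(p, q)) ** (1 / r)

p, q = (0, 0), (3, 4)
print(minkowski(p, q, 1))  # 7.0 (Manhattan distance)
print(minkowski(p, q, 2))  # 5.0 (Euclidean distance)
```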
52. Minkowski Distance
(Figure: sample points and the corresponding distance matrix)
53. Covariance
- Cov(X, Y) = E((X - E(X))(Y - E(Y)))
- Describes a form of dependency between variables
- Describes how X and Y change together
54. Common Properties of a Distance
- Distances, such as the Euclidean distance, have some well-known properties
  1. d(p, q) >= 0 for all p and q, and d(p, q) = 0 only if p = q (positive definiteness)
  2. d(p, q) = d(q, p) for all p and q (symmetry)
  3. d(p, r) <= d(p, q) + d(q, r) for all points p, q, and r (triangle inequality)
- where d(p, q) is the distance (dissimilarity) between points (data objects) p and q
- A distance that satisfies these properties is called a metric
55. Common Properties of a Similarity
- Similarities also have some well-known properties
  1. s(p, q) = 1 (or maximum similarity) only if p = q
  2. s(p, q) = s(q, p) for all p and q (symmetry)
- where s(p, q) is the similarity between points (data objects) p and q
56. Similarity Between Binary Vectors
- A common situation is that objects p and q have only binary attributes
- Compute similarities using the following quantities:
  - M01 = the number of attributes where p was 0 and q was 1
  - M10 = the number of attributes where p was 1 and q was 0
  - M00 = the number of attributes where p was 0 and q was 0
  - M11 = the number of attributes where p was 1 and q was 1
- Simple Matching and Jaccard Coefficients
  - SMC = number of matches / number of attributes
        = (M11 + M00) / (M01 + M10 + M11 + M00)
  - J = number of 1-1 matches / number of not-both-zero attribute values
      = M11 / (M01 + M10 + M11)
57. SMC versus Jaccard: Example
- p = 1 0 0 0 0 0 0 0 0 0
- q = 0 0 0 0 0 0 1 0 0 1
- M01 = 2 (the number of attributes where p was 0 and q was 1)
- M10 = 1 (the number of attributes where p was 1 and q was 0)
- M00 = 7 (the number of attributes where p was 0 and q was 0)
- M11 = 0 (the number of attributes where p was 1 and q was 1)
- SMC = (M11 + M00) / (M01 + M10 + M11 + M00) = (0 + 7) / (2 + 1 + 0 + 7) = 0.7
- J = M11 / (M01 + M10 + M11) = 0 / (2 + 1 + 0) = 0
- The sketch below reproduces this example
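A short Python sketch reproducing this example:

```python
def smc_jaccard(p, q):
    # Count M00, M01, M10, M11 over the binary attributes of p and q.
    m = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for a, b in zip(p, q):
        m[(a, b)] += 1
    smc = (m[(1, 1)] + m[(0, 0)]) / len(p)
    denom = m[(0, 1)] + m[(1, 0)] + m[(1, 1)]
    jac = m[(1, 1)] / denom if denom else 0.0
    return smc, jac

p = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
q = [0, 0, 0, 0, 0, 0, 1, 0, 0, 1]
print(smc_jaccard(p, q))  # (0.7, 0.0)
```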
58. Data Warehousing and OLAP Technology for Data Mining
- What is a data warehouse?
- Data warehouse architecture
- Data warehouse implementation
- Further development of data cube technology
- From data warehousing to data mining
59. Data Warehousing
- Large organizations have complex internal organizations, and have data stored at different locations, on different operational (transaction-processing) systems, under different schemas
- Data sources often store only current data, not historical data
- Corporate decision making requires a unified view of all organizational data, including historical data
- A data warehouse is a repository (archive) of information gathered from multiple sources, stored under a unified schema, at a single site
  - Greatly simplifies querying, permits the study of historical trends
  - Shifts the decision-support query load away from transaction-processing systems
60. Data Warehouse vs. Operational DBMS
- OLTP (on-line transaction processing)
  - Major task of traditional relational DBMS
  - Day-to-day operations: purchasing, inventory, banking, manufacturing, payroll, registration, accounting, etc.
- OLAP (on-line analytical processing)
  - Major task of data warehouse systems
  - Data analysis and decision making
- Distinct features (OLTP vs. OLAP)
  - User and system orientation: customer vs. market
  - Data contents: current, detailed vs. historical, consolidated
  - Database design: ER + application vs. star + subject
  - View: current, local vs. evolutionary, integrated
  - Access patterns: update vs. read-only but complex queries
61. OLTP vs. OLAP
62. Data Warehousing
63. Managing a Warehouse
- When and how to gather data
  - Source-driven architecture: data sources transmit new information to the warehouse, either continuously or periodically (e.g., at night)
  - Destination-driven architecture: the warehouse periodically requests new information from data sources
- What schema to use
  - Schema integration
64. Managing a Warehouse (Cont.)
- Data cleansing
  - E.g., correct mistakes in addresses (misspellings, zip code errors)
  - Merge address lists from different sources and purge duplicates
    - Keep only one address record per household (householding)
- How to propagate updates
  - Warehouse schema may be a (materialized) view of schemas from data sources
  - Efficient techniques for update of materialized views
- What data to summarize
  - Raw data may be too large to store on-line
  - Aggregate values (totals/subtotals) often suffice
  - Queries on raw data can often be transformed by the query optimizer to use aggregate values
65. Conceptual Modeling of Data Warehouses
- Modeling data warehouses: dimensions and measures
  - Star schema: a fact table in the middle connected to a set of dimension tables
  - Snowflake schema: a refinement of star schema where some dimensional hierarchy is normalized into a set of smaller dimension tables, forming a shape similar to a snowflake
  - Fact constellation: multiple fact tables share dimension tables, viewed as a collection of stars, therefore called a galaxy schema or fact constellation
66. Star Schema Example
67. Example of Snowflake Schema
(Figure. The Sales fact table carries the dimension keys time_key, item_key, branch_key, and location_key, and the measures units_sold, dollars_sold, and avg_sales.)
68. Example of Fact Constellation
(Figure. The Sales fact table (time_key, item_key, branch_key, location_key; measures units_sold, dollars_sold, avg_sales) and the Shipping fact table (time_key, item_key, shipper_key, from_location, to_location; measures dollars_cost, units_shipped) share the time and item dimensions.)
69. Online Analytical Processing
- The operation of changing the dimensions used in a cross-tab is called pivoting
- Suppose an analyst wishes to see a cross-tab on item-name and color for a fixed value of size, for example large, instead of the sum across all sizes
  - Such an operation is referred to as slicing
  - The operation is sometimes called dicing, particularly when values for multiple dimensions are fixed
- The operation of moving from finer-granularity data to a coarser granularity is called a rollup
- The opposite operation, that of moving from coarser-granularity data to finer-granularity data, is called a drill down
70. Three-Dimensional Data Cube
- A data cube is a multidimensional generalization of a cross-tab
- A three-dimensional object cannot be viewed in its entirety
  - but cross-tabs can be used as views on a data cube
71. Cube: A Lattice of Cuboids
- 0-D (apex) cuboid: all
- 1-D cuboids: (time), (item), (location), (supplier)
- 2-D cuboids: (time, item), (time, location), (item, location), (time, supplier), (item, supplier), (location, supplier)
- 3-D cuboids: (time, item, location), (time, location, supplier), (time, item, supplier), (item, location, supplier)
- 4-D (base) cuboid: (time, item, location, supplier)
72. Hierarchies on Dimensions
- A hierarchy on dimension attributes lets dimensions be viewed at different levels of detail
  - E.g., the dimension DateTime can be used to aggregate by hour of day, date, day of week, month, quarter, or year
73. OLAP Implementation
- The earliest OLAP systems used multidimensional arrays in memory to store data cubes, and are referred to as multidimensional OLAP (MOLAP) systems
- OLAP implementations using only relational database features are called relational OLAP (ROLAP) systems
- Hybrid systems, which store some summaries in memory and store the base data and other summaries in a relational database, are called hybrid OLAP (HOLAP) systems
74. OLAP Implementation (Cont.)
- Early OLAP systems precomputed all possible aggregates in order to provide online response
  - Space and time requirements for doing so can be very high
    - 2^n combinations of group by
- It suffices to precompute some aggregates and compute others on demand from one of the precomputed aggregates
  - Can compute the aggregate on (item-name, color) from an aggregate on (item-name, color, size)
    - For all but a few non-decomposable aggregates such as median, this is cheaper than computing it from scratch
- Several optimizations are available for computing multiple aggregates
  - Can compute the aggregate on (item-name, color) from an aggregate on (item-name, color, size)
  - Can compute aggregates on (item-name, color, size), (item-name, color) and (item-name) using a single sorting of the base data
75. Group By Cube
- The cube operation computes the union of group bys on every subset of the specified attributes
- E.g., consider the query
    select item-name, color, size, sum(number)
    from sales
    group by cube(item-name, color, size)
- This computes the union of eight different groupings of the sales relation:
    (item-name, color, size), (item-name, color), (item-name, size), (color, size), (item-name), (color), (size), ()
  - where () denotes an empty group by list
- For each grouping, the result contains the null value for attributes not present in the grouping
- The 2^n groupings can be enumerated mechanically, as sketched below
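A small Python sketch enumerating the groupings that cube generates (the SQL evaluation itself is not modeled):

```python
from itertools import combinations

def cube_groupings(attrs):
    # The cube operation groups by every subset of the specified attributes:
    # 2^n groupings, from the full attribute list down to the empty list ().
    subsets = []
    for k in range(len(attrs), -1, -1):
        subsets.extend(combinations(attrs, k))
    return subsets

for g in cube_groupings(["item-name", "color", "size"]):
    print(g if g else "()")  # eight groupings, as on the slide
```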
76. Group By Cube (Cont.)
- The function grouping() can be applied to an attribute
  - Returns 1 if the value is a null value representing "all", and returns 0 in all other cases
    select item-name, color, size, sum(number),
           grouping(item-name) as item-name-flag,
           grouping(color) as color-flag,
           grouping(size) as size-flag
    from sales
    group by cube(item-name, color, size)
- Can use the function decode() in the select clause to replace such nulls by a value such as "all"
  - E.g., replace item-name in the first query by decode(grouping(item-name), 1, 'all', item-name)
77. Alternative: A Data Mining Query Language - DMQL
- Cube definition (fact table):
    define cube <cube_name> [<dimension_list>]: <measure_list>
- Dimension definition (dimension table):
    define dimension <dimension_name> as (<attribute_or_subdimension_list>)
- Special case (shared dimension tables)
  - First time as cube definition
    define dimension <dimension_name> as <dimension_name_first_time> in cube <cube_name_first_time>
78. Defining a Star Schema in DMQL
- define cube sales_star [time, item, branch, location]:
  dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
- define dimension time as (time_key, day, day_of_week, month, quarter, year)
- define dimension item as (item_key, item_name, brand, type, supplier_type)
- define dimension branch as (branch_key, branch_name, branch_type)
- define dimension location as (location_key, street, city, province_or_state, country)
79. Defining a Snowflake Schema in DMQL
- define cube sales_snowflake [time, item, branch, location]:
  dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
- define dimension time as (time_key, day, day_of_week, month, quarter, year)
- define dimension item as (item_key, item_name, brand, type, supplier(supplier_key, supplier_type))
- define dimension branch as (branch_key, branch_name, branch_type)
- define dimension location as (location_key, street, city(city_key, province_or_state, country))
80. Defining a Fact Constellation in DMQL
- define cube sales [time, item, branch, location]:
  dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
- define dimension time as (time_key, day, day_of_week, month, quarter, year)
- define dimension item as (item_key, item_name, brand, type, supplier_type)
- define dimension branch as (branch_key, branch_name, branch_type)
- define dimension location as (location_key, street, city, province_or_state, country)
- define cube shipping [time, item, shipper, from_location, to_location]:
  dollar_cost = sum(cost_in_dollars), unit_shipped = count(*)
- define dimension time as time in cube sales
- define dimension item as item in cube sales
- define dimension shipper as (shipper_key, shipper_name, location as location in cube sales, shipper_type)
- define dimension from_location as location in cube sales
- define dimension to_location as location in cube sales
81. Measures: Three Categories
- Distributive: if the result derived by applying the function to n aggregate values is the same as that derived by applying the function on all the data without partitioning
  - E.g., count(), sum(), min(), max()
- Algebraic: if it can be computed by an algebraic function with M arguments (where M is a bounded integer), each of which is obtained by applying a distributive aggregate function
  - E.g., avg(), min_M(), standard_deviation()
- Holistic: if there is no constant bound on the storage size needed to describe a subaggregate
  - E.g., median(), mode(), rank()
82. Data Warehouse Design Process
- Top-down, bottom-up approaches, or a combination of both
  - Top-down: starts with overall design and planning (mature)
  - Bottom-up: starts with experiments and prototypes (rapid)
- From the software engineering point of view
  - Waterfall: structured and systematic analysis at each step before proceeding to the next
  - Spiral: rapid generation of increasingly functional systems, short turnaround time, quick turnaround
- Typical data warehouse design process
  1. Choose a business process to model, e.g., orders, invoices, etc.
  2. Choose the grain (atomic level of data) of the business process
  3. Choose the dimensions that will apply to each fact table record
  4. Choose the measure that will populate each fact table record
83. Multi-Tiered Architecture
(Figure, bottom to top: Data Sources feed a Monitor/Integrator; Data Storage holds the Data Warehouse, Data Marts, and Metadata; the OLAP Engine (OLAP Server) serves the Front-End Tools for analysis, query, reports, and data mining.)
84. Efficient Processing of OLAP Queries
- Determine which operations should be performed on the available cuboids
  - Transform drill, roll, etc. into corresponding SQL and/or OLAP operations, e.g., dice = selection + projection
- Determine to which materialized cuboid(s) the relevant operations should be applied
- Explore indexing structures and compressed vs. dense array structures in MOLAP
85. Metadata Repository
- Metadata is the data defining warehouse objects. It has the following kinds
  - Description of the structure of the warehouse
    - schema, view, dimensions, hierarchies, derived data definitions, data mart locations and contents
  - Operational metadata
    - data lineage (history of migrated data and transformation path), currency of data (active, archived, or purged), monitoring information (warehouse usage statistics, error reports, audit trails)
  - The algorithms used for summarization
  - The mapping from the operational environment to the data warehouse
  - Data related to system performance
    - warehouse schema, view and derived data definitions
  - Business data
    - business terms and definitions, ownership of data, charging policies
86. Data Warehouse Back-End Tools and Utilities
- Data extraction
  - get data from multiple, heterogeneous, and external sources
- Data cleaning
  - detect errors in the data and rectify them when possible
- Data transformation
  - convert data from legacy or host format to warehouse format
- Load
  - sort, summarize, consolidate, compute views, check integrity, and build indices and partitions
- Refresh
  - propagate the updates from the data sources to the warehouse
87. Discovery-Driven Exploration of Data Cubes
- Hypothesis-driven exploration by the user: huge search space
- Discovery-driven (Sarawagi et al. '98)
  - pre-compute measures indicating exceptions, to guide the user in data analysis at all levels of aggregation
  - Exception: significantly different from the value anticipated, based on a statistical model
  - Visual cues such as background color are used to reflect the degree of exception of each cell
  - Computation of exception indicators (model fitting and computing SelfExp, InExp, and PathExp values) can be overlapped with cube construction
88. Examples: Discovery-Driven Data Cubes
89. Data Warehouse Usage
- Three kinds of data warehouse applications
  - Information processing
    - supports querying, basic statistical analysis, and reporting using crosstabs, tables, charts and graphs
  - Analytical processing
    - multidimensional analysis of data warehouse data
    - supports basic OLAP operations: slice-dice, drilling, pivoting
  - Data mining
    - knowledge discovery from hidden patterns
    - supports associations, constructing analytical models, performing classification and prediction, and presenting the mining results using visualization tools
- Note the differences among the three tasks
90. From On-Line Analytical Processing to On-Line Analytical Mining (OLAM)
- Why online analytical mining?
  - High quality of data in data warehouses
    - a DW contains integrated, consistent, cleaned data
  - Available information processing structure surrounding data warehouses
    - ODBC, OLEDB, Web accessing, service facilities, reporting and OLAP tools
  - OLAP-based exploratory data analysis
    - mining with drilling, dicing, pivoting, etc.
  - On-line selection of data mining functions
    - integration and swapping of multiple mining functions, algorithms, and tasks
- Architecture of OLAM
91. An OLAM Architecture
(Figure, top to bottom:
- Layer 4, User Interface: user GUI API, with mining queries in and mining results out
- Layer 3, OLAP/OLAM: OLAM Engine and OLAP Engine, reached through the Data Cube API
- Layer 2, MDDB: multidimensional database and metadata, accessed via the Database API with filtering and integration
- Layer 1, Data Repository: Data Warehouse and Databases, populated via data cleaning, data integration, and filtering)