Cost based transformations - PowerPoint PPT Presentation

Slides: 17
Provided by: scie232

Transcript and Presenter's Notes

Title: Cost based transformations
1
Cost based transformations
2
Cost based transformations
  • The (estimated) size of σa=10(R) is
  • 5000/50 = 100
  • The (estimated) size of δ(σa=10(R)) is
  • min(1·100, 100/2) = 50
  • The (estimated) size of δ(S) is
  • min(200·100, 2000/2) = 1000
  • The (estimated) size of δ(σa=10(R)) ⋈ δ(S) is
  • 50·1000/200 = 250
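The arithmetic above follows the standard size-estimation rules (selection size T/V, duplicate elimination min(product of value counts, T/2), join size T1·T2/max(V)). A minimal sketch; the relation statistics are inferred from the slide's numbers:

```python
# Statistics inferred from the slide's arithmetic:
#   T(R) = 5000, V(R,a) = 50, V(R,b) = 100
#   T(S) = 2000, V(S,b) = 200, V(S,c) = 100
T_R, V_Ra, V_Rb = 5000, 50, 100
T_S, V_Sb, V_Sc = 2000, 200, 100

# Selection on equality: T(sigma_{a=10}(R)) = T(R) / V(R,a)
sel_R = T_R // V_Ra                       # 100

# Duplicate elimination: min(product of value counts, T/2)
delta_sel_R = min(1 * V_Rb, sel_R // 2)   # min(100, 50) = 50
delta_S = min(V_Sb * V_Sc, T_S // 2)      # min(20000, 1000) = 1000

# Natural join on b: T1 * T2 / max(V(T1,b), V(T2,b))
join = delta_sel_R * delta_S // max(V_Rb, V_Sb)   # 50*1000/200 = 250
```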

3
Cost based transformations
The (estimated) size of σa=10(R) is 5000/50 =
100. The (estimated) size of σa=10(R) ⋈ S is
100·2000/200 = 1000. The (estimated) size of
δ(σa=10(R) ⋈ S) follows from the preservation of
value sets: V(σa=10(R) ⋈ S, b) = min(V(R,b), V(S,b))
= 100 and V(σa=10(R) ⋈ S, c) = V(S,c) = 100, while
V(σa=10(R) ⋈ S, a) = 1. So, min(1·100·100, 1000/2) =
500.
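The same rules give the estimates for this plan, which defers duplicate elimination until after the join. A minimal sketch with the same inferred statistics as before:

```python
# Statistics inferred from the slides' arithmetic:
T_R, V_Ra, V_Rb = 5000, 50, 100
T_S, V_Sb, V_Sc = 2000, 200, 100

sel_R = T_R // V_Ra                    # sigma_{a=10}(R): 100 tuples
join = sel_R * T_S // max(V_Rb, V_Sb)  # 100*2000/200 = 1000

# Preservation of value sets through the join:
V_join_a = 1                 # the selection fixed a = 10
V_join_b = min(V_Rb, V_Sb)   # 100
V_join_c = V_Sc              # 100

# Duplicate elimination on the join result:
delta_join = min(V_join_a * V_join_b * V_join_c, join // 2)  # 500
```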
4
Cost based transformations
  • Adding up the sizes of the intermediate relations
    of plan (a) and plan (b), we get
  • plan (a): 1150
  • plan (b): 1100
  • So, the conclusion is that plan (b) is better,
  • i.e. deferring the duplicate elimination to the
    end is a better plan for this query.
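A minimal sketch of the totals; the per-plan breakdown below is inferred from the estimates on the two previous slides:

```python
# Intermediate-relation sizes from the two plans:
plan_a = [100, 50, 1000]   # sigma(R), delta(sigma(R)), delta(S)
plan_b = [100, 1000]       # sigma(R), sigma(R) join S

cost_a = sum(plan_a)       # 1150
cost_b = sum(plan_b)       # 1100
```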

5
Cost based transformations
  • Notice that the estimates at the roots of the two
    trees are different: 250 in one case and 500 in
    the other.
  • Estimation is an inexact science, so these sorts
    of anomalies will occur.
  • Intuitively, the estimate for plan (b) is higher
    because if there are duplicates in both R and S,
    these duplicates will be multiplied in the join.
  • E.g., for tuples that appear 3 times in R and
    twice in S, their join will appear 6 times.
  • Our simple formula for estimating the result of δ
    does not take into account the possibility that
    the number of duplicates has been amplified by
    previous operations.

6
Heuristics for selecting the physical plan
  1. If the logical plan calls for a selection
    σA=c(R), and stored relation R has an index on
    attribute A, then perform an index-scan to obtain
    only the tuples of R with A-value equal to c.
  2. More generally, if the selection involves one
    condition like A = c above, and other conditions as
    well, implement the selection by an index-scan
    followed by a further selection on the tuples.
  3. If an argument of a join has an index on the join
    attribute(s), then use an index-join with that
    relation in the inner loop.
  4. If one argument of a join is sorted on the join
    attribute(s), then prefer a sort-join to a
    hash-join, although not necessarily to an
    index-join if one is possible.
  5. When computing the union or intersection of three
    or more relations, group the smallest relations
    first.

7
Choosing an Order for Joins
  • Critical problem in cost-based optimization
  • Selecting an order for the (natural) join of
    three or more relations.
  • Cost is the total size of intermediate relations.
  • Example
  • R(a,b), T(R) = 1000, V(R,b) = 20
  • S(b,c), T(S) = 2000, V(S,b) = 50, V(S,c) = 100
  • U(c,d), T(U) = 5000, V(U,c) = 500
  • (R ⋈ S) ⋈ U versus R ⋈ (S ⋈ U)
  • T(R ⋈ S) =
  • 1000·2000 / 50 = 40,000
  • T((R ⋈ S) ⋈ U) =
  • 40,000·5000 / 500 = 400,000
  • T(S ⋈ U) =
  • 2000·5000 / 500 = 20,000
  • T(R ⋈ (S ⋈ U)) =
  • 1000·20,000 / 50 = 400,000

Both plans are estimated to produce the same
number of tuples (no coincidence here). However,
the first plan is more costly than the second
plan because the size of its intermediate
relation (40,000) is bigger than the size of the
intermediate relation in the second plan (20,000).
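The comparison above can be sketched directly from the slide's statistics, using the join-size rule T1·T2 / max(V1, V2) on the shared attribute:

```python
# Statistics from the example:
T_R, V_Rb = 1000, 20
T_S, V_Sb, V_Sc = 2000, 50, 100
T_U, V_Uc = 5000, 500

RS = T_R * T_S // max(V_Rb, V_Sb)    # 40,000 (intermediate of plan 1)
RS_U = RS * T_U // max(V_Sc, V_Uc)   # 400,000 (V(R⋈S, c) = V(S,c))

SU = T_S * T_U // max(V_Sc, V_Uc)    # 20,000 (intermediate of plan 2)
R_SU = T_R * SU // max(V_Rb, V_Sb)   # 400,000 (V(S⋈U, b) = V(S,b))
# Same final size; the costs (intermediate sizes) differ: 40,000 vs 20,000.
```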
8
Asymmetry of Joins
  • That is, the roles played by the two argument
    relations are different, and the cost of the join
    depends on which relation plays which role.
  • E.g., the one-pass join reads one relation -
    preferably the smaller - into main memory.
  • The left relation (the smaller) is called the
    build relation.
  • The right relation, called the probe relation, is
    read a block at a time and its tuples are matched
    in main memory with those of the build relation.
  • Other join algorithms that distinguish between
    their arguments:
  • Nested-Loop join, where we assume the left
    argument is the relation of the outer loop.
  • Index-join, where we assume the right argument
    has the index.
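The build/probe roles described above can be sketched as a one-pass hash join; the relation and attribute names here are illustrative, not from the slides:

```python
def one_pass_join(build_rel, probe_rel, key):
    """One-pass join: the smaller (left) relation is the build
    relation, read entirely into an in-memory hash table; the right
    (probe) relation is then streamed and matched against it."""
    # Build phase: index the build relation by the join key.
    table = {}
    for t in build_rel:
        table.setdefault(t[key], []).append(t)
    # Probe phase: read the probe relation a tuple at a time.
    out = []
    for s in probe_rel:
        for t in table.get(s[key], []):
            out.append({**t, **s})
    return out

# Hypothetical tiny relations joined on attribute b:
R = [{"b": 1, "a": 10}, {"b": 2, "a": 20}]
S = [{"b": 1, "c": 5}, {"b": 1, "c": 6}, {"b": 3, "c": 7}]
result = one_pass_join(R, S, "b")   # matches only on b = 1
```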

9
Join Trees
  • When we have the join of two relations, we need
    to order the arguments.
  • SELECT movieTitle
  • FROM StarsIn, MovieStar
  • WHERE starName = name AND
  • birthdate LIKE '1960'

Not the right order! The smallest relation
should be the left argument.
10
This is the preferred order:
πtitle( πname(σbirthdate LIKE '1960'(MovieStar)) ⋈starName=name StarsIn )
with the selected (small) MovieStar branch as the left argument.
11
Join Trees
  • There are only two choices for a join tree when
    there are two relations
  • Take either of the two relations to be the left
    argument.
  • When the join involves more than two relations,
    the number of possible join trees grows rapidly.
  • E.g. suppose R, S, T, and U are being joined. What
    are the possible join trees?
  • There are 5 possible shapes for the tree.
  • Each of these trees can have the four relations
    in any order. So, the total number of trees is
    5·4! = 5·24 = 120 different trees!

12
Ways to join four relations
left-deep tree: All right children are leaves.
right-deep tree: All left children are leaves.
bushy tree
13
Why Left-Deep Join Trees?
  • The number of possible left-deep trees with a
    given number of leaves is large, but not nearly
    as large as the number of all trees.
  • Left-deep trees for joins interact well with
    common join algorithms - nested-loop joins and
    one-pass joins in particular.
  • Query plans based on left-deep trees plus these
    algorithms will tend to be more efficient than
    the same algorithms used with non-left-deep trees.

14
Number of plans on Left-Deep Join Trees
  • For n relations, there is only one left-deep tree
    shape, to which we may assign the relations in n!
    ways.
  • There are the same number of right-deep trees for
    n relations.
  • However, the total number of tree shapes T(n) for
    n relations is given by the recurrence
  • T(1) = 1
  • T(n) = Σi=1..n-1 T(i)·T(n-i)

T(1) = 1, T(2) = 1, T(3) = 2, T(4) = 5, T(5) = 14, and
T(6) = 42. To get the total number of trees once
relations are assigned to the leaves, we multiply
T(n) by n!. Thus, for instance, the number of
leaf-labeled trees of 6 leaves is 42·6! = 30,240,
of which 6!, or 720, are left-deep trees.
We may pick any number i between 1 and n - 1 to
be the number of leaves in the left subtree of
the root, and those leaves may be arranged in any
of the T(i) ways that trees with i leaves can be
arranged. Similarly, the remaining n-i leaves in
the right subtree can be arranged in any of
T(n-i) ways.
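The recurrence above can be evaluated directly; a minimal sketch:

```python
import math

def tree_shapes(n):
    """T(n): number of join-tree shapes with n leaves.
    T(1) = 1; T(n) = sum over i of T(i) * T(n - i)."""
    T = [0, 1]                # T[0] unused, T[1] = 1
    for m in range(2, n + 1):
        # Choose i leaves for the left subtree, m - i for the right.
        T.append(sum(T[i] * T[m - i] for i in range(1, m)))
    return T[n]

shapes_6 = tree_shapes(6)                   # 42
labeled_6 = shapes_6 * math.factorial(6)    # 42 * 720 = 30,240
left_deep_6 = math.factorial(6)             # 720
```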
15
Dynamic Programming to Select a Join Order and
Grouping
  • Dynamic programming
  • Fill in a table of costs, remembering only the
    minimum information we need to proceed to a
    conclusion.
  • Suppose we want to join R1 ⋈ R2 ⋈ ... ⋈ Rn
  • We construct a table with an entry for each
    subset of one or more of the n relations. In that
    table we put
  • The estimated size of the join of these
    relations. (We know the formula for this)
  • The least cost of computing the join of these
    relations.
  • The expression that yields the least cost. This
    expression joins the set of relations in
    question, with some grouping.
  • We can optionally restrict ourselves to
    left-deep expressions, in which case the
    expression is just an ordering of the relations.
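The table-filling procedure can be sketched as follows, restricted to left-deep plans and reusing the R, S, U statistics from the join-ordering example. The size formula here (divide the product of the T's by all but the least V of each shared attribute) is an assumption consistent with the slides' arithmetic:

```python
from itertools import combinations
from math import prod

# Statistics from the earlier join-ordering example:
T = {"R": 1000, "S": 2000, "U": 5000}
attrs = {"R": {"a", "b"}, "S": {"b", "c"}, "U": {"c", "d"}}
V = {("R", "b"): 20, ("S", "b"): 50, ("S", "c"): 100, ("U", "c"): 500}

def est_size(rels):
    """Estimated size of the join of a set of relations: product of
    the T's, divided, for each shared attribute, by all of its value
    counts except the smallest."""
    size = prod(T[r] for r in rels)
    for a in set.union(*(attrs[r] for r in rels)):
        counts = sorted(V[(r, a)] for r in rels if (r, a) in V)
        for v in counts[1:]:
            size //= v
    return size

rels = ["R", "S", "U"]
# best[subset] = (least cost, expression); cost = total size of
# the intermediate relations produced along the way.
best = {frozenset([r]): (0, r) for r in rels}
for k in range(2, len(rels) + 1):
    for subset in combinations(rels, k):
        s = frozenset(subset)
        options = []
        for r in subset:          # left-deep: r is the right argument
            rest = s - {r}
            cost = best[rest][0] + (est_size(rest) if len(rest) > 1 else 0)
            options.append((cost, f"({best[rest][1]} ⋈ {r})"))
        best[s] = min(options)

cost, plan = best[frozenset(rels)]   # 20000, ((S ⋈ U) ⋈ R)
```

Here the table entry for each subset keeps exactly what the slide lists: the estimated size (`est_size`), the least cost, and the expression that achieves it.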

16
Example
Table for singleton sets
Table for pairs
Table for triples