Algorithms for Query Processing and Optimization

0. Basic Steps in Query Processing

1. Parsing and translation:
   • Translate the query into its internal form, which is then translated into relational algebra.
   • The parser checks syntax and verifies that the referenced relations exist.
2. Query optimization:
   • The process of choosing a suitable execution strategy for processing a query.
3. Evaluation:
   • The query evaluation engine takes a query execution plan, executes that plan, and returns the answers to the query.


Basic Steps in Query Processing

[Figure: basic steps in query processing]


Basic Steps in Query Processing

Two internal representations of a query:
• Query Tree
• Query Graph

SELECT balance FROM account WHERE balance < 2500

σbalance<2500(πbalance(account))

is equivalent to

πbalance(σbalance<2500(account))


Basic Steps in Query Processing

• An annotated relational algebra expression specifying a detailed evaluation strategy is called an evaluation plan.
  • E.g., can use an index on balance to find accounts with balance < 2500,
  • or can perform a complete relation scan and discard accounts with balance ≥ 2500.

• Query Optimization:
  • Amongst all equivalent evaluation plans, choose the one with lowest cost.
  • Cost is estimated using statistical information from the database catalog,
    e.g., the number of tuples in each relation and the size of tuples.



Measures of Query Cost

• Cost is generally measured as the total elapsed time for answering a query. Many factors contribute to time cost:
  • disk accesses, CPU, or even network communication.
• Typically disk access is the predominant cost, and it is also relatively easy to estimate.
• For simplicity we just use the number of block transfers from disk as the cost measure.
• Cost depends on the size of the buffer in main memory:
  • having more memory reduces the need for disk accesses.



1. Translating SQL Queries into Relational Algebra (1)

• Query Block:
  • The basic unit that can be translated into the algebraic operators and optimized.
• A query block contains a single SELECT-FROM-WHERE expression, as well as GROUP BY and HAVING clauses if these are part of the block.
• Nested queries within a query are identified as separate query blocks.
• Aggregate operators in SQL must be included in the extended algebra.


Translating SQL Queries into Relational Algebra (2)

SELECT  LNAME, FNAME
FROM    EMPLOYEE
WHERE   SALARY > ( SELECT  MAX (SALARY)
                   FROM    EMPLOYEE
                   WHERE   DNO = 5 );

Inner block:
SELECT  MAX (SALARY)
FROM    EMPLOYEE
WHERE   DNO = 5

ℱMAX SALARY (σDNO=5 (EMPLOYEE))

Outer block (C is the value returned by the inner block):
SELECT  LNAME, FNAME
FROM    EMPLOYEE
WHERE   SALARY > C

πLNAME, FNAME (σSALARY>C(EMPLOYEE))


2. Algorithms for External Sorting (1)

• Sorting is needed in
  • ORDER BY, join, union, intersection, DISTINCT, …
• For relations that fit in memory, techniques like quicksort can be used.
• External sorting:
  • Refers to sorting algorithms that are suitable for large files of records stored on disk that do not fit entirely in main memory, such as most database files.
• For relations that don't fit in memory, external sort-merge is a good choice.


External Sort-Merge

• The external sort-merge algorithm has two steps:
  • Sort step: creates sorted subfiles called runs.
  • Merge step: merges the sorted runs.
• Sort-Merge strategy:
  • Starts by sorting small subfiles (runs) of the main file and then merges the sorted runs, creating larger sorted subfiles that are merged in turn.
• Sorting phase:
  • Sorts nB pages at a time
    • nB = # of main-memory buffer pages
  • creates nR = ⌈b/nB⌉ initial sorted runs on disk
    • b = # of file blocks (pages) to be sorted
• Sorting cost = read b blocks + write b blocks = 2b


External Sort-Merge

• Example:
  • nB = 5 blocks and file size b = 1024 blocks, then
  • nR = ⌈b/nB⌉ = ⌈1024/5⌉ = 205 initial sorted runs, each of size 5 blocks (except the last run, which will have 4 blocks).
• Second example (see figure): nB = 2, b = 7, nR = ⌈b/nB⌉ = 4 runs.


External Sort-Merge

• Sort phase: creates nR sorted runs (see the Python sketch below).

  i = 0
  repeat
      read next nB blocks into RAM
      sort the in-memory blocks
      write sorted data to run file Ri
      i = i + 1
  until the end of the relation
  nR = i
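
A minimal Python sketch of the sort phase, for concreteness. It is illustrative, not the textbook's code: one record stands in for one block, records are assumed to be strings (so file lines preserve sort order), and the run-file naming and the _write_run helper are invented here.

  def create_runs(records, n_b):
      """Sort phase: fill an nB-record buffer, sort it in memory,
      and write it out as one sorted run; repeat until end of input.
      Returns the list of run file names."""
      runs, chunk = [], []
      for rec in records:
          chunk.append(rec)
          if len(chunk) == n_b:                  # buffer full: emit one run
              runs.append(_write_run(chunk, len(runs)))
              chunk = []
      if chunk:                                  # last (possibly shorter) run
          runs.append(_write_run(chunk, len(runs)))
      return runs

  def _write_run(chunk, i):
      name = "run_%d.txt" % i
      with open(name, "w") as f:
          for rec in sorted(chunk):              # in-memory sort (e.g. quicksort)
              f.write(rec + "\n")
      return name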



External Sort-Merge

• Merging phase:
  • The sorted runs are merged during one or more passes.
  • The degree of merging (dM) is the number of runs that can be merged in each pass.
• In each pass,
  • one buffer block is needed to hold one block from each of the runs being merged, and
  • one block is needed for holding one block of the merged result.


External Sort-Merge

• We assume (for now) that nR < nB.
• Merge the runs in a single pass (nR-way merge):
  • use nR blocks of memory to buffer input runs, and 1 block to buffer output.

Merge nR runs step (see the Python sketch below):
  read the 1st block of each of the nR runs Ri into its buffer page
  repeat
      select the 1st record (in sort order) among the nR buffer pages
      write the record to the output buffer;
          if the output buffer is full, write it to disk
      delete the record from its input buffer page;
          if the buffer page becomes empty, read the next block (if any)
          of the run into the buffer
  until all input buffer pages are empty
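
The merge step can be sketched the same way; here heapq.merge plays the role of "select the first record in sort order among the buffer pages". The grouping into dM runs per pass and the temporary file names are illustrative assumptions; it consumes the run files produced by create_runs above.

  import heapq, os

  def merge_runs(run_files, d_m):
      """Merge phase: repeatedly d_m-way merge sorted run files
      until one sorted file remains; returns (file name, # passes)."""
      passes = 0
      while len(run_files) > 1:
          next_runs = []
          for i in range(0, len(run_files), d_m):
              group = run_files[i:i + d_m]       # dM contiguous runs
              name = "merge_%d_%d.txt" % (passes, len(next_runs))
              streams = [open(g) for g in group]
              with open(name, "w") as out:
                  for line in heapq.merge(*streams):
                      out.write(line)            # smallest record first
              for st in streams:
                  st.close()
                  os.remove(st.name)
              next_runs.append(name)
          run_files = next_runs
          passes += 1
      return run_files[0], passes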


External Sort-Merge

Merge nB − 1 runs step:
• If nR ≥ nB, several merge passes are required.
  • Merge a group of contiguous nB − 1 runs, using one buffer for output.
  • A pass reduces the number of runs by a factor of nB − 1, and creates runs longer by the same factor.
    • E.g., if nB = 11 and there are 90 runs, one pass reduces the number of runs to 9, each 10 times the size of the initial runs.
• Repeated passes are performed until all runs have been merged into one.


External Sort-Merge

• Degree of merging (dM):
  • # of runs that can be merged together in each pass = min(nB − 1, nR)
• Number of passes nP = ⌈logdM(nR)⌉
• In our example:
  • dM = 4 (four-way merging)
    • min(nB − 1, nR) = min(5 − 1, 205) = 4
  • Number of passes nP = ⌈log4(205)⌉ = 4
    • First pass: 205 initial sorted runs are merged into 52 sorted runs
    • Second pass: 52 sorted runs are merged into 13
    • Third pass: 13 sorted runs are merged into 4
    • Fourth pass: 4 sorted runs are merged into 1


External Sort-Merge

[Figure: trace of external sort-merge with blocking factor bfr = 1 record, nB = 3, b = 12, nR = 4, dM = min(3 − 1, 4) = 2]


External Sort-Merge: Cost Analysis

• Disk accesses for initial run creation (sort phase), as well as in each merge pass, is 2b:
  • each pass reads every block once and writes it out once.
• The initial # of runs is nR = ⌈b/nB⌉ and the # of runs decreases by a factor of nB − 1 in each merge pass, so the total # of merge passes is nP = ⌈logdM(nR)⌉.
• In general, the cost of external sort-merge is:
  • Cost = sort cost + merge cost
  • Cost = 2b + 2b * nP
  • Cost = 2b + 2b * ⌈logdM(nR)⌉ = 2b (⌈logdM(nR)⌉ + 1)
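
As a sanity check of the formula, a small helper (assuming nR > 1 and dM ≥ 2; math.ceil models the ⌈ ⌉ brackets):

  import math

  def external_sort_cost(b, n_b):
      """Estimated block transfers: 2b(ceil(log_dM(nR)) + 1)."""
      n_r = math.ceil(b / n_b)                   # initial runs
      d_m = min(n_b - 1, n_r)                    # degree of merging
      n_p = math.ceil(math.log(n_r, d_m))        # merge passes
      return 2 * b * (n_p + 1)

  # The slides' example: b = 1024, nB = 5 -> nR = 205, dM = 4, nP = 4
  print(external_sort_cost(1024, 5))             # 2 * 1024 * (4 + 1) = 10240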


External Sort-Merge

[Figure: external sort-merge example]


Catalog Information

• File
  • r:    # of records in the file
  • R:    record size
  • b:    # of blocks in the file
  • bfr:  blocking factor
• Index
  • x:    # of levels of a multilevel index
  • bI1:  # of first-level index blocks



Catalog Information

• Attribute
  • d:  # of distinct values of an attribute
  • sl (selectivity):
    the ratio of the # of records satisfying the condition to the total # of records in the file.
  • s (selection cardinality) = sl * r:
    average # of records that will satisfy an equality condition on the attribute.
• For a key attribute:
  • d = r,  sl = 1/r,  s = 1
• For a nonkey attribute:
  • assuming that the d distinct values are uniformly distributed among the records,
  • the estimated sl = 1/d and s = r/d


File Scans

• Types of scans:
  • File scan: search algorithms that locate and retrieve records that fulfill a selection condition.
  • Index scan: search algorithms that use an index; the selection condition must be on the search key of the index.
• Cost estimate C = # of disk blocks scanned



3. Algorithms for SELECT Operations

Implementing the SELECT Operation

• Examples:
  • (OP1): σSSN='123456789' (EMP)
  • (OP2): σDNUMBER>5 (DEPT)
  • (OP3): σDNO=5 (EMP)
  • (OP4): σDNO=5 AND SALARY>30000 AND SEX='F' (EMP)
  • (OP5): σESSN='123456789' AND PNO=10 (WORKS_ON)


Algorithms for Selection Operation

Search Methods for Simple Selection:
• S1 (linear search)
  • Retrieve every record in the file, and test whether its attribute values satisfy the selection condition.
  • If the selection is on a nonkey attribute, C = b.
  • If the selection is an equality on a key attribute:
    if the record is found, average cost C = b/2; else C = b.


Algorithms for Selection Operation

• S2 (binary search)
  • Applicable if the selection is an equality comparison on the attribute on which the file is ordered.
  • Assumes that the blocks of the relation are stored contiguously.
  • If the selection is on a nonkey attribute:
    C = ⌈log2 b⌉       cost of locating the 1st tuple
      + ⌈s/bfr⌉ − 1    # of further blocks containing records that satisfy the selection condition
  • If the selection is an equality on a key attribute:
    C = ⌈log2 b⌉, since s = 1 in this case.


Selections Using Indices

• S3 (primary or hash index on a key, equality)
  • Retrieve a single record that satisfies the corresponding equality condition.
  • If the selection condition involves an equality comparison on a key attribute with a primary index (or a hash key), use the primary index (or the hash key) to retrieve the record.
• Primary index: retrieve 1 more block than the # of index levels,
  • C = x + 1
• Hash index:
  • C = 1 for static or linear hashing
  • C = 2 for extendible hashing


Selections Using Indices

• S4 (primary index on a key, range selection)
• S4 using a primary index to retrieve multiple records:
  • If the comparison condition is >, ≥, <, or ≤ on a key field with a primary index, use the index to find the record satisfying the corresponding equality condition, then retrieve all subsequent records in the (ordered) file.
• Assuming the relation is sorted on A:
  • For σA≥v(r), use the index to find the 1st tuple = v and retrieve all subsequent records.
  • For σA≤v(r), use the index to find the 1st tuple = v and retrieve all preceding records,
    • OR just scan the relation sequentially until the 1st tuple > v; do not use the index (average cost C = b/2).
• Average cost C = x + b/2


Selections Using Indices

• S5 (clustering index on a nonkey attribute, equality)
  • Retrieve multiple records.
  • Records will be on consecutive blocks.
  • C = x + ⌈s/bfr⌉
    • # of blocks containing records that satisfy the selection condition


Selections Using Indices

• S6-1 (secondary index, B+-tree, equality)
  • Retrieve a single record if the search key is a candidate key:
    C = x + 1
  • Retrieve multiple records if the search key is not a candidate key:
    C = x + s
    • Can be very expensive! Each record may be on a different block: one block access for each retrieved record.


Selections Using Indices

• S6-2 (secondary index, B+-tree, comparison)
  • For σA≥v(r), use the index to find the 1st index entry = v and scan the index sequentially from there, to find pointers to records.
  • For σA≤v(r), just scan the leaf pages of the index, finding pointers to records, until the first entry > v.
  • If ½ of the records are assumed to satisfy the condition, then ½ of the first-level index blocks are accessed, plus ½ of the file records via the index:
    C = x + bI1/2 + r/2
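
The S1-S6 cost formulas are easy to collect in one place. A sketch in Python, using the catalog notation above (b, s, bfr, x, bI1, r); all costs are in block accesses, and the function names are ours, not the textbook's:

  import math

  def c_s1_linear(b, key_equality=False):
      return b / 2 if key_equality else b        # b/2 on average if found

  def c_s2_binary(b, s=1, bfr=1):
      return math.ceil(math.log2(b)) + math.ceil(s / bfr) - 1

  def c_s3_primary_index(x):
      return x + 1                               # equality on a key

  def c_s4_range_primary(x, b):
      return x + b / 2                           # half the file on average

  def c_s5_clustering(x, s, bfr):
      return x + math.ceil(s / bfr)

  def c_s6_secondary_equality(x, s, candidate_key=False):
      return x + 1 if candidate_key else x + s

  def c_s6_secondary_range(x, b_i1, r):
      return x + b_i1 / 2 + r / 2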

Complex Selections: σc1 AND c2 AND … AND cn(r)

• S7 (conjunctive selection using one index)
  • Select a condition ci and any of algorithms S1 through S6 that results in the least cost for σci(r).
  • Test the other conditions on each tuple after fetching it into the memory buffer.
  • Cost is that of the chosen algorithm.

Complex Selections: σc1 AND c2 AND … AND cn(r)

• S8 (conjunctive selection using a composite index)
  • If two or more attributes are involved in equality conditions in the conjunctive condition and a composite index (or hash structure) exists on the combined fields, we can use the index directly.
  • Use the appropriate composite index, if available, using one of the algorithms S3 (primary index), S5, or S6 (B+-tree, equality).

Complex Selections: σc1 AND c2 AND … AND cn(r)

• S9 (conjunctive selection by intersection of record pointers)
  • Requires indices with record pointers.
  • Use the corresponding index for each condition, take the intersection of all the obtained sets of record pointers, then fetch the records from the file.
  • If some conditions do not have appropriate indices, apply the test in memory.
  • Cost is the sum of the costs of the individual index scans plus the cost of retrieving the records from disk.

Complex Selections: σc1 OR c2 OR … OR cn(r)

• S10 (disjunctive selection by union of record pointers)
  • Applicable if all conditions have available indices; otherwise use linear scan.
  • Use the corresponding index for each condition, take the union of all the obtained sets of record pointers, then fetch the records from the file.

• READ
  • "Examples of Cost Functions for Select", pages 569-570.


Join Algorithms

• Join (EQUIJOIN, NATURAL JOIN)
  • two-way join: a join on two files, e.g., R ⋈A=B S
  • multi-way join: a join involving more than two files, e.g., R ⋈A=B S ⋈C=D T
  • Examples:
    • (OP6): EMP ⋈DNO=DNUMBER DEPT
    • (OP7): DEPT ⋈MGRSSN=SSN EMP
• Factors affecting JOIN performance:
  • Available buffer space
  • Join selection factor
  • Choice of inner vs. outer relation



Join Algorithms

• Join selectivity (js): 0 ≤ js ≤ 1
  • the ratio of the # of tuples of the resulting join file to the # of tuples of the Cartesian product file:
  • js = |R ⋈A=B S| / |R × S| = |R ⋈A=B S| / (rR * rS)
• If A is a key of R, then |R ⋈ S| ≤ rS, so js ≤ 1/rR.
• If B is a key of S, then |R ⋈ S| ≤ rR, so js ≤ 1/rS.



Join Algorithms

• Having an estimate of js, the estimated # of tuples of the resulting join file is
  • |R ⋈ S| = js * rR * rS
• bfrRS is the blocking factor of the resulting join file.
• Cost of writing the resulting join file to disk:
  • bRS = ⌈(js * rR * rS) / bfrRS⌉
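
This size estimate is a one-liner in code; math.ceil models the ⌈ ⌉ brackets, and the example numbers reuse the depositor/customer statistics from the join examples below:

  import math

  def join_result_blocks(js, r_r, r_s, bfr_rs):
      """bRS = ceil((js * rR * rS) / bfrRS): cost of writing the join result."""
      return math.ceil(js * r_r * r_s / bfr_rs)

  # E.g., if A is a key of R, js <= 1/rR, so the result has at most rS tuples:
  print(join_result_blocks(1 / 10000, 10000, 5000, 10))   # 500 blocks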



Join Algorithms

• J1: Nested-loop join (brute force):
  • For each record t in R (outer loop), retrieve every record s from S (inner loop) and test whether the two records satisfy the join condition t[A] = s[B].
• J2: Single-loop join (using an index):
  • If an index (or hash key) exists for one of the two join attributes — say, B of S — retrieve each record t in R, one at a time, and then use the index to retrieve directly all matching records s from S that satisfy s[B] = t[A].


Join Algorithms

• J3: Sort-merge join:
  • If the records of R and S are physically sorted (ordered) by the values of the join attributes A and B, respectively, we can implement the join in the most efficient way possible.
  • Both files are scanned in order of the join attributes, matching the records that have the same values for A and B.
  • In this method, the records of each file are scanned only once each for matching with the other file — unless both A and B are nonkey attributes, in which case the method needs to be modified slightly.


Join Algorithms

• J4: Hash-join:
  • The records of files R and S are both hashed to the same hash file, using the same hashing function on the join attributes A of R and B of S as hash keys.
  • A single pass through the file with fewer records (say, R) hashes its records to the hash file buckets.
  • A single pass through the other file (S) then hashes each of its records to the appropriate bucket, where the record is combined with all matching records from R.


Nested-Loop Join (J1)

• To compute the theta join R ⋈θ S:

  for each tuple tR in R do
      for each tuple tS in S do
          if (tR, tS) satisfy θ, add tR · tS to the result

• R is called the outer relation and S the inner relation of the join.
• Requires no indices and can be used with any kind of join condition.
• Expensive, since it examines every pair of tuples in the two relations.



Nested-Loop Join (Cont.)

• In the worst case, if there is enough memory only to hold one block of each relation (3 blocks in total), the estimated cost is
  • C = bR + (rR * bS) + bRS
• If the smaller relation fits entirely in memory,
  • use that as the inner relation;
  • this reduces the cost to bR + bS + bRS.
• If both relations fit entirely in memory, the cost is likewise bR + bS + bRS.



Nested-Loop Join (Cont.)

• Examples use the following information:
  • customer ⋈ depositor
  • # of records: rcustomer = 10,000, rdepositor = 5,000
  • # of blocks: bcustomer = 400, bdepositor = 100
• Assuming the worst case, the join cost estimate is:
  • If depositor is used as the outer relation:
    C = 5,000 * 400 + 100 + bRS = 2,000,100 + bRS
  • If customer is used as the outer relation:
    C = 10,000 * 100 + 400 + bRS = 1,000,400 + bRS
• If the smaller relation (depositor) fits entirely in memory:
  C = 100 + 400 + bRS = 500 + bRS
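
The arithmetic above is just bR + rR * bS (plus bRS for the result); a two-line helper reproduces it:

  def nested_loop_cost(b_outer, r_outer, b_inner):
      """Worst-case J1 cost, excluding the bRS result-writing term."""
      return b_outer + r_outer * b_inner

  print(nested_loop_cost(100, 5000, 400))    # depositor outer: 2,000,100 (+ bRS)
  print(nested_loop_cost(400, 10000, 100))   # customer outer: 1,000,400 (+ bRS)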


Block Nested-Loop Join

• Every block of the inner relation is paired with every block of the outer relation (see the Python sketch below).

  for each block BR of R do
      for each block BS of S do
          for each tuple tR in BR do
              for each tuple tS in BS do
                  if (tR, tS) satisfy θ, add tR · tS to the result
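
A minimal in-memory sketch of block nested-loop join: "blocks" are lists of tuples and the theta condition is passed as a predicate. The relation layout and helper names are assumptions for illustration.

  def block_nested_loop_join(r_blocks, s_blocks, theta):
      """Join two relations given as lists of blocks (lists of tuples)."""
      result = []
      for br in r_blocks:                    # one pass over the outer relation
          for bs in s_blocks:                # inner relation re-read per outer block
              for t_r in br:
                  for t_s in bs:
                      if theta(t_r, t_s):
                          result.append(t_r + t_s)
      return result

  # Equi-join on the first attribute of each tuple:
  r = [[(1, "a"), (2, "b")], [(3, "c")]]
  s = [[(1, "x")], [(3, "y"), (4, "z")]]
  print(block_nested_loop_join(r, s, lambda tr, ts: tr[0] == ts[0]))
  # [(1, 'a', 1, 'x'), (3, 'c', 3, 'y')]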



Block Nested-Loop Join

In the worst case, if there is enough memory only to hold
one block of each relation (3 in total), the estimated cost is
 C = bR + (bR  bS) + bRS

If both relations fit entirely in memory
 C = bR + bS + bRS

If the smaller relation fits entirely in memory
 use that as the inner relation
 C = bR + bS + bRS

Choose smaller relation for outer loop, if neither of R and
S fits in memory.
47



Block Nested-Loop Join



Cost can be reduced to bR /(nB-2)  bS + bR + bRS
 use nB-2 blocks for outer relations
 where nB = memory buffer size in blocks
 use remaining two blocks for inner relation and
output
48


Block Nested-Loop Join

• Cost can be reduced further:
  • If the equi-join or natural-join attribute forms a key on the inner relation, stop the inner loop on the first match.
  • Scan the inner loop forward and backward alternately, to make use of the blocks remaining in the buffer.
  • Use an index on the inner relation if available.


Indexed Nested-Loop Join (J2)

• Index lookups can replace file scans if
  • an index is available on the inner relation's join attribute.
  • An index can even be constructed just to compute a join.
• For each tuple tR in the outer relation R,
  • use the index to retrieve tuples in S that satisfy the join condition with tuple tR.
• If indices are available on the join attributes of both R and S,
  • use the relation with fewer tuples as the outer relation.


Indexed Nested-Loop Join

• Worst case:
  • the buffer has space for only one page of R, and, for each tuple in R, we perform an index lookup on S.
• The cost C depends on the type of index used:
  • Secondary index:   bR + (rR * (x + s)) + bRS
  • Clustering index:  bR + (rR * (x + ⌈s/bfrS⌉)) + bRS
  • Primary index:     bR + (rR * (x + 1)) + bRS
  • Hash index:        bR + (rR * h) + bRS
    • h is the average # of block accesses to retrieve a record



Example of Nested-Loop Join Costs

• Compute depositor ⋈ customer, with depositor as the outer relation.
  • # of records: rcustomer = 10,000, rdepositor = 5,000
  • # of blocks: bcustomer = 400, bdepositor = 100
• Let customer have a primary B+-tree index on the join attribute customer-name, which contains 20 entries in each index node.
• Since customer has 10,000 tuples, the height of the tree is 4, and one more access is needed to find the actual data, giving 5 block accesses per lookup.


Example of Nested-Loop Join Costs

• Cost of block nested-loop join (worst case):
  • 100 + 100 * 400 + bRS = 40,100 + bRS
  • may be significantly less with more memory.
• Cost of indexed nested-loop join:
  • 100 + 5,000 * 5 + bRS = 25,100 + bRS
• READ
  • "Effects of Available Buffer …", pages 549-550.
  • "Example of Using the Cost …", page 572.


Sort-Merge Join (J3)

• Sort both relations on their join attribute
  • if not already sorted on the join attributes.
• Merge the sorted relations to join them:
  • the join step is similar to the merge stage of the external sort-merge algorithm.
  • The main difference is the handling of duplicate values in the join attribute:
    • every pair with the same value on the join attribute must be matched.
• Can be used only for equi-joins and natural joins.


Sort-Merge Join

  sort R on A and S on B
  while not eof(R) and not eof(S) do {
      scan R and S concurrently until tR.A = tS.B;
      if (tR.A = tS.B = c)
          output σA=c(R) × σB=c(S)
  }

[Figure: matching the group of R tuples with tR.A = c against the group of S tuples with tS.B = c]
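
A compact Python sketch of the merge step on sorted lists of tuples, including the duplicate-handling case (each matching group on one side is paired with the whole group on the other). Attribute positions stand in for the join attributes A and B; purely illustrative.

  def sort_merge_join(r, s, a_idx=0, b_idx=0):
      r = sorted(r, key=lambda t: t[a_idx])      # skip if already sorted
      s = sorted(s, key=lambda t: t[b_idx])
      result, i, j = [], 0, 0
      while i < len(r) and j < len(s):
          a, b = r[i][a_idx], s[j][b_idx]
          if a < b:
              i += 1
          elif a > b:
              j += 1
          else:                                  # tR.A = tS.B = c
              i2 = i
              while i2 < len(r) and r[i2][a_idx] == a:
                  i2 += 1
              j2 = j
              while j2 < len(s) and s[j2][b_idx] == b:
                  j2 += 1
              for tr in r[i:i2]:                 # sigma_A=c(R) x sigma_B=c(S)
                  for ts in s[j:j2]:
                      result.append(tr + ts)
              i, j = i2, j2
      return result

  print(sort_merge_join([(1, "a"), (2, "b"), (2, "c")], [(2, "x"), (3, "y")]))
  # [(2, 'b', 2, 'x'), (2, 'c', 2, 'x')]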



Sort-Merge Join

Each block needs to be read only once

Cost of sorts assuming nB buffers =
 2bR (lognB-1(bR/nB) + 1)
+
 2bS (lognB-1(bS/nB) + 1)

Thus number of block accesses for merge-join is
 bR + bS + sorting cost (if not sorted) + bRS
56




Hash-Join (J4)

• Applicable for equi-joins and natural joins, R ⋈A=B S.
• Applicable if a hash table for the smaller of the two files can be kept entirely in memory.
• A hash function h is used to partition the tuples of both relations:
  • h maps join-attribute values to {1, ..., M}.
  • R1, R2, …, RM denote the partitions of the R tuples;
    each tR ∈ R is put in partition Ri, where i = h(tR[A]).
  • S1, S2, …, SM denote the partitions of the S tuples;
    each tS ∈ S is put in partition Si, where i = h(tS[B]).


Hash-Join

[Figure: hash partitioning of relations R and S]


Hash-Join

• R tuples in Ri need only be compared with S tuples in Si:
• they need not be compared with S tuples in any other partition, since:
  • an R tuple and an S tuple that satisfy the join condition will have the same value for the join attributes;
  • if that value is hashed to some value i, the R tuple has to be in Ri and the S tuple in Si.


Hash-Join Algorithm

1. Partition phase (using hash function h):
   • partition R into M partitions: R1, R2, …, RM;
   • partition S similarly;
   • M + 1 blocks of memory buffer are needed;
   • a disk subfile is created per partition to store the tuples for that partition.
2. Probing phase (joining), sketched in Python below:
   For i = 1 to M (the two partitions Ri and Si are joined):
   (a) Load the partition with the smaller # of blocks (say Ri) into memory and build an in-memory hash index on it using the join attributes.
       • This hash index uses a different hash function than the earlier one, h.
   (b) For each tuple tS in Si, locate each matching tuple tR in Ri using the in-memory hash index.
   (c) Output the concatenation of their attributes.
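
A compact sketch of both phases on in-memory lists of tuples: hash(v) % M partitions both inputs, and each Ri is then loaded into a Python dict (serving as the in-memory hash index with a different hash function) and probed with Si. The parameter names and the choice M = 4 are illustrative.

  from collections import defaultdict

  def partition_hash_join(r, s, a_idx=0, b_idx=0, m=4):
      # 1. Partition phase: route every tuple to its partition.
      r_parts, s_parts = defaultdict(list), defaultdict(list)
      for tr in r:
          r_parts[hash(tr[a_idx]) % m].append(tr)
      for ts in s:
          s_parts[hash(ts[b_idx]) % m].append(ts)
      # 2. Probing phase: Ri only ever meets Si.
      result = []
      for i in range(m):
          index = defaultdict(list)          # in-memory hash index on Ri
          for tr in r_parts[i]:
              index[tr[a_idx]].append(tr)
          for ts in s_parts[i]:
              for tr in index[ts[b_idx]]:
                  result.append(tr + ts)
      return result

  print(partition_hash_join([(1, "a"), (2, "b")], [(2, "x"), (2, "y")]))
  # [(2, 'b', 2, 'x'), (2, 'b', 2, 'y')]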



Hash-Join Algorithm

• Relation R is called the build input and S is called the probe input.
• The value M and the hash function h are chosen such that each Ri and its in-memory hash index fit in memory:
  • typically M ≥ bR/nB;
  • use the smaller relation as the build relation.
• The probe-relation partitions Si need not fit in memory.




Cost of Hash-Join

• Cost of hash join:
  • reading and writing each record of R and S during the partitioning phase: (bR + bS) + (bR + bS)
  • reading each record during the joining phase: (bR + bS)
  • writing the result of the join: bRS
  • Total cost: 3(bR + bS) + bRS
• If the entire build input can be kept in main memory, the algorithm does not partition the relations into temporary files, and the cost estimate goes down to bR + bS + bRS.



Example of Cost of Hash-Join

• customer ⋈ depositor
• Assume that the memory size is nB = 20 blocks,
  • bdepositor = 100 and bcustomer = 400.
• depositor is used as the build input:
  • partition it into M = 5 partitions, each of size 20 blocks;
  • this partitioning can be done in one pass.
• Similarly, partition customer into five partitions, each of size 80. This is also done in one pass.
• Therefore total cost = 3(100 + 400) + bRS = 1,500 + bRS.


Partition Hash Join

• Partitioning phase:
  • Each file (R and S) is first partitioned into M partitions using a partitioning hash function on the join attributes:
    R1, R2, R3, …, RM and S1, S2, S3, …, SM
  • Minimum number of in-memory buffers needed for the partitioning phase: M + 1.
  • A disk subfile is created per partition to store the tuples for that partition.
• Joining or probing phase:
  • Involves M iterations, one per partitioned file.
  • Iteration i involves joining partitions Ri and Si.


Partition Hash Join

• Partitioned hash join procedure (assume Ri is smaller than Si):
  1. Copy the records from Ri into memory buffers.
  2. Read all blocks from Si, one at a time; each record from Si is used to probe for matching record(s) from partition Ri.
  3. Write each matching record from Ri, after joining it with the record from Si, into the result file.



Partition Hash Join

• Cost analysis of partition hash join:
  1. Reading and writing each record from R and S during the partitioning phase: (bR + bS) + (bR + bS)
  2. Reading each record during the joining phase: (bR + bS)
  3. Writing the result of the join: bRS
• Total cost: 3(bR + bS) + bRS



Hybrid Hash-Join

• Hybrid hash join:
  • Same as partition hash join, except:
    • the joining phase for one of the partitions is included during the partitioning phase.
• Partitioning phase:
  • Allocate buffers for the smaller relation: one block for each of the M − 1 partitions R2, …, RM, and the remaining blocks to partition R1, which stays in memory.
  • Repeat for the larger relation in the pass through S.
• Joining phase:
  • M − 1 iterations are needed for the partitions R2, R3, R4, …, RM and S2, S3, S4, …, SM.
  • R1 and S1 are joined during the partitioning of S, and the results of joining R1 and S1 are already written to disk by the end of the partitioning phase.


Hybrid Hash-Join

• E.g., with a memory size of 25 blocks, depositor can be partitioned into five partitions, each of size 20 blocks.
• Division of memory:
  • The first partition of R (R1) is kept in memory; it occupies 20 blocks of memory.
  • 1 block is used for input, and 1 block each for buffering the other 4 partitions.
• customer is similarly partitioned into five partitions, each of size 80; the 1st partition is used right away for probing, instead of being written out and read back.


Hybrid Hash-Join

• Cost of hybrid hash join:
  • 3(100 − 20 + 400 − 80) + 20 + 80 + bRS = 1,300 + bRS
  • instead of 1,500 + bRS with plain hash join.


Complex Joins

• Join with a conjunctive condition:
  R ⋈θ1 ∧ θ2 ∧ … ∧ θn S
  • Either use nested-loop/block nested-loop join, or
  • compute the result of one of the simpler joins R ⋈θi S;
    • the final result comprises those tuples in the intermediate result that satisfy the remaining conditions
      θ1 ∧ … ∧ θi−1 ∧ θi+1 ∧ … ∧ θn.


Complex Joins

• Join with a disjunctive condition:
  R ⋈θ1 ∨ θ2 ∨ … ∨ θn S
  • Compute as the union of the records in the individual joins R ⋈θi S:
    (R ⋈θ1 S) ∪ (R ⋈θ2 S) ∪ … ∪ (R ⋈θn S)


Duplicate Elimination

• Duplicate elimination can be implemented via hashing or sorting.
  • On sorting, duplicates will come adjacent to each other, and all but one copy of each set of duplicates can be deleted.
    • Optimization: duplicates can be deleted during run generation as well as at intermediate merge steps in external sort-merge.
    • Cost is the same as the cost of sorting.
  • Hashing is similar: duplicates will come into the same bucket.


PROJECT

• Algorithm for PROJECT operations (Figure 15.3b): π<attribute list>(R)
  1. If <attribute list> includes a key of relation R, extract all tuples from R with only the values for the attributes in <attribute list>.
  2. If <attribute list> does NOT include a key of relation R, duplicate tuples must be removed from the result.
• Methods to remove duplicate tuples:
  1. Sorting
  2. Hashing



Cartesian Product

• The CARTESIAN PRODUCT of relations R and S includes all possible combinations of records from R and S. The attributes of the result include all attributes of R and S.
• Cost analysis of CARTESIAN PRODUCT:
  • If R has n records and j attributes and S has m records and k attributes, the result relation will have n*m records and j+k attributes.
• The CARTESIAN PRODUCT operation is very expensive and should be avoided if possible.


Set Operations

• R ∪ S: (see Figure 15.3c)
  1. Sort the two relations on the same attributes.
  2. Scan and merge both sorted files concurrently; whenever the same tuple exists in both relations, only one is kept in the merged result.
• R ∩ S: (see Figure 15.3d)
  1. Sort the two relations on the same attributes.
  2. Scan and merge both sorted files concurrently; keep in the merged result only those tuples that appear in both relations.
• R – S: (see Figure 15.3e)
  • Keep in the merged result only those tuples that appear in relation R but not in relation S.



Aggregate Operations

• The aggregate operations MIN, MAX, COUNT, AVERAGE, and SUM can be computed by scanning all the records (the worst case).
• If an index exists on the attribute of a MAX or MIN operation, then the operation can be done much more efficiently, e.g.:
  SELECT MAX(SALARY) FROM EMPLOYEE
  • If an (ascending) index on SALARY exists for the EMPLOYEE relation, then the optimizer can decide on traversing the index for the largest/smallest value, which entails following the rightmost/leftmost pointer in each index node from the root to a leaf.


Aggregate Operations

• SUM, COUNT, and AVG:
  • For a dense index (each record has one index entry):
    • apply the associated computation to the values in the index.
  • For a non-dense index:
    • the actual number of records associated with each index entry must be accounted for.
• With GROUP BY, the aggregate operator must be applied separately to each group of tuples:
  • use sorting or hashing on the group attributes to partition the file into the appropriate groups;
  • compute the aggregate function for the tuples in each group.
• What if we have a clustering index on the grouping attributes?


Outer Joins

• An outer join can be computed either by:

1. Modifying join algorithms:
  • Nested-loop or sort-merge joins can be modified to implement outer join. E.g., for a left outer join, use the left relation as the outer relation and construct the result from every tuple in the left relation:
    • if there is a match, the concatenated tuple is saved in the result;
    • however, if an outer tuple does not match, the tuple is still included in the result but is padded with null value(s).




Outer Joins

2. Executing a combination of relational algebra operators:

SELECT FNAME, DNAME
FROM   (EMPLOYEE LEFT OUTER JOIN DEPARTMENT
        ON DNO = DNUMBER);

• {Compute the JOIN of the EMPLOYEE and DEPARTMENT tables}
  TEMP1 ← πFNAME,DNAME(EMPLOYEE ⋈DNO=DNUMBER DEPARTMENT)
• {Find the EMPLOYEEs that do not appear in the JOIN}
  TEMP2 ← πFNAME(EMPLOYEE) − πFNAME(TEMP1)
• {Pad each tuple in TEMP2 with a null DNAME field}
  TEMP2 ← TEMP2 × 'null'
• {UNION the temporary tables to produce the LEFT OUTER JOIN}
  RESULT ← TEMP1 ∪ TEMP2

• The cost of the outer join, as computed above, would include the cost of the associated steps (i.e., join, projections, and union).


6. Combining Operations Using Pipelining

• Motivation:
  • A query is mapped into a sequence of operations.
  • Each execution of an operation produces a temporary result (materialization).
  • Generating and saving temporary files on disk is time consuming.
• Alternative:
  • Avoid constructing temporary results as much as possible.
  • Pipeline the data through multiple operations:
    • pass the result of a previous operator to the next without waiting for the previous operation to complete.
  • Also known as stream-based processing.


Materialization

• Materialized evaluation:
  • evaluate one operation at a time, starting at the lowest level.
  • Use intermediate results materialized into temporary relations to evaluate next-level operations.
  • E.g., materialize the result of σbalance<2500(account) before applying the next operation.


Pipelining

• Pipelined evaluation (see the sketch below):
  • evaluate several operations simultaneously, passing the results of one operation on to the next.
  • E.g., in the previous expression tree, don't store the result of σbalance<2500(account);
    • instead, pass the tuples directly to the join.
  • Similarly, don't store the result of the join; pass the tuples directly to the projection.
• Much cheaper than materialization:
  • no need to store a temporary relation on disk.
• Pipelining may not always be possible
  • e.g., sort, hash-join.
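
Python generators give a faithful miniature of pipelined evaluation: each operator pulls tuples from its child and yields results immediately, so no temporary relation is materialized. The account relation and column positions are made up for the example.

  def scan(relation):                        # leaf: table scan
      yield from relation

  def select(pred, child):                   # sigma
      return (t for t in child if pred(t))

  def project(cols, child):                  # pi (no duplicate elimination)
      return (tuple(t[c] for c in cols) for t in child)

  account = [(101, 500), (102, 3000), (103, 1200)]   # (acct_no, balance)

  # pi_acct_no(sigma_balance<2500(account)), fully pipelined:
  plan = project([0], select(lambda t: t[1] < 2500, scan(account)))
  print(list(plan))                          # [(101,), (103,)]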

7. Using Heuristics in Query Optimization (1)

• Process for heuristic optimization:
  1. The parser of a high-level query generates an initial internal representation.
  2. Apply heuristic rules to optimize the internal representation.
  3. A query execution plan is generated to execute groups of operations, based on the access paths available on the files involved in the query.
• The main heuristic is to apply first the operations that reduce the size of intermediate results:
  • e.g., apply SELECT and PROJECT operations before applying the JOIN or other binary operations.


Using Heuristics in Query Optimization (2)

• Query tree:
  • A tree data structure that corresponds to a relational algebra expression. It represents the input relations of the query as leaf nodes of the tree, and represents the relational algebra operations as internal nodes.
• An execution of the query tree consists of executing an internal-node operation whenever its operands are available, and then replacing that internal node by the relation that results from executing the operation.
• Query graph:
  • A graph data structure that corresponds to a relational calculus expression. It does not indicate the order in which operations are performed. There is only a single graph corresponding to each query.


Using Heuristics in Query Optimization (3)

• Example:
  • For every project located in 'Stafford', retrieve the project number, the controlling department number, and the department manager's last name, address, and birthdate.
• Relational algebra:
  πPNUMBER, DNUM, LNAME, ADDRESS, BDATE
    (((σPLOCATION='STAFFORD'(PROJECT)) ⋈DNUM=DNUMBER (DEPARTMENT)) ⋈MGRSSN=SSN (EMPLOYEE))
• SQL query:
  Q2: SELECT P.PNUMBER, P.DNUM, E.LNAME, E.ADDRESS, E.BDATE
      FROM   PROJECT AS P, DEPARTMENT AS D, EMPLOYEE AS E
      WHERE  P.DNUM=D.DNUMBER AND D.MGRSSN=E.SSN AND
             P.PLOCATION='STAFFORD';



Using Heuristics in Query Optimization (4)

[Figure: query trees for Q2]

Using Heuristics in Query Optimization (5)

[Figure: query graph for Q2]


Using Heuristics in Query Optimization (6)

• Heuristic optimization of query trees:
  • The same query could correspond to many different relational algebra expressions — and hence many different query trees.
  • The task of heuristic optimization of query trees is to find a final query tree that is efficient to execute.
• Example:
  Q: SELECT LNAME
     FROM   EMPLOYEE, WORKS_ON, PROJECT
     WHERE  PNAME = 'AQUARIUS' AND PNUMBER=PNO AND
            ESSN=SSN AND BDATE > '1957-12-31';



Using Heuristics in Query Optimization (7)

[Figure: steps in heuristically optimizing the query tree of Q]

Using Heuristics in Query Optimization (8)

[Figure: steps in heuristically optimizing the query tree of Q, continued]



Transformation Rules for RA Operations

1. Cascade of σ:
   A conjunctive selection condition can be broken up into a cascade (sequence) of individual σ operations:
   σc1 AND c2 AND ... AND cn(R) = σc1(σc2(...(σcn(R))...))
2. Commutativity of σ:
   The σ operation is commutative:
   σc1(σc2(R)) = σc2(σc1(R))
3. Cascade of π:
   In a cascade (sequence) of π operations, all but the last one can be ignored:
   πList1(πList2(...(πListn(R))...)) = πList1(R)
4. Commuting σ with π:
   If the selection condition c involves only the attributes A1, ..., An in the projection list, the two operations can be commuted:
   πA1, A2, ..., An(σc(R)) = σc(πA1, A2, ..., An(R))


Transformation Rules for RA Operations

5. Commutativity of ⋈ (or ×):
   The ⋈ operation is commutative, as is the × operation:
   R ⋈c S = S ⋈c R;  R × S = S × R
6. Commuting σ with ⋈ (or ×):
   If all the attributes in the selection condition c involve only the attributes of one of the relations being joined — say, R — the two operations can be commuted as follows:
   σc(R ⋈ S) = (σc(R)) ⋈ S
   Alternatively, if the selection condition c can be written as (c1 AND c2), where condition c1 involves only the attributes of R and condition c2 involves only the attributes of S, the operations commute as follows:
   σc(R ⋈ S) = (σc1(R)) ⋈ (σc2(S))


Transformation Rules for RA Operations

7. Commuting π with ⋈ (or ×):
   Suppose that the projection list is L = {A1, ..., An, B1, ..., Bm}, where A1, ..., An are attributes of R and B1, ..., Bm are attributes of S. If the join condition c involves only attributes in L, the two operations can be commuted as follows:
   πL(R ⋈c S) = (πA1, ..., An(R)) ⋈c (πB1, ..., Bm(S))
   If the join condition c contains additional attributes not in L, these must be added to the projection list, and a final π operation is needed.


Transformation Rules for RA Operations

8. Commutativity of set operations:
   The set operations ∪ and ∩ are commutative, but − is not.
9. Associativity of ⋈, ×, ∪, and ∩:
   These four operations are individually associative; that is, if θ stands for any one of these four operations (throughout the expression), we have:
   (R θ S) θ T = R θ (S θ T)
10. Commuting σ with set operations:
    The σ operation commutes with ∪, ∩, and −. If θ stands for any one of these three operations, we have:
    σc(R θ S) = (σc(R)) θ (σc(S))


Transformation Rules for RA Operations

11. The π operation commutes with ∪:
    πL(R ∪ S) = (πL(R)) ∪ (πL(S))
12. Converting a (σ, ×) sequence into ⋈:
    If the condition c of a σ that follows a × corresponds to a join condition, convert the (σ, ×) sequence into a ⋈:
    (σc(R × S)) = (R ⋈c S)
13. Other transformations (see page 564).
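
Rule 6 is easy to sanity-check numerically, with plain Python lists standing in for relations (attribute positions instead of names; purely illustrative):

  R = [(1, 10), (2, 20), (3, 30)]            # (A, X)
  S = [(1, "u"), (2, "v"), (3, "w")]         # (A, Y)

  def join(r, s):                            # R join S on R.A = S.A
      return [tr + ts for tr in r for ts in s if tr[0] == ts[0]]

  c1 = lambda t: t[1] > 10                   # condition on R's attributes only
  lhs = [t for t in join(R, S) if c1(t)]     # sigma_c1(R join S)
  rhs = join([t for t in R if c1(t)], S)     # (sigma_c1(R)) join S
  print(lhs == rhs)                          # True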

Heuristic Algebraic Optimization Algorithm

1. Using rule 1, break up any SELECT operations with conjunctive conditions into a cascade of SELECT operations:
   σc1 AND c2 AND … AND cn(R) ≡ σc1(σc2(…(σcn(R))…))
2. Using rules 2, 4, 6, and 10 concerning the commutativity of SELECT with other operations, move each SELECT operation as far down the query tree as is permitted by the attributes involved in the select condition.
3. Using rule 9 concerning the associativity of binary operations, rearrange the leaf nodes of the tree:
   • position the leaf-node relations with the most restrictive σ operations so they are executed first;
   • make sure that the ordering of leaf nodes does not cause CARTESIAN PRODUCT operations.

Heuristic Algebraic Optimization Algorithm

4. Using rule 12, combine a CARTESIAN PRODUCT operation with a subsequent SELECT operation in the tree into a JOIN operation:
   combine a × with a subsequent σ in the tree into a ⋈.
5. Using rules 3, 4, 7, and 11 concerning the cascading of PROJECT and the commuting of PROJECT with other operations, break down and move lists of projection attributes down the tree as far as possible, creating new PROJECT operations as needed.
6. Identify subtrees that represent groups of operations that can be executed by a single algorithm.

Summary of Heuristics for Algebraic Optimization

• The main heuristic is to apply first the operations that reduce the size of intermediate results.
• Perform SELECT operations as early as possible to reduce the number of tuples, and perform PROJECT operations as early as possible to reduce the number of attributes. (This is done by moving SELECT and PROJECT operations as far down the tree as possible.)
• The SELECT and JOIN operations that are most restrictive should be executed before other similar operations. (This is done by reordering the leaf nodes of the tree among themselves and adjusting the rest of the tree appropriately.)



Query Execution Plans

• An execution plan for a relational algebra query consists of a combination of the relational algebra query tree and information about the access methods to be used for each relation, as well as the methods to be used in computing the relational operators in the tree.
• Materialized evaluation: the result of an operation is stored as a temporary relation.
• Pipelined evaluation: as the result of an operator is produced, it is forwarded to the next operator in sequence.

8. Using Selectivity and Cost Estimates in Query Optimization (1)

• Cost-based query optimization:
  • Estimate and compare the costs of executing a query using different execution strategies, and choose the strategy with the lowest cost estimate.
  • (Compare to heuristic query optimization.)
• Issues:
  • the cost function;
  • the number of execution strategies to be considered.

Using Selectivity and Cost Estimates in Query Optimization (2)

• Cost components for query execution:
  1. Access cost to secondary storage
  2. Storage cost
  3. Computation cost
  4. Memory usage cost
  5. Communication cost
• Note: Different database systems may focus on different cost components.

Using Selectivity and Cost Estimates in Query Optimization (3)

• Catalog information used in cost functions:
  • Information about the size of a file:
    • number of records (tuples) (r)
    • record size (R)
    • number of blocks (b)
    • blocking factor (bfr)
  • Information about indexes and indexing attributes of a file:
    • number of levels (x) of each multilevel index
    • number of first-level index blocks (bI1)
    • number of distinct values (d) of an attribute
    • selectivity (sl) of an attribute
    • selection cardinality (s) of an attribute (s = sl * r)

Using Selectivity and Cost Estimates in Query Optimization (4)

• Examples of cost functions for SELECT:
• S1. Linear search (brute force) approach:
  • CS1a = b
  • For an equality condition on a key, CS1a = (b/2) if the record is found; otherwise CS1a = b.
• S2. Binary search:
  • CS2 = log2b + ⌈s/bfr⌉ − 1
  • For an equality condition on a unique (key) attribute, CS2 = log2b.
• S3. Using a primary index (S3a) or hash key (S3b) to retrieve a single record:
  • CS3a = x + 1
  • CS3b = 1 for static or linear hashing;
  • CS3b = 2 for extendible hashing.

Using Selectivity and Cost Estimates in Query Optimization (5)

• Examples of cost functions for SELECT (contd.):
• S4. Using an ordering index to retrieve multiple records:
  • For a comparison condition on a key field with an ordering index, CS4 = x + (b/2).
• S5. Using a clustering index to retrieve multiple records:
  • CS5 = x + ⌈s/bfr⌉
• S6. Using a secondary (B+-tree) index:
  • For an equality comparison, CS6a = x + s;
  • For a comparison condition such as >, <, >=, or <=,
    CS6a = x + (bI1/2) + (r/2).

Using Selectivity and Cost Estimates in Query Optimization (6)

• Examples of cost functions for SELECT (contd.):
• S7. Conjunctive selection:
  • Use either S1 or one of the methods S2 to S6.
  • In the latter case, use one condition to retrieve the records and then check in the memory buffer whether each retrieved record satisfies the remaining conditions in the conjunction.
• S8. Conjunctive selection using a composite index:
  • Same as S3a, S5, or S6a, depending on the type of index.
• Examples of using the cost functions.

Using Selectivity and Cost Estimates in Query Optimization (7)

• Examples of cost functions for JOIN:
  • Join selectivity (js):
    js = |(R ⋈c S)| / |R × S| = |(R ⋈c S)| / (|R| * |S|)
  • If condition c does not exist, js = 1;
  • if no tuples from the relations satisfy condition c, js = 0;
  • usually, 0 ≤ js ≤ 1.
• Size of the result file after the join operation:
  • |(R ⋈c S)| = js * |R| * |S|

Using Selectivity and Cost Estimates in Query Optimization (8)

• Examples of cost functions for JOIN (contd.):
• J1. Nested-loop join:
  • CJ1 = bR + (bR * bS) + ((js * |R| * |S|) / bfrRS)
  • (use R for the outer loop)
• J2. Single-loop join (using an access structure to retrieve the matching record(s)):
  • If an index exists for the join attribute B of S with index levels xB, we can retrieve each record s in R and then use the index to retrieve all the matching records t from S that satisfy t[B] = s[A].
  • The cost depends on the type of index.

Using Selectivity and Cost Estimates in Query Optimization (9)

• Examples of cost functions for JOIN (contd.):
• J2. Single-loop join (contd.):
  • For a secondary index:
    CJ2a = bR + (|R| * (xB + sB)) + ((js * |R| * |S|) / bfrRS)
  • For a clustering index:
    CJ2b = bR + (|R| * (xB + (sB/bfrB))) + ((js * |R| * |S|) / bfrRS)
  • For a primary index:
    CJ2c = bR + (|R| * (xB + 1)) + ((js * |R| * |S|) / bfrRS)
  • If a hash key exists for one of the two join attributes — say, B of S:
    CJ2d = bR + (|R| * h) + ((js * |R| * |S|) / bfrRS)
• J3. Sort-merge join:
  • CJ3a = CS + bR + bS + ((js * |R| * |S|) / bfrRS)
  • (CS: cost of sorting the files)

Using Selectivity and Cost Estimates in Query Optimization (10)

• Multiple-relation queries and join ordering:
  • A query joining n relations will have n − 1 join operations, and hence can have a large number of different join orders when we apply the algebraic transformation rules.
  • Current query optimizers typically limit the structure of a (join) query tree to that of left-deep (or right-deep) trees.
• Left-deep tree:
  • A binary tree where the right child of each non-leaf node is always a base relation.
  • Amenable to pipelining.
  • Can utilize any access paths on the base relation (the right child) when executing the join.


10. Semantic Query Optimization

• Semantic query optimization:
  • Uses constraints specified on the database schema in order to modify one query into another query that is more efficient to execute.
• Consider the following SQL query:
  SELECT E.LNAME, M.LNAME
  FROM   EMPLOYEE E, EMPLOYEE M
  WHERE  E.SUPERSSN=M.SSN AND E.SALARY>M.SALARY
• Explanation:
  • Suppose that we had a constraint on the database schema stating that no employee can earn more than his or her direct supervisor. If the semantic query optimizer checks for the existence of this constraint, it need not execute the query at all, because it knows that the result of the query will be empty. Techniques known as theorem proving can be used for this purpose.


Example (1)

• An unoptimized relational algebra expression:
  πName (σGPA≥3.5 AND Title='Ada Programming Language'
           AND Students.SSN=Enrollment.SSN
           AND Enrollment.Course_no=Courses.Course_no
          (Students × Enrollment × Courses))


Example (2)

• Initial query tree:

  πName
    σGPA≥3.5 AND Title='Ada Programming Language'
      AND Students.SSN=Enrollment.SSN
      AND Enrollment.Course_no=Courses.Course_no
        ((Students × Enrollment) × Courses)


Example (3)

• Perform selections as early as possible:

  πName
    σEnrollment.Course_no=Courses.Course_no
      ((σStudents.SSN=Enrollment.SSN
          ((σGPA≥3.5(Students)) × Enrollment))
       × (σTitle='Ada Programming Language'(Courses)))

Example (4)

• Replace Cartesian products by joins:

  πName
    (((σGPA≥3.5(Students)) ⋈SSN Enrollment)
      ⋈Course_no (σTitle='Ada Programming Language'(Courses)))



Example (5)

• Perform more restrictive joins first:

  πName
    ((σGPA≥3.5(Students))
      ⋈SSN (Enrollment ⋈Course_no (σTitle='Ada Programming Language'(Courses))))



Example (6)

• Project out useless attributes early:

  πName
    ((πSSN, Name (σGPA≥3.5(Students)))
      ⋈SSN ((πSSN, Course_no(Enrollment))
        ⋈Course_no (πCourse_no (σTitle='Ada Programming Language'(Courses)))))


Example (7)

• The optimized algebra expression is:
  πName ((πSSN, Name (σGPA≥3.5(Students))) ⋈
         ((πSSN, Course_no(Enrollment)) ⋈
          (πCourse_no (σTitle='Ada Programming Language'(Courses)))))
• Projections and selections on the same relation are usually performed using the same scan of the relation.
