1Z0-117
Number: 1Z0-117
Passing Score: 660
Time Limit: 150 min
File Version: 1.0
Exam A
QUESTION 1
Examine the query and its execution plan:
Which statement is true regarding the execution plan?
A. This query first fetches rows from the CUSTOMERS table that satisfy the conditions, and then
the join returns NULL for the CUSTOMER_ID column when it does not find any corresponding
rows in the ORDERS table.
B. The query fetches rows from the CUSTOMERS and ORDERS tables simultaneously, and filters the
rows that satisfy the conditions from the result set.
C. The query first fetches rows from the ORDERS table that satisfy the conditions, and then the
join returns NULL for the CUSTOMER_ID column when it does not find any corresponding rows in
the CUSTOMERS table.
D. The query first joins rows from the CUSTOMERS and ORDERS tables and returns NULL for
the ORDERS table columns when it does not find any corresponding rows in the ORDERS table,
and then fetches the rows that satisfy the conditions from the result set.
Correct Answer: A
Section: (none)
Explanation
Explanation/Reference:
QUESTION 2
Which three statements are true about histograms?
A. They capture the distribution of different values in an index for better selectivity estimates.
B. They can be used only with indexed columns.
C. They provide metadata about distribution of and occurrences of values in a table column.
D. They provide improved selectivity estimates in the presence of data skew, resulting in execution
plans with uniform distribution.
E. They help the optimizer in deciding whether to use an index or a full table scan.
F. They help the optimizer to determine the fastest table join order.
Correct Answer: CEF
Section: (none)
Explanation
Explanation/Reference:
C: A histogram is a frequency distribution (metadata) that describes the distribution
of data values within a table.
E: It is well established that histograms are very useful for helping the optimizer choose between a
full scan and an index scan.
F: Histograms may help the Oracle optimizer in deciding whether to use an index vs. a full-table
scan (where index values are skewed) or help the optimizer determine the fastest table join order.
For determining the best table join order, the WHERE clause of the query can be inspected along
with the execution plan for the original query. If the cardinality of the table is too high, then
histograms on the most selective column in the WHERE clause will tip off the optimizer and
change the table join order.
Note:
*The Oracle Query Optimizer uses histograms to predict better query plans. The ANALYZE
command or DBMS_STATS package can be used to compute these histograms.
Incorrect:
B:Histograms are NOT just for indexed columns.
– Adding a histogram to an un-indexed column that is used in
a where clause can improve performance.
D:Histograms Opportunities
Any column used in a where clause with skewed data
Columns that are not queried all the time
Reduced overhead for insert, update, delete
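For illustration only, a histogram can be gathered explicitly with DBMS_STATS; the schema, table,
and column names below are hypothetical:
BEGIN
  -- gather table statistics and request a histogram (up to 254 buckets) on the skewed column
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SH',                                -- hypothetical schema
    tabname    => 'SALES',                             -- hypothetical table
    method_opt => 'FOR COLUMNS SIZE 254 CHANNEL_ID');  -- hypothetical skewed column
END;
/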
QUESTION 3
View the exhibit and examine the query and its execution plan from the PLAN_TABLE.
Which statement is true about the execution?
Exhibit:
A. The row with the ID column having the value 0 is the first step executed in the execution plan.
B. Rows are fetched from the indexes on the PRODUCTS table and from the SALES table using
full table scan simultaneously, and then hashed into memory.
C. Rows are fetched from the SALES table, and then a hash join operator joins with rows fetched
from indexes on the PRODUCTS table.
D. All the partitions of the SALES table are read in parallel.
Correct Answer: C
Section: (none)
Explanation
Explanation/Reference:
QUESTION 4
Which four statements are correct about communication between parallel execution processes?
A. The number of logical pathways between parallel execution producers and consumers depends
on the degree of parallelism.
B. The shared pool can be used for parallel execution message buffers.
C. The large pool can be used for parallel execution message buffers.
D. The buffer cache can be used for parallel execution message buffers.
E. Communication between parallel execution processes is never required if a query uses full
partition-wise joins.
F. Each parallel execution process has an additional connection to the parallel execution
coordinator.
Correct Answer: ABEF
Section: (none)
Explanation
Explanation/Reference:
A:Note that the degree of parallelism applies directly only to intra-operation
parallelism. If inter-operation parallelism is possible, the total number of parallel execution servers
for a statement can be twice the specified degree of parallelism. No more than two sets of parallel
execution servers can run simultaneously. Each set of parallel execution servers may process
multiple operations. Only two sets of parallel execution servers need to be active to guarantee
optimal inter-operation parallelism.
B:By default, Oracle allocates parallel execution buffers from the shared pool.
F:When executing a parallel operation, the parallel execution coordinator obtains parallel
execution servers from the pool and assigns them to the operation. If necessary, Oracle can
create additional parallel execution servers for the operation. These parallel execution servers
remain with the operation throughout job execution, then become available for other operations.
After the statement has been processed completely, the parallel execution servers return to the
pool.
Reference: Oracle Database Data Warehousing Guide, Using Parallel Execution
QUESTION 5
You have enabled parallel DML by issuing: ALTER SESSION ENABLE PARALLEL DML;
The PARALLEL_DEGREE_POLICY initialization parameter is set to AUTO.
Which two options are true about DML statements for which parallel execution is requested?
A. Statements for which PDML is requested will execute serially if the estimated time is less than
the time specified by the PARALLEL_MIN_THRESHOLD parameter.
B. Statements for which PDML is requested will be queued if the number of busy parallel
execution servers is greater than the PARALLEL_MIN_SERVERS parameter.
C. Statements for which PDML is requested will always execute in parallel if the estimated
execution time is greater than the time specified by the
PARALLEL_MIN_TIME_THRESHOLD parameter.
D. Statements for which PDML is requested will be queued if the number of busy parallel
execution servers is greater than the PARALLEL_SERVERS_TARGET parameter.
E. Statements for which PDML is requested will be queued if the number of busy parallel execution
servers is greater than the PARALLEL_DEGREE_LIMIT parameter.
Correct Answer: CD
Section: (none)
Explanation
Explanation/Reference:
C:PARALLEL_MIN_TIME_THRESHOLD specifies the minimum execution time a
statement should have before the statement is considered for automatic degree of parallelism. By
default, this is set to 30 seconds. Automatic degree of parallelism is only enabled if
PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.
D:PARALLEL_SERVERS_TARGET specifies the number of parallel server processes allowed to
run parallel statements before statement queuing will be used. When the parameter
PARALLEL_DEGREE_POLICY is set to AUTO, Oracle will queue SQL statements that require
parallel execution, if the necessary parallel server processes are not available. Statement queuing
will begin once the number of parallel server processes active on the system is equal to or greater
than PARALLEL_SERVERS_TARGET.
Note:
*PARALLEL_DEGREE_POLICY specifies whether or not automatic degree of Parallelism,
statement queuing, and in-memory parallel execution will be enabled.
AUTO
Enables automatic degree of parallelism, statement queuing, and in-memory parallel execution.
* PARALLEL_MIN_SERVERS specifies the minimum number of parallel execution processes for
the instance. This value is the number of parallel execution processes Oracle creates when the
instance is started.
Reference: Oracle Database Reference, PARALLEL_SERVERS_TARGET
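As a quick sketch of the settings this question assumes (the threshold value shown is an
assumption, not the documented default):
ALTER SESSION ENABLE PARALLEL DML;
ALTER SYSTEM SET parallel_degree_policy = AUTO;
-- minimum estimated serial execution time (seconds) before auto DOP is considered
ALTER SYSTEM SET parallel_min_time_threshold = 10;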
QUESTION 6
Examine Exhibit 1 to view the query and its AUTOTRACE output.
Which two statements are true about tracing?
Exhibit:
A. The displayed plan will be stored in PLAN_TABLE.
B. Subsequent execution of this statement will use the displayed plan that is stored in v$SQL.
C. The displayed plan may not necessarily be used by the optimizer.
D. The query will not fetch any rows; it will display only the execution plan and statistics.
E. The execution plan generated can be viewed from v$SQLAREA.
Correct Answer: AC
Section: (none)
Explanation
Explanation/Reference:
A:The PLAN_TABLE is automatically created as a public synonym to a global
temporary table. This temporary table holds the output of EXPLAIN PLAN statements for all users.
PLAN_TABLE is the default sample output table into which the EXPLAIN PLAN statement inserts
rows describing execution plans
Incorrect:
B: V$SQL lists statistics on shared SQL area without the GROUP BY clause and contains one row
for each child of the original SQL text entered. Statistics displayed in V$SQL are normally updated
at the end of query execution. However, for long running queries, they are updated every 5
seconds. This makes it easy to see the impact of long running SQL statements while they are still
in progress.
D:autotrace traceonly – Displays execution plan and statistics without displaying the returned
rows. This option should be used when a large result set is expected.
E:V$SQLAREA lists statistics on shared SQL area and contains one row per SQL string. It
provides statistics on SQL statements that are in memory, parsed, and ready for execution.
Note:
*The autotrace provides instantaneous feedback including the returned rows, execution plan, and
statistics. The user doesn’t need to be concerned about trace file locations and formatting since
the output is displayed instantly on the screen. This is very important data that can be used to tune
the SQL statement.
*SET AUTOTRACE ON
The AUTOTRACE report includes both the optimizer execution path and the SQL statement
execution statistics.
SET AUTOTRACE TRACEONLY
Similar to SET AUTOTRACE ON, but suppresses the printing of the user's query output, if any. If
STATISTICS is enabled, query data is still fetched, but not printed.
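For reference, a minimal SQL*Plus session using the TRACEONLY option discussed above; the query
itself is just an example:
SET AUTOTRACE TRACEONLY
SELECT * FROM hr.employees WHERE department_id = 10;
-- rows are fetched but not displayed; only the plan and statistics are shown
SET AUTOTRACE OFF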
QUESTION 7
Which two types of column filtering may benefit from partition pruning?
A. Equality operators on range-partitioned tables
B. In-list operators on system-partitioned tables
C. Equality operators on system-partitioned tables
D. In-list operators on range-partitioned tables
E. Greater than operators on hash-partitioned tables
Correct Answer: AD
Section: (none)
Explanation
Explanation/Reference:
The query optimizer can perform pruning whenever a WHERE condition can be
reduced to either one of the following two cases:
partition_column = constant
partition_column IN (constant1, constant2, ..., constantN)
In the first case, the optimizer simply evaluates the partitioning expression for the value given,
determines which partition contains that value, and scans only this partition. In many cases, the equal sign can
be replaced with another arithmetic comparison, including <, >, <=, >=, and <>.
Some queries using BETWEEN in the WHERE clause can also take advantage of partition
pruning.
Note:
*The core concept behind partition pruning is relatively simple, and can be described as “Do not
scan partitions where there can be no matching values”.
When the optimizer can make use of partition pruning in performing a query, execution of the
query can be an order of magnitude faster than the same query against a nonpartitioned table
containing the same column definitions and data.
* Example:
Suppose that you have a partitioned table t1 defined by this statement:
CREATE TABLE t1 (
fname VARCHAR2(50) NOT NULL,
lname VARCHAR2(50) NOT NULL,
region_code NUMBER(3) NOT NULL,
dob DATE NOT NULL
)
PARTITION BY RANGE (region_code) (
PARTITION p0 VALUES LESS THAN (64),
PARTITION p1 VALUES LESS THAN (128),
PARTITION p2 VALUES LESS THAN (192),
PARTITION p3 VALUES LESS THAN (MAXVALUE)
);
Consider the case where you wish to obtain results from a query such as this one:
SELECT fname, lname, region_code, dob
FROM t1
WHERE region_code > 125 AND region_code < 130;
It is easy to see that none of the rows which ought to be returned will be in either of the partitions
p0 or p3; that is, we need to search only in partitions p1 and p2 to find matching rows. By doing so,
it is possible to expend much less time and effort in finding matching rows than would be required
to scan all partitions in the table. This "cutting away" of unneeded partitions is known as pruning.
QUESTION 8
Which two statements about In-Memory Parallel Execution are true?
A. It can be configured using the Database Resource Manager.
B. It increases the number of duplicate block images in the global buffer cache.
C. It requires setting PARALLEL_DEGREE_POLICY to LIMITED.
D. Objects selected for In-Memory Parallel Execution have blocks mapped to specific RAC
instances.
E. It requires setting PARALLEL_DEGREE_POLICY to AUTO
F. Objects selected for In-Memory Parallel Execution must be partitioned tables or indexes.
Correct Answer: DE
Section: (none)
Explanation
Explanation/Reference:
D, E:In-Memory Parallel Execution
When the parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle Database decides if
an object that is accessed using parallel execution would benefit from being cached in the SGA
(also called the buffer cache). The decision to cache an object is based on a well-defined set of
heuristics including the size of the object and frequency on which it is accessed. In an Oracle RAC
environment, Oracle Database maps pieces of the object into each of the buffer caches on the
active instances. By creating this mapping, Oracle Database automatically knows which buffer
cache to access to find different parts or pieces of the object. Using this information, Oracle
Database prevents multiple instances from reading the same information from disk over and over
again, thus maximizing the amount of memory that can cache objects. If the size of the object is
larger than the size of the buffer cache (single instance) or the size of the buffer cache multiplied
by the number of active instances in an Oracle RAC cluster, then the object is read using
direct-path reads.
E:PARALLEL_DEGREE_POLICY specifies whether or not automatic degree of Parallelism,
statement queuing, and in-memory parallel execution will be enabled.
AUTO
Enables automatic degree of parallelism, statement queuing, and in-memory parallel execution.
Incorrect:
C:
LIMITED
Enables automatic degree of parallelism for some statements but statement queuing and
in-memory parallel execution are disabled. Automatic degree of parallelism is only applied to those
statements that access tables or indexes decorated explicitly with the PARALLEL clause. Tables
and indexes that have a degree of parallelism specified will use that degree of parallelism.
Reference: Oracle Database VLDB and Partitioning Guide 11g, How Parallel Execution Works
QUESTION 9
Which two are benefits of In-Memory Parallel Execution?
A. Reduction in the duplication of block images across multiple buffer caches
B. Reduction in CPU utilization
C. Reduction in the number of blocks accessed
D. Reduction in physical I/O for parallel queries
E. Ability to exploit parallel execution servers on remote instances
Correct Answer: AC
Section: (none)
Explanation
Explanation/Reference:
Note:In-Memory Parallel Execution
When the parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle Database decides if
an object that is accessed using parallel execution would benefit from being cached in the SGA
(also called the buffer cache). The decision to cache an object is based on a well-defined set of
heuristics including the size of the object and frequency on which it is accessed. In an Oracle RAC
environment, Oracle Database maps pieces of the object into each of the buffer caches on the
active instances. By creating this mapping, Oracle Database automatically knows which buffer
cache to access to find different parts or pieces of the object. Using this information, Oracle
Database prevents multiple instances from reading the same information from disk over and over
again, thus maximizing the amount of memory that can cache objects. If the size of the object is
larger than the size of the buffer cache (single instance) or the size of the buffer cache multiplied
by the number of active instances in an Oracle RAC cluster, then the object is read using
direct-path reads.
Reference: Oracle Database VLDB and Partitioning Guide 11g, How Parallel Execution Works
QUESTION 10
You plan to bulk load data using INSERT INTO ... SELECT FROM statements.
Which two situations benefit from parallel INSERT operations on tables that have no materialized
views defined on them?
A. Direct path insert of a million rows into a partitioned, index-organized table containing one
million rows and a conventional B*tree secondary index.
B. Direct path insert of a million rows into a partitioned, index-organized table containing 10 rows
and a bitmapped secondary index.
C. Direct path insert of 10 rows into a partitioned, index-organized table containing one million
rows and a conventional B*tree secondary index.
D. Direct path insert of 10 rows into a partitioned, index-organized table containing 10 rows and a
bitmapped secondary index.
E. Conventional path insert of a million rows into a nonpartitioned, heap-organized table containing
10 rows and having a conventional B*tree index.
F. Conventional path insert of 10 rows into a nonpartitioned, heap-organized table containing one
million rows and a bitmapped index.
Correct Answer: AB
Section: (none)
Explanation
Explanation/Reference:
Note:
*A materialized view is a database object that contains the results of a query.
*You can use the INSERT statement to insert data into a table, partition, or view in two ways:
conventional INSERT and direct-path INSERT.
* With direct-path INSERT, the database appends the inserted data after existing data in the table.
Data is written directly into datafiles, bypassing the buffer cache. Free space in the existing data is
not reused. This alternative enhances performance during insert operations and is similar to the
functionality of the Oracle direct-path loader utility, SQL*Loader. When you insert into a table that
has been created in parallel mode, direct-path INSERT is the default.
* Direct-path INSERT is not supported for an index-organized table (IOT) if it is not partitioned, if it
has a mapping table, or if it is referenced by a materialized view.
* When you issue a conventional INSERT statement, Oracle Database reuses free space in the
table into which you are inserting and maintains referential integrity constraints
* Conventional INSERT always generates maximal redo and undo for changes to both data and
metadata, regardless of the logging setting of the table and the archivelog and force logging
settings of the database
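A minimal sketch of the kind of parallel direct-path load the question describes; the table names
and degree of parallelism are hypothetical:
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(sales_hist, 4) */ INTO sales_hist
SELECT * FROM sales_staging;   -- data appended above the high-water mark, buffer cache bypassed
COMMIT;                        -- direct-path results are not visible until commit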
QUESTION 11
Which are the two prerequisites for enabling star transformation on queries?
A. The STAR_TRANSFORMATION_ENABLED parameter should be set to TRUE or
TEMP_DISABLE.
B. A B-tree index should be built on each of the foreign key columns of the fact table(s),
C. A bitmap index should be built on each of the primary key columns of the fact table(s).
D. A bitmap index should be built on each of the foreign key columns of the fact table(s).
E. A bitmap index must exist on all the columns that are used in the filter predicates of the query.
Correct Answer: AE
Section: (none)
Explanation
Explanation/Reference:
A: Setting STAR_TRANSFORMATION_ENABLED to TRUE or TEMP_DISABLE enables the transformation.
E:Star transformation is essentially about adding subquery predicates corresponding to the
constraint dimensions. These subquery predicates are referred to as bitmap semi-join predicates.
The transformation is performed when there are indexes on the fact join columns (s.timeid,
s.custid...). By driving bitmap AND and OR operations (bitmaps can be from bitmap indexes or
generated from regular B-Tree indexes) of the key values supplied by the subqueries, only the
relevant rows from the fact table need to be retrieved. If the filters on the dimension tables filter out
a lot of data, this can be much more efficient than a full table scan on the fact table. After the
relevant rows have been retrieved from the fact table, they may need to be joined back to the
dimension tables, using the original predicates. In some cases, the join back can be eliminated.
Star transformation is controlled by the star_transformation_enabled parameter. The parameter
takes 3 values.
TRUE - The Oracle optimizer performs transformation by identifying fact and constraint dimension
tables automatically. This is done in a cost-based manner, i.e. the transformation is performed
only if the cost of the transformed plan is lower than the non-transformed plan. Also the optimizer
will attempt temporary table transformation automatically whenever materialization improves
performance.
FALSE - The transformation is not tried.
TEMP_DISABLE - This value has similar behavior as TRUE except that temporary table
transformation is not tried.
The default value of the parameter is FALSE. You have to change the parameter value and create
indexes on the joining columns of the fact table to take advantage of this transformation.
Reference: Optimizer Transformations: Star Transformation
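A minimal sketch of the two prerequisites named in the answer; the fact table and column names
are hypothetical:
ALTER SESSION SET star_transformation_enabled = TRUE;
-- bitmap indexes on the fact-table columns referenced by the join/filter predicates
CREATE BITMAP INDEX sales_cust_bix ON sales (cust_id);
CREATE BITMAP INDEX sales_time_bix ON sales (time_id);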
QUESTION 12
An application accessing your database got the following error in response to a SQL query:
ORA-12827: insufficient parallel query slaves available
View the parallel parameters for your instance:
No hints are used and the session uses default parallel settings.
What four changes could you make to help avoid the error and ensure that the query executes in
parallel?
A. Set PARALLEL_DEGREE_POLICY to AUTO.
B. Increase the value of PARALLEL_MAX_SERVERS.
C. Increase PARALLEL_SERVERS_TARGET.
D. Decrease PARALLEL_MIN_PERCENT.
E. Increase PARALLEL_MIN_SERVERS.
F. Decrease PARALLEL_MIN_TIME_THRESHOLD.
G. Increase PARALLEL_MIN_TIME_THRESHOLD.
Correct Answer: ACDG
Section: (none)
Explanation
Explanation/Reference:
C:PARALLEL_SERVERS_TARGET specifies the number of parallel server
processes allowed to run parallel statements before statement queuing will be used. When the
parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle will queue SQL statements
that require parallel execution, if the necessary parallel server processes are not available.
Statement queuing will begin once the number of parallel server processes active on the system is
equal to or greater than PARALLEL_SERVERS_TARGET.
By default, PARALLEL_SERVERS_TARGET is set lower than the maximum number of parallel
server processes allowed on the system (PARALLEL_MAX_SERVERS) to ensure each parallel
statement will get all of the parallel server resources required and to prevent overloading the
system with parallel server processes.
D:
Note: ORA-12827: insufficient parallel query slaves available
Cause: PARALLEL_MIN_PERCENT parameter was specified and fewer than minimum slaves
were acquired
Action: either re-execute query with lower PARALLEL_MIN_PERCENT or wait until some running
queries are completed, thus freeing up slaves
A, G:PARALLEL_MIN_TIME_THRESHOLD specifies the minimum execution time a statement
should have before the statement is considered for automatic degree of parallelism. By default,
this is set to 30 seconds. Automatic degree of parallelism is only enabled
if PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.
QUESTION 13
Examine Exhibit 1 to view the structure of, and indexes for, the EMPLOYEES and DEPARTMENTS
tables.
Which three statements are true regarding the execution plan?
Exhibit:
A. The view operator collects all rows from a query block before they can be processed by higher
operations in the plan.
B. The in-line query in the select list is processed as a view and then joined.
C. The optimizer pushes the equality predicate into the view to satisfy the join condition.
D. The optimizer chooses sort-merge join because sorting is required for the join equality
predicate.
E. The optimizer chooses sort-merge join as a join method because an equality predicate is used
for joining the tables.
Correct Answer: BCE
Section: (none)
Explanation
Explanation/Reference:
QUESTION 14
In your database, the CURSOR_SHARING parameter is set to EXACT. In the EMPLOYEES table, the
data is significantly skewed in the DEPTNO column. The value 10 is found in 97% of rows.
Examine the following command and output.
Which three statements are correct?
A. The DEPTNO column will become bind aware once histogram statistics are collected.
B. The value for the bind variable will be considered by the optimizer to determine the execution plan.
C. The same execution plan will always be used irrespective of the bind variable value.
D. The instance collects statistics and, based on the pattern of executions, creates a histogram on
the column containing the bind value.
E. Bind peeking will take place only for the first execution of the statement and subsequent
execution will use the same plan.
Correct Answer: ABD
Section: (none)
Explanation
Explanation/Reference:
* We see here that the cursor is marked as bind sensitive (IS_BIND_SENSITIVE is Y).
*In 11g, the optimizer has been enhanced to allow multiple execution plans to be used for a single
statement that uses bind variables. This ensures that the best execution plan will be used
depending on the bind value.
* (B, not C): A cursor is marked bind sensitive if the optimizer believes the optimal plan may
depend on the value of the bind variable. When a cursor is marked bind sensitive, Oracle monitors
the behavior of the cursor using different bind values, to determine if a different plan for different
bind values is called for.
Note: Setting CURSOR_SHARING to EXACT allows SQL statements to share the SQL area only
when their texts match exactly. This is the default behavior. Using this setting, similar statements
cannot be shared; only textually exact statements can be shared.
Reference: Why are there more cursors in 11g for my query containing bind variables?
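To observe the bind-sensitive and bind-aware flags mentioned above, V$SQL can be queried; the
identifying comment in the filter is hypothetical:
SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* DEPTNO_TEST */%';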
QUESTION 15
You created a SQL Tuning Set (STS) containing resource-intensive SQL statements. You plan to
run the SQL Tuning Advisor.
Which two types of recommendations can be provided by the SQL Tuning Advisor?
A. Semantic restructuring for each SQL statement
B. Gathering missing or stale statistics at the schema level for the entire workload
C. Creating a materialized view to benefit from query rewrite for the entire workload
D. Gathering missing or stale statistics for objects used by the statements
E. Creating a partitioned table to benefit from partition pruning for each statement
Correct Answer: AD
Section: (none)
Explanation
Explanation/Reference:
The output of the SQL Tuning Advisor is in the form of advice or recommendations, along with a
rationale for each recommendation and its expected benefit. The recommendation relates to
collection of statistics on objects (D), creation of new indexes, restructuring of the SQL
statement (A), or creation of a SQL profile. You can choose to accept the recommendation to
complete the tuning of the SQL statements.
Note:
*A SQL Tuning Set can be used as input to the SQL Tuning Advisor, which performs automatic
tuning of the SQL statements based on other input parameters specified by the user.
*A SQL Tuning Set (STS) is a database object that includes one or more SQL statements along
with their execution statistics and execution context, and could include a user priority ranking. The
SQL statements can be loaded into a SQL Tuning Set from different SQL sources, such as the
Automatic Workload Repository, the cursor cache, or custom SQL provided by the user.
Reference: Oracle Database Performance Tuning Guide 11g, SQL Tuning Advisor
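A minimal sketch of running the SQL Tuning Advisor against an STS with DBMS_SQLTUNE; the STS
and task names are assumptions:
DECLARE
  l_task VARCHAR2(128);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sqlset_name => 'RESOURCE_INTENSIVE_STS',   -- hypothetical STS name
              task_name   => 'STS_TUNING_TASK');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('STS_TUNING_TASK') FROM dual;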
QUESTION 16
When would bind peeking be done for queries that vary only in values used in the WHERE
clause?
A. When the column used in the WHERE clause has evenly distributed data and histogram exists
on that column.
B. When the column used in the WHERE clause has evenly distributed data and index exists on
that column.
C. When the column used in the WHERE clause has non uniform distribution of data, uses a bind
variable, and no histogram exists for the column.
D. When the column used in the WHERE clause has non uniform distribution of data and
histogram exists for the column.
Correct Answer: B
Section: (none)
Explanation
Explanation/Reference:
QUESTION 17
Which type of SQL statement would be selected for tuning by the automatic SQL framework?
A. Serial queries that are among the costliest in any or all of the four categories: the past week,
any day in the past week, any hour in the past week, or single response, and have the potential for
improvement
B. Serial queries that have been tuned within the last 30 days and have been SQL profiled by the
SQL Tuning Advisor.
C. Serial and parallel queries that top the AWR Top SQL in the past week only and have been
SQL profiled by the SQL Tuning Advisor.
D. Serial queries that top the AWR Top SQL in the past week only and whose poor performance
can be traced to concurrency issues.
E. Serial and parallel queries that are among the costliest in any or all of the four categories: the
past week, and day in the past week, any hour in the past week, or a single response, and that
can benefit from access method changes.
Correct Answer: E
Section: (none)
Explanation
Explanation/Reference:
The Automatic Tuning Optimizer is meant to be used for complex and high-load
SQL statements that have non-trivial impact on the entire system. The Automatic Database
Diagnostic Monitor (ADDM) proactively identifies high-load SQL statements which are good
candidates for SQL tuning.
Note:
*When SQL statements are executed by the Oracle database, the query optimizer is used to
generate the execution plans of the SQL statements. The query optimizer operates in two modes:
a normal mode and a tuning mode.
In normal mode, the optimizer compiles the SQL and generates an execution plan. The normal
mode of the optimizer generates a reasonable execution plan for the vast majority of SQL
statements. Under normal mode, the optimizer operates with very strict time constraints, usually a
fraction of a second, during which it must find a good execution plan.
In tuning mode, the optimizer performs additional analysis to check whether the execution plan
produced under normal mode can be improved further. The output of the query optimizer is not an
execution plan, but a series of actions, along with their rationale and expected benefit for
producing a significantly superior plan. When running in the tuning mode, the optimizer is referred
to as the Automatic Tuning Optimizer.
Reference: Oracle Database Performance Tuning Guide, Automatic SQL Tuning
QUESTION 18
Your instance has these parameter settings:
Which three statements are true about these settings if no hints are used in a SQL statement?
A. A statement estimated for more than 10 seconds always has its degree of parallelism computed
automatically.
B. A statement with a computed degree of parallelism greater than 8 will be queued for a
maximum of 10 seconds.
C. A statement that executes for more than 10 seconds always has its degree of parallelism
computed automatically.
D. A statement with a computed degree of parallelism greater than 8 will raise an error.
E. A statement with any computed degree of parallelism will be queued if the number of busy
parallel execution processes exceeds 64.
F. A statement with a computed degree of parallelism of 20 will be queued if the number of
available parallel execution processes is less than 5.
Correct Answer: CEF
Section: (none)
Explanation
Explanation/Reference:
C (not A): PARALLEL_MIN_TIME_THRESHOLD specifies the minimum execution
time a statement should have before the statement is considered for automatic degree of
parallelism. By default, this is set to 30 seconds. Automatic degree of parallelism is only enabled if
PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.
PARALLEL_DEGREE_LIMIT integer
A numeric value for this parameter specifies the maximum degree of parallelism the optimizer can
choose for a SQL statement when automatic degree of parallelism is active. Automatic degree of
parallelism is only enabled if PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.
E:PARALLEL_SERVERS_TARGET specifies the number of parallel server processes allowed to
run parallel statements before statement queuing will be used. When the
parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle will queue SQL statements that
require parallel execution, if the necessary parallel server processes are not available. Statement
queuing will begin once the number of parallel server processes active on the system is equal to
or greater than PARALLEL_SERVERS_TARGET.
F: PARALLEL_MIN_PERCENT
PARALLEL_MIN_PERCENT operates in conjunction
with PARALLEL_MAX_SERVERS and PARALLEL_MIN_SERVERS. It lets you specify the
minimum percentage of parallel execution processes (of the value
of PARALLEL_MAX_SERVERS) required for parallel execution. Setting this parameter ensures
that parallel operations will not execute sequentially unless adequate resources are available. The
default value of 0 means that no minimum percentage of processes has been set.
Consider the following settings:
PARALLEL_MIN_PERCENT = 50
PARALLEL_MIN_SERVERS = 5
PARALLEL_MAX_SERVERS = 10
If 8 of the 10 parallel execution processes are busy, only 2 processes are available. If you then
request a query with a degree of parallelism of 8, the minimum 50% will not be met.
Note:With automatic degree of parallelism, Oracle automatically decides whether or not a
statement should execute in parallel and what degree of parallelism the statement should use. The
optimizer automatically determines the degree of parallelism for a statement based on the
resource requirements of the statement. However, the optimizer will limit the degree of parallelism
used to ensure parallel server processes do not flood the system. This limit is enforced by
PARALLEL_DEGREE_LIMIT.
Values:
CPU
IO
integer
A numeric value for this parameter specifies the maximum degree of parallelism the optimizer can
choose for a SQL statement when automatic degree of parallelism is active. Automatic degree of
parallelism is only enabled if PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.
Reference:
PARALLEL_MIN_TIME_THRESHOLD
PARALLEL_DEGREE_LIMIT
PARALLEL_MIN_PERCENT
PARALLEL_SERVERS_TARGET
QUESTION 19
Examine the following SQL statement:
Examine the exhibit to view the execution plan.
Which statement is true about the execution plan?
Exhibit:
A. The EXPLAIN PLAN generates the execution plan and stores it in V$SQL_PLAN after
executing the query. Subsequent executions will use the same plan.
B. The EXPLAIN PLAN generates the execution plan and stores it in PLAN_TABLE without
executing the query. Subsequent executions will always use the same plan.
C. The row with the ID 3 is the first step executed in the execution plan.
D. The row with the ID 0 is the first step executed in the execution plan.
E. The rows with the ID 3 and 4 are executed simultaneously.
Correct Answer: E
Section: (none)
Explanation
Explanation/Reference:
Note the other_tag parallel in the execution plan.
Note:
Within the Oracle plan_table, we see that Oracle keeps the parallelism in a column called
other_tag. The other_tag column will tell you the type of parallel operation that is being performed
within your query.
For parallel queries, it is important to display the contents of the other_tag in the execution.
QUESTION 20
Which two types of SQL statements will benefit from dynamic sampling?
A. SQL statements that are executed in parallel
B. SQL statements that use a complex predicate expression when extended statistics are not
available
C. SQL statements that are resource-intensive and have current statistics
D. SQL statements with highly selective filters on a column that has missing index statistics
E. Short-running SQL statements
Correct Answer: BD
Section: (none)
Explanation
Explanation/Reference:
B: One scenario where DS is used is when the statement contains a complex predicate expression
and extended statistics are not available. Extended statistics were introduced in Oracle Database
11g Release 1 with the goal to help the optimizer get good quality cardinality estimates for
complex predicate expressions.
D: DS is typically used to compensate for missing or insufficient statistics that would otherwise
lead to a very bad plan.
Note:
*Dynamic sampling (DS) was introduced in Oracle Database 9i Release 2 to improve the
optimizer's ability to generate good execution plans.
*During the compilation of a SQL statement, the optimizer decides whether to use DS or not by
considering whether the available statistics are sufficient to generate a good execution plan. If the
available statistics are not enough, dynamic sampling will be used. It is typically used to
compensate for missing or insufficient statistics that would otherwise lead to a very bad plan.
For the case where one or more of the tables in the query does not have statistics,
DS is used by the optimizer to gather basic statistics on these tables before optimizing the
statement. The statistics gathered in this case are not as high a quality or as complete as
the statistics gathered using the DBMS_STATS package. This trade-off is made to limit the impact
on the compile time of the statement.
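For illustration, dynamic sampling can also be requested explicitly with a hint; the table and
predicate below are hypothetical:
SELECT /*+ DYNAMIC_SAMPLING(o 4) */ COUNT(*)
FROM   orders o                       -- hypothetical table with missing or stale statistics
WHERE  order_mode = 'online'
AND    order_total > 1000;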
QUESTION 21
You are administering a database supporting an OLTP workload. A new module was recently added
to one of the applications, and you notice that its SQL statements are highly resource intensive in
terms of CPU, I/O, and temporary space. You created a SQL Tuning Set (STS)
containing all resource-intensive SQL statements. You want to analyze the entire workload
captured in the STS. You plan to run the STS through the SQL Tuning Advisor.
Which two recommendations can you get?
A. Combining similar indexes into a single index
B. Implementing SQL profiles for the statements
C. Syntactic and semantic restructuring of SQL statements
D. Dropping unused or invalid indexes
E. Creating invisible indexes for the workload
F. Creating composite indexes for the workload
Correct Answer: CF
Section: (none)
Explanation
Explanation/Reference:
The output of the SQL Tuning Advisor is in the form of advice or recommendations, along with a
rationale for each recommendation and its expected benefit. The recommendation relates to
collection of statistics on objects, creation of new indexes (F), restructuring of the SQL
statement (C), or creation of a SQL profile. You can choose to accept the recommendation to
complete the tuning of the SQL statements.
Reference: Oracle Database Performance Tuning Guide 11g, SQL Tuning Advisor
QUESTION 22
A new application module is deployed on the middle tier and is connecting to your database. You want
to monitor the performance of the SQL statements generated from the application.
To accomplish this, identify the required steps in the correct order from the steps given below:
1. Use DBMS_APPLICATION_INFO to set the name of the module.
2. Use DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE to enable statistics gathering for the
module.
3. Use DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE to enable tracing for the service
4. Use the trcsess utility to consolidate the trace files generated.
5. Use the tkprof utility to convert the trace files into formatted output.
A. 1, 2, 3, 4, 5
B. 2, 3, 1, 4, 5
C. 3, 1, 2, 4, 5
D. 1, 2, 4, 5
E. 1, 3, 4, 5
F. 2, 1, 4, 5
Correct Answer: A
Section: (none)
Explanation
Explanation/Reference:
Note:
*Before tracing can be enabled, the environment must first be configured to enable gathering of
statistics.
* (gather statistics):DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE
Enables statistic gathering for a given combination of Service Name, MODULE and ACTION
*DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE
Enables SQL tracing for a given combination of Service Name, MODULE and ACTION globally
unless an instance_name is specified.
dbms_monitor.serv_mod_act_trace_enable(
service_name IN VARCHAR2,
module_name IN VARCHAR2 DEFAULT ANY_MODULE,
action_name IN VARCHAR2 DEFAULT ANY_ACTION,
waits IN BOOLEAN DEFAULT TRUE,
binds IN BOOLEAN DEFAULT FALSE,
instance_name IN VARCHAR2 DEFAULT NULL,
plan_stat IN VARCHAR2 DEFAULT NULL);
SELECT instance_name
FROM gv$instance;
exec dbms_monitor.serv_mod_act_trace_enable('TESTSERV', dbms_monitor.all_modules,
dbms_monitor.all_actions, TRUE, TRUE, 'orabase');
exec dbms_monitor.serv_mod_act_trace_disable('TESTSERV', dbms_monitor.all_modules,
dbms_monitor.all_actions, 'orabase');
*When solving tuning problems, session traces are very useful and offer vital information. Traces
are simple and straightforward for dedicated server sessions, but for shared server sessions,
many processes are involved. The trace pertaining to the user session is scattered across different
trace files belonging to different processes. This makes it difficult to get a complete picture of the
life cycle of a session.
Now there is a new tool, a command line utility called trcsess to help read the trace files. The
trcsess command-line utility consolidates trace information from selected trace files, based on
specified criteria. The criteria include session id, client id, service name, action name and module
name.
*Once the trace files have been consolidated (with trcsess), tkprof can be run against the
consolidated trace file for reporting purposes.
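A condensed sketch of steps 1 to 5; the service, module, and file names are assumptions:
-- Step 1 (from the application session): label the module
EXEC DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'NEW_MODULE', action_name => 'BATCH');
-- Steps 2 and 3 (as a privileged user): enable statistics and tracing for the module
EXEC DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(service_name => 'ORCL', module_name => 'NEW_MODULE');
EXEC DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(service_name => 'ORCL', module_name => 'NEW_MODULE');
-- Steps 4 and 5 (operating system prompt, shown here as comments):
--   trcsess output=new_module.trc service=ORCL module=NEW_MODULE *.trc
--   tkprof new_module.trc new_module.prf sys=no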
QUESTION 23
One of your databases supports a mixed workload.
When monitoring SQL performance, you detect many direct path reads and full table scans.
What are the two possible causes?
A. Histogram statistics not available
B. Highly selective filter on indexed columns
C. Too many sort operations performed by queries
D. Indexes not built on filter columns
E. Too many similar types of queries getting executed with cursor sharing disabled
Correct Answer: BD
Section: (none)
Explanation
Explanation/Reference:
Note:
*The direct path read Oracle metric occurs during Direct Path operations when the data is
asynchronously read from the database files into the PGA instead of into the SGA data buffer.
Direct reads occur under these conditions:
- When reading from the TEMP tablespace (a sort operation)
- When reading a parallel full-table scan (parallel query factotum (slave) processes)
- Reading a LOB segment
*The optimizer uses a full table scan in any of the following cases:
- Lack of Index
- Large Amount of Data
- Small Table
- High Degree of Parallelism
QUESTION 24
Examine the Exhibit.
Which two options are true about the execution plan and the set of statements?
Exhibit:
A. The query uses a partial partition-wise join.
B. The degree of parallelism is limited to the number of partitions in the EMP_RANGE_DID table.
C. The DEPT table is dynamically distributed based on the partition keys of the
EMP_RANGE_DID table.
D. The server process serially scans the entire DEPT table for each range partition of the
EMP_RANGE_DID table.
E. The query uses a full partition-wise join.
Correct Answer: CE
Section: (none)
Explanation
Explanation/Reference:
Note the "px partition range all" in the execution plan.
Note:
*PX PARTITION RANGE (ALL)
Description
Parallel execution - iterate over all range partitioned table
*Full partition-wise joins can occur if two tables that are co-partitioned on the same key are joined
in a query. The tables can be co-partitioned at the partition level, or at the subpartition level, or at a
combination of partition and subpartition levels. Reference partitioning is an easy way to
guarantee co-partitioning. Full partition-wise joins can be executed serially and in parallel.
*Oracle Database can perform partial partition-wise joins only in parallel. Unlike full partition-wise
joins, partial partition-wise joins require you to partition only one table on the join key, not both
tables. The partitioned table is referred to as the reference table. The other table may or may not
be partitioned. Partial partition-wise joins are more common than full partition-wise joins.
To execute a partial partition-wise join, the database dynamically partitions or repartitions the other
table based on the partitioning of the reference table. After the other table is repartitioned, the
execution is similar to a full partition-wise join.
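A minimal sketch of a full partition-wise join: two hypothetical tables hash-partitioned on the same
join key:
CREATE TABLE dept_h (deptno NUMBER PRIMARY KEY, dname VARCHAR2(30))
  PARTITION BY HASH (deptno) PARTITIONS 8;
CREATE TABLE emp_h (empno NUMBER, deptno NUMBER)
  PARTITION BY HASH (deptno) PARTITIONS 8;
-- each pair of matching partitions can be joined independently, serially or in parallel
SELECT /*+ PARALLEL(e, 8) PARALLEL(d, 8) */ d.dname, COUNT(*)
FROM   emp_h e JOIN dept_h d ON e.deptno = d.deptno
GROUP  BY d.dname;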
QUESTION 25
What are three common reasons for SQL statements to perform poorly?
A. Full table scans for queries with highly selective filters
B. Stale or missing optimizer statistics
C. Histograms not existing on columns with evenly distributed data
D. High index clustering factor
E. OPTIMIZER_MODE parameter set to ALL_ROWS for DSS workload
Correct Answer: ABD
Section: (none)
Explanation
Explanation/Reference:
D:The clustering_factor measures how synchronized an index is with the data in a table. A table
with a high clustering factor is out-of-sequence with the rows and large index range scans will
consume lots of I/O. Conversely, an index with a low clustering_factor is closely aligned with the
table and related rows reside together on each data block, making indexes very desirable for
optimal access.
Note:
* (Not C) Histograms are a CBO feature that helps the optimizer determine how data is
skewed (distributed) within a column. A histogram is worth creating for a column that is
included in the WHERE clause and is highly skewed. Histograms help the optimizer
decide whether to use an index or a full-table scan, or help the optimizer determine the fastest
table join order.
*OPTIMIZER_MODE establishes the default behavior for choosing an optimization approach for
the instance.
all_rows
The optimizer uses a cost-based approach for all SQL statements in the session and optimizes
with a goal of best throughput (minimum resource use to complete the entire statement).
QUESTION 26
Examine the initialization parameters for an instance:
You notice that despite having an index on the column used in the where clause, queries use full
table scans with highly selective filters.
What are two possible reasons for the optimizer to use full table scans instead of index unique
scans and index range scans?
A. The OPTIMIZER_MODE parameter is set to ALL_ROWS.
B. The clustering factor for the indexes is high.
C. The number of leaf blocks for the indexes is high.
D. The OPTIMIZER_INDEX_COST_ADJ initialization parameter is set to 100.
E. The blocks fetched by the query are greater than the value specified by the
DB_FILE_MULTIBLOCK_READ_COUNT parameter.
Correct Answer: DE
Section: (none)
Explanation
Explanation/Reference:
D:OPTIMIZER_INDEX_COST_ADJ lets you tune optimizer behavior for access path
selection to be more or less index friendly—that is, to make the optimizer more or less prone to
selecting an index access path over a full table scan.
The default for this parameter is 100 percent, at which the optimizer evaluates index access paths
at the regular cost. Any other value makes the optimizer evaluate the access path at that
percentage of the regular cost. For example, a setting of 50 makes the index access path look half
as expensive as normal.
E:DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to minimize I/O
during table scans. It specifies the maximum number of blocks read in one I/O operation during a
sequential scan. The total number of I/Os needed to perform a full table scan depends on such
factors as the size of the table, the multiblock read count, and whether parallel execution is being
utilized for the operation.
As of Oracle Database 10g release 2, the default value of this parameter is a value that
corresponds to the maximum I/O size that can be performed efficiently. This value is
platform-dependent and is 1 MB for most platforms. Because the parameter is expressed in blocks, it will be
set to a value that is equal to the maximum I/O size that can be performed efficiently divided by
the standard block size. Note that if the number of sessions is extremely large the multiblock read
count value is decreased to avoid the buffer cache getting flooded with too many table scan
buffers.
Even though the default value may be a large value, the optimizer will not favor large plans if you
do not set this parameter. It would do so only if you explicitly set this parameter to a large value.
Online transaction processing (OLTP) and batch environments typically have values in the range
of 4 to 16 for this parameter. DSS and data warehouse environments tend to benefit most from
maximizing the value of this parameter. The optimizer is more likely to choose a full table scan
over an index if the value of this parameter is high.
Note:
*OPTIMIZER_MODE establishes the default behavior for choosing an optimization approach for
the instance.
Values:
first_rows_n
The optimizer uses a cost-based approach and optimizes with a goal of best response time to
return the first n rows (where n = 1, 10, 100, 1000).
first_rows
The optimizer uses a mix of costs and heuristics to find a best plan for fast delivery of the first few
rows.
all_rows
The optimizer uses a cost-based approach for all SQL statements in the session and optimizes
with a goal of best throughput (minimum resource use to complete the entire statement).
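For illustration, both parameters discussed in the answer can be adjusted at session level; the
values shown are assumptions, not recommendations:
-- make index access paths look relatively cheaper than the 100% default
ALTER SESSION SET optimizer_index_cost_adj = 50;
-- larger multiblock reads make full table scans look cheaper to the optimizer
ALTER SESSION SET db_file_multiblock_read_count = 16;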
QUESTION 27
Tracing has been enabled for the HR user. You execute the following command to check the
contents of the orcl_25052.trc trace file, which was generated during tracing:
Which two statements are correct about the execution of the command?
A. SCRIPT.SQL stores the statistics for all traced SQL statements.
B. Execution plans for SQL statements are stored in TEMP_PLAN_TABLE and can be queried by
the user.
C. SQL statements in the output files are stored in the order of elapsed time.
D. TKPROF uses TEMP_PLAN_TABLE in the HR schema as a temporary plan table.
E. Recursive SQL statements are included in the output file.
Correct Answer: CD
Section: (none)
Explanation
Explanation/Reference:
QUESTION 28
You are administering a database that supports an OLTP application. To set statistics
preferences, you issued the following command:
SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('ESTIMATE_PERCENT', '9');
What will be the effect of executing this procedure?
A. It will influence the gathering of statistics for a table based on the value specified for
ESTIMATE_PERCENT, provided no table preferences for the same table exist.
B. It will influence dynamic sampling for a query to estimate the statistics based on
ESTIMATE_PERCENT.
C. The automatic statistics gathering job running in the maintenance window will use global
preferences even if table preferences for the same table exist.
D. New objects created will use global preference even if table preferences are specified.
Correct Answer: C
Section: (none)
Explanation
Explanation/Reference:
Note:
*With the DBMS_STATS package you can view and modify optimizer statistics gathered for
database objects.
* The SET_GLOBAL_PREFS procedure is used to set the global statistics preferences.
*ESTIMATE_PERCENT - The value determines the percentage of rows to estimate. The valid
range is [0.000001,100]. Use the constant DBMS_STATS.AUTO_SAMPLE_SIZE to have Oracle
determine the appropriate sample size for good statistics. This is the default.
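For reference, the current global preference can be checked, and restored to the automatic sample
size, as follows (a sketch):
SELECT DBMS_STATS.GET_PREFS('ESTIMATE_PERCENT') FROM dual;
EXEC DBMS_STATS.SET_GLOBAL_PREFS('ESTIMATE_PERCENT', 'DBMS_STATS.AUTO_SAMPLE_SIZE');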
QUESTION 29
Examine the parallelism parameter for your instance:
Now examine the Consumer Group resource plan containing parallel statement directives:
Which two are true about parallel statement queuing when this plan is active?
A. Urgent_group sessions collectively can consume up to 64 parallel execution servers before
queuing starts for this consumer group.
B. ETL_GROUP sessions can collectively consume up to 64 parallel execution servers before
queuing starts for this consumer group.
C. A single OTHER_GROUPS session will execute serially once it is queued for six minutes.
D. A single ETL_GROUP session can consume up to eight parallel execution servers.
E. A single ETL_GROUP session can consume up to 32 parallel execution servers.
F. A single OTHER_GROUPS session will execute in parallel once it is queued for six minutes.
Correct Answer: CD
Section: (none)
Explanation
Explanation/Reference:
*PARALLEL_SERVERS_TARGET specifies the number of parallel server
processes allowed to run parallel statements before statement queuing will be used. When the
parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle will queue SQL statements that
require parallel execution, if the necessary parallel server processes are not available. Statement
queuing will begin once the number of parallel server processes active on the system is equal to
or greater than PARALLEL_SERVERS_TARGET.
By default, PARALLEL_SERVERS_TARGET is set lower than the maximum number of parallel
server processes allowed on the system (PARALLEL_MAX_SERVERS) to ensure each parallel
statement will get all of the parallel server resources required and to prevent overloading the
system with parallel server processes.
* PARALLEL_QUEUE_TIMEOUT
* PARALLEL_DEGREE_LIMIT
With automatic degree of parallelism, Oracle automatically decides whether or not a statement
should execute in parallel and what degree of parallelism the statement should use. The optimizer
automatically determines the degree of parallelism for a statement based on the resource
requirements of the statement. However, the optimizer will limit the degree of parallelism used to
ensure parallel server processes do not flood the system. This limit is enforced by
PARALLEL_DEGREE_LIMIT.
Values:
The maximum degree of parallelism is limited by the number of CPUs in the system. The formula
used to calculate the limit isPARALLEL_THREADS_PER_CPU * CPU_COUNT * the number of
instances available (by default, all the opened instances on the cluster but can be constrained
using PARALLEL_INSTANCE_GROUP or service specification). This is the default.
The maximum degree of parallelism the optimizer can use is limited by the I/O capacity of the
system. The value is calculated by dividing the total system throughput by the maximum I/O
bandwidth per process. You must run
the DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure on the system in order to use
the IO setting. This procedure will calculate the total system throughput and the maximum I/O
bandwidth per process.
integer
A numeric value for this parameter specifies the maximum degree of parallelism the optimizer can
choose for a SQL statement when automatic degree of parallelism is active. Automatic degree of
parallelism is only enabled if PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.
QUESTION 30
View Exhibit 1 and examine the structure and indexes for the MYSALES table.
The application uses the MYSALES table to insert sales records, but this table is also extensively
used for generating sales reports. The PROD_ID and CUST_ID columns are frequently used in
the WHERE clause of the queries. These columns have few distinct values relative to the total
number of rows in the table.
View Exhibit 2 and examine one of the queries and its AUTOTRACE output.
What should you do to improve the performance of the query?
30_1 (exhibit):
30_2 (exhibit):
A. Use the INDEX_COMBINE hint in the query.
B. Create a composite index involving the CUST_ID and PROD_ID columns.
C. Gather histogram statistics for the CUST_ID and PROD_ID columns.
D. Gather index statistics for the MYSALES_PRODID_IDX and MYSALES_CUSTID_IDX indexes.
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
Note:
*Statistics quantify the data distribution and storage characteristics of tables, columns, indexes, and partitions.
*INDEX_COMBINE
Forces a bitmap index access path on tab.
Primarily this hint just tells Oracle to use the bitmap indexes on table tab. Otherwise Oracle will
choose the best combination of indexes it can think of based on the statistics. If it is ignoring a
bitmap index that you think would be helpful, you may specify that index plus all of the others that
you want to be used. Note that this does not force the use of those indexes; Oracle will still make
cost-based choices.
*Histograms Opportunities
Any column used in a where clause with skewed data
Histograms are NOT just for indexed columns.
– Adding a histogram to an un-indexed column that is used in a where clause can improve performance.
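A minimal sketch of the recommended action (answer D); the schema name is an assumption:
EXEC DBMS_STATS.GATHER_INDEX_STATS('SH', 'MYSALES_PRODID_IDX');   -- hypothetical owner
EXEC DBMS_STATS.GATHER_INDEX_STATS('SH', 'MYSALES_CUSTID_IDX');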
QUESTION 31
You are administering a database that supports an OLTP workload. Most of the queries use an
index range scan or index unique scan as access methods.
Which three scenarios can prevent the index access being used by the queries?
A. When a highly selective filter is applied on an indexed column of a table with sparsely populated blocks.
B. When the rows are filtered with an IS NULL operator on a column with a unique key defined.
C. When histogram statistics are not collected for the columns used in the WHERE clause.
D. When a highly selective filter is applied on the indexed column and the index has a very low value for
the clustering factor.
E. When the statistics for the table are not current.
Correct Answer: BDE
Section: (none)
Explanation
Explanation/Reference:
D: The clustering_factor measures how synchronized an index is with the data in a
table. A table with a high clustering factor is out-of-sequence with the rows, and large index range
scans will consume lots of I/O. Conversely, an index with a low clustering_factor is closely aligned
with the table and related rows reside together in each data block, making the index very desirable
for optimal access.
Note:
*Oracle SQL not using an index is a common complaint, and it’s often because the optimizer
thinks that a full-scan is cheaper than index access. Oracle not using an index can be due to:
*(E)Bad/incomplete statistics – Make sure to re-analyze the table and index with dbms_stats to
ensure that the optimizer has good metadata.
*Wrong optimizer_mode – The first_rows optimizer mode is to minimize response time, and it is
more likely to use an index than the default all_rows mode.
*Bugs – See these important notes on optimizer changes in 10g that cause Oracle not to use an
index.
*Cost adjustment – In some cases, the optimizer will still not use an index, and you must decrease
optimizer_index_cost_adj.
QUESTION 32
You are logged in as the HR user and you execute the following procedure:
SQL> EXEC DBMS_STATS.SET_TABLE_PREFS('HR', 'EMPLOYEES', 'PUBLISH', 'FALSE');
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('HR', 'EMPLOYEES');
Which statement is true about the newly gathered statistics?
A. They are temporary and purged when the session exits.
B. They are used by the optimizer for all sessions.
C. They are locked and cannot be overwritten.
D. They are marked as pending and stored in the pending statistics table.
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
In previous database versions, new optimizer statistics were automatically published
when they were gathered. In 11g this is still the default action, but you now have the option of
keeping the newly gathered statistics in a pending state until you choose to publish them.
The DBMS_STATS.GET_PREFS function allows you to check the 'PUBLISH' attribute to see if
statistics are automatically published. The default value of TRUE means they are automatically
published, while FALSE indicates they are held in a pending state.
Reference:Statistics Collection Enhancements in Oracle Database 11g Release 1,Pending
Statistics
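Note: for illustration only, a minimal sketch of the full pending-statistics workflow described above, using the HR.EMPLOYEES table from the question; the verification step is an assumption about how you might test before publishing.
EXEC DBMS_STATS.SET_TABLE_PREFS('HR', 'EMPLOYEES', 'PUBLISH', 'FALSE');
EXEC DBMS_STATS.GATHER_TABLE_STATS('HR', 'EMPLOYEES');
-- Confirm the preference and inspect the pending statistics:
SELECT DBMS_STATS.GET_PREFS('PUBLISH', 'HR', 'EMPLOYEES') FROM dual;
SELECT table_name, num_rows FROM dba_tab_pending_stats WHERE owner = 'HR';
-- Optionally test them in the current session, then publish:
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
EXEC DBMS_STATS.PUBLISH_PENDING_STATS('HR', 'EMPLOYEES');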
QUESTION 33
You enable auto degree of parallelism (DOP) for your database instance.
Examine the following query:
Which two are true about the execution of the statement?
A. Dictionary DOP for the objects accessed by the query is used to determine the statement DOP.
B. Auto DOP is used to determine the statement DOP only if the estimated serial execution time
exceeds PARALLEL_MIN_TIME_THRESHOLD.
C. Dictionary DOP is used to determine the statement DOP only if the estimated serial execution
time exceeds PARALLEL_MIN_TIME_THRESHOLD.
D. The statement will be queued if insufficient parallel execution slaves are available to satisfy the
statements DOP.
E. The statement will be queued if the number of busy parallel execution servers exceeds
PARALLEL_SERVERS_TARGET.
F. The statements may execute serially.
Correct Answer: BF
Section: (none)
Explanation
Explanation/Reference:
Not B, not D:MANUAL - This is the default. Disables Auto DOP, statement queuing
and in-memory
parallel execution. It reverts the behavior of parallel execution to what it was previous to Oracle
Database 11g, Release 2 (11.2).
C:PARALLEL_MIN_TIME_THRESHOLD specifies the minimum execution time a statement
should have before the statement is considered for automatic degree of parallelism. By default,
this is set to 30 seconds. Automatic degree of parallelism is only enabled if
PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.
Note:
*PARALLEL (MANUAL)
You can use the PARALLEL hint to force parallelism. It takes an optional parameter: the DOP at
which the statement should run.
The following example forces the statement to use Oracle Database 11g Release 1 (11.1)
behavior:
SELECT /*+ parallel(manual) */ ename, dname FROM emp e, dept d
WHERE e.deptno=d.deptno;
*PARALLEL_SERVERS_TARGET specifies the number of parallel server processes allowed to
run parallel statements before statement queuing will be used. When the parameter
PARALLEL_DEGREE_POLICY is set to AUTO, Oracle will queue SQL statements that require
parallel execution, if the necessary parallel server processes are not available. Statement queuing
will begin once the number of parallel server processes active on the system is equal to or greater
than PARALLEL_SERVERS_TARGET.
By default, PARALLEL_SERVERS_TARGET is set lower than the maximum number of parallel
server processes allowed on the system (PARALLEL_MAX_SERVERS) to ensure each parallel
statement will get all of the parallel server resources required and to prevent overloading the
system with parallel server processes.
Note that all serial (non-parallel) statements will execute immediately even if statement queuing
has been activated.
QUESTION 34
Examine the exhibit.
Which is true based on the information obtainable from the execution plan?
Exhibit:
A. A full partition-wise join is performed between the EMPLOYEES and DEPARTMENTS tables.
B. A full table scan on the DEPARTMENTS table performed serially by the query coordinator.
C. A full table scan on the DEPARTMENTS table is performed serially by a single parallel execution server
process.
D. A partial partition-wise join is performed between the EMPLOYEES and DEPARTMENTS tables.
E. A full table scan on the EMPLOYEES table is done in parallel.
Correct Answer: E
Section: (none)
Explanation
Explanation/Reference:
PX BLOCK ITERATOR: This operation is typically the first step in a parallel pipeline.
The BLOCK ITERATOR breaks up the table into chunks that are processed by each of the parallel
servers involved.
Incorrect:
B, C: The scan on the DEPARTMENTS table is done in parallel.
Note:
*As per exhibit: Line 7 is run first, followed by line 6.
*
Example with same structure of execution plan:
Here's how to read the plan:
1. The first thing done is at line 9 – an index fast full scan on the SYS.OBJ$.I_OBJ1 index. This is done in parallel, as indicated by the "PX SEND" line above it.
2. In line 8, we're doing a "PX SEND BROADCAST" operation. When joining tables in parallel, Oracle can choose to either broadcast results (rows) from one operation to apply to the other table scan, or it can choose PX SEND HASH. In this case, the CBO determined that a BROADCAST was appropriate because the results from the OBJ$ table were much smaller than the MYOBJ table.
3. Line 7, the PX RECEIVE step, is basically the consumer of the rows broadcast in step 8.
4. Line 6 is an in-memory BUFFER SORT of the rows returned from the index scan on OBJ$.
5. Lines 11 and 10, respectively, indicate the full scan and the PX BLOCK ITERATOR operation for the granules involved in the 8 PQ servers.
6. In line 5, Oracle is doing a hash join on the resulting rows from the parallel scans on MYOBJ and OBJ$.
7. Line 4 is a per-PQ-server sort of data from the joined PQ servers.
8. Line 3 is the consumer QC that holds the result of each of the PQ servers.
9. Line 2 is the PX Coordinator (QC) collecting, or consuming, the rows of the joined data.
10. Line 1 is the final SORT AGGREGATE line that performs the grouping function.
QUESTION 35
Examine the exhibit to view the query and its execution plan.
Which two statements are true?
Exhibit:
A. The HASH GROUP BY operation is the consumer of the HASH operation.
B. The HASH operation is the consumer of the HASH GROUP BY operation.
C. The HASH GROUP BY operation is the consumer of the TABLE ACCESS FULL operation for the
CUSTOMER table.
D. The HASH GROUP BY operation is consumer of the TABLE ACCESS FULL operation for the SALES table.
E. The SALES table scan is a producer for the HASH JOIN operation.
Correct Answer: AE
Section: (none)
Explanation
Explanation/Reference:
A, not C, not D: Line 3, HASH GROUP BY, consumes line 6 (HASH JOIN BUFFERED).
E: Line 14, TABLE ACCESS FULL (Sales), is one of the two producers for line 6 (HASH JOIN).
QUESTION 36
A database supports three applications: CRM, ERP, and ACC. These applications connect to the
database by using three different services: CRM_SRV for the CRM application, ERP_SRV for the
ERP application, and ACC_SRV for the ACC application.
You enable tracing for the ACC_SRV service by issuing the following command:
SQL> EXECUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE (service_name => 'ACC_SRV',
waits => TRUE, binds => FALSE, instance_name => 'inst1');
Which statement is true?
A. All trace information for the service connection to inst1 will be stored in a single trace file.
B. A trace file is not created because the module name is not specified.
C. A single trace file is created for each session that uses the ACC_SRV service.
D. Only those SQL statements that are identified with the ACC_SRV service executed on the inst1
instance are recorded in trace files.
E. All trace information for the ACC_SRV service connected to inst1 is stored in multiple trace
files, which can be consolidated by using the tkprof utility.
Correct Answer: C
Section: (none)
Explanation
Explanation/Reference:
SERV_MOD_ACT_TRACE_ENABLE
The serv_mod_act_trace_enable and serv_mod_act_trace_disable procedures enable and disable tracing
for a given service_name, module, and action.
For example, for a given service name you can trace all sessions started from SQL*Plus.
The module and action in your own application can be set using the dbms_application_info
set_module and set_action procedures.
serv_mod_act_trace_enable fills the sys table wri$_tracing_enabled and the view dba_enabled_traces
built on top of this table, as follows:
SQL> exec dbms_monitor.serv_mod_act_trace_enable(service_name=>'orcl',
module_name=>'SQL*Plus')
PL/SQL procedure successfully completed.
SQL> select * from sys.wri$_tracing_enabled;

TRACE_TYPE PRIMARY_ID QUALIFIER_ID1 QUALIFIER_ID2 INSTANCE_NAME FLAGS
---------- ---------- ------------- ------------- ------------- -----
         4 orcl       SQL*Plus                                      8

SQL> select * from dba_enabled_traces;

TRACE_TYPE     PRIMARY_ID QUALIFIER_ID1 QUALIFIER_ID2 WAITS BINDS INSTANCE_NAME
-------------- ---------- ------------- ------------- ----- ----- -------------
SERVICE_MODULE orcl       SQL*Plus                    TRUE  FALSE
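Note: for illustration only, a minimal sketch using the service from the question; the trace-file consolidation step with trcsess is an assumption about a typical follow-up, and the output file name is hypothetical.
EXEC DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(service_name => 'ACC_SRV', waits => TRUE, binds => FALSE, instance_name => 'inst1');
SELECT trace_type, primary_id, waits, binds, instance_name FROM dba_enabled_traces;
EXEC DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE(service_name => 'ACC_SRV', instance_name => 'inst1');
-- Consolidate the per-session trace files for the service (run at the OS prompt):
-- trcsess output=acc_srv.trc service=ACC_SRV *.trc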
QUESTION 37
Examine the following anonymous PL/SQL block of code:
Which two are true concerning the use of this code?
A. The user executing the anonymous PL/SQL code must have the CREATE JOB system
privilege.
B. ALTER SESSION ENABLE PARALLEL DML must be executed in the session prior to
executing the anonymous PL/SQL code.
C. All chunks are committed together once all tasks updating all chunks are finished.
D. The user executing the anonymous PL/SQL code requires execute privilege on the DBMS_JOB
package.
E. The user executing the anonymous PL/SQL code requires the execute privilege on the
DBMS_SCHEDULER package.
F. Each chunk will be committed independently as soon as the task updating that chunk is
finished.
Correct Answer: AE
Section: (none)
Explanation
Explanation/Reference:
A (not D): To use DBMS_PARALLEL_EXECUTE to run tasks in parallel, your schema will need the
CREATE JOB system privilege.
E (not C):DBMS_PARALLEL_EXECUTE now provides the ability to break up a large table
according to a variety of criteria, from ROWID ranges to key values and user-defined methods.
You can then run a SQL statement or a PL/SQL block against these different “chunks” of the table
in parallel, using the database scheduler to manage the processes running in the background.
Error logging, automatic retries, and commits are integrated into the processing of these chunks.
Note:
*The DBMS_PARALLEL_EXECUTE package allows a workload associated with a base table to
be broken down into smaller chunks which can be run in parallel. This process involves several
distinct stages.
1.Create a task
2.Split the workload into chunks
CREATE_CHUNKS_BY_ROWID
CREATE_CHUNKS_BY_NUMBER_COL
CREATE_CHUNKS_BY_SQL
3.Run the task
RUN_TASK
User-defined framework
Task control
4.Check the task status
5.Drop the task
*The workload is associated with a base table, which can be split into subsets or chunks of rows.
There are three methods of splitting the workload into chunks.
CREATE_CHUNKS_BY_ROWID
CREATE_CHUNKS_BY_NUMBER_COL
CREATE_CHUNKS_BY_SQL
The chunks associated with a task can be dropped using the DROP_CHUNKS procedure.
*CREATE_CHUNKS_BY_ROWID
The CREATE_CHUNKS_BY_ROWID procedure splits the data by rowid into chunks specified by
the CHUNK_SIZE parameter. If the BY_ROW parameter is set to TRUE, the CHUNK_SIZE refers
to the number of rows, otherwise it refers to the number of blocks.
Reference:TECHNOLOGY: PL/SQL Practices,On Working in Parallel
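Note: for illustration only, a minimal sketch of the chunking workflow listed above; the task name, table, and UPDATE statement are hypothetical.
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'upd_mysales');
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'upd_mysales',
    table_owner => 'SH',
    table_name  => 'MYSALES',
    by_row      => TRUE,        -- CHUNK_SIZE counts rows, not blocks
    chunk_size  => 10000);
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'upd_mysales',
    sql_stmt       => 'UPDATE mysales SET amount = amount * 1.1
                       WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);       -- the scheduler runs the chunks as background jobs
  DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'upd_mysales');
END;
/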
QUESTION 38
Examine the following query and execution plan:
Which query transformation technique is used in this scenario?
A. Join predicate push-down
B. Subquery factoring
C. Subquery unnesting
D. Join conversion
Correct Answer: A
Section: (none)
Explanation
Explanation/Reference:
*Normally, a view cannot be joined with an index-based nested loop (i.e., index
access) join, since a view, in contrast with a base table, does not have an index defined on it. A
view can only be joined with other tables using three methods: hash, nested loop, and sort-merge
joins.
*The following shows the types of views on which join predicate pushdown is currently supported.
UNION ALL/UNION view
Outer-joined view
Anti-joined view
Semi-joined view
DISTINCT view
GROUP-BY view
QUESTION 39
You enabled auto degree of parallelism (DOP) for your instance.
Examine the query:
Which two are true about the execution of this query?
A. Dictionary DOP will be used, if present, on the tables referred to in the query.
B. DOP is calculated if the calculated DOP is 1.
C. DOP is calculated automatically.
D. The calculated DOP will always be 2 or more.
E. The statement will execute with auto DOP only when PARALLEL_DEGREE_POLICY is set to AUTO.
Correct Answer: CD
Section: (none)
Explanation
Explanation/Reference:
C: *You can use the PARALLEL hint to force parallelism. It takes an optional
parameter: the DOP at which the statement should run. In addition, the NO_PARALLEL hint
overrides a PARALLEL parameter in the DDL that created or altered the table.
The following example illustrates computing the DOP the statement should use:
SELECT /*+ parallel(auto) */ ename, dname FROM emp e, dept d
WHERE e.deptno=d.deptno;
Not A: To override the dictionary DOP, we could use hints at the object level.
Not E: Statement hints override PARALLEL_DEGREE_POLICY.
Note:
*Automatic Parallel Degree Policy
When the parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle Database
automatically decides if a statement should execute in parallel or not and what DOP it should use.
Oracle Database also determines if the statement can be executed immediately or if it is queued
until more system resources are available. Finally, Oracle Database decides if the statement can
take advantage of the aggregated cluster memory or not.
The following is a summary of parallel statement processing when parallel degree policy is set to
automatic.
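Note: for illustration only, a minimal sketch contrasting dictionary DOP with the statement-level hint; the EMP table and the DOP values are illustrative assumptions.
ALTER TABLE emp PARALLEL 4;                              -- dictionary DOP stored with the object
SELECT table_name, degree FROM user_tables WHERE table_name = 'EMP';
SELECT /*+ PARALLEL(AUTO) */ ename, dname
FROM   emp e, dept d
WHERE  e.deptno = d.deptno;                              -- optimizer computes the statement DOP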
QUESTION 40
View Exhibit1 and examine the structure and indexes for the MYSALES table.
The application uses the MYSALES table to insert sales records, but the table is also extensively
used for generating sales reports. The PROD_ID and CUST_ID columns are frequently used in
the WHERE clause of the queries. These columns have few distinct values relative to the total
number of rows in the table. The MYSALES table has 4.5 million rows.
View exhibit 2 and examine one of the queries and its autotrace output.
Which two methods would improve the performance of the query?
40_1 (exhibit):
40_2 (exhibit):
A. Drop the current standard balanced B* Tree indexes on the CUST_ID and PROD_ID columns
and re-create as bitmapped indexes.
B. Use the INDEX_COMBINE hint in the query.
C. Create a composite index involving the CUST_ID and PROD_ID columns.
D. Rebuild the index to rearrange the index blocks to have more rows per block by decreasing the
value of the PCTFREE attribute.
E. Collect histogram statistics for the CUST_ID and PROD_ID columns.
Correct Answer: BC
Section: (none)
Explanation
Explanation/Reference:
B:The INDEX hint explicitly chooses an index scan for the specified table. You can
use the INDEX hint for domain, B-tree, bitmap, and bitmap join indexes. However, Oracle
recommends using INDEX_COMBINE rather than INDEX for the combination of multiple indexes,
because it is a more versatile hint.
C: Combining the CUST_ID and PROD_ID columns into a composite index would improve
performance.
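Note: for illustration only, a minimal sketch of the two recommendations; the composite index name and the bind variables are hypothetical, and the single-column index names in the hint are taken from the exhibit description in question 30.
CREATE INDEX mysales_cust_prod_idx ON mysales (cust_id, prod_id);
SELECT /*+ INDEX_COMBINE(m mysales_custid_idx mysales_prodid_idx) */ *
FROM   mysales m
WHERE  m.cust_id = :cust_id
AND    m.prod_id = :prod_id;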
QUESTION 41
Examine the Exhibit to view the structure of, and indexes for, the SALES table.
The SALES table has 4594215 rows. The CUST_ID column has 2079 distinct values.
What would you do to influence the optimizer for better selectivity?
Exhibit:
A. Drop the bitmap index and create a balanced B*Tree index on the CUST_ID column.
B. Create a height-balanced histogram for the CUST_ID column.
C. Gather statistics for the indexes on the SALES table.
D. Use the ALL_ROWS hint in the query.
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
OPTIMIZER_MODE establishes the default behavior for choosing an optimization
approach for the instance. Values:
FIRST_ROWS_n - The optimizer uses a cost-based approach and optimizes with a goal of best
response time to return the first n rows (where n = 1, 10, 100, 1000).
FIRST_ROWS - The optimizer uses a mix of costs and heuristics to find a best plan for fast
delivery of the first few rows.
ALL_ROWS - The optimizer uses a cost-based approach for all SQL statements in the session and
optimizes with a goal of best throughput (minimum resource use to complete the entire statement).
QUESTION 42
Which three are tasks performed in the hard parse stage of SQL statement execution?
A. Semantics of the SQL statement are checked.
B. The library cache is checked to find whether an existing statement has the same hash value.
C. The syntax of the SQL statement is checked.
D. Information about location, size, and data type is defined, which is required to store fetched
values in variables.
E. Locks are acquired on the required objects.
Correct Answer: BDE
Section: (none)
Explanation
Explanation/Reference:
Parse operations fall into the following categories, depending on the type of
statement submitted and the result of the hash check:
A) Hard parse
If Oracle Database cannot reuse existing code, then it must build a new executable version of the
application code. This operation is known as a hard parse, or a library cache miss. The database
always performs a hard parse of DDL.
During the hard parse, the database accesses the library cache and data dictionary cache
numerous times to check the data dictionary. When the database accesses these areas, it uses a
serialization device called a latch on required objects so that their definition does not change (see
"Latches"). Latch contention increases statement execution time and decreases concurrency.
B) Soft parse
A soft parse is any parse that is not a hard parse. If the submitted statement is the same as a
reusable SQL statement in the shared pool, then Oracle Database reuses the existing code. This
reuse of code is also called a library cache hit.
Soft parses can vary in the amount of work they perform. For example, configuring the session
cursor cache can sometimes reduce the amount of latching in the soft parses, making them
"softer."
In general, a soft parse is preferable to a hard parse because the database skips the optimization
and row source generation steps, proceeding straight to execution.
Incorrect:
A, C: During the parse call, the database performs the following checks:
Syntax Check
Semantic Check
Shared Pool Check
The hard parse occurs only after the shared pool check fails to find a reusable statement.
Reference: Oracle Database Concepts 11g, SQL Parsing
QUESTION 43
You are administering a database supporting an OLTP application. The application runs a series
of extremely similar queries against the MYSALES table where the value of CUST_ID changes.
Examine Exhibit 1 to view the query and its execution plan.
Examine Exhibit 2 to view the structure and indexes for the MYSALES table. The MYSALES table
has 4 million records.
Data in the CUST_ID column is highly skewed.
Examine the parameters set for the instance:
Which action would you take to make the query use the best plan for the selectivity?
43_1 (exhibit):
43_2 (exhibit):
A. Decrease the value of the OPTIMIZER_DYNAMIC_SAMPLING parameter to 0.
B. Use the /*+ INDEX(CUST_ID_IDX) */ hint in the query.
C. Drop the existing B*-tree index and re-create it as a bitmapped index on the CUST_ID column.
D. Collect histogram statistics for the CUST_ID column and use a bind variable instead of literal
values.
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
Using Histograms
In some cases, the distribution of values within a column of a table will affect the optimizer's
decision to use an index versus performing a full-table scan. This scenario occurs when a value
used in the WHERE clause accounts for a disproportionate share of the rows, making a full-table
scan cheaper than index access.
A column histogram should only be created when data skew exists or is suspected.
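Note: for illustration only, a minimal sketch of gathering a histogram on the skewed column and querying with a bind variable; the schema name and bucket count are assumptions.
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SH', tabname => 'MYSALES', method_opt => 'FOR COLUMNS CUST_ID SIZE 254');
VARIABLE v_cust_id NUMBER
EXEC :v_cust_id := 1001
SELECT COUNT(*) FROM mysales WHERE cust_id = :v_cust_id;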
QUESTION 44
Which two are the fastest methods for fetching a single row from a table based on an equality
predicate?
A. Fast full index scan on an index created for a column with a unique key
B. Index unique scan on an index created for a column with a unique key
C. Row fetch from a single table hash cluster
D. Index range scan on an index created on a column with a primary key
E. Row fetch from a table using rowid
Correct Answer: CE
Section: (none)
Explanation
Explanation/Reference:
A scan is slower than a row fetch (from hash value or rowid).
QUESTION 45
An application user complains about statement execution taking longer than usual. You find that
the query uses a bind variable in the WHERE clause as follows:
You want to view the execution plan of the query that takes into account the value in the bind
variable PCAT.
Which two methods can you use to view the required execution plan?
A. Use the DBMS_XPLAN.DISPLAY function to view the execution plan.
B. Identify the SQL_ID for the statement and use DBMS_XPLAN.DISPLAY_CURSOR for that
SQL_ID to view the execution plan.
C. Identify the SQL_ID for the statement and fetch the execution plan from PLAN_TABLE.
D. View the execution plan for the statement from V$SQL_PLAN.
E. Execute the statement with different bind values and set AUTOTRACE enabled for the session.
Correct Answer: BD
Section: (none)
Explanation
Explanation/Reference:
D:V$SQL_PLAN contains the execution plan information for each child cursor
loaded in the library cache.
B:The DBMS_XPLAN package supplies five table functions:
DISPLAY_SQL_PLAN_BASELINE - to display one or more execution plans for the SQL statement
identified by SQL handle
DISPLAY - to format and display the contents of a plan table.
DISPLAY_AWR - to format and display the contents of the execution plan of a stored SQL
statement in the AWR.
DISPLAY_CURSOR - to format and display the contents of the execution plan of any loaded
cursor.
DISPLAY_SQLSET - to format and display the contents of the execution plan of statements stored
in a SQL tuning set.
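Note: for illustration only, a minimal sketch of the two accepted methods; the SQL_ID value is a placeholder you would look up in V$SQL first, and the PEEKED_BINDS format option is used here to show the bind value considered by the optimizer.
SELECT sql_id, sql_text FROM v$sql WHERE sql_text LIKE 'SELECT%PCAT%';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(sql_id => '&sql_id', format => 'TYPICAL +PEEKED_BINDS'));
SELECT id, operation, options, object_name FROM v$sql_plan WHERE sql_id = '&sql_id';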
QUESTION 46
Which statement is true about SQL plan baselines that are fixed?
A. New plans are added automatically by the optimizer to the baseline and are automatically
evolved.
B. New, better plans are added automatically as a fixed plan baseline.
C. New plans can be manually loaded to the baseline from the cursor cache or a SQL tuning set.
D. New plans can be added as fixed plans to the baseline by using the SQL Tuning Advisor to
generate a SQL profile and by accepting the SQL profile.
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
When a SQL statement with a fixed SQL plan baseline is tuned using the SQL
Tuning Advisor, a SQL profile recommendation has special meaning. When the SQL profile is
accepted, the tuned plan is added to the fixed SQL plan baseline as a non-fixed plan. However, as
described above, the optimizer will not use the tuned plan as long as a reproducible fixed plan is
present. Therefore, the benefit of SQL tuning may not be realized. To enable the use of the tuned
plan, manually alter the tuned plan to a fixed plan by setting its FIXED attribute to YES.
Note:
It is also possible to influence the optimizer’s choice of plan when it is selecting a plan from a
SQL plan baseline. SQL plan baselines can be marked as fixed. Fixed SQL plan baselines
indicate to the optimizer that they are preferred. If the optimizer is costing SQL plan baselines and
one of the plans is fixed, the optimizer will only cost the fixed plan and go with that if it is
reproducible.
If the fixed plan(s) are not reproducible the optimizer will go back and cost the remaining SQL
plan baselines and select the one with the lowest cost. Note that costing a plan is nowhere near
as expensive as a hard parse. The optimizer is not looking at all possible access methods but at
one specific access path.
Reference: OracleDatabase Performance Tuning Guide11g,Using Fixed SQL Plan Baselines
Reference:SQL Plan Management in Oracle Database 11g
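Note: for illustration only, a minimal sketch of fixing a tuned plan as described above; the SQL handle and plan name are placeholders you would take from DBA_SQL_PLAN_BASELINES.
SELECT sql_handle, plan_name, enabled, accepted, fixed FROM dba_sql_plan_baselines;
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
               sql_handle      => 'SQL_1234567890abcdef',   -- placeholder
               plan_name       => 'SQL_PLAN_xyz',           -- placeholder
               attribute_name  => 'FIXED',
               attribute_value => 'YES');
END;
/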
QUESTION 47
Examine the initialization parameters:
An index exists on the column used in the WHERE clause of a query. You execute the query for the
first time today and notice that the query is not using the index. The CUSTOMERS table has 55000
rows.
View the exhibit and examine the query and its execution plan.
What can be two reasons for the full table scan?
Exhibit:
A. The value of the OPTIMIZER_INDEX_COST_ADJ parameter is set to a low value.
B. The blocks fetched by the query are greater than the value specified by the
DB_FILE_MULTIBLOCK_READ_COUNT parameter.
C. The statistics for the CUSTOMERS table and the indexes are stale.
D. The OPTIMIZER_MODE parameter is set to ALL_ROWS.
E. Histogram statistics for CUST_CITY_ID are missing.
F. Average number of rows per block for the CUSTOMERS table is low.
Correct Answer: CD
Section: (none)
Explanation
Explanation/Reference:
C: Old statistics could cause this problem.
D: Histograms are a feature of the CBO and help the optimizer determine how data is
skewed (distributed) within a column. A histogram is good to create for a column that is
included in the WHERE clause and is highly skewed. Histograms help the optimizer decide whether
to use an index or a full-table scan, or help it determine the fastest table join order.
Incorrect:
A: A low value of OPTIMIZER_INDEX_COST_ADJ makes index access paths look cheaper, not more expensive; the default value is 100.
Note:OPTIMIZER_INDEX_COST_ADJ lets you tune optimizer behavior for access path selection
to be more or less index friendly—that is, to make the optimizer more or less prone to selecting an
index access path over a full table scan.
The default for this parameter is 100 percent, at which the optimizer evaluates index access paths
at the regular cost. Any other value makes the optimizer evaluate the access path at that
percentage of the regular cost. For example, a setting of 50 makes the index access path look half
as expensive as normal.
B: DB_FILE_MULTIBLOCK_READ_COUNT does not apply:
DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to minimize I/O
during table scans. It specifies the maximum number of blocks read in one I/O operation during a
sequential scan. The total number of I/Os needed to perform a full table scan depends on such
factors as the size of the table, the multiblock read count, and whether parallel execution is being
utilized for the operation.
F: A high (not low) number of rows per block could make a table scan preferable.
QUESTION 48
You have created some materialized views to improve the performance of several queries.
Which four must be true to enable sessions to benefit from improved response time made possible
by these materialized views?
A. Query rewrite must be enabled for the sessions.
B. Bitmap indexes must exist on all the columns involved in the join operations for the defining
query of the MVIEWs.
C. All or part of the query results must be obtainable from one or more MVIEWs.
D. Bitmap join indexes must exist on all the columns involved in the join operations.
E. Session users must have query rewrite privilege.
F. The MVIEWs must be enabled for query rewrite.
G. All or part of the query results must be obtainable from one MVIEW.
Correct Answer: ABCF
Section: (none)
Explanation
Explanation/Reference:
A:For a given user's session, ALTER SESSION can be used to disable or enable
query rewrite for that session only.
B: Bitmap indexes on the join columns would improve performance.
C (not G): One of the major benefits of creating and maintaining materialized views is the ability to
take advantage of query rewrite, which transforms a SQL statement expressed in terms of tables
or views into a statement accessing one or more materialized views that are defined on the detail
tables.
F:
*A materialized view is only eligible for query rewrite if the ENABLE QUERY REWRITE clause has
been specified, either initially when the materialized view was first created or subsequently with an
ALTER MATERIALIZED VIEW statement.
*Enabling or disabling query rewrite:
by the CREATE or ALTER statement for individual materialized views
by the initialization parameter QUERY_REWRITE_ENABLED
by the REWRITE and NOREWRITE hints in SQL statements
Note:
*A materialized view is a replica of a target master from a single point in time. The master can be
either a master table at a master site or a master materialized view at a materialized view site.
Whereas in multimaster replication tables are continuously updated by other master sites,
materialized views are updated from one or more masters through individual batch updates,
known as refreshes, from a single master site or master materialized view site.
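Note: for illustration only, a minimal sketch of the points above; the materialized view name and its defining query are hypothetical.
CREATE MATERIALIZED VIEW sales_by_cust_mv
  ENABLE QUERY REWRITE
  AS SELECT cust_id, SUM(amount_sold) AS total_sold
     FROM   sales
     GROUP BY cust_id;
ALTER SESSION SET query_rewrite_enabled = TRUE;
-- A query against SALES that aggregates by CUST_ID can now be rewritten to use the MVIEW.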
QUESTION 49
View the code sequence:
Which three statements are true about the execution of this query?
A. The optimizer uses an index join as an access path.
B. The optimizer uses a B*-tree access path for the EMPLOYEES table.
C. The optimizer uses a bitmap access path for the EMPLOYEES table.
D. The optimizer performs bitmap conversion from row IDs.
E. The optimizer performs bitmap conversion to row IDs of the combined maps.
F. The optimizer performs bitmap conversion from the row IDs of the combined maps.
Correct Answer: ACF
Section: (none)
Explanation
Explanation/Reference:
C:The INDEX_COMBINE hint explicitly chooses a bitmap access path for the table.
F:If you specify indexspec, then the optimizer tries to use some Boolean combination of the
specified indexes. Each parameter serves the same purpose as in "INDEX Hint". For example:
SELECT /*+ INDEX_COMBINE(e emp_manager_ix emp_department_ix) */ *
FROM employees e
WHERE manager_id = 108
OR department_id = 110;
QUESTION 50
View the code sequence:
Examine the Exhibit to view the execution plan.
Which two statements are true about the query execution?
Exhibit:
A. The optimizer joins specified tables in the order as they appear in the FROM clause.
B. The CUSTOMERS and ORDERS tables are joined first and the resultant is then joined with
rows returned by the ORDER_ITEMS table.
C. The CUSTOMERS and ORDER_ITEMS tables are joined first, and the resultant is then joined with
rows returned by the ORDERS table.
D. The optimizer has chosen hash join as an access method as the OPTIMIZER_MODE
parameter is set to FIRST_ROWS.
Correct Answer: CD
Section: (none)
Explanation
Explanation/Reference:
The first executed join is in line 6.
The second executed join is in line 1.
Incorrect:
A: Line 7 and 8 are executed first.
QUESTION 51
Examine the parallelism parameters for your instance:
Which two are true about the execution of the query?
A. It will execute in parallel only if the LINEITEM table has a dictionary DOP defined.
B. DOP for the statement is determined by the dictionary DOP of the accessed objects.
C. It is guaranteed to execute in parallel.
D. It will execute in parallel only if the estimated execution time is 10 or more seconds.
E. DOP for the statement is calculated automatically.
F. It may execute serially.
Correct Answer: EF
Section: (none)
Explanation
Explanation/Reference:
E:
F (not C): It may execute serially. See note below.
Incorrect:
A, B: Dictionary DOP not used with PARALLEL (AUTO) hint.
D: The default value of parallel_min_time_threshold is 30 (not 10) seconds.
Note:
* parallel_min_percent
PARALLEL_MIN_PERCENT operates in conjunction with PARALLEL_MAX_SERVERS and
PARALLEL_MIN_SERVERS. It lets you specify the minimum percentage of parallel execution
processes (of the value of PARALLEL_MAX_SERVERS) required for parallel execution. Setting
this parameter ensures that parallel operations will not execute sequentially unless adequate
resources are available. The default value of 0 means that no minimum percentage of processes
has been set.
Consider the following settings:
PARALLEL_MIN_PERCENT = 50
PARALLEL_MIN_SERVERS = 5
PARALLEL_MAX_SERVERS = 10
If 8 of the 10 parallel execution processes are busy, only 2 processes are available. If you then
request a query with a degree of parallelism of 8, the minimum 50% will not be met.
QUESTION 52
You execute the following query:
Which statement is true about the usage of these hints in the query?
A. The optimizer pushes the join predicate into the inline view.
B. The optimizer evaluates the subquery first and then filters out rows.
C. The optimizer performs a join operation first and then filters out the rows.
D. The hint will have no effect because one of the join result sets is an inline view.
Correct Answer: A
Section: (none)
Explanation
Explanation/Reference:
The PUSH_PRED hint forces pushing of a join predicate into the view.
For example:
SELECT /*+ NO_MERGE(v) PUSH_PRED(v) */ *
FROM employees e,
(SELECT manager_id
FROM employees
)v
WHERE e.manager_id = v.manager_id(+)
AND e.employee_id = 100;
When the PUSH_PRED hint is used without an argument, it should be placed in the view query
block. When PUSH_PRED is used with the view name as an argument, it should be placed in the
surrounding query.
Reference:Oracle Database Performance Tuning Guide,PUSH_PRED
QUESTION 53
You notice some performance degradation for a high-load SQL statement in your database. After
investigations, you run the SQL Tuning Advisor, which recommends a SQL Profile. You accept the
profile recommendation resulting in a new, tuned execution plan for the statement.
Your database uses SQL plan management and a SQL plan baseline exists for this SQL
statement.
Which statement is true?
A. The database adds the tuned plan to the SQL plan baseline as a nonfixed plan.
B. The database adds the tuned plan to the SQL plan baseline as a fixed plan.
C. The optimizer uses the new tuned plan only when a reproducible fixed plan is present.
D. The created SQL profile will continuously adapt to all changes made to the database, the
object, and to the system statistics over an extended length of time.
Correct Answer: A
Section: (none)
Explanation
Explanation/Reference:
Note:
*When the SQL Tuning Advisor recommends that a SQL Profile be used, you should accept the
SQL Profile that is recommended. In cases where the SQL Tuning Advisor recommends that an
index and a SQL Profile be used, both should be used. You can use the
DBMS_SQLTUNE.ACCEPT_SQL_PROFILE procedure to accept a SQL Profile recommended by
the SQL Tuning Advisor. This creates and stores a SQL Profile in the database.
*When tuning SQL statements with the SQL Tuning Advisor, if the advisor finds a tuned plan and
verifies its performance to be better than a plan chosen from the corresponding SQL plan
baseline, it makes a recommendation to accept a SQL profile. When the SQL profile is accepted,
the tuned plan is added to the corresponding SQL plan baseline.
*If SQL plan management is used and there is already an existing plan baseline for the SQL
statement, a new plan baseline will be added when a SQL profile is created.
*SQL plan management is a preventative mechanism that records and evaluates the execution
plans of SQL statements over time, and builds SQL plan baselines composed of a set of existing
plans known to be efficient. The SQL plan baselines are then used to preserve performance of
corresponding SQL statements, regardless of changes occurring in the system.
*SQL plan baseline is fixed if it contains at least one enabled plan whose FIXED attribute is set to
YES.
*ACCEPT_SQL_PROFILE Procedure and Function
This procedure creates a SQL Profile recommended by the SQL Tuning Advisor. The SQL text is
normalized for matching purposes though it is stored in the data dictionary in de-normalized form
for readability.
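Note: for illustration only, a minimal sketch of accepting the recommended profile; the tuning task name is a placeholder.
DECLARE
  l_profile_name VARCHAR2(64);
BEGIN
  l_profile_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
                      task_name => 'my_tuning_task',   -- placeholder task name
                      replace   => TRUE);
END;
/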
QUESTION 54
Examine the initialization parameters for a database supporting an OLTP workload.
What is the effect of changing the value of the parameter?
A. It influences the optimizer to always use the value of the parameter.
B. It influences the optimizer to use indexes instead of full table scans as the estimated cost of
using an index is reduced.
C. It influences the optimizer to use full table scans instead of index scans as the estimated cost of
a full table scan is reduced.
D. It influences the optimizer to use bitmap indexes as the estimated cost of conversion from bitmap
to rowid is reduced.
Correct Answer: B
Section: (none)
Explanation
Explanation/Reference:
OPTIMIZER_INDEX_COST_ADJ
OPTIMIZER_INDEX_COST_ADJ lets you tune optimizer behavior for access path selection to be
more or less index friendly—that is, to make the optimizer more or less prone to selecting an index
access path over a full table scan.
The default for this parameter is 100 percent, at which the optimizer evaluates index access paths
at the regular cost. Any other value makes the optimizer evaluate the access path at that
percentage of the regular cost. For example, a setting of 50 makes the index access path look half
as expensive as normal.
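Note: for illustration only, a minimal sketch; the value 25 is purely illustrative and is not a recommendation.
ALTER SESSION SET optimizer_index_cost_adj = 25;   -- index access now costed at 25% of normal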
QUESTION 55
You executed the following statements:
Which two statements are true about the query execution?
A. The execution plan is generated and fetched from the library cache.
B. The query executes and displays the execution plan and statistics.
C. The query executes and inserts the execution plan in PLAN_TABLE.
D. The query executes and the execution plan is stored in the library cache and can be viewed using
V$SQL_PLAN.
E. The query will always use the plan displayed by the AUTOTRACE output.
Correct Answer: BE
Section: (none)
Explanation
Explanation/Reference:
B: set autotrace traceonly: Displays the execution plan and the statistics (as set
autotrace on does), but doesn't print the query's result.
Note:
/Autotrace
Autotrace can be configured to run the SQL and give a plan and statistics afterwards, or just give
you an explain plan without executing the query. To achieve this use the following:
*Explain only
set autotrace traceonly explain
*Execute with stats and explain plan
set autotrace on explain statistics (with data returned by query)
or
set autotrace traceonly explain statistics (without data returned by query)
*To make the output from an autotrace more readable
col plan_plus_exp format a100
*Turn off autotrace
set autotrace off
/V$SQL_PLAN contains the execution plan information for each child cursor loaded in the library
cache.
QUESTION 56
How can you reduce fragmentation of an index without affecting the current transactions that are
using the index?
A. Use the ANALYZE INDEX . . . command
B. Use the ALTER INDEX . . . VALIDATE STRUCTURE command
C. Use the ALTER INDEX . . . REBUILD ONLINE command
D. Use the ALTER INDEX . . . DEALLOCATE UNUSED command
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
Use the deallocate_unused_clause to explicitly deallocate unused space at the end
of the index and make the freed space available for other segments in the tablespace.
If index is range-partitioned or hash-partitioned, then Oracle Database deallocates unused space
from each index partition. If index is a local index on a composite-partitioned table, then Oracle
Database deallocates unused space from each index subpartition.
Reference: OracleDatabase SQL Language Reference11g, alter index
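Note: for illustration only, a minimal sketch of the two index maintenance commands discussed; the index name is hypothetical.
ALTER INDEX mysales_custid_idx REBUILD ONLINE;      -- rebuilds without blocking current DML
ALTER INDEX mysales_custid_idx DEALLOCATE UNUSED;   -- frees unused space at the end of the index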
QUESTION 57
Examine the following query and execution plan:
Which query transformation technique is used by the optimizer?
A. Filter push down
B. Subquery factoring
C. Subquery unnesting
D. Predicate pushing
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
Note:
*In the execution plan, the keyword 'VIEW PUSHED PREDICATE' indicates that the view
has undergone the join predicate pushdown transformation.
QUESTION 58
Examine the Exhibit and view the structure of, and indexes for, the EMPLOYEES table.
Which two actions might improve the performance of the query?
Exhibit:
A. Use the ALL_ROWS hint in the query.
B. Collect the histogram statistics for the EMPLOYEE_ID column.
C. Decrease the value for the DB_FILE_MULTIBLOCK_READ_COUNT initialization parameter.
D. Drop the index on the EMPLOYEE_ID column if it is not being used.
E. Set the OPTIMIZER_MODE parameter to ALL_ROWS.
Correct Answer: AE
Section: (none)
Explanation
Explanation/Reference:
A:The ALL_ROWS hint instructs the optimizer to optimize a statement block with a
goal of best throughput, which is minimum total resource consumption.
E:optimizer_mode=all_rows - This optimizer mode favors full-table scans (especially parallel full-table-scans)
in cases where the server resources will be minimized. The all_rows mode is
generally used during batch-oriented processing and for data warehouses where the goal is to
minimize server resource consumption.
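Note: for illustration only, a minimal sketch of the two options; the query shape is illustrative.
ALTER SESSION SET optimizer_mode = ALL_ROWS;
SELECT /*+ ALL_ROWS */ employee_id, last_name
FROM   employees
WHERE  employee_id = 100;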
QUESTION 59
Examine the parallel parameters for your instance.
All parallel execution servers are available and the session uses default parallelism settings.
In which two cases will the degree of parallelism be automatically calculated?
A. Statements accessing tables whose dictionary DOP is 2 or more
B. Statements accessing tables whose dictionary DOP is DEFAULT
C. Statements that are estimated to execute for more than 10 seconds serially
D. Statements accessing tables with any setting for dictionary DOP
E. Statements with parallel hints
Correct Answer: BE
Section: (none)
Explanation
Explanation/Reference:
Note:In earlier versions of the Oracle Database, we had to determine the DOP more
or less manually, either with a parallel hint or by setting a parallel degree with alter table. There
was an automatic computation of the DOP available for the objects with dictionary DOP of default,
derived from the simple formula CPU_COUNT * PARALLEL_THREADS_PER_CPU.
QUESTION 60
A database instance is configured in the shared server mode and it supports multiple applications
running on a middle tier. These applications connect to the database by using different services
and tracing is enabled for the services. You want to view the detailed tracing settings for a
particular service.
What would you use to view the tracing information?
A. DBMS_SERVICE package
B. DBMS_MONITOR package
C. DBA_ENABLED_TRACES view
D. Trcsess and tkprof
Correct Answer: C
Section: (none)
Explanation
Explanation/Reference:
DBA_ENABLED_TRACES displays information about enabled SQL traces.
Incorrect:
A:The DBMS_SERVICE package lets you create, delete, activate, and deactivate services for a
single instance.
B:The DBMS_MONITOR package let you use PL/SQL for controlling additional tracing and
statistics gathering.
Reference: OracleDatabase Reference,DBA_ENABLED_TRACES
QUESTION 61
Examine the statements being executed for the first time:
Steps followed by a SQL statement during parsing:
1. Search for a similar statement in the shared pool.
2. Search for an identical statement in the shared pool.
3. Share the SQL area of the identical statement already in the shared pool.
4. Proceed through the remaining steps of the parse phase to ensure that the execution plan of
the existing statement is applicable to the new statement.
5. Perform hard parsing.
6. Share the SQL area of the similar statement already in the shared pool.
Identify the required steps in the correct sequence used by the third query.
A. 5, 1, 3, 4
B. 2, 4, 3
C. 5, 2, 3, 4
D. 1, 4, 3
E. Only 5
F. 2, 5
Correct Answer: F
Section: (none)
Explanation
Explanation/Reference:
Step 2 is performed before step 5.
Note:
*When application code is run, Oracle attempts to reuse existing code if it has been executed
previously and can be shared. If the parsed representation of the statement does exist in the
library cache and it can be shared, then Oracle reuses the existing code. This is known as a soft
parse, or a library cache hit. If Oracle is unable to use existing code, then a new executable
version of the application code must be built. This is known as a hard parse, or a library cache
miss.
Reference:Oracle Database Performance Tuning Guide,SQL Sharing Criteria
QUESTION 62
You execute the following query for the first time:
Examine the SQL statement processing steps:
1. The value of the variable SAL is obtained to run the query.
2. The syntax of the query is checked
3. A parse tree for the query is generated
4. Semantics for the query are checked
5. The required rows are fetched
6. The SQL is executed to produce the required result.
Which is the correct order of execution of the above query?
A. 1, 2, 3, 4, 5, 6
B. 1, 4, 2, 3, 6, 5
C. 2, 4, 1, 3, 6, 5
D. 2, 3, 1, 4, 6, 5
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
Incorrect:
A: First execute then fetch rows.
B: Check of syntax is before check of semantics.
C: Parse tree is before semantics.
QUESTION 63
You plan to bulk load data using INSERT /*+PARALLEL*/ INTO . . . . SELECT FROM statements.
Which four types of operations can execute in parallel on tables that have no bitmapped indexes
or materialized views defined on them?
A. Direct path insert of a million rows into a partitioned, index-organized table containing one
million rows.
B. Direct path insert of a million rows into a partitioned, index-organized table containing 10 million
rows.
C. Direct path insert of a million rows into a nonpartitioned, index-organized table containing one
million rows.
D. Direct path insert of a million rows into a nonpartitioned, heap-organized table containing 10
million rows.
E. Direct path insert of a million rows into a nonpartitioned, heap-organized table containing one
million rows.
Correct Answer: ABDE
Section: (none)
Explanation
Explanation/Reference:
Direct-path INSERT is not supported for an index-organized table (IOT) if it is not
partitioned, if it has a mapping table, or if it is referenced by a materialized view.
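Note: for illustration only, a minimal sketch of a parallel direct-path bulk load; the table names and the DOP are hypothetical.
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(t 4) */ INTO sales_history t
SELECT * FROM sales_staging;
COMMIT;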
QUESTION 64
Which three options are true about parallel queries when PARALLEL_DEGREE_POLICY is set to
MANUAL and the session is using the default settings for parallel query, DDL, and DML?
A. A subquery in a parallel DML is parallelized only if it includes a parallel hint.
B. The number of parallel execution servers requested for a cursor is based on the greatest
degree of parallelism associated with any object accessed by the cursor.
C. A SELECT statement can be executed in parallel only if no scalar subqueries are contained in
the SELECT list.
D. In a CREATE TABLE . . . AS SELECT (CTAS) statement, SELECT is parallelized only if CREATE
TABLE is parallelized.
E. In an INSERT INTO . . . SELECT FROM statement, INSERT is parallelized if select is
parallelized.
F. Single row inserts are never executed in parallel.
Correct Answer: CEF
Section: (none)
Explanation
Explanation/Reference:
*Decision to Parallelize
A SELECT statement can be parallelized only if the following conditions are satisfied:
/The query includes a parallel hint specification (PARALLEL or PARALLEL_INDEX) or the schema
objects referred to in the query have a PARALLEL declaration associated with them.
/At least one of the tables specified in the query requires one of the following:
A full table scan
An index range scan spanning multiple partitions
/ (C)No scalar subqueries are in the SELECT list.
*By default, the system only uses parallel execution when a parallel degree has been explicitly set
on an object or if a parallel hint is specified in the SQL statement.
*CREATE TABLE ... AS SELECT in Parallel
Parallel execution lets you parallelize the query and create operations of creating a table as a
subquery from another table or set of tables. This can be extremely useful in the creation of
summary or rollup tables.
Clustered tables cannot be created and populated in parallel.
*PARALLEL_DEGREE_POLICY specifies whether or not automatic degree of Parallelism,
statement queuing, and in-memory parallel execution will be enabled.
MANUAL
Disables automatic degree of parallelism, statement queuing, and in-memory parallel execution.
This reverts the behavior of parallel execution to what it was prior to Oracle Database 11g Release
2 (11.2). This is the default.
Incorrect:
A:
*For parallel DML (INSERT, UPDATE, MERGE, and DELETE), the reference object that
determines the DOP (degree of parallelism) is the table being modified by an insert, update, or
delete operation. Parallel DML also adds some limits to the DOP to prevent deadlock. If the
parallel DML statement includes a subquery, the subquery's DOP is the same as the DML
operation.
*For parallel DDL, the reference object that determines the DOP is the table, index, or partition
being created, rebuilt, split, or moved. If the parallel DDL statement includes a subquery, the
subquery's DOP is the same as the DDL operation.
D:The CREATE TABLE ... AS SELECT statement contains two parts: a CREATE part (DDL) and
a SELECT part (query). Oracle Database can parallelize both parts of the statement.
The query part of a CREATE TABLE ... AS SELECT statement can be parallelized only if certain
conditions are satisfied (see the reference below).
Reference: OracleDatabase VLDB and Partitioning Guide,Using Parallel Execution
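Note: for illustration only, a minimal sketch of parallelizing both parts of a CTAS; the table names and the DOP are hypothetical.
CREATE TABLE sales_summary PARALLEL 4 AS
SELECT /*+ PARALLEL(s 4) */ cust_id, SUM(amount_sold) AS total_sold
FROM   sales s
GROUP BY cust_id;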
QUESTION 65
A table has three distinct values in its ID column. In the ID column, values 10 and 20 have more
than 20000 rows each and value 30 has only five rows. The statistics for the schema have been
updated recently.
The CURSOR_SHARING parameter is set to EXACT.
The query was executed recently and the cursor for the query is bind aware. Examine the exhibits
to view the commands executed.
You plan to execute the same query with a value of 30 for the bind variable V_ID.
Which statement is true in this scenario?
Exhibit:
A. The same execution plan will always be used irrespective of the value in the bind variable.
B. A new execution plan will be generated depending on the access pattern and the bind value.
C. Adaptive cursor sharing will ensure that a new cursor is generated for each distinct value in the
bind variable.
D. Adaptive cursor sharing will happen only if you use the literal values instead of bind variables in
the query.
Correct Answer: C
Section: (none)
Explanation
Explanation/Reference:
Note:
*CURSOR_SHARING determines what kind of SQL statements can share the same cursors.
*Setting CURSOR_SHARING to EXACT allows SQL statements to share the SQL area only when
their texts match exactly. This is the default behavior. Using this setting, similar statements
cannot be shared; only textually exact statements can be shared.
*Values:
* FORCE
Forces statements that may differ in some literals, but are otherwise identical, to share a cursor,
unless the literals affect the meaning of the statement.
* SIMILAR
Causes statements that may differ in some literals, but are otherwise identical, to share a cursor,
unless the literals affect either the meaning of the statement or the degree to which the plan is
optimized.
* EXACT
Only allows statements with identical text to share the same cursor.
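Note: for illustration only, a minimal sketch of observing adaptive cursor sharing for the statement; the table name and SQL_ID are placeholders.
VARIABLE v_id NUMBER
EXEC :v_id := 30
SELECT COUNT(*) FROM mytable WHERE id = :v_id;
SELECT child_number, is_bind_sensitive, is_bind_aware, executions
FROM   v$sql
WHERE  sql_id = '&sql_id';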
QUESTION 66
See the table below:
All parallel execution servers are available and sessions use default settings for parallelism.
Which three are true about parallel execution in your instance?
A. Parallel execution occurs when estimated serial execution time exceeds the minimum time
threshold.
B. Parallel execution occurs for all DML statements.
C. Parallel execution occurs for those statements that access tables with dictionary DOP defined.
D. Parallel execution occurs for those statements that access tables with no dictionary DOP
defined.
E. Parallel execution occurs for all DDL statements.
Correct Answer: ACD
Section: (none)
Explanation
Explanation/Reference:
A (not B, not E): PARALLEL_MIN_TIME_THRESHOLD: Oracle 11gR2 will ascertain
whether the query's estimated execution time is likely to run longer than the acceptable value (in
seconds) for PARALLEL_MIN_TIME_THRESHOLD and, if sufficient resources for parallel
execution exist right now, it will allow the query to execute; otherwise, it will delay its execution
until sufficient resources exist. This helps prevent a single parallel query from consuming
excessive resources at the cost of other non-parallelizable operations. The default of this
parameter is 10 seconds.
C, D:In earlier versions of the Oracle Database, we had to determine the DOP more or less
manually, either with a parallel hint or by setting a parallel degree with alter table. There was an
automatic computation of the DOP available for the objects with dictionary DOP of default, derived
from the simple formula CPU_COUNT * PARALLEL_THREADS_PER_CPU.
If there were insufficient parallel servers to satisfy the requested DOP, one of three things could
occur:
/The SQL would be run at a reduced DOP (be downgraded)
/The SQL would run in serial mode (be serialized)
/If PARALLEL_MIN_PERCENT was specified and less than the nominated percentage of the DOP
was achievable, then the SQL statement might terminate with "ORA-12827: insufficient parallel
query slaves available".
Note:
*PARALLEL_DEGREE_POLICY. It can have 3 values : MANUAL, LIMITED and AUTO
MANUAL - This is the default. Disables Auto DOP, statement queuing and in-memory
parallel execution. It reverts the behavior of parallel execution to what it was previous to Oracle
Database 11g, Release 2 (11.2).
*Oracle supports parallel processing for a wide range of operations, including queries, DDL and
DML:
• Queries that involve table or index range scans.
• Bulk insert, update or delete operations.
• Table and index creation.
*Oracle's parallel execution framework enables you to either explicitly chose - or even enforce - a
specific degree of parallelism (DOP) or to rely on Oracle to control it.
*Three modes are available to request a DOP :
default
fixed
adaptive
*The DOP is determined in the following priority order:
hint
session
table
and limited by the Oracle Database Resource Manager (DBRM) settings.
Reference:AUTOMATIC DEGREE OF PARALLELISM (DOP) IN ORACLE 11G R2
QUESTION 67
An application issues many expensive join and aggregation type queries.
Examine the Exhibit to view the queries.
Which two could help improve the performance of these SQL statements without changing
application code?
Exhibit:
A. Create B*-Tree indexes on the join columns.
B. Create a materialized view with query rewrite enabled for the first statement and nested
MVIEWs for the other statements.
C. Collect histogram statistics on columns for which aggregating functions are performed.
D. Create an STS for these queries and use SQL Access Advisor, which may generate advice
about MVIEWs.
E. Create an STS for these queries and use SQL Performance Analyzer, which may generate
advice about MVIEWs.
Correct Answer: BD
Section: (none)
Explanation
Explanation/Reference:
B:Materialized views and indexes are essential when tuning a database to achieve
optimum performance for complex, data-intensive queries.
D:
* STS – SQL tuning set.
*A SQL Tuning Set is a database object that includes one or more SQL statements and their
execution statistics and execution context. You can use the set as an input source for various
advisors, such as SQL Tuning Advisor, SQL Access Advisor, and SQL Performance Analyzer.
* SQL Access Advisor:
Materialized views and indexes are essential when tuning a database to achieve optimum
performance for complex, data-intensive queries. The SQL Access Advisor helps you achieve your
performance goals by recommending the proper set of materialized views, materialized view logs,
and indexes for a given workload. Understanding and using these structures is essential when
optimizing SQL as they can result in significant performance improvements in data retrieval. The
advantages, however, do not come without a cost. Creation and maintenance of these objects can
be time consuming, and space requirements can be significant.
The SQL Access Advisor recommends bitmap, function-based, and B-tree indexes. A bitmap
index offers a reduced response time for many types of ad hoc queries and reduced storage
requirements compared to other indexing techniques. B-tree indexes are most commonly used in
a data warehouse to index unique or near-unique keys.
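The STS-plus-SQL Access Advisor workflow behind answer D can be sketched as follows. This is a minimal, hypothetical example: the set name MY_STS, the task name MY_ACC_TASK, and the SH schema filter are illustrative and are not taken from the exam exhibit.
DECLARE
  stmt_cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  -- Create an STS and load expensive statements from the cursor cache.
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'MY_STS');
  OPEN stmt_cur FOR
    SELECT VALUE(p)
    FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
           basic_filter => 'parsing_schema_name = ''SH''')) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'MY_STS', populate_cursor => stmt_cur);
  -- Create a SQL Access Advisor task, link the STS, and run it; the advice
  -- (possibly including materialized views) is then visible in the DBA_ADVISOR_* views.
  DBMS_ADVISOR.CREATE_TASK(advisor_name => 'SQL Access Advisor',
                           task_name    => 'MY_ACC_TASK');
  DBMS_ADVISOR.ADD_STS_REF('MY_ACC_TASK', USER, 'MY_STS');
  DBMS_ADVISOR.EXECUTE_TASK('MY_ACC_TASK');
END;
/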
Note:
*Conventional wisdom holds that bitmap indexes are most appropriate for columns having low
distinct values--such as GENDER, MARITAL_STATUS, and RELATION. This assumption is not
completely accurate, however. In reality, a bitmap index is always advisable for systems in which
data is not frequently updated by many concurrent systems. A bitmap index on a column with 100
percent unique values (a column candidate for primary key) is as efficient as a B-tree index.
*By default, Oracle creates a B-tree index. In a B-tree, you walk the branches until you get to
the node that has the data you want to use. In the classic b-tree structure, there are branches from
the top that lead to leaf nodes that contain the data.
Incorrect:
not E:SQL Performance Analyzer enables you to assess the performance impact of any system
change resulting in changes to SQL execution plans and performance characteristics. Examples
of common system changes for which you can use SQL Performance Analyzer include:
Database upgrade
Configuration changes to the operating system, hardware, or database
Database initialization parameter changes
Schema changes, for example, adding new indexes or materialized views
Gathering optimizer statistics
Validating SQL tuning actions, for example, creating SQL profiles or implementing partitioning
QUESTION 68
View the exhibit and examine the findings provided by the SQL Tuning Advisor for SELECT
Statement.
A SQL plan baseline already exists for the execution plan.
What two methods can you use to ensure that an alternate plan becomes an accepted plan?
Exhibit:
A. Use the DBMS_SPM.ALTER_SQL_PLAN_BASELINE function.
B. Use the DBMS_SQLTUNE.CREATE_SQL_PLAN_BASELINE function.
C. Use the DBMS_SQLTUNE.CREATE_SQL_PLAN_BASELINE function and run the
DBMS_STATS to manually refresh stale statistics.
D. Use the DBMS_SPM.LOAD_PLANS_FROM_SQLSET function.
Correct Answer: CD
Section: (none)
Explanation
Explanation/Reference:
C:To adopt an alternative plan regardless of whether SQL Tuning Advisor recommends it, call
DBMS_SQLTUNE.CREATE_SQL_PLAN_BASELINE. You can use this procedure to create a
SQL plan baseline on any existing reproducible plan.
D:LOAD_PLANS_FROM_SQLSET Function
This function loads plans stored in a SQL tuning set (STS) into SQL plan baselines. The plans
loaded from STS are not verified for performance but added as accepted plans to existing or new
SQL plan baselines. This procedure can be used to seed SQL management base with new SQL
plan baselines.
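A minimal sketch of the D approach, assuming the alternative plans have already been captured into a hypothetical STS named MY_STS:
DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  -- Plans loaded this way are marked ACCEPTED without performance verification.
  plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(sqlset_name => 'MY_STS');
  DBMS_OUTPUT.PUT_LINE(plans_loaded || ' plans loaded');
END;
/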
Note:
*While tuning a SQL statement, SQL Tuning Advisor searches real-time and historical
performance data for alternative execution plans for the statement. If plans other than the original
plan exist, then SQL Tuning Advisor reports an alternative plan finding.
SQL Tuning Advisor validates the alternative execution plans and notes any plans that are not
reproducible. When reproducible alternative plans are found, you can create a SQL plan baseline
to instruct the optimizer to choose these plans in the future.
Incorrect:
Not A:ALTER_SQL_PLAN_BASELINE Function
This function changes an attribute of a single plan or all plans associated with a SQL statement
using the attribute name/value format.
Usage Notes
When a single plan is specified, one of various statuses, or plan name, or description can be
altered. When all plans for a SQL statement are specified, one of various statuses, or description
can be altered. This function can be called numerous times, each time setting a different plan
attribute of same plan(s) or different plan(s).
Reference:Oracle Database Performance Tuning Guide,Alternative Plan Analysis
QUESTION 69
You ran a high load SQL statement that used an index through the SQL Tuning Advisor and
accepted its recommendation for SQL profile creation. Subsequently you noticed that there has
been a 2% growth in number of rows in the tables used by the SQL statement and database
statistics have also been refreshed.
How does this impact the created SQL profile?
A. It becomes invalid and is no longer used by the optimizer.
B. It remains valid and ensures that the optimizer always use the execution plan that was created
before the changes happened.
C. It remains valid and allows the optimizer to pick a different plan if required.
D. It becomes invalid and a new SQL profile is created for the statement by the auto tuning task.
Correct Answer: C
Section: (none)
Explanation
Explanation/Reference:
QUESTION 70
Auto DOP is enabled for your instance.
You execute the following statements:
Which three are true about the execution of the join?
A. Dictionary DOP is used to calculate the statement DOP.
B. Hinted DOP is used to calculate the statement DOP.
C. The EMPLOYEES table is accessed in parallel.
D. The DEPARTMENTS table is accessed in parallel.
E. The hint operates at the level of each table accessed by the statement.
Correct Answer: BCE
Section: (none)
Explanation
Explanation/Reference:
C: As per ALTER TABLE employees PARALLEL 2;
Incorrect:
not D: As per ALTER TABLE departments NOPARALLEL;
QUESTION 71
Which two statements are true about the trcsess utility?
A. It merges multiple trace files and produces a formatted output file.
B. It merges multiple trace files from a particular session into one single trace file.
C. It produces multiple files only for DBA sessions, which can be consolidated into one formatted
file using the tkprof utility.
D. It produces multiple files for a service, which can be consolidated into one formatted file using
the tkprof utility.
E. It merges files pertaining to a user session scattered across different processes in a shared
server configuration.
Correct Answer: AB
Section: (none)
Explanation
Explanation/Reference:
The trcsess utility consolidates trace output from selected trace files based on
several criteria:
Session ID
Client ID
Service name
Action name
Module name
After trcsess merges the trace information into a single output file, the output file could be
processed by TKPROF.
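For example (the service and file names here are placeholders; run from the trace file directory):
trcsess output=apps1_service.trc service=APPS1 *.trc
tkprof apps1_service.trc apps1_service.prf sys=no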
Note:
*trcsess is useful for consolidating the tracing of a particular session for performance or debugging
purposes. Tracing a specific session is usually not a problem in the dedicated server model as a
single dedicated process serves a session during its lifetime. You can see the trace information for
the session from the trace file belonging to the dedicated server serving it. However, in a shared
server configuration a user session is serviced by different processes from time to time. The trace
pertaining to the user session is scattered across different trace files belonging to different
processes. This makes it difficult to get a complete picture of the life cycle of a session.
Reference:Oracle Database Performance Tuning Guide11g,Using the trcsess Utility
* Now there is a new tool, a command line utility called trcsess to help read the trace files.
The trcsess command-line utility consolidates trace information from selected trace files, based on
specified criteria. The criteria include session id, client id, service name, action name and module
name.
QUESTION 72
The following parameter values are set for the instance:
OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = FALSE
OPTIMIZER_USE_SQL_PLAN_BASELINES = TRUE
The SQL plan baseline for a SQL statement contains an accepted plan.
You want to add a new plan automatically as an accepted plan to the existing SQL plan baseline.
Examine the following tasks.
1. Set the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINE parameter to TRUE.
2. Evolve the new plan.
3. Fix the existing accepted plan.
4. Manually load the new plan.
Identify the task(s) that must be performed to accomplish this.
A. 1, 2, and 3
B. 4 and 3
C. 1, 4, and 3
D. Only 4
E. 1, 2, 4, and 3
F. 1 and 2
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
Manual Plan Loading
Manual plan loading can be used in conjunction with, or as an alternative to automatic plan
capture. The load operations are performed using the DBMS_SPM package, which allows SQL
plan baselines to be loaded from SQL tuning sets or from specific SQL statements in the cursor
cache. Manually loaded statements are flagged as accepted by default. If a SQL plan baseline is
present for a SQL statement, the plan is added to the baseline, otherwise a new baseline is
created.
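A minimal sketch of task 4, loading the new plan for one statement directly from the cursor cache; the SQL_ID is supplied as a substitution variable rather than a real value:
DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  -- The loaded plan is added to the existing baseline as an accepted plan.
  plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '&sql_id');
END;
/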
Note:
*The value of the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES parameter, whose default
value is FALSE, determines if the system should automatically capture SQL plan baselines. When
set to TRUE, the system records a plan history for SQL statements. The first plan for a specific
statement is automatically flagged as accepted. Alternative plans generated after this point are not
used until it is verified they do not cause performance degradations. Plans with acceptable
performance are added to the SQL plan baseline during the evolution phase.
*Managing SQL plan baselines involves three phases:
Capturing SQL Plan Baselines
Selecting SQL Plan Baselines
Evolving SQL Plan Baselines
*Evolving SQL Plan Baselines
Evolving a SQL plan baseline is the process by which the optimizer determines if non-accepted
plans in the baseline should be accepted. As mentioned previously, manually loaded plans are
automatically marked as accepted, so manual loading forces the evolving process. When plans
are loaded automatically, the baselines are evolved using the EVOLVE_SQL_PLAN_BASELINE
function, which returns a CLOB reporting its results.
Reference:SQL Plan Management in Oracle Database 11g Release 1
QUESTION 73
Examine the Exhibit and view the query and its execution plan.
Which statement is correct about the parallel executions plan?
Exhibit:
A. The CUSTOMERS and SALES tables are scanned simultaneously in parallel and then joined in
parallel.
B. First, the CUSTOMERS table is scanned in parallel, then the SALES table is scanned in
parallel, and then they are joined serially.
C. First, the SALES table is scanned in parallel, then the CUSTOMERS table is scanned in
parallel, and then they are joined in parallel.
D. The CUSTOMERS and SALES tables are scanned simultaneously in parallel and then joined
serially.
E. First, the CUSTOMERS table is scanned in parallel, then the SALES table is scanned in
parallel, and then they are joined in parallel.
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
As per exhibit:
Line 7 and line 11 are run in parallel.
Line 8 and line 12 are run in parallel.
Line 9 and line 13 are run in parallel.
Line 10 and line 14 are run in parallel.
Line 6 serially joins the two parallel scans (resulting from lines 7 and 11).
Note:
* PX BLOCK ITERATOR
The PX BLOCK ITERATOR row source represents the splitting up of the table EMP2 into pieces
so as to divide the scan workload between the parallel scan slaves.
QUESTION 74
Which three statements are true about the Automatic Tuning Optimizer (ATO)?
A. It identifies the objects with stale or missing statistics and gathers statistics automatically.
B. It investigates the effect of new or modified indexes on the access paths for a workload and
recommends running that statistics through the SQL Access Advisor.
C. It recommends a SQL profile to help create a better execution plan.
D. It picks up resource-intensive SQL statements from the ADDM and recommends the use of
materialized views to improve query performance.
E. It identifies the syntactic, semantic, or design problems with the structure of SQL statements
leading to poor performance and suggests restructuring the statements.
F. It identifies resource-intensive SQL statements, runs them through the SQL Tuning Advisor,
and implements the recommendations automatically.
Correct Answer: ADF
Section: (none)
Explanation
Explanation/Reference:
(Reviewer's suggested answers: D, F.)
Under tuning mode, the optimizer can take several minutes to tune a single
statement. It is both time and resource intensive to invoke Automatic Tuning Optimizer every time
a query must be hard-parsed. Automatic Tuning Optimizer is meant for complex and high-load
SQL statements that have nontrivial impact on the database.
Automatic Database Diagnostic Monitor (ADDM) proactively identifies high-load SQL statements
that are good candidates for SQL tuning (see Chapter 6, "Automatic Performance Diagnostics").
The automatic SQL tuning feature also automatically identifies problematic SQL statements and
implements tuning recommendations during system maintenance windows as an automated
maintenance task.
The Automatic Tuning Optimizer performs the following types of tuning analysis:
Statistics Analysis
SQL Profiling
Access Path Analysis
SQL Structure Analysis
Alternative Plan Analysis
Note:
*Oracle Database uses the optimizer to generate the execution plans for submitted SQL
statements. The optimizer operates in the following modes:
Normal mode
The optimizer compiles the SQL and generates an execution plan. The normal mode generates a
reasonable plan for the vast majority of SQL statements. Under normal mode, the optimizer
operates with very strict time constraints, usually a fraction of a second.
Tuning mode
The optimizer performs additional analysis to check whether it can further improve the plan
produced in normal mode. The optimizer output is not an execution plan, but a series of actions,
along with their rationale and expected benefit for producing a significantly better plan. When
running in tuning mode, the optimizer is known as the Automatic Tuning Optimizer.
QUESTION 75
In your database, the CURSOR_SHARING parameter is set to FORCE.
A user issues the following SQL statement:
Select * from SH.CUSTOMERS where REIGN=’NORTH’
Which two statements are correct?
A. The literal value ‘NORTH’ is replaced by a system-generated bind variable.
B. Bind peeking will not happen and subsequent executions of the statement with different literal
values will use the same plan.
C. Adaptive cursor sharing happens only if there is a histogram in the REIGN column of the
CUSTOMERS table.
D. Adaptive cursor sharing happens irrespective of whether there is a histogram in the REIGN
column of the CUSTOMERS table.
Correct Answer: AB
Section: (none)
Explanation
Explanation/Reference:
CURSOR_SHARING determines what kind of SQL statements can share the same
cursors.
Values:
* FORCE
Forces statements that may differ in some literals, but are otherwise identical, to share a cursor,
unless the literals affect the meaning of the statement.
* SIMILAR
Causes statements that may differ in some literals, but are otherwise identical, to share a cursor,
unless the literals affect either the meaning of the statement or the degree to which the plan is
optimized.
* EXACT
Only allows statements with identical text to share the same cursor.
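With CURSOR_SHARING=FORCE, as described above, the literal in the question's statement is replaced by a system-generated bind before the cursor is shared, which can be observed in V$SQL. A short sketch using the query exactly as it appears in the question:
ALTER SESSION SET cursor_sharing = FORCE;
SELECT * FROM sh.customers WHERE reign = 'NORTH';
-- V$SQL then shows the rewritten text with a system-generated bind, for example:
-- SELECT * FROM sh.customers WHERE reign = :"SYS_B_0"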
QUESTION 76
Examine Exhibit 1 to view the query and its execution plan.
Examine Exhibit 2 to view the structure and indexes for the EMPLOYEES and DEPARTMENTS
tables.
Examine Exhibit 3 to view the initialization parameters for the instance.
Why is sort-merge join chosen as the access method?
76_1 (exhibit):
76_2 (exhibit):
76_3 (exhibit):
A. Because the OPTIMIZER_MODE parameter is set to ALL_ROWS.
B. Because of an inequality condition.
C. Because the data is not sorted in the LAST_NAME column of the EMPLOYEES table.
D. Because of the LIKE operator used in the query to filter out records.
Correct Answer: A
Section: (none)
Explanation
Explanation/Reference:
Incorrect:
B: There is not an inequality condition in the statement.
C: Merge joins are beneficial if the columns are sorted.
D:All regular joins should be able to use Hash or Sort Merge, except LIKE, !=, and NOT ... joins.
Note:
*A sort merge join is a join optimization method where two tables are sorted and then joined.
*A "sort merge" join is performed by sorting the two data sets to be joined according to the join
keys and then merging them together. The merge is very cheap, but the sort can be prohibitively
expensive especially if the sort spills to disk. The cost of the sort can be lowered if one of the data
sets can be accessed in sorted order via an index, although accessing a high proportion of blocks
of a table via an index scan can also be very expensive in comparison to a full table scan.
*Sort merge joins are useful when the join condition between two tables is an inequality condition
(but not a nonequality) like <, <=, >, or >=. Sort merge joins perform better than nested loop joins
for large data sets. You cannot use hash joins unless there is an equality condition.
*When the Optimizer Uses Sort Merge Joins
The optimizer can choose a sort merge join over a hash join for joining large amounts of data if
any of the following conditions are true:
/The join condition between two tables is not an equi-join.
/Because of sorts already required by other operations, the optimizer finds it is cheaper to use a
sort merge than a hash join.
Reference: OracleDatabase Performance Tuning Guide,Sort Merge Joins
QUESTION 77
Which four types of column filtering may benefit from partition pruning when accessing tables via
partitioned indexes?
A. Equality operators on List-Partitioned Indexes
B. Not Equal operators on Global Hash-Partitioned Indexes
C. Equality operators on System-Partitioned Tables
D. In-List operators on Range-Partitioned Indexes
E. Not Equal operators on Local Hash-Partitioned Indexes
F. Equality operators on Range-Partitioned Indexes
G. Equality operators on Hash-Partitioned Indexes
Correct Answer: ADFG
Section: (none)
Explanation
Explanation/Reference:
Oracle Database prunes partitions when you use range, LIKE, equality (A, F), and
IN-list (D) predicates on the range or list partitioning columns, and when you use equality (G) and
IN-list predicates on the hash partitioning columns.
Reference:Oracle Database VLDB and Partitioning Guide11g,Information that can be Used for
Partition Pruning
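For instance, on a table range-partitioned on TIME_ID (a hypothetical example, not the exam exhibit), both of these predicates allow the optimizer to prune partitions:
SELECT * FROM sales WHERE time_id = DATE '1999-07-01';
SELECT * FROM sales WHERE time_id IN (DATE '1999-07-01', DATE '1999-08-01');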
QUESTION 78
You executed the following statement:
Which three statements are true about EXPLAIN PLAN?
A. The execution plan is saved in PLAN_TABLE without executing the query.
B. The execution plan for the query is generated and displayed immediately as the output.
C. The execution plan generated may not necessarily be the execution plan used during query
execution.
D. The execution plan is saved in DBA_HIST_SQL_PLAN without executing the query.
E. The execution plan generated can be viewed using the DBMS_XPLAN.DISPLAY function.
F. The execution plan generated can be fetched from the library cache by using the
DBMS_XPLAN.DISPLAY function.
Correct Answer: ACE
Section: (none)
Explanation
Explanation/Reference:
* (A, not D): The explain plan process stores data in the PLAN_TABLE.
*EXPLAIN PLAN
The EXPLAIN PLAN method doesn't require the query to be run(A), greatly reducing the time it
takes to get an execution plan for long-running queries compared to AUTOTRACE.
E: Use the DBMS_XPLAN.DISPLAY function to display the execution plan.
*The DBMS_XPLAN package provides an easy way to display the output of the EXPLAIN PLAN
command in several, predefined formats. You can also use the DBMS_XPLAN package to display
the plan of a statement stored in the Automatic Workload Repository (AWR) or stored in a SQL
tuning set. It further provides a way to display the SQL execution plan and SQL execution runtime
statistics for cached SQL cursors based on the information stored in the V$SQL_PLAN and
V$SQL_PLAN_STATISTICS_ALL fixed views.
Note:
*
First the query must be explained.
SQL> EXPLAIN PLAN FOR
2 SELECT *
3 FROM emp e, dept d
4 WHERE e.deptno = d.deptno
5 AND e.ename = 'SMITH';
Explained.
SQL>
Then the execution plan displayed.(not B)
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql
Plan Table
--------------------------------------------------------------------------------
| Operation                 | Name    | Rows | Bytes| Cost | Pstart| Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT          |         |      |      |      |       |       |
|  NESTED LOOPS             |         |      |      |      |       |       |
|   TABLE ACCESS FULL       |EMP      |      |      |      |       |       |
|   TABLE ACCESS BY INDEX RO|DEPT     |      |      |      |       |       |
|    INDEX UNIQUE SCAN      |PK_DEPT  |      |      |      |       |       |
--------------------------------------------------------------------------------
8 rows selected.
SQL>
For parallel queries use the "utlxplp.sql" script instead of "utlxpls.sql".
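Equivalently, the plan just explained can be displayed with the DBMS_XPLAN.DISPLAY function referenced in answer E:
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);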
QUESTION 79
Which three factors does the estimator depend on for overall cost estimation of a given execution
plan?
A. Cardinality
B. Sort area size
C. OPTIMIZER_FEATURES_ENABLE parameter
D. NOT NULL_FEATURE_ENABLE parameter
E. NOT NULL constraint on a unique key column
F. Library cache size
G. The units of work such as disk input/output, CPU usage, and memory used in an operation
Correct Answer: ACG
Section: (none)
Explanation
Explanation/Reference:
C:OPTIMIZER_FEATURES_ENABLE acts as an umbrella parameter for enabling a
series of optimizer features based on an Oracle release number.
Note:The estimator determines the overall cost of a given execution plan. The estimator generates
three different types of measures to achieve this goal:
*Selectivity
This measure represents a fraction of rows from a row set. The selectivity is tied to a query
predicate, such as last_name='Smith', or a combination of predicates.
*Cardinality
This measure represents the number of rows in a row set.
*Cost
This measure represents units of work or resource used. The query optimizer uses disk I/O, CPU
usage, and memory usage as units of work.
If statistics are available, then the estimator uses them to compute the measures. The statistics
improve the degree of accuracy of the measures.
QUESTION 80
Examine the parallelism parameters for you instance.
Now examine the DSS_PLAN with parallel statement directives:
Which two are true about the DSS_PLAN resource manager plan?
A. URGENT_GROUPS sessions will always be dequeued before sessions from other groups.
B. OTHER_GROUPS sessions are queued for maximum of six minutes.
C. ETL_GROUP sessions can collectively consume 64 parallel execution servers before queuing
starts for this consumer group.
D. An ETL_GRP session will be switched to URGENT_GROUPS if the session requests more
than eight parallel executions servers.
E. URGENT_GROUP sessions will not be queued if 64 parallel execution servers are busy
because their PARALLEL_TARGET_PERCENTAGE is not specified.
Correct Answer: AB
Section: (none)
Explanation
Explanation/Reference:
B:PARALLEL_QUEUE_TIMEOUT
Parallel Queue Timeout
When you use parallel statement queuing, if the database does not have sufficient resources to
execute a parallel statement, the statement is queued until the required resources become
available. However, there is a chance that a parallel statement may be waiting in the parallel
statement queue for longer than is desired. You can prevent such scenarios by specifying the
maximum time a parallel statement can wait in the parallel statement queue.
The PARALLEL_QUEUE_TIMEOUT directive attribute enables you to specify the maximum time,
in seconds, that a parallel statement can wait in the parallel statement queue before it is timed out.
The PARALLEL_QUEUE_TIMEOUT attribute can be set for each consumer group.
Incorrect:
Not C: ETL_GROUP PARALLEL_TARGET_PERCENTAGE is 50%. So ETL_GROUP can only
consume 32 servers.
Note:
*If you want more per-workload management, you must use the following directive attributes:
/MGMT_Pn
Management attributes control how a parallel statement is selected from the parallel statement
queue for execution. You can prioritize the parallel statements of one consumer group over
another by setting a higher value for the management attributes of that group.
/PARALLEL_TARGET_PERCENTAGE
/PARALLEL_QUEUE_TIMEOUT
/PARALLEL_DEGREE_LIMIT_P1
* PARALLEL_DEGREE_LIMIT_P1
Degree of Parallelism Limit
You can limit the maximum degree of parallelism for any operation within a consumer group. The
degree of parallelism is the number of parallel execution servers that are associated with a single
operation. Use the PARALLEL_DEGREE_LIMIT_P1 directive attribute to specify the degree of
parallelism for a consumer group.
The degree of parallelism limit applies to one operation within a consumer group; it does not limit
the total degree of parallelism across all operations within the consumer group. However, you can
combine both the PARALLEL_DEGREE_LIMIT_P1 and the
PARALLEL_TARGET_PERCENTAGE directive attributes to achieve the desired control.
* PARALLEL_TARGET_PERCENTAGE
Parallel Target Percentage
It is possible for a single consumer group to launch enough parallel statements to use all the
available parallel servers. If this happens, when a high-priority parallel statement from a different
consumer group is run, no parallel servers are available to allocate to this group. You can avoid
such a scenario by limiting the number of parallel servers that can be used by a particular
consumer group.
Use the PARALLEL_TARGET_PERCENTAGE directive attribute to specify the maximum
percentage of the parallel server pool that a particular consumer group can use. The number of
parallel servers used by a particular consumer group is counted as the sum of the parallel servers
used by all sessions in that consumer group.
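A sketch of how such directive attributes might be set for the ETL_GROUP in DSS_PLAN. The numeric values are illustrative placeholders only, and the creation of the plan and the consumer groups themselves is omitted:
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                       => 'DSS_PLAN',
    group_or_subplan           => 'ETL_GROUP',
    comment                    => 'ETL workload directive',
    parallel_degree_limit_p1   => 8,    -- cap the DOP of any single operation
    parallel_target_percentage => 50,   -- cap the group's share of the parallel server pool
    parallel_queue_timeout     => 360); -- maximum queue wait, in seconds
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/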
QUESTION 81
Which three statements are true about the usage of optimizer hints?
A. Whenever a query uses table aliases, the hints in the query must use the aliases.
B. The OPTIMIZER_FEATURES_ENABLE parameter must be set to a version supports the hints
used.
C. The optimizer uses the execution plan with lower cost even if a hint is specified.
D. A schema name for the table must be used in the hint if the table is qualified in the FROM
clause.
E. Hints can be used to override the optimization approach specified with the OPTIMIZER_MODE
parameter.
F. A statement block can have only one hint, and that hint must be immediately after SELECT,
UPDATE, INSERT, MERGE, or DELETE keyword.
Correct Answer: ACE
Section: (none)
Explanation
Explanation/Reference:
(Reviewer's suggested answers: A, E.)
A:You must specify the table to be accessed exactly as it appears in the statement.
If the statement uses an alias for the table, then use the alias rather than the table name in the
hint.
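For example (the index name EMP_NAME_IX is hypothetical):
SELECT /*+ INDEX(e emp_name_ix) */ e.last_name
FROM   employees e
WHERE  e.last_name = 'Smith';
-- The hint must reference the alias "e"; a hint written against the table name
-- "employees" would not be honored here because the statement uses an alias.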
E:If a SQL statement has a hint specifying an optimization approach and goal, then the optimizer
uses the specified approach regardless of the presence or absence of statistics, the value of the
OPTIMIZER_MODE initialization parameter, and the OPTIMIZER_MODE parameter of the ALTER
SESSION statement.
Note:
*Optimizer hints can be used with SQL statements to alter execution plans.
*Hints let you make decisions usually made by the optimizer. As an application designer, you
might know information about your data that the optimizer does not know. Hints provide a
mechanism to instruct the optimizer to choose a certain query execution plan based on the
specific criteria.
For example, you might know that a certain index is more selective for certain queries. Based on
this information, you might be able to choose a more efficient execution plan than the optimizer. In
such a case, use hints to instruct the optimizer to use the optimal execution plan.
*Hints apply only to the optimization of the block of a statement in which they appear. A statement
block is any one of the following statements or parts of statements:
A simple SELECT, UPDATE, or DELETE statement
A parent statement or subquery of a complex statement
A part of a compound query
Reference:Oracle Database Performance Tuning Guide,Using Optimizer Hints
QUESTION 82
Which three tasks are performed by the parallel execution coordinator process?
A. Allocating parallel execution processes from the parallel execution server pool.
B. Determining the parallel execution method for each operation in the execution plan.
C. Managing the data flow between the producers and consumers during inter-operation
parallelism.
D. Any serial processing that is part of the execution plan.
E. Determining the desired number of parallel execution processes
F. Managing the data flow between the producers and consumers during intra-operation
parallelism.
Correct Answer: ABC
Section: (none)
Explanation
Explanation/Reference:
A:When executing a parallel operation, the parallel execution coordinator obtains
parallel execution servers from the pool and assigns them to the operation. If necessary, Oracle
Database can create additional parallel execution servers for the operation. These parallel
execution servers remain with the operation throughout execution. After the statement has been
processed completely, the parallel execution servers return to the pool.
B:
*The parallel execution coordinator examines each operation in a SQL statement's execution plan
then determines the way in which the rows operated on by the operation must be divided or
redistributed among the parallel execution servers.
*After the optimizer determines the execution plan of a statement, the parallel execution
coordinator determines the parallel execution method for each operation in the plan.
Note:
*Oracle Database can process a parallel operation with fewer than the requested number of
processes. If all parallel execution servers in the pool are occupied and the maximum number of
parallel execution servers has been started, the parallel execution coordinator switches to serial
processing.
Reference:Oracle Database VLDB and Partitioning Guide11g,How Parallel Execution Works
QUESTION 83
You need to upgrade your Oracle Database 10g to 11g. You want to ensure that the same SQL
plans that are currently in use in the 10g database are used in the upgraded database initially, but
new, better plans are allowed subsequently.
Steps to accomplish the task:
1. Set the OPTIMIZER_USE_SQL_BASELINE and
OPTIMIZER_CAPTURE_SQL_PLAN_BASELINE to TRUE.
2. Bulk load the SQL Management Base as part of an upgrade using an STS containing the plans
captured in Oracle Database 10g.
3. Evolve the plan baseline using the DBMS_SPM.EVOLVE_PLAN_BASELINE procedure.
4. Fix the plan baseline – using the DBMS_SPM.ALTER_SQL_PLANBASELINE procedure.
5. Accept new, better plans using the DBMS_SPM.ALTER_SQL_PLAN_BASELINE procedure
and manually load them to the existing baseline.
6. Set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES to FALSE.
Identify the required steps.
A.
B.
C.
D.
E.
F.
1, 3, 4, 5
1, 6, 3, 4, 5
1, 2, 3, 5
1, 2, 3, 4
1, 6, 3
1 and 2
Correct Answer: F
Section: (none)
Explanation
Explanation/Reference:
* (1)OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES
In Oracle Database 11g a new feature called SQL Plan Management (SPM) has been introduced
to guarantee that any plan changes that do occur lead to better performance. When
OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES is set to TRUE (default FALSE) Oracle will
automatically capture a SQL plan baseline for every repeatable SQL statement on the system.
The execution plan found at parse time will be added to the SQL plan baseline as an accepted
plan.
* (2)Once you have completed the software upgrade, but before you restart the applications and
allow users back on the system, you should populate SQL Plan Management (SPM) with the 10g
execution plans you captured before the upgrade. Seeding SPM with the 10g execution plans
ensures that the application will continue to use the same execution plans you had before the
upgrade.Any new execution plans found in Oracle Database 11g will be recorded in the plan
history for that statement but they will not be used. When you are ready you can evolve or verify
the new plans and only implement those that perform better than the 10g plan.
Incorrect:
Not (3):DBMS_SPM.EVOLVE_PLAN_BASELINEis not used to evolve new
plans.DBMS_SPM.EVOLVE_SQL_PLAN_BASELINEshould be used:
It is possible to evolve a SQL statement’s execution plan using Oracle Enterprise Manager or by
running the command-line function DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE.
Note:
*SQL plan management (SPM) ensures that runtime performance will never degrade due to the
change of an execution plan. To guarantee this, only accepted (trusted) execution plans will be
used; any plan will be tracked and evaluated at a later point in time and only accepted as verified
if the new plan performs better than an accepted plan. SQL Plan Management has three main
components:
1. SQL plan baseline capture:
Create SQL plan baselines that represents accepted execution plans for all relevant SQL
statements. The SQL plan baselines are stored in a plan history inside the SQL
Management Base in the SYSAUX tablespace.
2. SQL plan baseline selection
Ensure that only accepted execution plans are used for statements with a SQL plan
baseline and track all new execution plans in the history for a statement as unaccepted
plan. The plan history consists of accepted and unaccepted plans. An unaccepted plan
can be unverified (newly found but not verified) or rejected (verified but not found to
be performant).
3. SQL plan baseline evolution
Evaluate all unverified execution plans for a given statement in the plan history to
become either accepted or rejected
QUESTION 84
You are administering a database supporting an OLTP workload where the users perform
frequent queries for fetching new rows as quickly as possible, involving join operations on recently
inserted data. In addition, at night, a few DSS queries are also performed. Examine the initialization
parameters for the instance:
Which two options would you use for the optimizer?
A. Set the OPTIMIZER_MODE initialization parameter to FIRST_ROWS_n.
B. Add the hint ALL_ROWS in the DSS queries.
C. Set the OPTIMIZER_INDEX_CACHING initialization parameter to 0.
D. Add a hint INDEX_COMBINE in all DSS queries.
E. Set the OPTIMIZER_INDEX_COST_ADJ initialization parameter to 100.
Correct Answer: AE
Section: (none)
Explanation
Explanation/Reference:
A: The last appended rows are more likely to be found quickly with FIRST_ROWS_n.
E: Setting it to 100 (the default) makes the optimizer evaluate index access paths at their regular
cost, rather than favoring index access over a full table scan.
OPTIMIZER_INDEX_COST_ADJ
OPTIMIZER_INDEX_COST_ADJ lets you tune optimizer behavior for access path selection to be
more or less index friendly—that is, to make the optimizer more or less prone to selecting an index
access path over a full table scan.
The default for this parameter is 100 percent, at which the optimizer evaluates index access paths
at the regular cost. Any other value makes the optimizer evaluate the access path at that
percentage of the regular cost. For example, a setting of 50 makes the index access path look half
as expensive as normal.
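A sketch of the two settings from answers A and E; FIRST_ROWS_10 is just one of the possible FIRST_ROWS_n values:
ALTER SYSTEM SET optimizer_mode = FIRST_ROWS_10;
-- 100 is the default: index access paths are costed normally.
ALTER SYSTEM SET optimizer_index_cost_adj = 100;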
QUESTION 85
See the table below:
All execution servers are currently available and the sessions use defaults for all parallel settings.
In which two cases will statements execute in parallel?
A. When parallel hints are used but only if estimated serial execution takes more than 10 seconds.
B. When parallelism is defined at the statement level.
C. When the degree of parallelism is explicitly defined in the data dictionary for tables and indexes
accessed by a query.
D. Parallel DDL statements but only if estimated serial DDL execution time is greater than 10
seconds.
E. When the degree of parallelism is explicitly defined for tables and indexes but only if estimated
serial execution takes more than 10 seconds.
Correct Answer: BC
Section: (none)
Explanation
Explanation/Reference:
Incorrect:
A, D, E: When PARALLEL_MIN_TIME_THRESHOLD is set to AUTO,
the PARALLEL_MIN_TIME_THRESHOLD is 30 seconds, not 10. See note below.
Note:
* parallel_min_time_threshold
PARALLEL_MIN_TIME_THRESHOLD specifies the minimum execution time a statement should
have before the statement is considered for automatic degree of parallelism. By default, this is set
to 30 seconds. Automatic degree of parallelism is only enabled
if PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.
QUESTION 86
You need to migrate a database from Oracle Database 10g to 11g. You want the SQL workload to
start with the 10g plans in the 11g database instance and evolve better plans.
Examine the following steps:
1. Capture the pre-Oracle Database 11g plans in a SQL Tuning Set (STS)
2. Export the STS from the 10g system.
3. Import the STS into Oracle Database 11g.
4. Set the OPTIMIZER_FEATURES_ENABLE parameter to 10.2.0.
5. Run SQL Performance Analyzer for the STS.
6. Set the OPTIMIZER_FEATURES_ENABLE parameter to 11.2.0.
7. Rerun the SQL Performance Analyzer for the STS.
8. Set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINE to TRUE.
9. Use DBMS_SPM.EVOLVE_SQL_BASELINE function to evolve the plans.
10. Set the OPTIMIZER_USE_SQL_PLAN_BASELINE to TRUE.
Identify the required steps in the correct order.
A. 1, 2, 3, 4, 5, 6, 7
B. 4, 8, 10
C. 1, 2, 3, 4, 8, 10
D. 1, 2, 3, 6, 9, 5
E. 1, 2, 3, 5, 9, 10
Correct Answer: C
Section: (none)
Explanation
Explanation/Reference:
Step 1: (1)
Step 2: (2)
Step 3: (3)
Step 4: (4)
By setting the parameter OPTIMIZER_FEATURES_ENABLE to the 10g version used before the
upgrade, you should be able to revert back to the same execution plans you had prior to the
upgrade.
Step 5: (8)
OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES
In Oracle Database 11g a new feature called SQL Plan Management (SPM) has been introduced
to guarantee that any plan changes that do occur lead to better performance. When
OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES is set to TRUE (default FALSE) Oracle will
automatically capture a SQL plan baseline for every repeatable SQL statement on the system.
The execution plan found at parse time will be added to the SQL plan baseline as an accepted
plan.
Step 6: (10)
OPTIMIZER_USE_SQL_PLAN_BASELINES enables or disables the use of SQL plan baselines
stored in SQL Management Base. When enabled, the optimizer looks for a SQL plan baseline for
the SQL statement being compiled. If one is found in SQL Management Base, then the optimizer
will cost each of the baseline plans and pick one with the lowest cost.
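A sketch of the parameter settings in steps 4, 8, and 10; the '10.2.0.4' value is only an example of a 10g release number, and the release the database actually ran before the upgrade should be used:
ALTER SYSTEM SET optimizer_features_enable = '10.2.0.4';
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;
ALTER SYSTEM SET optimizer_use_sql_plan_baselines = TRUE;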
QUESTION 87
A database instance is configured in the shared server mode and it supports multiple applications
running on a middle tier. These applications connect to the database using different services. You
enabled the statistics gathering for the service by using the following command:
SQL > EXECUTE DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE (‘APPS1’, NULL, NULL);
Which two statements are true regarding statistics gathered for APPS1 service?
A. The statistics are gathered for all the modules and actions within the service.
B. The statistics are collected at the session level for all sessions connected using the service.
C. The statistics are aggregated and stored in the V$SERV_MOD_ACT_STATS view.
D. The statistics are gathered for all the modules using the service only when
DBMS_APPLICATION_INFO.SET_MODULE is executed to register with the service.
E. Statistics gathering is enabled only for the subsequent sessions using the service.
F. The statistics are gathered for all the applications using the service only when
DBMS_APPLICATION_INFO.SET_ACTION is executed to register with the service.
Correct Answer: BC
Section: (none)
Explanation
Explanation/Reference:
(Reviewer's suggested answers: A, B, C.)
SERV_MOD_ACT_STAT_ENABLE Procedure
This procedure enables statistic gathering for a given combination of Service Name, MODULE and
ACTION. Calling this procedure enables statistic gathering for a hierarchical combination of
Service name, MODULE name, and ACTION name on all instances for the same database.
Statistics are accessible by means of the V$SERV_MOD_ACT_STATS view.
Note:
*Syntax
DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(
service_name IN VARCHAR2,
module_name IN VARCHAR2,
action_name IN VARCHAR2 DEFAULT ALL_ACTIONS);
Parameter,Description
service_name
Name of the service for which statistic aggregation is enabled
module_name
Name of the MODULE. An additional qualifier for the service. It is a required parameter.
action_name
Name of the ACTION. An additional qualifier for the Service and MODULE name. Omitting the
parameter (or supplying ALL_ACTIONS constant) means enabling aggregation for all Actions for a
given Server/Module combination. In this case, statistics are aggregated on the module level.
Reference:Oracle Database PL/SQL Packages and Types Reference11g,
SERV_MOD_ACT_STAT_ENABLE Procedure
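The aggregated figures from answer C can then be read back, for example:
SELECT module, action, stat_name, value
FROM   v$serv_mod_act_stats
WHERE  service_name = 'APPS1';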
QUESTION 88
In which three situations must you collect optimizer statistics manually for database objects in
addition to automatic statistics collection?
A. When substantial DML activity occurs between the nightly automatic stats gathering
maintenance job
B. When substantial activity occurs on a partition of the partitioned table.
C. When a table is used for bulk loads that add 10% or more to the total size of the table
D. When an index is created or dropped for a column
E. When the degree of parallelism is explicitly defined for a table
Correct Answer: ABC
Section: (none)
Explanation
Explanation/Reference:
(Reviewer's suggested answers: B, C.)
When to Gather Statistics
When gathering statistics manually, you not only need to determine how to gather statistics, but
also when and how often to gather new statistics.
For an application in which tables are being incrementally modified, you may only need to gather
new statistics every week or every month. The simplest way to gather statistics in these
environments is to use a script or job scheduling tool to regularly run the
GATHER_SCHEMA_STATS and GATHER_DATABASE_STATS procedures. The frequency of
collection intervals should balance the task of providing accurate statistics for the optimizer against
the processing overhead incurred by the statistics collection process.
(C)For tables which are being substantially modified in batch operations, such as with bulk loads,
statistics should be gathered on those tables as part of the batch operation. The DBMS_STATS
procedure should be called as soon as the load operation completes.
For partitioned tables, there are often cases in which only a single partition is modified. In those
cases, statistics can be gathered only on those partitions rather than gathering statistics for the
entire table. However, gathering global statistics for the partitioned table may still be necessary.
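For example, after a bulk load into a single partition, statistics could be gathered just for that partition; the schema, table, and partition names below are hypothetical:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'SH',
    tabname     => 'SALES',
    partname    => 'SALES_Q3_1999',
    granularity => 'PARTITION');
END;
/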
QUESTION 89
Examine the query and its execution plan:
Which two statements are true regarding the execution plan?
A. For every row of the CUSTOMERS table, the rows matching the join predicate from the ORDERS
table are returned.
B. An outer join returns NULL for the ORDERS table columns along with the CUSTOMERS table
rows when it does not find any corresponding rows in the ORDERS table.
C. The data is aggregated from the ORDERS table before joining to CUSTOMERS.
D. The NESTED LOOP OUTER join is performed because the OPTIMIZER_MODE parameter is
set to ALL_ROWS.
Correct Answer: BD
Section: (none)
Explanation
Explanation/Reference:
B:An outer join extends the result of a simple join. An outer join returns all rows that
satisfy the join condition and also returns some or all of those rows from one table for which no
rows from the other satisfy the join condition.
Note:
*All_rows attempts to optimize the query to get the very last row as fast as possible.
This makes sense in a stored procedure for example where the client does not regain
control until the stored procedure completes. You don't care if you have to wait to get
the first row if the last row gets back to you twice as fast. In a client
server/interactive application you may well care about that.
*The optimizer uses nested loop joins to process an outer join in the following circumstances:
/ It is possible to drive from the outer table to inner table.
/ Data volume is low enough to make the nested loop method efficient.
*First_rows attempts to optimize the query to get the very first row back to the client as
fast as possible. This is good for an interactive client server environment where the
client runs a query and shows the user the first 10 rows or so and waits for them to page
down to get more.
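Answer B describes standard outer join semantics; a short illustration with hypothetical column names:
SELECT c.customer_id, o.order_id
FROM   customers c
LEFT OUTER JOIN orders o ON o.customer_id = c.customer_id;
-- CUSTOMERS rows with no matching ORDERS row are returned with NULL in o.order_id.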
QUESTION 90
An application frequently executes similar types of queries that vary only in the usage of literals in
the WHERE clause. You plan to use bind variables in place of literal values.
The CURSOR_SHARING parameter is set to EXACT.
Which two statements are true about the usage of bind variables?
A. The number of latch gets in the library cache will be reduced.
B. Bind peeking will take place and subsequent execution of queries can have different plans
based on the cardinality of the bind value in the column.
C. Bind peeking will take place and subsequent execution of queries can have different plans only
when the histograms exists on a column used in WHERE clause.
D. Bind peeking will not happen and the optimizer will use the same plan for all bind values if no
histograms exist on a column use in the WHERE clause.
E. Bind peeking will happen and subsequent execution of queries will have the same parent cursor
but different child cursors.
Correct Answer: AB
Section: (none)
Explanation
Explanation/Reference:
CURSOR_SHARING determines what kind of SQL statements can share the same
cursors.
EXACT
Only allows statements with identical text to share the same cursor.
Note:
EXACT-This is the default setting. With this value in place, the query is not rewritten to use bind
variables.
With CURSOR_SHARING=EXACT (the default), every unique SQL statement executed will create
a new entry in V$SQL, it will be hard-parsed, and an execution plan will be created just for it.
There can be hundreds or thousands of very similar queries in the shared pool that differ only in
the literals used in the SQL statement itself. This implies that the database is forced to hard-parse
virtually every query, which, in turn, not only consumes a lot of CPU cycles but also leads to
decreased scalability.
The database just cannot hard-parse hundreds or thousands of SQL statements concurrently—the
application ends up waiting for the shared pool to become available. One of the major scalability
inhibitors in the database is not using bind variables. That was the motivation behind adding
CURSOR_SHARING=FORCE .
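One way to spot such literal-only variations before introducing bind variables is to group cursors by FORCE_MATCHING_SIGNATURE; a rough diagnostic sketch (the threshold of 10 is arbitrary):
SELECT force_matching_signature, COUNT(*) AS versions
FROM   v$sql
WHERE  force_matching_signature <> 0
GROUP  BY force_matching_signature
HAVING COUNT(*) > 10
ORDER  BY versions DESC;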
QUESTION 91
Examine the execution plan:
Which two are true regarding the execution plan?
A. The CUSTOMERS table is hash partitioned.
B. The SALES table is hash partitioned.
C. The CUSTOMERS table is scanned first and selected partitions from the SALES table are
scanned based on the BLOOM Filter created during the scan of the CUSTOMERS table.
D. The SALES table is scanned first and selected partitions from the CUSTOMERS table are
scanned based on the Bloom Filter created during the scan of the SALES table.
E. Both the CUSTOMERS and SALES tables are scanned simultaneously and rows from the
CUSTOMERS table are joined to row of the SALES table.
F. The CUSTOMERS table is range partitioned.
Correct Answer: BC
Section: (none)
Explanation
Explanation/Reference:
B: As per line 14 and 15.
C: As per the exhibit, line 13 is executed before line 15.
Incorrect:
Not E: As per line 13 and 15 – they are not executed simultaneously.
QUESTION 92
Which two tasks are performed during the optimization stage of a SQL statement?
A. Evaluating the expressions and conditions in the query
B. Checking the syntax and analyzing the semantics of the statement
C. Separating the clauses of the SQL statement into structures that can be processed
D. Inspecting the integrity constraints and optimizing the query based on this metadata
E. Gathering the statistics before creating the execution plan for the statement
Correct Answer: DE
Section: (none)
Explanation
Explanation/Reference:
Note:
*Oracle SQL is parsed before execution, and a hard parse includes these steps:
*
The parsing process performs two main functions:
o Syntax Check: is the statement a valid one? Does it make sense given the SQL grammar
documented in the SQL Reference Manual? Does it follow all of the rules for SQL?
o Semantic Analysis: going beyond the syntax, is the statement valid in light of the
objects in the database (do the tables and columns referenced exist)? Do you have access
to the objects, and are the proper privileges in place? Are there ambiguities in the
statement? For example, if there are two tables T1 and T2 and both have a column X, the
query "select X from T1, T2 where ..." is ambiguous; we don't know which table to get X
from. And so on.
So, you can think of parsing as basically a two-step process: a syntax check to
verify the validity of the statement, and a semantic check to ensure the
statement can execute properly.
Reference:Oracle hard-parse vs. soft parse
QUESTION 93
Examine Exhibit 1 to view the structure and indexes for the EMPLOYEES table.
Examine the query:
SQL> SELECT * FROM employees WHERE employee_id IN (7876, 7900, 7902);
EMPLOYEE_ID is a primary key in the EMPLOYEES table that has 50000 rows.
Which statement is true regarding the execution of the query?
Exhibit:
A. The query uses an index skip scan on the EMP_EMP_ID_PK index to fetch the rows.
B. The query uses the INLIST ITERATOR operator to iterate over the enumerated value list, and
values are evaluated using an index range scan on the EMP_EMP_ID_PK index.
C. The query uses the INLIST ITERATOR operator to iterate over the enumerated value list, and
values are evaluated using a fast full index scan on the EMP_EMP_ID_PK index.
D. The query uses the INLIST ITERATOR operator to iterate over the enumerated value list, and
values are evaluated using an index unique scan on the EMP_EMP_ID_PK index.
E. The query uses a fast full index scan on the EMP_EMP_ID_PK index fetch the rows.
Correct Answer: B
Section: (none)
Explanation
Explanation/Reference:
How the CBO Evaluates IN-List Iterators
The IN-list iterator is used when a query contains an IN clause with values. The execution plan is
identical to what would result for a statement with an equality clause instead of IN except for one
additional step. That extra step occurs when the IN-list iterator feeds the equality clause with
unique values from the IN-list.
Both of the following statements are equivalent and produce the same
plan.
Example 2-1 IN-List Iterators Initial Statement
SELECT header_id, line_id, revenue_amount
FROM so_lines_all
WHERE header_id IN (1011,1012,1013);
SELECT header_id, line_id, revenue_amount
FROM so_lines_all
WHERE header_id = 1011
OR header_id = 1012
OR header_id = 1013;
Plan
-------------------------------------------------
SELECT STATEMENT
  INLIST ITERATOR
    TABLE ACCESS BY INDEX ROWID SO_LINES_ALL
      INDEX RANGE SCAN SO_LINES_N1
Reference:Database Performance Tuning Guide and Reference
QUESTION 94
You are administering a database supporting a DSS workload in which some tables are updated
frequently but not queried often. You have SQL plan baselines for these tables and you do not want
the automatic maintenance task to gather statistics for these tables regularly.
Which task would you perform to achieve this?
A. Set the INCREMENTAL statistics preference to FALSE for these tables.
B. Set the STALE_PERCENT statistics preference to a higher value for these tables.
C. Set the GRANULARITY statistics preference to AUTO for these tables.
D. Set the PUBLISH statistics preference to TRUE for these tables.
Correct Answer: B
Section: (none)
Explanation
Explanation/Reference:
With the DBMS_STATS package you can view and modify optimizer statistics
gathered for database objects.
STALE_PERCENT - This value determines the percentage of rows in a table that have to change
before the statistics on that table are deemed stale and should be regathered. The default value is
10%.
Reference:OracleDatabase PL/SQL Packages and Types Reference
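A sketch of answer B for one such table; the schema and table names, and the 50 percent value, are illustrative:
BEGIN
  DBMS_STATS.SET_TABLE_PREFS(
    ownname => 'SH',
    tabname => 'SALES',
    pname   => 'STALE_PERCENT',
    pvalue  => '50');
END;
/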
QUESTION 95
Your database has the OLTP_SRV service configured for an OLTP application running on a
middle tier. This service is used to connect to the database by using connection pools. The
application has three modules. You enabled tracing at the service by executing the following
command:
SQL> exec DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE (‘OLTP_SRV’);
What is the correct method of consolidating the trace files generated by the procedure?
A. Use all trace files as input for the tkprof utility to consolidate the trace files for a module.
B. Use one trace file at a time as input for the trcsess utility and use the tkprof utility to consolidate all
the output files for a module.
C. Use the trcsess utility to consolidate all trace files into a single output file, which can then be
processed by the tkprof utility.
D. Use the tkprof utility to consolidate the trace files and create an output that can directly be used
for diagnostic purposes.
Correct Answer: C
Section: (none)
Explanation
Explanation/Reference:
Note:
*Oracle provides the trcsess command-line utility that consolidates tracing information based on
specific criteria.
The SQL Trace facility and TKPROF are two basic performance diagnostic tools that can help you
monitor applications running against the Oracle Server.
Note:SERV_MOD_ACT_TRACE_ENABLE Procedure
Enables SQL tracing for a given combination of Service Name, MODULE and ACTION globally
unless an instance_name is specified
Reference: OracleDatabase Performance Tuning Guide
QUESTION 96
Which statement is true about the automatic SQL tuning task?
A. It will attempt to tune the currently running SQL statements that are highly resource intensive.
B. It will automatically implement new SQL profiles for the statements that have existing SQL
profiles.
C. It will attempt to tune all-long-running queries that have existing SQL profiles.
D. It will automatically implement SQL profiles if a three-fold benefit can be achieved and
automatic profile implementation is enabled.
E. It will tune all the top SQL statements from AWR irrespective of the time it takes to complete the
task in a maintenance window.
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
D: Optionally, the task implements the SQL profiles, provided they meet the criterion of a threefold
performance improvement.
The database considers other factors when deciding whether to implement the SQL profile. For
example, the database does not implement a profile when the objects referenced in the statement
have stale optimizer statistics. SQL profiles that have been implemented automatically show type
is AUTO in the DBA_SQL_PROFILES view.
If the database uses SQL plan management, and if a SQL plan baseline exists for the SQL
statement, then the database adds a new plan baseline when creating the SQL profile. As a result,
the optimizer uses the new plan immediately after profile creation.
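Automatic profile implementation (answer D) is controlled through the ACCEPT_SQL_PROFILES parameter of the automatic tuning task, for example:
BEGIN
  DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(
    task_name => 'SYS_AUTO_SQL_TUNING_TASK',
    parameter => 'ACCEPT_SQL_PROFILES',
    value     => 'TRUE');
END;
/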
Incorrect:
E:Oracle Database automatically runs SQL Tuning Advisor on selected high-load SQL statements
from the Automatic Workload Repository (AWR) that qualify as tuning candidates. This task,
called Automatic SQL Tuning, runs in the default maintenance windows on a nightly basis. By
default, automatic SQL tuning runs for at most one hour.
Note:
After automatic SQL tuning begins, the database performs the following steps:
Oracle Database analyzes statistics in AWR and generates a list of potential SQL statements that
are eligible for tuning. These statements include repeating high-load statements that have a
significant impact on the database.
The database tunes only SQL statements that have an execution plan with a high potential for
improvement. The database ignores recursive SQL and statements that have been tuned recently
(in the last month), parallel queries, DML, DDL, and SQL statements with performance problems
caused by concurrency issues.
The database orders the SQL statements that are selected as candidates based on their
performance impact. The database calculates the impact by summing the CPU time and the I/O
times in AWR for the selected statement in the past week.
During the tuning process, the database considers and reports all recommendation types, but it
can implement only SQL profiles automatically.
Reference: OracleDatabase Performance Tuning Guide,Automatic SQL Tuning
QUESTION 97
Examine the exhibit.
Which two are true concerning the execution plan?
Exhibit:
A. No partition-wise join is used
B. A full partition-wise join is used
C. A partial partition-wise join is used
D. The SALES table is composite partitioned
Correct Answer: BD
Section: (none)
Explanation
Explanation/Reference:
*The following example shows the execution plan for the full partition-wise join with
the sales table range partitioned by time_id, and subpartitioned by hash on cust_id.
---------------------------------------------------------------------------------------------
| Id  | Operation                             | Name      | Pstart| Pstop |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |           |       |       |      |            |
|   1 |  PX COORDINATOR                       |           |       |       |      |            |
|   2 |   PX SEND QC (RANDOM)                 | :TQ10001  |       |       | P->S | QC (RAND)  |
|*  3 |    FILTER                             |           |       |       | PCWC |            |
|   4 |     HASH GROUP BY                     |           |       |       | PCWP |            |
|   5 |      PX RECEIVE                       |           |       |       | PCWP |            |
|   6 |       PX SEND HASH                    | :TQ10000  |       |       | P->P | HASH       |
|   7 |        HASH GROUP BY                  |           |       |       | PCWP |            |
|   8 |         PX PARTITION HASH ALL         |           |     1 |    16 | PCWC |            |
|*  9 |          HASH JOIN                    |           |       |       | PCWP |            |
|  10 |           TABLE ACCESS FULL           | CUSTOMERS |     1 |    16 | PCWP |            |
|  11 |           PX PARTITION RANGE ITERATOR |           |     8 |     9 | PCWC |            |
|* 12 |            TABLE ACCESS FULL          | SALES     |   113 |   144 | PCWP |            |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(COUNT(SYS_OP_CSR(SYS_OP_MSR(COUNT(*)),0))>100)
   9 - access("S"."CUST_ID"="C"."CUST_ID")
  12 - filter("S"."TIME_ID"<=TO_DATE(' 1999-10-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
       "S"."TIME_ID">=TO_DATE(' 1999-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
*Full partition-wise joins can occur if two tables that are co-partitioned on the same key are joined
in a query. The tables can be co-partitioned at the partition level, or at the subpartition level, or at a
combination of partition and subpartition levels. Reference partitioning is an easy way to
guarantee co-partitioning. Full partition-wise joins can be executed in serial and in parallel.
Reference: Oracle Database VLDB and Partitioning Guide, Full Partition-Wise Joins: Composite Single-Level
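As a hedged illustration of the co-partitioning that makes a full partition-wise join possible (the
table, column, and partition names below are invented, not taken from the exhibit): SALES is range
partitioned on TIME_ID and hash subpartitioned on CUST_ID, while CUSTOMERS is hash
partitioned on the same join key.
CREATE TABLE customers (
  cust_id   NUMBER,
  cust_name VARCHAR2(100)
)
PARTITION BY HASH (cust_id) PARTITIONS 16;

CREATE TABLE sales (
  cust_id NUMBER,
  time_id DATE,
  amount  NUMBER
)
PARTITION BY RANGE (time_id)
SUBPARTITION BY HASH (cust_id) SUBPARTITIONS 16
( PARTITION sales_q3_1999 VALUES LESS THAN (DATE '1999-10-01'),
  PARTITION sales_q4_1999 VALUES LESS THAN (DATE '2000-01-01') );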
QUESTION 98
You are working on a database that supports an OLTP workload. You see a large number of hard
parses occurring and several almost identical SQL statements in the library cache that vary only in
the literal values in the WHERE clause conditions.
Which two methods can you use to reduce hard parsing?
A. Replace literals with bind variables and evolve a baseline for the statement.
B. Use the RESULT_CACHE hint in the queries.
C. Create baselines for the almost identical SQL statement by manually loading them from the
cursor cache.
D. Set the CURSOR_SHARING parameter to SIMILAR.
Correct Answer: AD
Section: (none)
Explanation
Explanation/Reference:
A: We can reduce this hard parsing by using bind variables.
D: SIMILAR
Causes statements that may differ in some literals, but are otherwise identical, to share a cursor,
unless the literals affect either the meaning of the statement or the degree to which the plan is
optimized.
Note:
A hard parse is when your SQL must be re-loaded into the shared pool. A hard parse is worse
than a soft parse because of the overhead involved in shared pool RAM allocation and memory
management. Once loaded, the SQL must then be completely re-checked for syntax & semantics
and an executable generated.
Excessive hard parsing can occur when your shared_pool_size is too small (and reentrant SQL is
paged out), or when you have non-reusable SQL statements without host variables.
See the CURSOR_SHARING parameter for an easy way to make SQL reentrant, and remember that
you should always use host variables in your SQL so that it can be reentrant.
Reference: Oracle Database Reference, CURSOR_SHARING
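A short sketch of both approaches (the table, column, and literal values are placeholders; note
that SIMILAR is deprecated in later releases in favor of FORCE):
-- D: let statements that differ only in literals share a cursor
ALTER SYSTEM SET cursor_sharing = SIMILAR;

-- A: rewrite the application to use bind variables instead of literals (SQL*Plus syntax)
VARIABLE cust_id NUMBER
EXEC :cust_id := 101
SELECT order_total FROM orders WHERE customer_id = :cust_id;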
QUESTION 99
You are administering a database that supports an OLTP workload. Automatic optimizer statistics
collection is scheduled in the night maintenance window. Some of the tables are updated
frequently during the daytime, and you notice a performance degradation for queries using those
tables due to stale statistics.
Which two options might help to avoid the performance degradation of the queries?
A. Set the global statistics preference STALE_PERCENT to 0.
B. Use the dynamic sampling hint for the queries on frequently updated tables.
C. Create histograms for the columns used frequently in the WHERE clause.
D. Gather statistics with the global statistics preference NO_VALIDATE set to TRUE.
E. Set the OPTIMIZER_USE_PENDING_STATISTICS parameter to TRUE.
Correct Answer: BC
Section: (none)
Explanation
Explanation/Reference:
B:Dynamic sampling first became available in Oracle9i Database Release 2. It is the
ability of the cost-based optimizer (CBO) to sample the tables a query references during a hard
parse, to determine better default statistics for unanalyzed segments, and to verify its “guesses.”
This sampling takes place only at hard parse time and is used to dynamically generate better
statistics for the optimizer to use, hence the name dynamic sampling.
There are two ways to use dynamic sampling:
The OPTIMIZER_DYNAMIC_SAMPLING parameter can be set at the database instance level and
can also be overridden at the session level with the ALTER SESSION command.
The DYNAMIC_SAMPLING query hint can be added to specific queries.
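For example (a sketch only; the table name, alias, filter, and sampling level are placeholders):
ALTER SESSION SET optimizer_dynamic_sampling = 4;

SELECT /*+ DYNAMIC_SAMPLING(t 4) */ *
FROM   orders t
WHERE  status = 'PENDING';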
C: A histogram is a collection of information about the distribution of values within a column. In
some cases, the distribution of values within a column of a table will affect the optimizer's decision
to use an index versus performing a full-table scan. This scenario occurs when a value used in the
WHERE clause accounts for a disproportionate share of the rows, making a full-table scan cheaper
than index access. Histograms are also important for determining the optimal table join order.
Incorrect:
A: Too many statistics would be gathered.
Note: STALE_PERCENT - This value determines the percentage of rows in a table that have to
change before the statistics on that table are deemed stale and should be regathered. The default
value is 10%.
D: In Oracle PL/SQL, the VALIDATE keyword defines the state of a constraint on a column in a
table; there is no NO_VALIDATE statistics preference.
E: OPTIMIZER_USE_PENDING_STATISTICS specifies whether or not the optimizer uses pending
statistics when compiling SQL statements.
QUESTION 100
Examine the exhibit to view the query and its execution plan.
Identify the two correct interpretations that can be made from the execution plan.
Exhibit:
A. The DEPT table is the driving table and the EMP table is the driven table.
B. Rows from the DEPT table are first hashed by the join key into memory and then joined to the
EMP table on the join key.
C. The EMP table is the driving table and the DEPT table is the driven table.
D. The rows from the DEPT table are sorted first by the join key and then hashed into memory.
E. Rows from both the tables are sorted by the join key, but only rows from the DEPT table are
hashed into memory.
Correct Answer: AB
Section: (none)
Explanation
Explanation/Reference:
Note:
*A hash join is performed by hashing one data set into memory based on join columns and
reading the other one and probing the hash table for matches. The hash join is very low cost when
the hash table can be held entirely in memory, with the total cost amounting to very little more than
the cost of reading the data sets. The cost rises if the hash table has to be spilled to disk in a
one-pass sort, and rises considerably for a multipass sort.
You should note that hash joins can only be used for equi-joins, but merge joins are more flexible.
In general, if you are joining large amounts of data in an equi-join then a hash join is going to be a
better bet.
*The 'driving' table is the table we will join FROM, that is, JOIN TO other tables. For
example, let's say you have the query:
select * from emp, dept where emp.deptno = dept.deptno;
In this case the driving table might be DEPT: we would fetch rows from DEPT in a full
scan and then find the matching rows in EMP. DEPT is the driving table.
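A plan like the one in the exhibit can be reproduced with EXPLAIN PLAN and DBMS_XPLAN; this
sketch assumes the classic EMP/DEPT columns (DEPTNO, ENAME, DNAME) and forces a hash
join only for illustration:
EXPLAIN PLAN FOR
SELECT /*+ USE_HASH(e) */ e.ename, d.dname
FROM   dept d JOIN emp e ON d.deptno = e.deptno;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);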
QUESTION 101
You are administering a database that supports a DSS workload, in which an application runs a
set of queries that use query rewrite on materialized views. You notice that these queries are
performing poorly.
Which two actions can you take to improve the performance of these queries?
A. Use DBMS_MVIEW.EXPLAIN_REWRITE to analyze whether the queries are rewritten.
B. Use DBMS_ADVISOR.QUICK_TUNE to analyze the query rewrite usage of materialized views
for the entire workload.
C. Create an STS for all the queries and use SQL performance Analyzer to generate
recommendations for determining the regressed SQL statements.
D. Create an STS for all the queries in the application and use the SQL Tuning Advisor to
generate recommendations.
E. Create an STS for all the queries in the application and use the SQL Access Advisor to
generate a recommendation for optimizing materialized views for maximum query rewrite usage
and fast refresh.
Correct Answer: BD
Section: (none)
Explanation
Explanation/Reference:
B:
*The dbms_advisor package has a procedure called dbms_advisor.quick_tune that allows the
DBA to quickly tune a single SQL statement with a single procedure call. This procedure performs
all of the stages that are necessary to launch the SQL Access Advisor, e.g. creating a task,
creating and populating a workload, and executing the task.
*QUICK_TUNE Procedure
This procedure performs an analysis and generates recommendations for a single SQL statement.
This provides a shortcut method of all necessary operations to analyze the specified SQL
statement. The operation creates a task using the specified task name. The task will be created
using a specified Advisor task template. Finally, the task will be executed and the results will be
saved in the repository.
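A hedged sketch of a QUICK_TUNE call (the task name and SQL text are placeholders):
BEGIN
  DBMS_ADVISOR.QUICK_TUNE(
    advisor_name => DBMS_ADVISOR.SQLACCESS_ADVISOR,
    task_name    => 'mv_quick_tune_task',
    attr1        => 'SELECT cust_id, SUM(amount) FROM sales GROUP BY cust_id');
END;
/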
Note:
* DSS -decision support system
*In tuning mode, the optimizer performs additional analysis to check whether the execution plan
produced under normal mode can be improved further. The output of the query optimizer is not an
execution plan, but a series of actions, along with their rationale and expected benefit for
producing a significantly superior plan. When running in the tuning mode, the optimizer is referred
to as the Automatic Tuning Optimizer.
Incorrect:
A:DBMS_MVIEW.EXPLAIN_REWRITE
This procedure enables you to learn why a query failed to rewrite, or, if it rewrites, which
materialized views will be used. Using the results from the procedure, you can take the
appropriate action needed to make a query rewrite if at all possible. The query specified in the
EXPLAIN_REWRITE statement is never actually executed.
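A sketch of an EXPLAIN_REWRITE call that writes its findings to REWRITE_TABLE (created by the
utlxrw.sql script); the query text, materialized view name, and statement id are placeholders:
BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query        => 'SELECT cust_id, SUM(amount) FROM sales GROUP BY cust_id',
    mv           => 'SALES_SUM_MV',
    statement_id => 'rw_check_1');
END;
/
SELECT message FROM rewrite_table WHERE statement_id = 'rw_check_1';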
QUESTION 102
You are administering a database that supports an OLTP workload in which one of the
applications inserts rows into a table until 12 noon every day, after which multiple users perform
frequent queries on the table. You want the statistics to be more representative of the table population.
What must be done to ensure that an optimizer uses the latest statistics of the table?
A. Set the STALE_PERCENT preference to 0.
B. Set the OPTIMIZER_MODE parameter to ALL_ROWS.
C. Set the OPTIMIZER_DYNAMIC_SAMPLING parameter to 0.
D. Use the FIRST_ROWS_n hint in the queries.
E. Unlock and gather statistics for the table after inserts are done and lock them again.
Correct Answer: E
Section: (none)
Explanation
Explanation/Reference:
*
For tables that are substantially modified in batch operations, such as with bulk loads, gather
statistics on these tables as part of the batch operation. Call the DBMS_STATS procedure as
soon as the load operation completes.
* Statistics for a table or schema can be locked. After statistics are locked, you can make no
modifications to the statistics until the statistics have been unlocked. Locking procedures are
useful in a static environment in which you want to guarantee that the statistics never change.
The DBMS_STATS package provides two procedures for locking
(LOCK_SCHEMA_STATS and LOCK_TABLE_STATS) and two procedures for unlocking statistics
(UNLOCK_SCHEMA_STATS and UNLOCK_TABLE_STATS).
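A minimal sketch of the flow in answer E (the schema and table names are placeholders):
EXEC DBMS_STATS.UNLOCK_TABLE_STATS('APP_OWNER', 'LOAD_TABLE');
EXEC DBMS_STATS.GATHER_TABLE_STATS('APP_OWNER', 'LOAD_TABLE');
EXEC DBMS_STATS.LOCK_TABLE_STATS('APP_OWNER', 'LOAD_TABLE');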
Incorrect:
A: STALE_PERCENT cannot be set to 0.
*With the DBMS_STATS package you can view and modify optimizer statistics gathered for
database objects.
STALE_PERCENT - This value determines the percentage of rows in a table that have to change
before the statistics on that table are deemed stale and should be regathered. The default value is
10%.
B: OPTIMIZER_MODE applies to the instance or session, not to a specific table.
*Possible values for OPTIMIZER_MODE are CHOOSE, ALL_ROWS, FIRST_ROWS, and FIRST_ROWS_n.
C:Optimizer dynamic sampling refers to the ability of the SQL optimizer to take a sample of rows
from a table to calculate missing statistics. Dynamic sampling can be controlled with the
OPTIMIZER_DYNAMIC_SAMPLING parameter or the DYNAMIC_SAMPLING hint.
level 0 - do not use dynamic sampling
D: The FIRST_ROWS_n hint cannot be used in this way.
QUESTION 103
Examine the query:
The RESULT_CACHE_MODE parameter is set to MANUAL for the database.
Which two statements are true about the usage of the result cache?
A. The SQL runtime environment checks whether the query result is cached in the result cache; if
the result exists, the optimizer fetches the result from it.
B. The SQL runtime environment does not check for the query result in the result cache because
the RESULT_CACHE_MODE parameter is set to MANUAL.
C. The SQL runtime environment checks for the query result in the result cache only when the
query is executed for the second time.
D. If the query result does not exist in the cache and the query is executed, the result is generated
as output, and also stored in the result cache.
Correct Answer: AD
Section: (none)
Explanation
Explanation/Reference:
Note:
*result_cache_mode: the result cache can be enabled in three ways: via hint, alter session or alter
system. Default is MANUAL which means that we need to explicitly request caching via the
RESULT_CACHE hint;
*As its name suggests, the query result cache is used to store the results of SQL queries for reuse
in subsequent executions. By caching the results of queries, Oracle can avoid having to
repeat the potentially time-consuming and intensive operations that generated the resultset in the
first place (for example, sorting/aggregation, physical I/O, joins etc). The cache results themselves
are available across the instance (i.e. for use by sessions other than the one that first executed the
query) and are maintained by Oracle in a dedicated area of memory. Unlike our homegrown
solutions using associative arrays or global temporary tables, the query result cache is completely
transparent to our applications. It is also maintained for consistency automatically, unlike our own
caching programs.
*RESULT_CACHE_MODE specifies when a ResultCache operator is spliced into a query's
execution plan.
Values:
/MANUAL
The ResultCache operator is added only when the query is annotated (that is, hints).
/FORCE
The ResultCache operator is added to the root of all SELECT statements (provided that it is valid
to do so).
For the FORCE setting, if the statement contains a NO_RESULT_CACHE hint, then the hint takes
precedence over the parameter setting.
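With RESULT_CACHE_MODE left at MANUAL, caching is requested per statement with the hint,
for example (table and column names are placeholders):
SELECT /*+ RESULT_CACHE */ department_id, SUM(salary)
FROM   employees
GROUP  BY department_id;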
QUESTION 104
How can you analyze an existing trace file to list the most resource-intensive statements, with
aggregation of statistics, and optionally exclude recursive call details?
A. By using the DBMS_TRACE package
B. By using the EXPLAIN PLAN command
C. By enabling the SQL_TRACE parameter for the session
D. By using the TKPROF utility
E. By using the TRCSESS utility
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
D:You can run the TKPROF program to format the contents of the trace file and
place the output into a readable output file. TKPROF can also:
*Create a SQL script that stores the statistics in the database
*Determine the execution plans of SQL statements
TKPROF reports each statement executed with the resources it has consumed, the number of
times it was called, and the number of rows which it processed. This information lets you easily
locate those statements that are using the greatest resource. With experience or with baselines
available, you can assess whether the resources used are reasonable given the work done.
Incorrect:
A:DBMS_TRACE provides subprograms to start and stop PL/SQL tracing in a session. Oracle
collects the trace data as the program executes and writes it to database tables.
A typical session involves:
Starting PL/SQL tracing in session (DBMS_TRACE.SET_PLSQL_TRACE).
Running an application to be traced.
Stopping PL/SQL tracing in session (DBMS_TRACE.CLEAR_PLSQL_TRACE).
E:The trcsess utility consolidates trace output from selected trace files based on several criteria:
Session Id
Client Id
Service name
Action name
Module name
After trcsess merges the trace information into a single output file, the output file could be
processed by TKPROF
Reference: Oracle Database Performance Tuning Guide, Understanding TKPROF
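A hedged example of a TKPROF invocation (the file names are placeholders): sorting by the
elapsed-time options surfaces the most resource-intensive statements, identical statements are
aggregated, and recursive SQL issued as SYS is excluded.
tkprof orcl_ora_12345.trc tkprof_report.txt sort=prsela,exeela,fchela sys=no aggregate=yes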
QUESTION 105
Which statement is true about the usage of the STAR_TRANSFORMATION hint in a query?
A. The optimizer always uses a plan in which the transformation is used.
B. The optimizer uses transformation only if the cost is less than a query executing without
transformation.
C. The optimizer always generates subqueries to transform a query.
D. The optimizer always uses bitmap indexes on the primary key column for any dimension table
to transform a query.
Correct Answer: A
Section: (none)
Explanation
Explanation/Reference:
The STAR_TRANSFORMATION hint makes the optimizer use the best plan in
which the transformation has been used. Without the hint, the optimizer could make a cost-based
decision to use the best plan generated without the transformation, instead of the best plan for the
transformed query.
Even if the hint is given, there is no guarantee that the transformation will take place. The
optimizer only generates the subqueries if it seems reasonable to do so. If no subqueries are
generated, then there is no transformed query, and the best plan for the untransformed query is
used, regardless of the hint.
Reference: Oracle Database SQL Language Reference, STAR_TRANSFORMATION Hint
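For example, using the sample SH star schema (the column names are assumed from that schema):
SELECT /*+ STAR_TRANSFORMATION */ t.calendar_quarter_desc, SUM(s.amount_sold)
FROM   sales s, times t, customers c
WHERE  s.time_id = t.time_id
AND    s.cust_id = c.cust_id
AND    c.cust_state_province = 'CA'
GROUP  BY t.calendar_quarter_desc;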
QUESTION 106
Examine the Exhibit to view the structure of and indexes for the EMPLOYEES and
DEPARTMENTS tables:
Examine the SQL statement and its execution plan:
Which two statements are correct regarding the execution plan?
Exhibit:
A. Step 2 is performing a nested loop operation on the JOB_ID column of the JOBS table, which is
the driving table, and the EMPLOYEES table is the driven table.
B. In step 2 for every row returned by the JOBS table matching rows from the EMPLOYEES table
are accessed.
C. Step 1 is performing a nested loop operation on the DEPARTMENT_ID column of the
DEPARTMENTS table, which is the driven table, and the results returned by step 2 form the
driving result set.
D. The performance of the query can be improved by creating a bitmap index on the JOB_ID
column of the EMPLOYEES table.
E. The performance of the query can be improved by creating a bitmapped index on the
DEPARTMENT_ID column of the EMPLOYEES table.
Correct Answer: BE
Section: (none)
Explanation
Explanation/Reference:
As per the exhibit:
B, not A, not C: Line 5 is executed first, followed by line 4, followed by line 3.
Step 2 is line 4.
E: The Department_ID column has lower cardinality compared to the JOB_ID column, so it is
better suited for a bitmapped index.
Note:
*Oracle bitmap indexes are very different from standard b-tree indexes. In bitmap structures, a
two-dimensional array is created with one column for every row in the table being indexed. Each
column represents a distinct value within the bitmapped index. This two-dimensional array
represents each value within the index multiplied by the number of rows in the table.
At row retrieval time, Oracle decompresses the bitmap into the RAM data buffers so it can be
rapidly scanned for matching values. These matching values are delivered to Oracle in the form of
a Row-ID list, and these Row-ID values may directly access the required information.
*The real benefit of bitmapped indexing occurs when one table includes multiple bitmapped
indexes. Each individual column may have low cardinality. The creation of multiple bitmapped
indexes provides a very powerful method for rapidly answering difficult SQL queries.
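The recommendation in answer E would be implemented with a statement like the following (the
index name is a placeholder):
CREATE BITMAP INDEX emp_dept_bix ON employees (department_id);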
QUESTION 107
Which two statements are true about the use of the DYNAMIC_SAMPLING hint in a query?
A. It estimates selectivity better for the filters.
B. It is always used for flashback queries that contain the AS OF clause.
C. It cannot be used if there is a single-table predicate in the WHERE clause.
D. It cannot be used for processing SQL statements in parallel.
E. It can compensate for the lack of extended statistics to get accurate cardinality estimates for
complex predicate expressions.
Correct Answer: DE
Section: (none)
Explanation
Explanation/Reference:
D: For parallel statements, the optimizer automatically decides whether to use dynamic sampling
and which level to use. The decision depends on the size of the tables and the complexity of the
predicates. The optimizer expects parallel statements to be resource-intensive, so the additional
overhead at compile time is worth it to ensure the best plan. For serially processed SQL
statements, the dynamic sampling level depends on the value of the
OPTIMIZER_DYNAMIC_SAMPLING parameter and is not triggered automatically by the optimizer.
Serial statements are typically short-running, so any overhead at compile time could have a huge
impact on their performance. If OPTIMIZER_DYNAMIC_SAMPLING is explicitly set to a
non-default value, that value is honored.
Reference: Oracle Database Administrator's Guide, About Oracle Database Resource Manager
QUESTION 108
Your database supports a workload consisting of three categories of SQL statements:
- Statements that should execute in less than one minute
- Statements that may execute for up to 15 minutes
- Statements that may execute for more than 15 minutes
You set PARALLEL_DEGREE_POLICY to AUTO.
You plan to prioritize queued statements by using the Database Resource Manager.
Which two are true about parallelism prioritization by a consumer group?
A. PARALLEL_TARGET_PERCENTAGE is used to prioritize a consumer group's use of the
overall PARALLEL_SERVERS_TARGET.
B. Queuing is done for a consumer group exceeding its percentage, even if the number of busy
PX servers in the instance has not reached PARALLEL_SERVERS_TARGET.
C. PARALLEL_TARGET_PERCENTAGE is used to prioritize a consumer group's use of the
overall PARALLEL_MAX_SERVERS.
D. Having separate queues for consumer groups requires the use of management attributes
(MGMT_P1, MGMT_P2, and so on).
E. Separate queue timeouts using PARALLEL_QUEUE_TIMEOUT require the use of management
attributes (MGMT_P1, MGMT_P2, and so on).
Correct Answer: AD
Section: (none)
Explanation
Explanation/Reference:
A:Parallel Target Percentage
It is possible for a single consumer group to launch enough parallel statements to use all the
available parallel servers. If this happens, when a high-priority parallel statement from a different
consumer group is run, no parallel servers are available to allocate to this group. You can avoid
such a scenario by limiting the number of parallel servers that can be used by a particular
consumer group.
Use the PARALLEL_TARGET_PERCENTAGE directive attribute to specify the maximum
percentage of the parallel server pool that a particular consumer group can use. The number of
parallel servers used by a particular consumer group is counted as the sum of the parallel servers
used by all sessions in that consumer group.
Incorrect:
Not B:PARALLEL_SERVERS_TARGET specifies the number of parallel server processes allowed
to run parallel statements before statement queuing will be used. When the parameter
PARALLEL_DEGREE_POLICY is set to AUTO, Oracle will queue SQL statements that require
parallel execution, if the necessary parallel server processes are not available. Statement queuing
will begin once the number of parallel server processes active on the system is equal to or greater
than PARALLEL_SERVERS_TARGET.
Not C: Would be true if we replaced PARALLEL_MAX_SERVERS with
PARALLEL_SERVERS_TARGET.
Not E:The PARALLEL_QUEUE_TIMEOUT directive attribute enables you to specify the maximum
time, in seconds, that a parallel statement can wait in the parallel statement queue before it is
timed out. The PARALLEL_QUEUE_TIMEOUT attribute can be set for each consumer group. This
attribute is applicable even if you do not specify other management attributes
(mgmt_p1, mgmt_p2, and so on) in your resource plan.
Note:
*PARALLEL_DEGREE_POLICY
AUTO
Enables automatic degree of parallelism, statement queuing, and in-memory parallel execution.
*The PARALLEL_TARGET_PERCENTAGE attribute enables you to specify when parallel
statements from a consumer group can be queued. Oracle Database maintains a separate parallel
statement queue for each consumer group.
*By default, PARALLEL_SERVERS_TARGET is set lower than the maximum number of parallel
server processes allowed on the system (PARALLEL_MAX_SERVERS) to ensure each parallel
statement will get all of the parallel server resources required and to prevent overloading the
system with parallel server processes.
Note that all serial (non-parallel) statements will execute immediately even if statement queuing
has been activated.
*Oracle Database Resource Manager (the Resource Manager) enables you to optimize resource
allocation among the many concurrent database sessions.
The elements of the Resource Manager are:
/ Resource consumer group - A group of sessions that are grouped together based on resource
requirements. The Resource Manager allocates resources to resource consumer groups, not to
individual sessions.
/ Resource plan - A container for directives that specify how resources are allocated to resource
consumer groups. You specify how the database allocates resources by activating a specific
resource plan.
/ Resource plan directive - Associates a resource consumer group with a particular plan and
specifies how resources are to be allocated to that resource consumer group.
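A hedged sketch of a plan directive that limits one consumer group to half of the parallel server
pool (assuming Oracle Database 11g Release 2, where these directive attributes exist; the plan
and consumer group are assumed to already exist, and all names and values are placeholders):
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                       => 'DAYTIME_PLAN',
    group_or_subplan           => 'REPORTING_GROUP',
    comment                    => 'Reporting may use at most 50% of the parallel pool',
    mgmt_p1                    => 20,
    parallel_target_percentage => 50,
    parallel_queue_timeout     => 600);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/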
QUESTION 109
Examine the Exhibit.
Given two sets of parallel execution processes, SS1 and SS2, which is true?
Exhibit:
A. Each process in SS1 reads some of the rows from the CUSTOMERS table and sends all the rows
it reads to each process in SS2.
B. Each process in SS1 reads all the rows from the CUSTOMERS table and distributes the rows
evenly among the processes in SS2.
C. Each process in SS1 reads some of the rows from the SALES table and sends all the rows it
reads to each process in SS2.
D. Each process in SS1 reads all the rows from the SALES table and distributes the rows evenly
among the processes in SS2.
E. Each process in SS1 reads some of the rows from the SALES table and distributes the rows
evenly among the processes in SS2.
F. Each process in the SS1 reads some of the rows from the CUSTOMERS table and distributes
the rows evenly among the processes in SS2.
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
Note:
* The execution starts with line 16 (accessing the SALES table), followed by line 15.
* PX BLOCK ITERATOR
The PX BLOCK ITERATOR row source represents the splitting up of the scanned table into pieces
so as to divide the scan workload between the parallel scan slaves.
The PX SEND and PX RECEIVE row sources represent the pipe that connects the two slave sets
as rows flow up from the parallel scan, get repartitioned through the HASH table queue, and then
are read by and aggregated on the top slave set.
QUESTION 110
See the code fragment
You receive the following error message:
ORA-12827: insufficient parallel query slaves available
Which three parameter settings could you change to avoid this error?
A. Decrease the value of PARALLEL_MIN_PERCENT
B. Increase the value of PARALLEL_MAX_SERVERS
C. Increase the value of PARALLEL_MIN_SERVERS
D. Reduce the value of PARALLEL_MIN_TIME_THRESHOLD
E. Increase the value of PARALLEL_DEGREE_LIMIT
F. Set the PARALLEL_DEGREE_POLICY = AUTO
G. Set the PARALLEL_DEGREE_POLICY = LIMITED
Correct Answer: ABG
Section: (none)
Explanation
Explanation/Reference:
A:ORA-12827:
insufficient parallel query slaves available
Cause:PARALLEL_MIN_PERCENT parameter was specified and fewer than minimum slaves
were acquired
Action:either re-execute query with lower PARALLEL_MIN_PERCENT or wait until some running
queries are completed, thus freeing up slaves
B: Your query doesn't run because you've told Oracle not to run it unless at least 5% of the parallel
execution processes are available for your query. Set PARALLEL_MIN_PERCENT=0 or increase
the number of parallel execution processes by increasing the PARALLEL_MAX_SERVERS
parameter.
G:PARALLEL_DEGREE_POLICY
PARALLEL_DEGREE_POLICY specifies whether or not automatic degree of Parallelism,
statement queuing, and in-memory parallel execution will be enabled.
LIMITED
Enables automatic degree of parallelism for some statements, but statement queuing and
in-memory parallel execution are disabled. Automatic degree of parallelism is only applied to those
statements that access tables or indexes decorated explicitly with the PARALLEL clause. Tables
and indexes that have a degree of parallelism specified will use that degree of parallelism.
Note:
PARALLEL_MIN_PERCENT operates in conjunction with PARALLEL_MAX_SERVERS and
PARALLEL_MIN_SERVERS. It lets you specify the minimum percentage of parallel execution
processes (of the value of PARALLEL_MAX_SERVERS) required for parallel execution. Setting
this parameter ensures that parallel operations will not execute sequentially unless adequate
resources are available. The default value of 0 means that no minimum percentage of processes
has been set.
Consider the following settings:
PARALLEL_MIN_PERCENT = 50
PARALLEL_MIN_SERVERS = 5
PARALLEL_MAX_SERVERS = 10
If 8 of the 10 parallel execution processes are busy, only 2 processes are available. If you then
request a query with a degree of parallelism of 8, the minimum 50% will not be met.
You can use this parameter in conjunction with PARALLEL_ADAPTIVE_MULTI_USER. In a multiuser
environment, an individual user or application can set PARALLEL_MIN_PERCENT to a
minimum value until sufficient resources are available on the system and an acceptable degree of
parallelism is returned.
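Sketches of the three corrective settings (the values shown are examples only):
ALTER SESSION SET parallel_min_percent   = 0;        -- A: never raise ORA-12827 for this session
ALTER SYSTEM  SET parallel_max_servers   = 64;       -- B: enlarge the parallel server pool
ALTER SYSTEM  SET parallel_degree_policy = LIMITED;  -- G: auto DOP only for PARALLEL-decorated objects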
QUESTION 111
Examine the parallelism parameters for your instance:
Examine the Exhibit to view the query and its explain plan output.
All sessions use default parallelism settings.
What two steps could you take to make the query execute in parallel?
Exhibit:
A. Add a parallel hint.
B. Decrease the value of PARALLEL_MIN_TIME_THRESHOLD.
C. Increase the value of PARALLEL_MIN_SERVERS.
D. Increase the value of PARALLEL_MAX_SERVERS.
E. Decrease the value of PARALLEL_MIN_PERCENT.
Correct Answer: AB
Section: (none)
Explanation
Explanation/Reference:
A:You can rely on hints to set the degree of parallelism, but these can be hard to set
correctly.
B: Decreasing PARALLEL_MIN_TIME_THRESHOLD lowers the minimum estimated execution
time a statement must have before it is considered for automatic degree of parallelism, making
the query eligible to run in parallel.
(Decision tree for query parallelization)
QUESTION 112
Examine the following command:
Which query transformation technique is used by the optimizer in this case?
A. View merging
B. Filter push-down
C. Predicate pushing
D. Predicate move-around
Correct Answer: C
Section: (none)
Explanation
Explanation/Reference:
In predicate pushing, the optimizer "pushes" the relevant predicates from the
containing query block into the view query block. For views that are not merged, this technique
improves the subplan of the unmerged view because the database can use the pushed-in
predicates to access indexes or to use as filters.
For example, suppose you create a view that references two employee tables. The view is defined
with a compound query that uses the UNION set operator, as follows:
CREATE VIEW all_employees_vw AS
( SELECT employee_id, last_name, job_id, commission_pct, department_id
FROM employees )
UNION
( SELECT employee_id, last_name, job_id, commission_pct, department_id
FROM contract_workers );
You then query the view as follows:
SELECT last_name
FROM all_employees_vw
WHERE department_id = 50;
Because the view is a compound query, the optimizer cannot merge the view's query into the
accessing query block. Instead, the optimizer can transform the accessing statement by pushing
its predicate, the WHERE clause condition department_id=50, into the view's compound query.
The equivalent transformed query is as follows:
SELECT last_name
FROM ( SELECT employee_id, last_name, job_id, commission_pct, department_id
FROM employees
WHERE department_id=50
UNION
SELECT employee_id, last_name, job_id, commission_pct, department_id
FROM contract_workers
WHERE department_id=50 );
Reference: Oracle Database Performance Tuning Guide, Predicate Pushing
QUESTION 113
You identified some DSS queries that perform expensive join and aggregation operations.
The queries access historical data from noncurrent partitions of the fact tables.
What three actions could you perform to improve the response time of the queries without
modifying the SQL statements?
A. Set the QUERY_REWRITE_ENABLED to TRUE at the session level.
B. Create an STS for the statements, run SQL Tuning Advisor for the STS, and implement any
generated recommendations for materialized views.
C. Set QUERY_REWRITE_ENABLED to TRUE at the instance level.
D. Create an STS for the statements, run SQL Access Advisor for the STS, and implement any
generated recommendations for materialized views.
E. Set QUERY_REWRITE_INTEGRITY to ENFORCED at the instance level.
Correct Answer: BCD
Section: (none)
Explanation
Explanation/Reference:
A:*QUERY_REWRITE_ENABLED allows you to enable or disable query rewriting
globally for the database.
Values:
false
Oracle does not use rewrite.
true
Oracle costs the query with rewrite and without rewrite and chooses the method with the lower
cost.
force
Oracle always uses rewrite and does not evaluate the cost before doing so. Use force when you
know that the query will always benefit from rewrite and when reduction in compile time is
important.
To take advantage of query rewrite for a particular materialized view, you must enable query
rewrite for that materialized view, and you must enable cost-based optimization.
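A hedged sketch of enabling query rewrite at the instance level (answer C) together with a
rewrite-enabled materialized view (all object names are placeholders):
ALTER SYSTEM SET query_rewrite_enabled = TRUE;

CREATE MATERIALIZED VIEW sales_by_cust_mv
ENABLE QUERY REWRITE
AS
SELECT cust_id, SUM(amount) AS total_amount
FROM   sales
GROUP  BY cust_id;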
B: You can use SQL Tuning Advisor to tune one or more SQL statements in an STS.
D: Using the SQL Access Advisor wizard or API, you can generate recommendations for
materialized views, materialized view logs, and indexes for a given workload.
Note:
* STS – SQL tuning set.
*A SQL Tuning Set is a database object that includes one or more SQL statements and their
execution statistics and execution context. You can use the set as an input source for various
advisors, such as SQL Tuning Advisor, SQL Access Advisor, and SQL Performance Analyzer.
Incorrect:
E: QUERY_REWRITE_INTEGRITY determines the degree to which Oracle must enforce query
rewriting. At the safest level, Oracle does not use query rewrite transformations that rely on
unenforced relationships.
Values:
enforced
Oracle enforces and guarantees consistency and integrity.
trusted
Oracle allows rewrites using relationships that have been declared, but that are not enforced by
Oracle.
stale_tolerated
Oracle allows rewrites using unenforced relationships. Materialized views are eligible for rewrite
even if they are known to be inconsistent with the underlying detail data
QUESTION 114
Identify two situations in which full table scans will be faster than index range scans.
A. A query with a highly selective filter fetching less than 5 percent of the rows from a table.
B. A highly selective query on a table having high clustering factor for an index.
C. A query fetching less number of blocks than value specified by
DB_FILE_MULTIBLOCK_READ_COUNT.
D. A query executing in parallel on a partitioned table with partitioned indexes.
E. A query on a table with sparsely populated table blocks.
Correct Answer: CD
Section: (none)
Explanation
Explanation/Reference:
D:DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to minimize I/O
during table scans. It specifies the maximum number of blocks read in one I/O operation during a
sequential scan. The total number of I/Os needed to perform a full table scan depends on such
factors as the size of the table, the multiblock read count, and whether parallel execution is being
utilized for the operation.
Online transaction processing (OLTP) and batch environments typically have values in the range
of 4 to 16 for this parameter. DSS and data warehouse environments tend to benefit most from
maximizing the value of this parameter. The optimizer is more likely to choose a full table scan
over an index if the value of this parameter is high.
Note:
*See 6) and 7) below.
The Oracle optimizer chooses the best plan and executes the query according to that plan. It is
common to hear "my table has indexes, so why does Oracle use a full table scan instead of the
indexes?" There are several reasons why the optimizer chooses a full table scan:
1) The table has no indexes.
2) The table has indexes, but they are not appropriate for the query. For example, the table has a
normal B-tree index, but the column used in the WHERE clause is wrapped in a function.
3) The query accesses a large amount of data. The table has indexes, but the query selects almost
all of the rows; in that case the optimizer might choose full access of the table.
4) The index column order may not be appropriate. The table has a composite index, but the
WHERE clause uses the trailing columns rather than the leading column.
5) The table is skewed. For example, a GENDER column contains the value 'M' 10,000 times but
the value 'F' only 10 times.
6) The table is small. If a table can be read in a single I/O call, then a full table scan might be
cheaper than an index range scan. The single I/O call is defined by the
DB_FILE_MULTIBLOCK_READ_COUNT parameter, whose value is expressed in blocks. Check it with:
SQL> show parameter db_file_multiblock_read_count
NAME                           TYPE     VALUE
------------------------------ -------- -----
db_file_multiblock_read_count  integer  16
7) High degree of parallelism. A high degree of parallelism skews the optimizer toward full table
scans.
8) If there is no filtering in the query, then a full table scan is the choice.
*If an index has poor cardinality (i.e., more than 4% of rows share the same index key), then it will
perform poorly. It will usually be faster to perform a full table scan. For example, table SALES has an index
on the column PAYMENT_METHOD which can contain values such as COD, CREDIT, CHEQUE,
CASH. The statement
SELECT *
FROM sales
WHERE payment_method = 'CASH'
will probably perform so badly that you are better off without the index.
*Oracle uses the full table scan as it assumes that it will have to read a certain part of the table.
Reference: Oracle Database Reference, DB_FILE_MULTIBLOCK_READ_COUNT
QUESTION 115
You are administering a database, where an application frequently executes identical SQL
statements with the same syntax.
How will you optimize the query results without retrieving data blocks from the storage?
A. By setting the CURSOR_SHARING parameter to FORCE.
B. By using the bind variables and setting the CURSOR_SHARING parameter to EXACT.
C. By using the CACHE hint to pin the queries in the library cache.
D. By ensuring that RESULT_CACHE_MODE parameter is set to MANUAL and using the
RESULT_CACHE hint in the queries.
E. By creating a SQL plan baseline for the identical statements.
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
As its name suggests, the query result cache is used to store the results of SQL
queries for re-use in subsequent executions. By caching the results of queries, Oracle can avoid
having to repeat the potentially time-consuming and intensive operations that generated the
resultset in the first place (for example, sorting/aggregation, physical I/O, joins etc). The cache
results themselves are available across the instance (i.e. for use by sessions other than the one
that first executed the query) and are maintained by Oracle in a dedicated area of memory. Unlike
our homegrown solutions using associative arrays or global temporary tables, the query result
cache is completely transparent to our applications. It is also maintained for consistency
automatically, unlike our own caching programs.
Note:
RESULT_CACHE_MODE specifies when a ResultCache operator is spliced into a query's
execution plan.
Values:
MANUAL
The ResultCache operator is added only when the query is annotated (that is, hints).
FORCE
The ResultCache operator is added to the root of all SELECT statements (provided that it is valid
to do so).
For the FORCE setting, if the statement contains a NO_RESULT_CACHE hint, then the hint takes
precedence over the parameter setting.
Incorrect:
A, B:CURSOR_SHARING determines what kind of SQL statements can share the same cursors.
Values:
FORCE
Forces statements that may differ in some literals, but are otherwise identical, to share a cursor,
unless the literals affect the meaning of the statement.
SIMILAR
Causes statements that may differ in some literals, but are otherwise identical, to share a cursor,
unless the literals affect either the meaning of the statement or the degree to which the plan is
optimized.
EXACT
Only allows statements with identical text to share the same cursor.
C:The Oracle library cache is a component of the System Global Area (SGA) shared pool.
Similarly to other Oracle
cache structures, the point of the library cache is to reduce work – and therefore to improve
performance – by
caching the result of parsing and optimizing SQL or PL/SQL so that subsequent executions of the
same SQL or
PL/SQL require fewer preparatory steps to deliver a query result.
QUESTION 116
Which three options are true about MVIEWs?
A. The defining query of an MVIEW may be based on a populated table.
B. Queries that are rewritten to an MVIEW will never obtain results from the result cache.
C. All MVIEWs may be configured to support “refresh on demand”.
D. The defining query of an MVIEW may be based on a non-partitioned table.
E. All MVIEWs may be configured to support “refresh on commit”.
Correct Answer: ABC
Section: (none)
Explanation
Explanation/Reference:
A:The defining query of a materialized view can select from tables, views, or
materialized views owned by the user SYS, but you cannot enable QUERY REWRITE on such a
materialized view.
B:You cannot specify the following CREATE MATERIALIZED VIEW clauses: CACHE or
NOCACHE, CLUSTER, or ON PREBUILT TABLE.
C:Specify ON DEMAND to indicate that the materialized view will be refreshed on demand by
calling one of the three DBMS_MVIEW refresh procedures. If you omit
both ON COMMIT and ON DEMAND, ON DEMAND is the default.
Incorrect:
E:Materialized views can only refresh ON COMMIT in certain situations.
The materialized view cannot contain object types or Oracle-supplied types.
The base tables will never have any distributed transactions applied to them.
Note:
*Oracle uses materialized views (also known as snapshots in prior releases) to replicate data to
non-master sites in a replication environment and to cache expensive queries in a data warehouse
environment.
*A materialized view is a replica of a target master from a single point in time. The master can be
either a master table at a master site or a master materialized view at a materialized view site.
Whereas in multimaster replication tables are continuously updated by other master sites,
materialized views are updated from one or more masters through individual batch updates,
known as refreshes, from a single master site or master materialized view site.
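As an illustration of answer C, an ON DEMAND materialized view is refreshed explicitly through
DBMS_MVIEW (the view name and defining query are placeholders):
CREATE MATERIALIZED VIEW sales_sum_mv
REFRESH COMPLETE ON DEMAND
AS
SELECT cust_id, SUM(amount) AS total_amount
FROM   sales
GROUP  BY cust_id;

EXEC DBMS_MVIEW.REFRESH('SALES_SUM_MV', method => 'C');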
QUESTION 117
Partial details of an execution plan.
Which statement correctly describes the BITMAP AND operation?
A. It produces a bitmap, representing dimension table rows from all dimension tables that join with
qualified fact table rows.
B. It produces a concatenation of the bitmaps for all dimension tables.
C. It produces a bitmap, representing fact table rows that do not join with qualified dimension table
rows from all dimension tables.
D. It produces a bitmap, representing fact table rows that join with qualified dimension table rows
from all dimension tables.
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
Example:
Additional set operations will be done for the customer dimension and the product dimension. At
this point in the star query processing, there are three bitmaps. Each bitmap corresponds to a
separate dimension table, and each bitmap represents the set of rows of the fact table that satisfy
that individual dimension's constraints.
These three bitmaps are combined into a single bitmap using the bitmap AND operation. This final
bitmap represents the set of rows in the fact table that satisfy all of the constraints on the
dimension table.
Reference: Oracle Database Data Warehousing Guide, Star Transformation with a Bitmap Index
QUESTION 118
Which two statements are true about index full scans?
A. An index fast full scan uses multiblock I/O to read the index structure in its entirety.
B. Index nodes are not retrieved in index order, and therefore the nodes are not in sequence.
C. An index fast full scan reads the index block by block.
D. An index fast full scan reads the whole index from the lowest value to the highest value.
Correct Answer: AB
Section: (none)
Explanation
Explanation/Reference:
A:To speed table and index block access, Oracle uses the
db_file_multiblock_read_count parameter (which defaults to 8) to aid in getting full-table scan and
full-index scan data blocks into the data buffer cache as fast as possible.
B: The index nodes are not retrieved in index order, so the rows will not be sequenced.
Note:
there are some requirements for Oracle to invoke the fast full-index scan.
Reference: index fast full scan tips
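An index fast full scan can be requested explicitly with the INDEX_FFS hint, for example (the
table, alias, index, and column names are placeholders):
SELECT /*+ INDEX_FFS(e emp_name_ix) */ COUNT(last_name)
FROM   employees e;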
QUESTION 119
An application supplied by a new vendor is being deployed and the SQL statements have plan
baselines provided by the supplier. The plans have been loaded from a SQL tuning set. You
require the optimizer to use these baselines, but allow better plans to be used, should any be created.
Which two tasks would you perform to achieve this?
A. Set the OPTIMIZER_USE_SQL_PLAN_BASELINES initialization parameter to TRUE.
B. Set the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES initialization parameter to TRUE.
C. Use the DBMS_SPM.ALTER_SQL_PLAN_BASELINE function to fix the plans.
D. Use the DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE function to fix the new plans.
E. Use the DBMS_SPM.ALTER_SQL_BASELINE function to accept new plans.
Correct Answer: AD
Section: (none)
Explanation
Explanation/Reference:
A:OPTIMIZER_USE_SQL_PLAN_BASELINES enables or disables the use of SQL
plan baselines stored in SQL Management Base. When enabled, the optimizer looks for a SQL
plan baseline for the SQL statement being compiled. If one is found in SQL Management Base,
then the optimizer will cost each of the baseline plans and pick one with the lowest cost.
D:EVOLVE_SQL_PLAN_BASELINE Function
This function evolves SQL plan baselines associated with one or more SQL statements. A SQL
plan baseline is evolved when one or more of its non-accepted plans is changed to an accepted
plan or plans. If interrogated by the user (parameter verify = 'YES'), the execution performance of
each non-accepted plan is compared against the performance of a plan chosen from the
associated SQL plan baseline. If the non-accepted plan performance is found to be better than
SQL plan baseline performance, the non-accepted plan is changed to an accepted plan provided
such action is permitted by the user (parameter commit = 'YES').
Incorrect:
B:OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES enables or disables the automatic
recognition of repeatable SQL statements, as well as the generation of SQL plan baselines for
such statements.
C:ALTER_SQL_PLAN_BASELINE Function
This function changes an attribute of a single plan or all plans associated with a SQL statement
using the attribute name/value format.
QUESTION 120
You recently gathered statistics for a table by using the following commands:
You noticed that the performance of queries has degraded after gathering statistics. You want to
use the old statistics. The optimizer statistics retention period is default.
What must you do to use the old statistics?
A. Use the flashback to bring back the statistics to the desired time.
B. Restore statistics from statistics history up to the desired time.
C. Delete all the statistics collected after the desired time.
D. Set OPTIMIZER_USE_PENDING_STATISTICS to TRUE.
Correct Answer: B
Section: (none)
Explanation
Explanation/Reference:
Whenever statistics in dictionary are modified, old versions of statistics are saved
automatically for future restoration. Statistics can be restored using RESTORE procedures of
DBMS_STATS package. These procedures use a time stamp as an argument and restore
statistics as of that time stamp. This is useful in case newly collected statistics leads to some suboptimal
execution plans and the administrator wants to revert to the previous set of statistics.
Reference: Oracle Database Performance Tuning Guide, Restoring Previous Versions of Statistics
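A sketch of restoring a table's statistics to a prior timestamp (the owner, table, and time offset
are placeholders; the statistics history retention defaults to 31 days):
BEGIN
  DBMS_STATS.RESTORE_TABLE_STATS(
    ownname         => 'APP_OWNER',
    tabname         => 'ORDERS',
    as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' DAY);
END;
/
-- The oldest restorable point can be checked with:
SELECT DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY FROM dual;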
QUESTION 121
View the exhibit and examine the plans in the SQL baseline for a given statement.
Which interpretation is correct?
Exhibit:
A. A new plan cannot be evolved because SYS_SQL_bbedc41f554c408 is accepted.
B. Plan SYS_SQL_PLAN_bbdc741f554c408 will always be used by the optimizer for the query.
C. A new plan must be evolved using the DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE function
before it can be used.
D. Plan SYS_SQL_bbedc741a57b5fc2 can be used by the optimizer if the cost of the query is less
than plan SYS_SQL_PLAN_bbedc741f554c408.
E. Plan SYS_SQL_PLAN_bbedc741f554c408 will not be used until it is fixed by using the
DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE function.
Correct Answer: C
Section: (none)
Explanation
Explanation/Reference:
Note:
*Evolving a SQL plan baseline is the process by which the optimizer determines if non-accepted
plans in the baseline should be accepted. As mentioned previously, manually loaded plans are
automatically marked as accepted, so manual loading forces the evolving process. When plans
are loaded automatically, the baselines are evolved using the EVOLVE_SQL_PLAN_BASELINE
function, which returns a CLOB reporting its results.
SET LONG 10000
SELECT DBMS_SPM.evolve_sql_plan_baseline(sql_handle => 'SYS_SQL_7b76323ad90440b9')
FROM dual;
*Manual plan loading can be used in conjunction with, or as an alternative to automatic plan
capture. The load operations are performed using the DBMS_SPM package, which allows SQL
plan baselines to be loaded from SQL tuning sets or from specific SQL statements in the cursor
cache. Manually loaded statements are flagged as accepted by default. If a SQL plan baseline is
present for a SQL statement, the plan is added to the baseline, otherwise a new baseline is
created.
*fixed (YES/NO) : If YES, the SQL plan baseline will not evolve over time. Fixed plans are used in
preference to non-fixed plans.
QUESTION 122
You want to run SQL Tuning Advisor on statements that are not captured by ADDM or AWR, and
are not in the library cache.
What is the prerequisite?
A. Enable SQL plan management
B. Create a SQL plan baseline for each query
C. Create a SQL Tuning Set (STS) containing the SQL statements
D. Gather statistics for objects used in the application
Correct Answer: C
Section: (none)
Explanation
Explanation/Reference:
You can use an STS as input to SQL Tuning Advisor, which performs automatic
tuning of the SQL statements based on other user-specified input parameters.
Note:
A SQL tuning set (STS) is a database object that includes one or more SQL statements along with
their execution statistics and execution context, and could include a user priority ranking. You can
load SQL statements into a SQL tuning set from different SQL sources, such as AWR, the shared
SQL area, or customized SQL provided by the user. An STS includes:
A set of SQL statements
Associated execution context, such as user schema, application module name and action, list of
bind values, and the cursor compilation environment
Associated basic execution statistics, such as elapsed time, CPU time, buffer gets, disk reads,
rows processed, cursor fetches, the number of executions, the number of complete executions,
optimizer cost, and the command type
Associated execution plans and row source statistics for each SQL statement (optional).
Reference: Oracle Database Performance Tuning Guide, Managing SQL Tuning Sets
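A hedged sketch: create the STS, load the custom statements (for example with
DBMS_SQLTUNE.LOAD_SQLSET), and then hand the set to SQL Tuning Advisor (the set name is
a placeholder):
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(
    sqlset_name => 'my_app_sts',
    description => 'Statements not captured by ADDM, AWR, or the cursor cache');
END;
/
DECLARE
  tname VARCHAR2(128);
BEGIN
  tname := DBMS_SQLTUNE.CREATE_TUNING_TASK(sqlset_name => 'my_app_sts');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => tname);
END;
/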
QUESTION 123
While tuning a SQL statement, the SQL Tuning Advisor finds an existing SQL profile for a
statement that has stale statistics. Automatic optimizer statistics is enabled for the database.
What does the optimizer do in this situation?
A. Updates the existing SQL profiles for which the statistics are stale.
B. Makes the statistics information available to GATHER_DATABASE_STATS_JOB_PROC.
C. Starts the statistics collection process by running GATHER_STATS_JOB.
D. Writes a warning message in the alert log file.
Correct Answer: B
Section: (none)
Explanation
Explanation/Reference:
Automatic optimizer statistics collection calls
the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure. This internal
procedure operates similarly to the DBMS_STATS.GATHER_DATABASE_STATS procedure
using the GATHER AUTO option. The main difference is
that GATHER_DATABASE_STATS_JOB_PROC prioritizes database objects that require statistics,
so that objects that most need updated statistics are processed first, before the maintenance
window closes.
Note:
*The optimizer relies on object statistics to generate execution plans. If these statistics are stale or
missing, then the optimizer does not have the necessary information it needs and can generate
poor execution plans. The Automatic Tuning Optimizer checks each query object for missing or
stale statistics, and produces two types of output:
/Recommendations to gather relevant statistics for objects with stale or no statistics
Because optimizer statistics are automatically collected and refreshed, this problem occurs only
when automatic optimizer statistics collection is disabled. See "Managing Automatic Optimizer
Statistics Collection".
/Auxiliary statistics for objects with no statistics, and statistic adjustment factor for objects with
stale statistics
The database stores this auxiliary information in an object called a SQL profile.
*Oracle recommends that you enable automatic optimizer statistics collection. In this case, the
database automatically collects optimizer statistics for tables with absent or stale statistics. If fresh
statistics are required for a table, then the database collects them both for the table and
associated indexes.
Automatic collection eliminates many manual tasks associated with managing the optimizer. It also
significantly reduces the risks of generating poor execution plans because of missing or stale
statistics.
Reference: Oracle Database Performance Tuning Guide, Managing Automatic Optimizer Statistics
Collection
QUESTION 124
Refer to the Exhibit.
Execution plan:
What must be the correct order of steps that the optimizer executes, based on the ID column of
the execution plan?
Exhibit:
A. 3, 5, 4, 6, 7
B. 3, 5, 4, 7, 6
C. 3, 4, 5, 7, 6
D. 4, 5, 3, 7, 6
Correct Answer: D
Section: (none)
Explanation
Explanation/Reference:
QUESTION 125
Examine the Exhibit.
Which two statements are true about the bloom filter in the execution plan?
Exhibit:
A. The bloom filter prevents all rows from table T1 that do not join T2 from being needlessly
distributed.
B. The bloom filter prevents all rows from table T2 that do not join table T1 from being needlessly
distributed.
C. The bloom filter prevents some rows from table T2 that do not join table T1 from being
needlessly distributed.
D. The bloom filter is created in parallel by the set of parallel execution processes that scanned
table T2.
E. The bloom filter is created in parallel by the set of parallel execution processes that later
perform join.
F. The bloom filter is created in parallel by the set of parallel execution processes that scanned
table T1.
Correct Answer: BF
Section: (none)
Explanation
Explanation/Reference:
* PX JOIN FILTER CREATE
The bloom filter is created in line 4.
* PX JOIN FILTER USE
The bloom filter is used in line 11.
Note:
*You can identify bloom pruning in a plan when you see :BF0000 in the Pstart and Pstop
columns of the execution plan and PART JOIN FILTER CREATE in the operations column.
*A Bloom filter is a probabilistic algorithm for doing existence tests in less memory than a full list of
keys would require. In other words, a Bloom filter is a method for representing a set of n elements
(also called keys) to support membership queries.
*The Oracle database makes use of Bloom filters in the following 4 situations:
- To reduce data communication between slave processes in parallel joins: mostly in RAC
- To implement join-filter pruning: in partition pruning, the optimizer analyzes FROM and WHERE
clauses in SQL statements to eliminate unneeded partitions when building the partition access list
- To support result caches: when you run a query, Oracle will first see if the results of that query
have already been computed and cached by some session or user, and if so, it will retrieve the
answer from the server result cache instead of gathering all of the database blocks
- To filter members in different cells in Exadata: Exadata performs joins between large tables and
small lookup tables, a very common scenario for data warehouses with star schemas. This is
implemented using Bloom filters as to determine whether a row is a member of the desired result
set.